Our methodology
Research.com adheres to high standards and transparent procedures based on well-established metrics in order to produce a wide range of rankings for the research community in a variety of disciplines.
At Research.com, we wanted our rankings to suit our readers from the very beginning. To this end, we designed our methodology to answer the most common needs of students. In particular, we asked ourselves: why do students pursue higher education at all?
There is no simple answer. A common reason for pursuing higher education is purely intellectual: gaining knowledge, specializing in a field of interest, or eventually taking part in cutting-edge research. Another is social: meeting new people from all around the world, taking part in various campus activities, or enjoying the esteem of the college. Finally, college is a way of obtaining and improving the skills needed for a well-paid job.
No motive for higher education is better than any other. Therefore, contrary to other rankings, we did not assign fixed weights to each category when building ours. Instead, we followed a scientific way of calculating the optimal weights. We believe that this approach not only removes the arbitrariness of the rankings but also removes human-induced biases between the scored categories, making our ranking more general and natural.
In our rankings we rely neither on the opinions of a selected group of scientists nor on surveys, as such data might be inaccurate. Instead, we build our rankings on hard numerical data, either publicly available or obtained from well-established organizations with years of experience in data gathering. Moreover, prior to final publication, we ask the institutions to check their profiles, and we meticulously incorporate all the remarks they send.
The general information about each college (admissions, graduations, campus facilities, etc.) comes from the newest editions of the governmental IPEDS database (https://nces.ed.gov/ipeds/) as well as Peterson's database (https://petersonsdata.com/). The information about salaries is taken from the College Scorecard database (https://collegescorecard.ed.gov/data/), while the security data (crimes, offences, etc.) comes from the Campus Safety and Security (CSS) database (https://ope.ed.gov/campussafety). The data about the research activity of each institution is taken from the OpenAlex database (https://openalex.org/).
In our ranking we aim to score four areas of college activity: research, teaching, faculty, and campus.
The research area describes the research activity of the college. We take into account the number of research articles published by the college, their 2-year mean citation count, and the total grant amount per faculty member. The number of research articles roughly describes the research productivity of the employees. The 2-year mean citation count corresponds to the importance of the work being done: papers with a larger impact on the field are cited more often, elevating the mean citation count. Finally, the total grant amount is the financial aid received from government or private institutions; a higher amount indicates that more valuable projects are being carried out at the institution. As this value scales with the number of research faculty (the more employees, the more projects can be run), we divide the total grant amount by the number of faculty employees. All of this data is taken from the OpenAlex database.
The teaching area describes the opportunities available to students, their level, and their chances. To quantify the teaching potential, we include three values: the entry exam score, the acceptance rate, and the retention rate. The entry exam score is the median score on the entry exams (SAT or ACT) needed to be admitted to the institution, scaled to the possible maximum. The higher the value, the better the average freshman, and therefore the higher the teaching quality can be. Hand in hand with this value comes the retention rate after the first year of studies. A high value indicates a small number of transfer-out students, which in turn suggests that students appreciate the teaching quality and that knowledge is conveyed in a manner understandable to them. Finally, the acceptance rate indicates the college's selectivity and competitiveness: a lower acceptance rate typically indicates a more selective and prestigious institution, while a higher acceptance rate suggests a more open admission policy. All the data in this area is taken from the IPEDS database.
The faculty area describes the faculty staff and the prestige of the college as a whole. In particular, we measure faculty compensation and the faculty-to-student ratio. The compensation of a university's faculty is a significant indicator of the institution's overall financial health and resources: higher compensation suggests that the university can better attract and retain top talent, which can positively impact the quality of education. A high faculty-to-student ratio means that faculty employees can spend more time per student on average, making the interaction between faculty and students more personal. The faculty data is taken from Peterson's database.
Finally, the campus area describes the various factors that increase the quality of life on campus. This can include amenities such as theaters, cinemas, and other leisure activities. We also score student support, including access to a health clinic, legal services, psychological counselling, or a women's center on campus. Another important factor is campus safety. All of this data is taken from Peterson's database.
The values selected from the databases are scaled linearly to the interval 0-10, where 10 is best. In the case of the acceptance rate and indebtedness, we assign 10 points to the smallest values; in all other cases, the larger the better.
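The linear scaling described above can be sketched as a small helper. This is an illustrative reconstruction, not Research.com's actual code; the function name and the equal-values fallback are assumptions:

```python
def scale_to_0_10(values, invert=False):
    """Linearly rescale raw values to the 0-10 interval.

    With invert=True (used for acceptance rate and indebtedness),
    the smallest raw value receives 10 points.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        # No spread between institutions: every school ties (assumed fallback)
        return [10.0 for _ in values]
    scaled = [10.0 * (v - lo) / (hi - lo) for v in values]
    if invert:
        scaled = [10.0 - s for s in scaled]
    return scaled

# Example: acceptance rates, where lower is better, so we invert
print(scale_to_0_10([0.05, 0.35, 0.80], invert=True))
```

The most selective school (5% acceptance) receives 10 points, the least selective (80%) receives 0.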
The scaled scores are then used to assign weights to each category. Contrary to other rankings, we do not use a fixed, once-defined set of weights; instead, we assign the highest weights to the categories that differentiate the institutions most. In particular, we use the Entropy algorithm1, a science-based method from decision-making theory. On one hand, this frees us from assigning the weights arbitrarily, which could be subject to subconscious human manipulation. On the other, as the data define the weights, the weights are not constant and change slightly in each edition. The current approximate weights for each area are as follows:
Research: 0.550
Teaching: 0.134
Faculty: 0.030
Campus: 0.285
Then, for each institution, the weighted sum of the scaled scores with the weights defined by the Entropy algorithm is calculated, giving the total score. The institutions are ranked according to the total score to determine their final position in the ranking.
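The entropy weighting and weighted sum can be sketched as follows. This follows the standard entropy weight method from the cited paper (Zou et al., 2006); it is an illustrative reconstruction, and the function names are assumptions. Categories with identical scores across institutions carry no information and receive zero weight:

```python
import math

def entropy_weights(score_matrix):
    """Entropy weight method: categories that differentiate the
    institutions most strongly receive the largest weights.

    score_matrix[i][j] is the scaled (0-10) score of institution i
    in category j; assumes at least two institutions and at least
    one category with some spread.
    """
    n = len(score_matrix)                  # number of institutions
    m = len(score_matrix[0])               # number of categories
    k = 1.0 / math.log(n)
    diversification = []
    for j in range(m):
        col = [row[j] for row in score_matrix]
        total = sum(col) or 1.0
        p = [v / total for v in col]       # column proportions
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        diversification.append(1.0 - entropy)
    s = sum(diversification)
    return [d / s for d in diversification]

def total_scores(score_matrix):
    """Weighted sum of category scores with entropy-derived weights."""
    w = entropy_weights(score_matrix)
    return [sum(wj * xj for wj, xj in zip(w, row)) for row in score_matrix]
```

A uniform column (e.g. every school scoring 5 in one category) has maximal entropy and therefore zero diversification, so it does not influence the total score.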
1Zou, Z. H., Yi, Y., & Sun, J. N. (2006). Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. Journal of Environmental sciences, 18(5), 1020-1023.
Apart from the Best College ranking, we have also created other rankings, including:
The Best Private College and Best Public College rankings are subrankings of our main ranking, restricted to private and public schools respectively.
The Best Value College ranking classifies institutions based on return on investment (ROI), defined as the ratio of the mean salary in the first six months after graduation to the tuition and fees required. The ranking includes only private institutions.
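The ROI ratio defined above is a single division; a minimal sketch (the function name and arguments are illustrative, not part of any real API):

```python
def roi(mean_salary_first_6_months, tuition_and_fees):
    """Return on investment: the ratio of the mean salary in the first
    six months after graduation to the tuition and fees required."""
    return mean_salary_first_6_months / tuition_and_fees

# Example: $30,000 earned in the first six months vs. $20,000 in tuition and fees
print(roi(30_000, 20_000))  # 1.5
```

A ratio above 1 means graduates recoup their tuition and fees within six months.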
The Most Popular College ranking classifies institutions based on the number of applicants per place. This ranking includes both private and public institutions. Please note that popularity is the reciprocal of the acceptance rate: the more popular the school, the harder it is to get into.
The Most Affordable College ranking is our proposal for students looking for a good quality of education at a reasonable price. To build this ranking, we select the best institutions (based on our Best Colleges ranking) in the requested location (country, region, state, city) and sort them by rising cost. As a result, the institutions at the top offer a good educational experience at the lowest possible cost.
When judging the best stationary programs, we include program-specific data. Therefore, we do not include scores for campus amenities or safety, as these do not depend on the program. However, student well-being is still a cogent factor in a program's quality, so we include the raw university score as one of the factors. We consider historic trends and let the data dictate the weight of each category via the Entropy algorithm1. Apart from this, we score each program in the “student”, “cost”, and “research” areas.
In the “student” area, apart from the entry exam and retention scores mentioned earlier, we include the hallmark of program popularity: the number of students selecting the program, normalized by the total number of students selecting programs at the same level and in the same discipline. We also take into account the students' appreciation of the program, measured by how many decide to continue their education, based on the number of Bachelor's, Master's, and PhD degrees awarded.
To accurately judge the costs, we consider three factors. One is the median salary after the course, which is institution- and discipline-specific; the higher the value, the better. The other two factors are the total cost and the cost trend. A high cost may block the possibility of studying at a given institution, while the cost trend describes the predictability (stability) of the cost, an important factor when selecting a program. We do not take into account the financial aid a student may obtain, as it is institution- rather than program-specific and was therefore already included in the total institution score.
The research performed at an institution can easily be divided by the area it concerns. In particular, we score the number of citations and the total number of papers published in a given discipline over the last 5 years.
As before, the score for each area is data-dependent. The current weights for the best stationary programs are as follows:
Apart from the “best” stationary programs, we publish other types of rankings as well:
When judging online programs, we follow a methodology similar to that for stationary programs. In particular, all the data are linearly scaled to the [0, 10] interval and merged into category scores, which are then combined into a total score. We consider historic trends and let the data dictate the weight of each category via the Entropy algorithm1. For online programs we consider four main categories of scores: Program, Length, Student Satisfaction, and Costs and Earnings.
A good online program may be expected to meet the same standards as a good stationary program. Therefore, as one factor we include the total score of the institution's respective stationary program in the given discipline.
The length of online programs can vary, but we believe that one of the key aims of program participants is to obtain their degree quickly. Therefore, we include the program length as a factor: the shorter, the better.
As in the case of stationary programs, we believe that high-quality online courses are more likely to be recommended among students, increasing the number of enrolled students. Therefore, as a measure of student satisfaction, we take into account the number of enrolled students and its historic trend. We also award additional points for the services (such as a common graduation ceremony) and resources (such as access to tutoring) provided by the institution.
Last but not least is the score awarded for costs and earnings. Similarly to stationary programs, we take into account the actual cost and the historic cost trend on one hand (the lower, the better), and the median salary one year after completion of the course on the other.
The current weights for online ranking areas are:
Apart from the “best” online programs, we publish other types of rankings as well:
For the ranking of top scientists (launched initially in 2014), the inclusion criteria are based on a scholar's Discipline H-index (D-index), the proportion of their contributions made within the given discipline, as well as their awards and achievements in specific areas. Scholars are ranked in descending order of D-index, combined with the total number of citations.
What is D-index?
The H-index is an indicative measure reflecting the number of influential documents authored by a scientist. It is computed as the number h of papers receiving at least h citations [3]. The H-index and citation data we use are obtained from various bibliometric data sources. The Discipline H-index (D-index) is calculated by considering only the publications, and their citations, deemed to belong to the examined discipline.
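The h-index definition above ("the number h of papers receiving at least h citations") can be sketched directly. This is an illustrative implementation, and the simplified `(discipline, citations)` record format for the D-index is an assumption standing in for real bibliometric records:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    cites = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank      # the paper at this rank still has enough citations
        else:
            break
    return h

def d_index(papers, discipline):
    """Discipline H-index: the same computation restricted to publications
    attributed to one discipline. `papers` is a list of
    (discipline, citation_count) pairs -- a simplified stand-in."""
    return h_index([c for d, c in papers if d == discipline])

# Five papers with 10, 8, 5, 4, and 3 citations give an h-index of 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Only four papers have at least 4 citations each, and the fifth paper (3 citations) cannot raise h to 5.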
To ensure fair ranking across disciplines, in addition to the D-index we also consider the number of publications in journals and conferences ranked and classified in the examined discipline. Scientists should have a consistent ratio of publications in discipline-ranked venues to their D-index. This bibliometric ratio indicates the extent of contributions pertinent to a given discipline.
Besides the use of citation-based metrics, we also conduct rigorous searches for each scientist to inspect and include the awards, fellowships and academic recognitions they have received from leading research institutions and government agencies.
The D-index threshold for accepting a scientist onto the list is set in increments of 10, depending on the total number of scientists estimated for each discipline. The threshold ensures that the top 1% of leading scientists in the discipline are considered for the ranking. In addition, there should be a proximity of 30% or less between the scientist's global H-index and their D-index.
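The two listing conditions can be sketched as a simple check. Note that the source does not spell out the exact proximity formula; interpreting "proximity of 30% or less" as the relative gap between the global H-index and the D-index is an assumption, as are the function and argument names:

```python
def meets_inclusion_criteria(d_index, global_h_index, threshold):
    """Check the two listing conditions described above.

    'Proximity of 30% or less' is interpreted here as the relative gap
    (global_h_index - d_index) / global_h_index -- an assumption, since
    the exact formula is not given. Assumes global_h_index > 0 and
    d_index <= global_h_index.
    """
    if d_index < threshold:
        return False  # below the discipline's D-index threshold
    proximity = (global_h_index - d_index) / global_h_index
    return proximity <= 0.30
```

For example, a scholar with a global H-index of 50 and a D-index of 40 (gap of 20%) passes a threshold of 40, while one with a global H-index of 80 and the same D-index (gap of 50%) does not.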
Without doubt, numbers are never meant to be an absolute measure of the precious contributions of scientists, but a threshold value of 40 follows a recommendation by J. E. Hirsch in his h-index paper, where he suggests that an h-index of 40 characterizes outstanding scientists [1].
Driven by the need for a transparent framework for ranking universities based on objective and well-established metrics across disciplines, Research.com is the only ranking platform that treats people as the most valuable asset of research and educational institutions while offering its data and procedures publicly, in a completely transparent way.
The university ranking provided by Research.com is based purely on the reputation of a university's scholars. We firmly believe that institutions should be valued based on the talent and reputation of their staff.
Because the ranking problem is compounded by subjectivity among experts in defining and perceiving the quality and impact of educational institutions, major companies providing mainstream rankings neither fully elaborate their ranking procedures nor offer their raw data. Moreover, existing university rankings rely mostly on declarative and subjective analysis of data.
The first edition of the Research.com university ranking was released in 2020; it covered over 591 research institutions and was limited to the area of computer science. In 2022, university rankings for all major scientific disciplines were released. The ranking of universities across research disciplines is based on simple metrics closely related to the reputation of academic staff, in addition to the research outputs elaborated above.
Based on cross-matching analysis, the ranking of top universities provided by Research.com correlates consistently with mainstream rankings maintained by leading companies with decades of experience in the field, including QS, USNews, and Times Higher Education (THE).
Research.com offers a list of the best journals for various disciplines that are selectively reviewed annually based on a number of indicators related to the quality of accepted papers and reputation of the journal.
As there is no magical metric or analytical tool that produces a consensual score pleasing all experts across disciplines, Research.com has adopted a strict policy for indexing journals based on the following metrics.
The Impact Score is a novel bibliometric indicator that quantifies the level of endorsement a given journal receives from the best and most respected scientists. The score is estimated using two factors over data published during the last four years, using Microsoft Academic data:
H-IndexValue: the estimated h-index from publications made solely by the best scientists.
NumberTopScientists: the number of scientists who have published in the journal and have contributed to the H-IndexValue.
The impact factor is a measure reflecting the average number of citations an article in a particular journal receives per year. It is computed as the sum of citations in a given year to all journal documents published during the two preceding years, divided by the total number of articles published during those two years, as elaborated in the following equation:

IF(y) = [citations in year y to items published in years y−1 and y−2] / [number of articles published in years y−1 and y−2]
The impact factor is computed by the Web of Science and updated annually [5].
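The two-year impact factor described above reduces to a single ratio once the counts are aggregated; a minimal sketch, with illustrative function and argument names:

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year impact factor: citations received this year by documents
    published in the two preceding years, divided by the number of
    articles published in those two years.
    """
    return citations_this_year / articles_prev_two_years

# Example: 600 citations in 2024 to articles from 2022-2023,
# with 200 articles published across 2022-2023
print(impact_factor(600, 200))  # 3.0
```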
The SCImago Journal Rank (SJR) is a metric developed by the SCImago research laboratory using Scopus data [6]. The measure indicates the scientific influence or prestige of an academic journal based on two factors: the number of citations and the sources those citations come from.
Journals from publishing houses that require authors to pay an article processing charge (APC) are reviewed on a case-by-case basis to ensure that they are not predatory publishers or vanity presses whose main goal is to make a profit at the expense of quality and research contributions [7].
The editor-in-chief and members of the editorial board for the journal are also inspected against different bibliometric sources to ensure that the journal is led by experts in the area of research related to the journal.
Research.com indexes major conferences in various disciplines of research. The ranking of the best conferences is based primarily on the Impact Score indicator computed using the number of endorsements by leading scientists. Further valuable indicators are considered whilst assessing conferences including the indexing, sponsoring bodies, number of editions and the profiles of its steering committees.
A novel metric called Impact Score is devised to rank conferences based on the number of contributing top scientists, in addition to the h-index estimated from the scientific papers they published. The score is estimated using two factors over data published during the last four years, using Microsoft Academic data:
H-IndexValue: the estimated h-index from publications made solely by the best scientists.
NumberTopScientists: the number of scientists who have published in the conference and have contributed to the H-IndexValue.
Regarding technical sponsorship and the indexing of conference proceedings, the majority of the best conferences indexed by Research.com are sponsored or indexed by leading and well-respected publishers and academic organizations, including IEEE, ACL, Springer, AAAI, USENIX, Elsevier, ACM, OSA, and LIPIcs.
References