We knew it would be interesting to invite leaders from the top medical schools to discuss the impact of the rankings and how their schools respond to the rankings. And U.S. News graciously embraced that idea.
New York (PRWEB) November 07, 2011
Mount Sinai School of Medicine and U.S. News & World Report collaborated on October 27 to present an exclusive summit with prominent medical education experts to discuss how U.S. News ranks medical schools and the efficacy of its methodology. The event, which coincided with the 20th anniversary of U.S. News’ “Best Medical Schools” annual rankings, marked the first time U.S. News editors have discussed their ranking system in a public forum with leaders of institutions being ranked.
Entitled “The Impact and Future of Medical School Rankings,” the summit was hosted at Mount Sinai School of Medicine’s Manhattan campus and featured lively debate between U.S. News editors and the deans of several prominent medical schools. A transcript of the event is available, and video will be posted at http://www.usnews.com.
Medical school and academic participants included: Robert Alpern, MD, Dean, Yale University School of Medicine; Nancy C. Andrews, MD, PhD, Dean, Duke University School of Medicine; Dennis S. Charney, MD, Dean, Mount Sinai School of Medicine; Jules L. Dienstag, MD, Dean for Medical Education, Harvard Medical School; Robert N. Golden, MD, Dean, University of Wisconsin-Madison School of Medicine & Public Health; Lee Goldman, MD, MPH, Dean of the Faculties of Health Sciences and Medicine, Columbia University College of Physicians and Surgeons; Joseph P. Grande, MD, PhD, Associate Dean of Academic Affairs, Mayo Medical School; Allen M. Spiegel, MD, Dean, Albert Einstein College of Medicine; and R. Michael Tanner, PhD, Chief Academic Officer and VP, Association of Public & Land-Grant Universities.
Dr. Charney welcomed guests and explained how the extraordinary summit came about. “We knew it would be interesting to invite leaders from the top medical schools to discuss the impact of the rankings and how their schools respond to the rankings. And U.S. News graciously embraced that idea. I think it took a little courage to hear what the deans have to say.”
Added Dr. Charney, “The rankings impact those of us who run medical schools. We know there is a correlation between the rankings and where students choose to go.”
The summit featured three panel discussions, moderated by U.S. News Editor Brian Kelly, designed to look at reputation scores, data analysis, and how the rankings affect end users. Kelly joined with Robert Morse, the U.S. News Director of Data Research, to explain some of the ranking process.
“We’re not just in the rankings business,” said Kelly. “We are in the data business. Enumerating something one to 10 has its perils, and in some cases there’s an arbitrariness to it. [E]verybody wants to know who’s first and who’s second, even if the differentiation may be quite small. But the underlying data is also critical and is also something that people make a great deal of use of.”
“We are in the business of trying to help people gather information to make decisions. And we feel that if we do our job right, if we get the numbers right, if we are credible, accurate, transparent, we will have done our part of the job,” said Kelly.
The U.S. News medical school rankings, released annually in the U.S. News & World Report "Best Graduate Schools" issue, are based on statistical indicators that measure the quality of a school's faculty, research, and students. Information is obtained through surveys of program directors, academics, and professionals. Criteria used in the rankings include peer assessment surveys, research activity, grade point averages, MCAT scores, and NIH funding.
Morse explained that the methodology and size of the rankings have evolved over the years, beginning with rankings of just the top 10 or top 15 schools and eventually increasing to 50. Now U.S. News publishes rankings for all the schools that provide data.
Thirty-five percent of a medical school’s ranking is based on reputation scores, which are determined by surveying medical school deans and faculty nationwide. Dr. Robert Alpern of Yale noted that the reputation scores are not dynamic and are often based on outdated recollections.
“The problem is [the reputational scores] don’t move,” Dr. Alpern said. “And they’re really based on reputations that may be 20 or 30 years old. So it’s frustrating as a dean when you make all these improvements in your school and…it has no effect on the reputation score, which ends up being the largest percentage of the ranking.”
Dr. Nancy C. Andrews of Duke observed that medical schools often try to influence their reputational scores by delivering highly polished reports and other materials to her and to others who vote in the rankings.
“[Duke] doesn’t send out the glossy brochures that many schools do, but I get them from many places. They fill up my mailbox daily. I think, ‘Boy, it must have cost a lot to produce these,’ and then I put them in recycling. So I’m not sure that those truly enhance reputation.”
Commenting on the relatively low rate of return of the surveys U.S. News sends out every year to assess reputation – about half of all medical school deans complete the survey while only around 19 percent of intern residency directors do – Dr. Robert N. Golden of University of Wisconsin-Madison cited the survey’s length as a factor.
“…[We] know from medical student surveys if you start asking more than five questions the response rate will plummet dramatically. And [U.S. News] has 15 pages or more listing every medical school in the country. Even the most esteemed deans are not aware of what’s going on in many medical schools across the country,” he said.
Dr. Golden also questioned whether medical school deans and faculty should be allowed to rank their own schools. “When the NIH reviews grants, if a grant comes up from your institution you have to leave the room. If reputational score is going to be given any credence, we’ve got to get rid of any kind of bias and conflict of interest,” he said.
Dr. David Muller, Dean of Medical Education at Mount Sinai School of Medicine, who was in the audience, pointed out that medical schools provide data to the Association of American Medical Colleges (AAMC) in an annual graduate questionnaire that may be useful to U.S. News. Robert Morse said that past requests by U.S. News to work with AAMC had been rebuffed, but simply knowing what data AAMC requests from medical schools would be a benefit.
“We’re aware that the AAMC conducts these student surveys, and the AAMC collects other accreditation data. We have never actually seen a blank [AAMC] survey. …[I]f we have the [AAMC] questions, we could at least try to gather [the same data] because it wouldn’t involve any more work by the medical schools,” Morse said.
UNDERSTANDING QUANTIFIABLE DATA
Another theme explored at the summit was the hard data and metrics U.S. News employs in its rankings, whether those data are accurate, reliable, and good indicators of medical education quality, and what new data could possibly be used to make the rankings more effective.
The conversation ranged wide and numerous suggestions for new data points were made, including: demographic measurements of student and faculty diversity; financial aid availability; the rate at which students pass board exams; and national residency match results. One participant suggested cross-referencing the popular U.S. News college rankings in order to determine the quality of the incoming student body at medical schools.
Dr. Charney from Mount Sinai thought that figuring diversity into the rankings could benefit minority student admissions. “[Y]our rankings drive behavior and disincentivize the acceptance of students who may come from a tough background, and are inspirational in what they’ve achieved, but still don’t have great academic metrics. [I]ncluding diversity would be a great value, incentivizing all of us to go out and recruit a diverse population of students who may not have the very best metrics,” Dr. Charney said.
Dr. Lee Goldman from Columbia stressed the importance of protecting the integrity of the rankings while also establishing uniform standards for measurement of diversity. “[W]e can’t have things that are too gameable. If you look at the diversity statistics published by many medical schools the definitions they used for minorities are all over the map. [Y]ou’d have to have much more precise definitions.”
Dr. Goldman also noted that while MCAT scores are taken into consideration by U.S. News, the most telling indicator of a medical school’s ranking is its NIH research portfolio.
“[I]t’s interesting that for the research-intensive schools, by far the strongest correlation of ranking is with the size of the NIH research portfolio. If you look at [U.S. News] rankings of law schools, they rank almost precisely by their LSAT scores. Business schools rank almost as precisely by their GMAT scores. If you look at medical schools, they don’t correlate much at all with the MCAT scores. The real correlation is with the size of the NIH research portfolio,” he said.
Dr. Charney acknowledged the value of measuring NIH grants but cautioned that it is not an ideal measure of the student experience. “Most of the scientists that have substantive NIH funding are not teaching our medical students. They’re doing research in the lab or the clinic. They’re not full-time educators. So I think it should be part of the survey, but we need other measures to capture the quality of the student experience.”
Almost all the panelists agreed that measuring NIH funding per faculty member penalized institutions with large clinical rather than research faculties. On the other hand, it was also agreed that faculty-student ratio was a poor indicator of medical school quality, in part because it penalized institutions that emphasized research and lacked large clinical faculties.
“Most medical schools have more than one faculty member per student,” said Dr. Goldman. “And there’s no evidence that three faculty members per student is better than two faculty members per student, or that gradations above one make any difference whatsoever. So I would argue that’s a meaningless and irrelevant measure for medical schools.”
In response to a question about how to measure the student experience at medical schools, Brian Kelly of U.S. News entertained the idea of using social media. “[T]here’s a new world out there of communication and we are gingerly adopting it in other parts of what we do. But it’s not easy to get at,” he said.
Several of the participants questioned whether the rankings were best situated to help their key audience – students deciding where to go to medical school. Some noted that the rankings were less effective in differentiating among the top medical schools, and that in many cases students should disregard the small differences in scores among those more prominent schools. Dr. Golden of UW-Madison suggested a new method for listing the rankings:
“I do think it’s silly to look at the difference between the number five [school] and the number seven. But students sometimes do. [D]o away with the rankings and replace them with categorical quartiles or quintiles and let’s acknowledge that there is really no difference between the number nine and number six schools, but there is a big difference between number nine and number 49,” Dr. Golden said.
Dr. Jules L. Dienstag from Harvard urged prospective medical students to look elsewhere for guidance. “[T]he rankings are very nongranular and don’t help a student try to identify the best place for him or her….I think probably the best place a student can go is to their pre-medical advisors at their college.”
The panel participants broadly agreed that U.S. News should explore ways to reach out to students and quantify their experiences. Dr. Andrews of Duke said, “[S]omething to consider is having focus groups of medical students and of undergraduates who are thinking about medical school….I think we’re really just guessing in terms of what your customers want from the rankings.”
Dr. Dienstag of Harvard attempted to sum up the difficult task U.S. News faces each time it compiles its rankings.
“You’re trying to compare the incomparable – to measure the immeasurable – and you’re doing a pretty good job. Obviously you’ve been successful,” said Dienstag. “But each of our schools does things in a very, very different way. And when you try to measure us by the same criteria, it doesn’t capture the richness and the differences. And these differences are valuable.”
About The Mount Sinai Medical Center
The Mount Sinai Medical Center encompasses both The Mount Sinai Hospital and Mount Sinai School of Medicine. Established in 1968, Mount Sinai School of Medicine is one of the leading medical schools in the United States. The Medical School is noted for innovation in education, biomedical research, clinical care delivery, and local and global community service. It has more than 3,400 faculty in 32 departments and 14 research institutes, and ranks among the top 20 medical schools both in National Institutes of Health (NIH) funding and by U.S. News & World Report.
The Mount Sinai Hospital, founded in 1852, is a 1,171-bed tertiary- and quaternary-care teaching facility and one of the nation’s oldest, largest and most-respected voluntary hospitals. In 2011, U.S. News & World Report ranked The Mount Sinai Hospital 16th on its elite Honor Roll of the nation’s top hospitals based on reputation, safety, and other patient-care factors. Of the top 20 hospitals in the United States, Mount Sinai is one of 12 integrated academic medical centers whose medical school ranks among the top 20 in NIH funding and U.S. News & World Report and whose hospital is on the U.S. News & World Report Honor Roll. Nearly 60,000 people were treated at Mount Sinai as inpatients last year, and approximately 560,000 outpatient visits took place.
# # #