Online Physician Reviews May Be Misleading

Physician satisfaction scores on online third-party review sites tend to be skewed and can easily mislead patients, according to a new study by Cedars-Sinai investigators.  

Because research shows that patients largely rely on these ratings as their sole source of information when choosing a physician, this distortion may have significant consequences.

“Patients put so much trust into ratings, and the stakes are much higher than simply choosing a restaurant,” said lead author Timothy J. Daskivich, MD, MSHPM, assistant professor and director of Health Services Research in the Department of Surgery at Cedars-Sinai. “It’s important to interpret this data correctly because selecting the right physician can have a serious impact on health and well-being.”

The researchers pulled reviews posted between October 2014 and March 2017 on Healthgrades, a consumer ratings website that rates medical providers from 1 to 5 stars, and cross-referenced them with providers listed in the U.S. Centers for Medicare & Medicaid Services’ Physician Compare tool. They narrowed the field to 212,933 providers who had at least four reviews evaluating overall patient satisfaction, grouped the providers into medical, surgical, and allied health specialties, and performed a statistical analysis of the distribution of the providers’ average satisfaction scores.

The study, published in the Journal of Medical Internet Research, found that overall satisfaction ratings were consistently skewed toward the positive end of the scale, fell within narrow ranges, and differed across specialties. As a result, scores that appear high might actually be comparatively average or low, effectively misleading patients. For example, if 90 percent of physicians in a specialty are rated higher than four stars, patients could be misled into thinking the physician they select is at the top of his or her field.
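
To see why, consider a minimal sketch in Python using hypothetical star ratings (illustrative numbers, not data from the study): when scores cluster near the top of the scale, a rating that looks high can still fall below the specialty median and near the bottom of the pack.

```python
from statistics import median

# Hypothetical average star ratings for providers in one specialty.
# These numbers are illustrative only, not data from the study.
peer_ratings = [4.0, 4.1, 4.3, 4.4, 4.5, 4.6, 4.6, 4.7, 4.8, 4.9]

provider_rating = 4.2  # looks strong on a 1-5 scale

# Share of peers the provider scores at or above (a simple percentile rank).
percentile = 100 * sum(r <= provider_rating for r in peer_ratings) / len(peer_ratings)

print(f"Provider rating:     {provider_rating} stars")
print(f"Specialty median:    {median(peer_ratings):.2f} stars")
print(f"Percentile in field: {percentile:.0f}th")
# In this skewed example, a 4.2-star provider falls below the 4.55-star
# specialty median and outranks only about 20 percent of peers.
```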

Providers’ satisfaction ratings also differed significantly by specialty group. Median scores for allied health providers, such as physical therapists and optometrists, were much higher than those for physicians in medical and surgical specialties.

Some of the differences in ratings between specialties might be related to the nature of the work. That may explain why psychiatrists, who deal with emotional trauma, receive lower ratings than chiropractors, who provide physical relief and plenty of one-on-one interaction, Daskivich said. But he and his co-authors think review sites should flag such statistical quirks and caveats.

Although third-party healthcare review sites have proliferated (Healthgrades, Zocdoc, and Yelp among them), they often base their ratings on a small number of reviews and on incomplete or unverified information. As a result, many health systems, including Stanford, Cleveland Clinic, and the University of Utah, have begun posting more complete ratings and comments drawn from their own outpatient satisfaction surveys.

These hospital-driven tools measure satisfaction with providers on attributes such as communication, friendliness, and time spent with patients; they do not measure healthcare quality. Health systems give consumers access to this information to offer greater transparency into feedback from real patients.

The authors of the ratings study suggest that third-party online review sites could do better by posting median star ratings for each medical provider and noting where each provider ranks among peers in his or her specialty. The investigators also note in the study that they created an online tool, Compare My Doc, which uses a provider’s specialty and online rating to show how that provider compares with his or her colleagues.
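
A comparison of that kind might look like the sketch below, which ranks a provider against specialty peers using made-up ratings. It is an illustration of the general idea, not the actual Compare My Doc implementation or its data.

```python
from statistics import median

# Made-up ratings grouped by specialty (illustrative only; not the data
# or code behind Compare My Doc).
specialty_ratings = {
    "urology": [3.9, 4.1, 4.2, 4.4, 4.5, 4.6, 4.7, 4.8],
    "physical therapy": [4.5, 4.6, 4.7, 4.8, 4.8, 4.9, 4.9, 5.0],
}

def compare_to_peers(specialty: str, rating: float) -> dict:
    """Return the specialty median and the provider's percentile rank."""
    peers = specialty_ratings[specialty]
    rank = 100 * sum(r <= rating for r in peers) / len(peers)
    return {"specialty_median": median(peers), "percentile_rank": round(rank)}

# The same 4.6-star rating reads very differently depending on the specialty.
print(compare_to_peers("urology", 4.6))           # above the specialty median
print(compare_to_peers("physical therapy", 4.6))  # below the specialty median
```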

This article was adapted from information provided by Cedars-Sinai.
