With a web browser or a cellphone, consumers today are making decisions about causes to fund, stocks to pick, movies to watch, restaurants to visit, products to buy, and music to hear partly based on the answer to a single question:
What does everyone else think?
Sites such as Yelp, Amazon, Rotten Tomatoes, and Kickstarter harness the collective wisdom of past consumers to guide future customers. But before those customers jump on the bandwagon to buy dinner, a book, or a movie ticket, what if there were a way to make the bandwagon better?
That’s the central question behind “Harnessing the Wisdom of Crowds,” a research paper by Olin’s Xing Huang published in the journal Management Science. Huang and Zhi Da of the University of Notre Dame used data from financial platform Estimize.com, where professional analysts, amateurs, and students provide quarterly earnings-per-share estimates for publicly traded companies.
The researchers found that the less each Estimize user knew about other users’ estimates, the more accurate the crowd’s average estimate became. In fact, the difference was profound: When Estimize users could see other users’ estimates, the consensus estimate beat the Wall Street consensus nearly 57 percent of the time. When they couldn’t, however, the consensus was more accurate 64 percent of the time.
“The problem with seeing others’ information is that people tend to herd with others,” Huang said. “That makes individual forecasts more accurate, but … reduces the consensus accuracy.”
This herding behavior was among the paper’s key takeaways. When individual users can see forecasts from the community at large, they tend to “herd” toward those forecasts, and that herding makes users “individually smarter, but collectively dumber.” The paper also noted that herding matters most when “influential users” post their forecasts early.
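The “individually smarter, but collectively dumber” effect can be illustrated with a minimal simulation sketch. This is not the paper’s model or data; it simply assumes each user receives an independent noisy signal of the true earnings figure, and that a herding user posts a weighted average of their own signal and the running consensus (the weight `w` is an arbitrary choice here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_trials = 50, 20000
sigma = 1.0   # noise in each user's private signal
w = 0.5       # arbitrary weight a herding user puts on their own signal
truth = 0.0   # true value; errors are measured against it

ind_err = herd_ind_err = 0.0          # average per-user squared error
blind_cons_err = herd_cons_err = 0.0  # squared error of the consensus (mean)
for _ in range(n_trials):
    signals = rng.normal(truth, sigma, n_users)

    # "Blind" platform: each user posts only their private signal.
    ind_err += np.mean(signals ** 2)
    blind_cons_err += np.mean(signals) ** 2

    # Herding: each user averages their signal with the running consensus.
    forecasts = [signals[0]]
    for s in signals[1:]:
        forecasts.append(w * s + (1 - w) * np.mean(forecasts))
    forecasts = np.array(forecasts)
    herd_ind_err += np.mean(forecasts ** 2)
    herd_cons_err += np.mean(forecasts) ** 2

print(f"individual MSE: blind {ind_err / n_trials:.3f}, "
      f"herding {herd_ind_err / n_trials:.3f}")
print(f"consensus MSE:  blind {blind_cons_err / n_trials:.3f}, "
      f"herding {herd_cons_err / n_trials:.3f}")
```

Averaging in the consensus shrinks each individual’s error, but it also makes later forecasts overweight the earliest signals, so the crowd’s average carries less independent information than a simple mean of private signals: individual accuracy improves while consensus accuracy degrades, matching the pattern the researchers describe.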
Results covering data from March 2012 to June 2015 were so stark that by October 2015 Estimize had changed its platform to prevent users from seeing other users’ estimates before posting their own. “We were floored by the results,” the Estimize blog reported. “The ‘blind’ data set was unequivocally better.”
“We were also quite lucky to collaborate with Estimize to run experiments where we can randomize the information sets of users,” Huang said.
The researchers used data from 2,516 Estimize users who made estimates ahead of 2,147 earnings releases from 730 firms. But Huang said the results could be instructive for any site that aggregates crowd wisdom—including voting platforms, crowd-funding sites, or product review pages—provided it can separate individual views from those of the community at large.