The Earnings Whisperers

In a world where aggregated content is king, earnings estimates are now officially part of the craze. Meet Estimize, a startup that lets anyone post their own estimates come earnings season.

Of course, this has been around for a while in the form of "whisper numbers," the unofficial but widely relied-upon earnings figures by which companies are judged. Apple may have a blowout quarter, but if it doesn't meet the whispers... well, hope you owned puts. Surprising or otherwise, the crowd over at Estimize seems to be pretty good at what it does...


In the first four months of 2012, the site’s earnings forecasts for 160 reports were more accurate than those by Wall Street analysts 63 percent of the time. “We are able to put forth the guys who are actually the best analysts,” says Drogen, “vs. the guys who are just the loudest or have the biggest names.”

Obviously, this probably doesn't come as a surprise, considering many of the people who sign up are professional analysts anyway. It does seem to be a continuation of a trend toward democratizing financial research and further minimizing the need to pay for sell-side coverage (or at least to rely wholly on it). What do you guys think? Could sell-side analysts be falling by the wayside? Do those of you currently in the industry scoff at the idea that aggregated opinions could beat out all the hard work you put in?

http://www.businessweek.com/articles/2012-04-23/e…

 

It isn't really surprising that they are more accurate; it is simply statistics. The larger the sample size, the more likely the mean is close to the true value, especially if a firm is covered by only two Wall Street firms versus 500 people on that site; then the effect is going to be huge. The question, of course, is about the quality of each estimate submitted to the site, and whether people simply took some research report's estimate and added a few cents.

 
KingEastwood:
It isn't really surprising that they are more accurate; it is simply statistics. The larger the sample size, the more likely the mean is close to the true value, especially if a firm is covered by only two Wall Street firms versus 500 people on that site; then the effect is going to be huge. The question, of course, is about the quality of each estimate submitted to the site, and whether people simply took some research report's estimate and added a few cents.

But the sample is not unbiased. The site doesn't randomly call 1,000 people off a list of phone numbers across the US and ask them for earnings estimates; 500 people self-selected and submitted their estimates. So what is surprising is that a large biased sample was more accurate than a small biased sample, though there is no reason (using simple statistics) it should be.
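The distinction can be sketched with a toy simulation (all numbers hypothetical): suppose each estimator sees the true value plus a bias shared by the whole self-selected group, plus independent noise. Growing the sample averages away the noise, but not the shared bias.

```python
import random

random.seed(0)

TRUE_EPS = 1.00   # hypothetical "true" earnings per share
BIAS = 0.05       # systematic optimism shared by every self-selected estimator

def sample_mean(n, bias):
    """Average of n estimates: true value + shared bias + independent noise."""
    return sum(TRUE_EPS + bias + random.gauss(0, 0.10) for _ in range(n)) / n

# Larger n shrinks the noise, but the shared bias survives:
for n in (2, 50, 5000):
    print(f"n={n:5d}  biased mean={sample_mean(n, BIAS):.3f}  "
          f"unbiased mean={sample_mean(n, 0.0):.3f}")
```

By n = 5,000 the biased mean settles near 1.05, not 1.00: the law of large numbers drives a sample mean to the population mean of whatever was sampled, bias included.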

 
Best Response
Khansian:
But the sample is not unbiased. The site doesn't randomly call 1,000 people off a list of phone numbers across the US and ask them for earnings estimates; 500 people self-selected and submitted their estimates. So what is surprising is that a large biased sample was more accurate than a small biased sample, though there is no reason (using simple statistics) it should be.

Randomly selecting 1,000 numbers across the US and asking for earnings estimates introduces another bias: stupidity (OK, I'll be nice and say ignorance of the knowledge domain being questioned, but it's simpler to call someone stupid if they don't know something). Not to mention the time of day contact is made, whether home/business/mobile numbers are used, the age of people who still have the type of phone connection you decide to call, the questioning ability of the interviewer, and the rare chance that the person who answered has just been hassled by his step-daughter's first cousin's second-best friend about why said step-daughter can't go to a party, and is therefore more likely to feed you erroneous data, because if someone else has ruined his day, why shouldn't he do the same to you?

Random selection without first confining your domain in question (i.e., knowledge of macroeconomics, business strategy, industry movements, and so on) will introduce bullshit data. You may as well randomly select 1,000 people, even the men, and ask them to rate their average period pain on a scale, if you think random selection should preclude constraining your research domain.

So of course a larger sample of people within the knowledge domain is going to be more accurate than a smaller sample from the same domain, due to the law of large numbers. So I would have to disagree: it is not surprising that it is more accurate.

And before anyone rants about the bias introduced by including only internet users (because, let's face it, the interweb and the google machine are incredibly difficult to master), remember that a phone-based survey introduces its own set of biases that must be overcome before it can be considered 'random'.

Please don't invoke the infinite monkey theorem when attempting to describe randomness in sampling.

 
iValueValue:

Randomly selecting 1,000 numbers across the US and asking for earnings estimates introduces another bias: stupidity (OK, I'll be nice and say ignorance of the knowledge domain being questioned, but it's simpler to call someone stupid if they don't know something). Not to mention the time of day contact is made, whether home/business/mobile numbers are used, the age of people who still have the type of phone connection you decide to call, the questioning ability of the interviewer, and the rare chance that the person who answered has just been hassled by his step-daughter's first cousin's second-best friend about why said step-daughter can't go to a party, and is therefore more likely to feed you erroneous data, because if someone else has ruined his day, why shouldn't he do the same to you?

Random selection without first confining your domain in question (i.e., knowledge of macroeconomics, business strategy, industry movements, and so on) will introduce bullshit data. You may as well randomly select 1,000 people, even the men, and ask them to rate their average period pain on a scale, if you think random selection should preclude constraining your research domain.

So of course a larger sample of people within the knowledge domain is going to be more accurate than a smaller sample from the same domain, due to the law of large numbers. So I would have to disagree: it is not surprising that it is more accurate.

And before anyone rants about the bias introduced by including only internet users (because, let's face it, the interweb and the google machine are incredibly difficult to master), remember that a phone-based survey introduces its own set of biases that must be overcome before it can be considered 'random'.

Please don't invoke the infinite monkey theorem when attempting to describe randomness in sampling.

Ok, you're putting way too much emphasis on my [admittedly] simplistic description of random sampling. I am aware of those issues with sampling and was not suggesting that calling numbers off a list was a true random sample. I was just drawing a contrast between the self-selected sample of analysts and that relatively unbiased sample.

And no one is saying that drawing consensus from experts is stupid. I never suggested that asking nonexperts was a better strategy. My point was that your 'this is simply statistics, blah blah blah, law of large numbers' explanation did not apply.

And there are studies on expert consensus showing that the quality of the advice stops improving, and in certain instances even deteriorates, once you get upward of 20 experts. Of course it depends on the subject we're talking about and a whole bunch of other things, but your point about the law of large numbers applying to expert consensus is simply wrong. And it is even more wrong in this case, where you have absolutely no clue who these analysts are.

I mean, what mean exactly are the estimates tending to as N gets large? The average estimate of that group of analysts. Was that group randomly selected from the larger population of analysts? No!

And you're assuming that analysts' estimates as a whole are normally distributed around the true value, which is not necessarily the case. Think back to high school chem and the difference between accuracy and precision in experimentation. You can run a replicable experiment a bazillion times with good precision, getting close to the same result again and again with some error. Does that mean it is the 'true result'? No. Your assumptions and the very questions you're asking may be wrong.
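The accuracy-vs-precision point fits in a few lines of Python (numbers purely illustrative): a tightly clustered set of estimates has a tiny spread (high precision), yet its consensus can sit far from an assumed true value (low accuracy).

```python
import statistics

# Hypothetical numbers: five analyst estimates that are tightly clustered
# (precise) but all anchored on the same wrong assumption (inaccurate).
true_eps = 1.20
estimates = [0.98, 0.99, 1.00, 1.01, 1.02]

consensus = statistics.mean(estimates)   # 1.00 -> tight cluster...
spread = statistics.stdev(estimates)     # ~0.016, tiny scatter
error = abs(consensus - true_eps)        # ...but 0.20 off the mark

print(f"consensus={consensus:.2f}  spread={spread:.3f}  error={error:.2f}")
```

Averaging more estimates from the same cluster would shrink the spread further without moving the consensus any closer to 1.20.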

The democratization of expertise is well documented, and we're seeing it succeed in a lot of different areas. Open-source computing is a related phenomenon that creates amazing things. But to describe it as a result of the law of large numbers is ridiculous. As if every programmer's work were simply a data point with some random error around the true mean of perfect software, and Linux allowed tons of programmers to come together and is now approaching that 'true mean' of perfection.

Rather, open source and the democratization of knowledge expose us to more creativity and more diverse knowledge and viewpoints. These can unlock tidbits of information that were unknown to the small group of hotshot experts. It does not mean that everyone holds the best idea of the truth, just off by a random error, such that aggregating opinions cancels the error out. In some cases that may be true, but definitely not here.

 
