Machine Learning Taking over HF research analyst roles in near future?

thehalifaxgroup
Rank: King Kong | 1,484

I recently read an interesting article about machine learning taking over research analyst roles within the next 5 years or so.
I also spoke recently with the CFO of a multi-billion-dollar pension fund, and he supported this idea: that within the next 5 years, the majority of research analyst positions on Wall Street (at least in public securities) will simply be taken over by computers, putting those analysts out of work. What is your perspective on this? Machines can obviously crunch numbers faster and spot patterns and trends far more effectively.

Do you believe that the era of the research analyst will come to an end by 2030? Does it make any sense now to enter asset management or the hedge fund world (on the public securities side)? If so, where should someone interested in fundamental research look for opportunities?

Comments (38)

Mar 10, 2018
Mar 11, 2018

How long do you think it will be before machine learning makes the research analyst irrelevant?

Or, put another way, at what point do you think ML will cull the analyst pool down to half, or even a third, of its current size?

Mar 14, 2018

Great post. I'm going to be honest though... I had to Google "orthogonal".

Mar 11, 2018

I see this stuff as 99.9% breathless hype from people who know neither finance nor comp sci. "Analyst" isn't some single skill that can be brute-forced. A computer that could perform the wide variety of tasks required would need to be close to generalized AI, and that is nowhere near existing.

Mar 12, 2018

+1 SB. How do you feel about neural networks? Essentially, these are hundreds of layers of reasoning and connections in a machine, designed to simulate the complexity of the human brain.

Do you think that new research analysts can still make a full career out of AM/HF in public equities for the next 30 years, at least until 2050? (I understand this question is challenging to answer.)

Mar 14, 2018

Neural networks are not "like the human brain". It would be hard to make that assertion because... we really don't know how the human brain works.

I can't possibly answer that last question, so I won't even try.

Mar 12, 2018

No, it will not make them irrelevant. It may make certain types of strategies irrelevant (or at least less relevant) but I am fully confident that human investors will learn and adapt faster than any AI we will create by the year 2050.

    • 1
Mar 13, 2018

+1 SB, thanks for responding. What kind of strategies do you feel will be less relevant? Do you think fundamental value, activism, or long-term GARP strategies will be diminished?

Mar 13, 2018

At most, the roles will change. New technology will replace or augment certain aspects of the research analyst role; it'll just make researchers more efficient. I think AI could help with narrowing down the number of companies to investigate.

Mar 14, 2018

I'm a data scientist at a BB and I write algorithms . . . that's all I have to say about that . . .

I think the general perceptions being expressed here are true. Algorithms may not be as advanced as people make them out to be, and there are definitely all kinds of gaps (knowledge gaps, data gaps, hardware gaps, etc.).

However, what all of you should be concerned about is the speed at which ML can learn and adapt to new information. THAT is where algorithms will truly take off and make an impact across all kinds of industries. The key to any algorithm is the depth and variety of data you can feed into it: sparse data leads to weak learning; deep data leads to better learning. And when an algorithm has a lot of data, watch out, because it can learn faster than any human can hope to.

If you really want to get your mind around what ML can do, go find the whitepaper written by the Google/DeepMind researchers who solved the game of Go last year. No, don't read the fucking CNN article on it; read the actual whitepaper. The researchers describe how they gave their algorithm very little seed information and then set it loose on Google hardware to play millions and millions of games against itself in very little time. This kind of ML development is where the world is headed: small algorithms, given enough time and data to learn, become big algorithms.
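To make the self-play idea concrete, here's a toy sketch (my own illustration, not anything from the paper): an agent learns tic-tac-toe position values purely from games it plays against itself, with no seed knowledge beyond the rules. All the function names and parameters are invented for the example.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

V = {}  # learned value of a position, from X's perspective (0.5 = unknown)

def pick(b, player, eps, rng):
    """Pick the move whose resulting position looks best for `player`."""
    if rng.random() < eps:  # occasional random exploration
        return rng.choice(moves(b))
    def score(m):
        v = V.get(b[:m] + player + b[m + 1:], 0.5)
        return v if player == 'X' else 1 - v
    return max(moves(b), key=score)

def self_play(games=20000, alpha=0.2, eps=0.1, seed=0):
    """Generate our own training data by playing against ourselves."""
    rng = random.Random(seed)
    for _ in range(games):
        b, player, visited = ' ' * 9, 'X', []
        while True:
            m = pick(b, player, eps, rng)
            b = b[:m] + player + b[m + 1:]
            visited.append(b)
            w = winner(b)
            if w or not moves(b):
                target = 1.0 if w == 'X' else 0.0 if w == 'O' else 0.5
                break
            player = 'O' if player == 'X' else 'X'
        for s in visited:  # Monte Carlo update toward the final outcome
            V[s] = V.get(s, 0.5) + alpha * (target - V.get(s, 0.5))

def win_rate_vs_random(games=500, seed=1):
    """Play the trained greedy agent (as X) against a random opponent."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(games):
        b, player = ' ' * 9, 'X'
        while True:
            m = pick(b, 'X', 0.0, rng) if player == 'X' else rng.choice(moves(b))
            b = b[:m] + player + b[m + 1:]
            if winner(b) == 'X':
                wins += 1
            if winner(b) or not moves(b):
                break
            player = 'O' if player == 'X' else 'X'
    return wins / games
```

The point of the sketch: nobody hands the agent a single example game. It manufactures its own data, and the more games it grinds through, the better the value table gets.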

So, coming back to the OP: it may be easy to pooh-pooh algorithms in their current state, I get that. But once they have enough data, they will grow into something huge almost literally overnight.

    • 2
Mar 14, 2018

Ok, yes, Google creating an algorithm that beats the best Go players is nothing short of amazing. But there is a HUGE caveat here, which I'm sure you're aware of: Go is a perfect-information game. What is a perfect-information game? It is a game where all of the information relevant to making the optimal move or decision is directly in front of you.

Unfortunately, real life isn't like board games. The information you have is imperfect, and you don't have access to all of the information relevant to you. This is where ML/AI truly falls apart in its current state. The human brain (and other animal brains, too) is phenomenal at modeling unknown, hidden, or imperfect information and making accurate inferences nonetheless.

It is super cool that the researchers at Google gave their algorithm very little a priori information on how to play the game (unlike IBM's Deep Blue in the '90s). However, the fact that Go and most other board games are perfect-information games makes the feat pretty much incomparable to the problems that humans solve at work on a daily basis.
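To illustrate what "perfect information" buys a computer: in a small game like tic-tac-toe, the current position is the entire state of the world, so a few lines of memoized minimax can compute provably optimal play. A minimal sketch (my own toy example, not from any paper):

```python
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def solve(board, player):
    """Exact game value for X with `player` to move: 1 win, 0 draw, -1 loss.

    Brute-force search works only because the board string contains
    *all* the relevant information; there is nothing hidden to model."""
    for i, j, k in WINS:
        if board[i] != '.' and board[i] == board[j] == board[k]:
            return 1 if board[i] == 'X' else -1
    empties = [i for i, c in enumerate(board) if c == '.']
    if not empties:
        return 0  # full board, nobody won: draw
    nxt = 'O' if player == 'X' else 'X'
    values = [solve(board[:m] + player + board[m + 1:], nxt) for m in empties]
    return max(values) if player == 'X' else min(values)

# Perfect play from the empty board is a draw:
print(solve('.' * 9, 'X'))  # -> 0
```

In a market there is no analogous `board` string: the "state" includes hidden positioning, private information, and the reactions of other participants, so nothing like this exhaustive solve is even definable.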

    • 3
Mar 14, 2018

That's only partially true. Other games are being beaten too, like poker. Heads-up limit poker is essentially solved, and heads-up no-limit has now been effectively beaten. Next up will be no-limit, multi-player poker - I expect that to fall in the next 2 years or so.

But really, it's not about perfect vs. imperfect information games. Go was solved because the algorithm could create its own database of games and knowledge, in literally the blink of an eye. Someone in this thread mentioned that humans are still used for quant interviews - a deliciously ironic situation, I get it. But that's just a problem that lacks data; the depth isn't there yet. When it is there, humans will be eliminated from that process overnight. The same thing will happen across other industries.

Mar 14, 2018

I find it quite amazing how some journalist with nearly no experience in finance or comp sci can be so confident in saying that ML will make research analyst roles obsolete. Come on - there are industry professionals, with far more inside information than you could possibly imagine, who take far less binary stances on topics like these. (I understand a lot of it has to do with clickbait article titles to generate site traffic and whatnot.)

Will ML change or affect the role of an analyst in the next few years or decades? Perhaps.

Could it make an analyst's role obsolete by 2030? Again, possibly. But is that really going to make you completely reconsider your career trajectory? Doing so every time someone writes an article like this would be crazy.

On a less "complainy" note, I strongly agree with dazedmonk that what the author keeps referring to sounds less like ML and more like AI. Plus, I can't imagine anyone would feel comfortable with a computer making investment decisions (it's sort of hard to assign blame/responsibility).

    • 1
Mar 15, 2018

It will be a tool, but a tool used for decision making, I think. It removes the cognitive biases that lead to poor decision making by humans.

A.I. is already showing promise at diagnosing disease better than "experts," probably because its decisions are based on objective stats rather than some convoluted human mind filled with past experiences.
The same can apply to investment decisions. It will still need a human touch for some data points that aren't as "hard," though.

Mar 15, 2018

Yeah, I definitely agree that the most likely outcome is that ML becomes a useful tool to blaze through the more mechanistic analyses.

That article is really interesting. I do believe that diagnosing a disease is far different from actually treating it / doing surgery / acting on it. Comparatively, we could consider the "diagnosis" in banking to be the research. Sure, ML/AI can do the research, but at the end of the day I believe we will need humans to actually "do the surgery".

And yeah, as you and others have pointed out above there are definitely a lot of soft skills necessary in this line of work as well.

Jun 28, 2018

Your last sentence is very significant and is further explained here.
Also, not to split hairs, but ML and AI are often used interchangeably, even though strictly speaking ML is a subset of AI.

Jun 28, 2018

Interesting read. I totally agree that for AI to really shine, it needs to be able to explain its decisions. I've seen Elon Musk come out and "explain" what caused the AI to fuck up in the Autopilot accidents. Whether that is the AI itself explaining its actions, or analysts looking at recordings of the accidents and coming to their own conclusion that XYZ happened, seems to be unclear.

Mar 16, 2018

The big difference between Go or poker and the markets is that in the markets, the rules change based on how the participants act. If there were some master AI strategy that always produced alpha every year AND had massive capacity, it would take over the market and, by definition, stop working once it got so big that it had no more dumb money to trade against.

If everyone in the market is a quant/AI, that still doesn't change the fact that it's a zero-sum game, and after fees most of these players will underperform the index. There could be a scenario where quant strategies consistently beat humans; in that world, they would take fund flows up to the point where they are too big to keep outperforming, and the pendulum would swing back toward human fundamental strategies (again, because no one can run a capacity-unconstrained strategy that consistently outperforms in any market). Saying that 100% of the market goes quant means nothing, because it doesn't change the amount of alpha out there - there will always be losers to offset the winners.

Best Response
Mar 18, 2018

I generally agree with what has been posted here. There's a ton of hype about machine learning, but I believe it's highly unlikely algos will "take over" investing.

The issue is the overwhelming scarcity of useful data for a computer to use. The reason machine learning dominates short-term stock-market trading (HFT, intra-day, etc.) is that the relevant data for figuring out what moves stocks over short time horizons are things like stock momentum, correlations across stocks, order book data, trade sizes, etc. There are billions (trillions?) of relevant datapoints available, in well-formatted data-sets, for quants to use to develop trading strategies around. It's a near-ideal environment for machine learning approaches to investment. But it does not generalize at all to longer time horizons. And even in short-term trading, approaches tend to be about gaining "51/49-style" edges on the next move in a stock, not predicting big winners/losers.

For longer-term investments, there are definitely useful quant signals, arbitrages, etc. (such as the classic "value" and "momentum" strategies), but most of the information that is actually relevant to how a stock should be valued, and how it should perform over time, is sparse and poorly structured. Think about what tends to matter to stock market investors (and what moves stocks): financial releases, earnings calls, meetings with management, management presentations at conferences, diligence calls with industry experts, etc. None of this data is well set up for machine learning. Firms release four (badly formatted, inconsistently presented) sets of financial statements each year. Huge amounts of relevant data are found not in the actual financials but in commentary on the earnings call, or in non-GAAP, company-specific operating measures in the MD&A of the 10-Q/K or the press release. And then there is the way management's tone and attitude change from meeting to meeting, the industry color you get from your diligence calls, things like that. And there are also only a couple thousand public companies releasing such data points, which isn't all that much from a machine learning perspective.
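For a sense of what those "classic" quant signals look like, here's a toy cross-sectional momentum ranking. The tickers and prices are made up for illustration:

```python
# Hypothetical price histories (oldest to newest); tickers are invented.
prices = {
    "AAA": [100.0, 110.0, 121.0],
    "BBB": [50.0, 45.0, 54.0],
    "CCC": [200.0, 200.0, 190.0],
}

def momentum(series):
    """Trailing total return over the window: last price vs. first."""
    return series[-1] / series[0] - 1.0

scores = {t: momentum(p) for t, p in prices.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # -> ['AAA', 'BBB', 'CCC']
```

A real momentum strategy would go long the top-ranked names and short the bottom across thousands of stocks, but the point here is just that the signal is a mechanical function of clean, well-structured price data - exactly the kind of input machine learning handles well, and exactly what earnings-call color and management tone are not.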

One can imagine a computer synthesizing such information, yes, but then you're getting into the realm of science fiction and true AI, not machine learning algorithms as they exist today. That being said, there's lots of room for AI to simplify things for analysts: generating financial models from reported financials, highlighting changes to the risk factors in the 10-K, creating a readable summary of the key portions of a 10-K, identifying "risk phrases" in earnings calls. A lot of this already exists, though I can easily imagine improvements. But this isn't the sort of thing that replaces analysts in the foreseeable future.
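As a flavor of that "already exists" category: flagging risk phrases in a transcript can be as crude as a keyword scan. A deliberately naive sketch - the phrase list and sample sentences are invented, and a production system would use a trained classifier rather than a hand-written watch-list:

```python
import re

# Hypothetical watch-list of phrases an analyst might care about.
RISK_PHRASES = ["headwinds", "going concern", "covenant", "impairment",
                "restatement", "weaker than expected"]

def flag_risk_sentences(text):
    """Return the sentences containing any watch-list phrase (case-insensitive)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    pattern = re.compile("|".join(map(re.escape, RISK_PHRASES)), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

call = ("Revenue grew 4% in the quarter. We saw FX headwinds in Europe. "
        "Margins were flat. We recorded a goodwill impairment in Q3.")
print(flag_risk_sentences(call))
```

This kind of tool saves the analyst time skimming; it does not decide what the flagged sentences mean for the investment case, which is the part that stays human.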

Taking a step back here, there are near-constant articles in the press about how machine learning is going to take over everything. I have a framework I like to fall back on to quickly assess the plausibility of such claims.

  1. Does the relevant (key word: RELEVANT) data exist at all to predict/accomplish a task systematically?
  2. Is there a vast (and I do mean: VAST) amount of this data, and is it available in some sort of structured/digitize-able format? You'd be amazed at how much data is needed for machine learning algorithms to work.
  3. Can more relevant data be generated and processed with relative ease? In board games, this is easy: a computer can just keep "playing" the game with itself over time until there's a huge database of games and strategies can "emerge," if you will. And even for self-driving cars, by putting more cars with sensors on the road, and prototype vehicles into play, it's possible to generate very rich data over time.
  4. Does the data have a relatively manageable number of relevant dimensions? The more types of data that are likely to be highly relevant/necessary for the analysis, the tougher it is to build a model.

If the answer to all four questions is yes, then there is a reasonable likelihood that the task can be automated. But always keep in mind that what we call "machine learning" isn't at all like "AI" in the popular imagination. The algorithms tend to do something akin to optimized trial and error: they throw variations of predictive models at a huge data set until they come up with something that looks like it "works." It's "dumb learning," in some sense. Getting any sort of result requires incredible amounts of data, and as the complexity of the problem rises, the amount of (relevant) data required for a useful model rises in a non-linear fashion (read: rises massively).
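The "optimized trial and error" point can be made literal with a crude sketch: generate a pile of candidate models, score each against the data, and keep whatever "works". Everything below (the synthetic data, the candidate family, the search budget) is invented for illustration:

```python
import random

rng = random.Random(0)

# Synthetic "data set": noisy samples of an unknown linear relationship,
# y = 3.0 * x + 2.0 plus Gaussian noise.
data = [(x, 3.0 * x + 2.0 + rng.gauss(0, 0.5)) for x in range(100)]

# "Throw variations of predictive models at the data": here the variations
# are just random slope/intercept pairs, scored by mean squared error.
best_err, best_params = float("inf"), None
for _ in range(5000):
    a, b = rng.uniform(-10, 10), rng.uniform(-10, 10)
    err = sum((a * x + b - y) ** 2 for x, y in data) / len(data)
    if err < best_err:
        best_err, best_params = err, (a, b)

a, b = best_params
# Blind scoring stumbles onto something near the true slope of 3.0,
# with zero "understanding" of why it works.
```

Real libraries search far more cleverly (gradients, regularization, cross-validation), but the flavor is the same: no comprehension, just relentless scoring of candidates against data - which is why the volume and relevance of the data are the binding constraints.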

In college I did a fair amount of machine learning coursework, though by now I've forgotten pretty much all of it. One thing that did stick with me, however, is that human intuitions about what is "hard" or "easy" to do don't line up well with what is "hard" or "easy" for a computer to do. And I've noticed that this tends to lead journalists, commentators, and hypesters to point to a computer's success at one task as evidence that success at another is right around the corner, even when the logistical issues involved in gathering and using the relevant data are radically different.

Put another way: imagine if you gave a digital calculator from the 1970s to a man from the 18th century, and explained to him that it was a form of machine intelligence. Imagine his shock as it performed calculations instantly that would take him hours, and performed them flawlessly. What would his understanding of machine intelligence be? He might well assume that machines in our era could easily perform any tasks that humans could do, or that such a development must be right around the corner, given the impressive power of this machine. He would have no intuition at all about its limitations, or the limitations of the era's technology. Today's public commentary on machine learning feels a bit like this to me.

    • 10
Jun 29, 2018

Generally agree..

The AI techniques we use today are specialised and siloed to specific tasks. We're a very far cry from generalised artificial intelligence that can combine all of these siloed abilities into one superhuman entity.

That's more the realm of "the singularity" than today's reality.

Mar 18, 2018

I think the consensus is that Machine Learning and other high level computation techniques will greatly aid in certain tasks. However, any claims that such technologies will totally overtake technical jobs are usually made by those with little to no understanding of what the underlying techniques are...

Mar 18, 2018

Too long to get into on WSO, but I'm talking with a few quant funds now, and the interesting question is whether quant funds kill each other's returns by reading and trading the same patterns, especially as AUM increases.

Even if machine learning were in full effect today, it would simply push active management toward more illiquid names or activist strategies. It's near impossible for a computer to take a 2-5 year position on a conglomerate that is looking to break itself up. Activist strategies are also where I see some active management moving as Page 1 and Page 2 shareholders get anxious given fee compression.

    • 1
Mar 18, 2018

Fundamental analysts will never go away, given the structural complexity and messiness of financial data, information, companies, corporate events, people, etc. There will always be a need for analog thinking where a purely digital approach wouldn't suffice.

However, what I think will happen, and what is already happening, is that funds and strategies that rely on purely fundamental perspectives and techniques will be at a distinct disadvantage. The fundamental space will become more quantamental, as computing power is used as a powerful tool to augment fundamental analysis.

I think it was Garry Kasparov who said that the best chess player isn't going to be a computer or a person, but a combination of the two working hand in hand.

    • 3
Jun 19, 2018

Machine learning is really making a difference in the tech space, and machines are increasingly adapting to make their own decisions, much like humans. Computers are definitely making a difference and reducing manpower in different fields; no doubt machines are much faster than humans. That said, the era of the research analyst isn't going to end until machines can develop themselves on their own. There are still numerous opportunities for research analysts, no doubt.


Jun 29, 2018

Spoke to a CFO at a $1B+ fund, and he said that some analysts are relying on ML for data analysis - more like using it side by side to help inform some of their decisions. He said that although he doesn't see it fully taking over analyst roles, there definitely is a huge possibility for an intersection between the two in the future.

Sep 6, 2019

So if you look at this, machine learning can definitely do some impressive things to support the work. But coming up with the questions and the business rationale is still a long way down the road, from my perspective.

  • Quant in HF - EquityHedge
Sep 8, 2019

I'm a data scientist at a MM that helps fundamental research analysts automate their work to the extent possible, as well as find alternative data sources that they couldn't otherwise get their hands on (or that come in a form incompatible with their typical analysis tools). Some ML makes it into the picture occasionally, but that's just part of the job.

"Augment" is the word here. That's pretty much all I can summarize it as. Things will change, but jobs aren't disappearing tomorrow or even in 5 years. Teams will be more data oriented and look a little different.

Just my own prediction so take with a grain of salt! Sample size of 1 ;)

    • 1
Sep 9, 2019