Could AI beat investors?

I was playing around with ChatGPT when it appeared, and it's truly interesting how well it can synthesize information, especially when you want to get an overview of a company, calculate an IRR, or even run a DCF. So I went a little further and asked it, for example, "Evaluate ExxonMobil (stock symbol: XOM) as Warren Buffett would, based on publicly available information on his investment philosophy, and decide if it's a buy or not", and, unsurprisingly, it said "Error".
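
For anyone curious what those two calculations actually involve, here's a minimal Python sketch on made-up cash flows (nothing to do with XOM, and not ChatGPT output, just the arithmetic):

```python
# Minimal DCF/IRR sketch on hypothetical cash flows; the numbers are purely illustrative.

def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """IRR via bisection: the discount rate at which NPV equals zero."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid          # sign change in [lo, mid] -> the root is there
        else:
            lo = mid          # otherwise the root is in [mid, hi]
        if hi - lo < tol:
            break
    return (lo + hi) / 2

flows = [-1000, 300, 350, 400, 450]   # hypothetical project: outlay today, four inflows
print(f"NPV at a 10% discount rate: {npv(0.10, flows):.2f}")
print(f"IRR: {irr(flows):.2%}")
```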

So, I thought that at the moment, obviously, AI was useless in investing, but what about the future?

  • Arguments against: It can't internalize information to weigh its importance and decide how valuable it is (or not).
  • Arguments for: It can go through a lot of information at light speed. For example, if an academic paper says "Usually, after spin-offs, stock prices tend to be undervalued", or "After negative news on the S&P, the price tends to be down for 3 days but then it stabilizes", etc., then even though it's an AI that doesn't understand investment psychology, it can read behavioral finance papers and apply what those studies concluded (see the rough sketch after this list). So overall, if AI can go through 10,000 papers on behavioral finance in seconds, I see no way for someone to compete with that. And I'm using behavioral finance as the example because, if AI can invest based on behavioral trends, there's no point even bringing up valuations or more mathematical or technical things.
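
A rough sketch of what that could look like in practice, with completely made-up rules and event data (the two "findings" below are just the examples from the bullet above, not actual paper results):

```python
# Hypothetical sketch: encode conclusions pulled from behavioral-finance papers as
# simple rules and apply them to a feed of events. Rules and data are illustrative only.

from dataclasses import dataclass

@dataclass
class Event:
    ticker: str
    kind: str          # e.g. "spin_off", "index_negative_news"
    days_since: int

# Each rule: documented tendency, a holding window, and the direction of the tilt.
RULES = {
    "spin_off":            {"window_days": 90, "signal": "long"},   # "spin-offs tend to start undervalued"
    "index_negative_news": {"window_days": 3,  "signal": "avoid"},  # "down ~3 days, then stabilizes"
}

def apply_rules(events):
    """Return the rule-based signal for each event still inside its window."""
    signals = []
    for ev in events:
        rule = RULES.get(ev.kind)
        if rule and ev.days_since <= rule["window_days"]:
            signals.append((ev.ticker, rule["signal"]))
    return signals

events = [Event("ABC", "spin_off", 10), Event("SPY", "index_negative_news", 1)]
print(apply_rules(events))   # [('ABC', 'long'), ('SPY', 'avoid')]
```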

For the chess aficionados out there, I see the current version of ChatGPT (and what competitors are trying to develop) as the earliest version of Deep Blue, which average players could still beat. But with enough further development and improvement, it could end up beating us with no way of ever going back (in the same way that no one can beat a computer at chess anymore).

Besides the behavioral finance example above, assume it can also interpret macroeconomic trends based on research papers, invest and hedge based on game theory formulas, etc. Again, I see no way for someone to compete with all of that if it ends up happening.

 

*Mr A2D has entered the chat*

They've already shown you can break and manipulate ChatGPT. Look up DAN and some of the other methods that've been proven. So that means if I were a competing ER/IR member I could throw a wrench in the cogs and get it to create a competing and antagonistic output to your generic query. Not to mention that if I'm a consumer of these reports and I'm at least halfway competent I'll start putting more and more skepticism on anything that comes from ChatGPT. And I'll start learning what to look for to see if it's actually hand written versus some AI drafting it. It'd be the financial version of having to become a Blade Runner, hunting replicants. Hell, we already have to do that when 2/3 of news stories are just copy pasted versions of an actual source so they dilute their credibility from the jump.

Of course, they can keep honing it as best they can. But my money's still on the human mind because an AI has to follow rules. You and I? We can pull a Joker and mess things up on purpose just to see everyone else scramble (don't go putting explosives in a schizo to blow up a police station though, please). And despite what some may say, we'd still be rational actors. Acting in what we think is our own best interest with recognition of possible consequences and outcomes.

Edit: Let's not forget that MSFT is a chief investor. And what happened last time they were involved in AI? It got trained into becoming something that rhymes with yahtzee.

The poster formerly known as theAudiophile. Just turned up to 11, like the stereo.
 

These jailbreaks are just an early-stage-of-the-tech thing tho. Don't forget that ChatGPT is still a prototype, not to mention the numerous stronger AIs in development to replace it. The developers have already patched many of the jailbreaks (I don't know if DAN is still available), and it's become way more formalized. But that makes things more boring too.

 

trying_my_best

These jailbreaks are just an early-stage-of-the-tech thing tho. Don't forget that ChatGPT is still a prototype, not to mention the numerous stronger AIs in development to replace it. The developers have already patched many of the jailbreaks (I don't know if DAN is still available), and it's become way more formalized. But that makes things more boring too.

Like I said, they can keep honing it and trying to improve it (and they should). But as things stand right now, that was my take on things. As far as becoming more boring, I think that speaks to some points our friend from Baltimore PD's The Wire mentioned below (seriously, you had to post that while I'm actually watching an episode of the show? I swear if this is you Stonks1990 trolling me...)

The poster formerly known as theAudiophile. Just turned up to 11, like the stereo.
 

I think AI is still a bit of a buzzword and there's a lot more hype/marketing around it than results. I'm not saying that what things like ChatGPT have accomplished isn't incredible; it's something I would not have deemed plausible 10 years ago. At the same time though, I see AI and the like mainly as super powerful calculators that will enhance our understanding and allow us to perform calculations/information gathering extremely quickly, but it'll still be up to the human to have final judgment on what's worthwhile vs noise and what the end decision should be. I'm not an engineer or neuroscientist, but my understanding so far is that AI is only as powerful as the humans who train it, which means that if it's trained with noise/incorrect assumptions, AI will be useless regardless. For something as complex as investing, which has no set cookbook or one path to an "answer", I'd be extremely wary of relying on AI or funds that use AI to make picks. I think AI would be useful to gather data and glean some insights about said data, but that's where it would begin and end for me.

Side note, but I recommend folks read this article. I think the author makes a pretty good case about the potential limitations of AI and what it'll mean to be "human" in a more automated world: https://ianleslie.substack.com/p/the-struggle-to-be-human 

 

AM will most probs compress to an efficient point where the only active managers left in the market are the ones that actually add value. This doesn't necessarily have to be measured by generating skill alpha or beating a Vanguard ETF, but I'd say generating the less clean sources of alpha like structural alpha would be a minimum to stay in the market.

Right now there's way too much motherfucking capital chasing too little returns, and we see some real idiots running mutual funds. The top mutual fund manager of 2022 deadass advocated for yolo-ing in a few concentrated positions and fuck diversification, forgot his name. Probs like 99% of investment products are shit lol

This technological trend combined with the sea change of the end of cheap money will likely force out the laggards and whip everyone into shape. Tide goes out, see who's been swimming naked.

 

I'm always shocked how few people in finance are aware of the quantitative finance industry. There are quants doing things 100x more advanced than ChatGPT, both with AI and without. The strategy OP identified about scraping academic papers? Quants did that in 2006. At this point, quant ML algos are essentially writing their own papers, identifying patterns in the market and trading on them automatically. In fact, when a new quantitative trading strategy is discovered, it only takes about 2-3 years (depending on the strategy) for other firms to find the same pattern and any alpha gets arbitraged away. So quants are always on a treadmill of algo improvement and data differentiation.

 

The strategy OP identified about scraping academic papers? Quants did that in 2006.

Source on this? I'm skeptical that an AI way back in 2006 could understand papers anywhere near as well as a modern language model (and I'm not even sure ChatGPT could do it today), let alone actually book trades based on them.

 

ChatGPT is a language model, and is not designed for investing. Further, its creators likely also put a manual filter on it to prevent it from giving investment advice. So using ChatGPT to judge AI performance in investing isn’t really fair. 
To stick with your chess analogy, ChatGPT may be the Deep Blue of language / chat models, but the Deep Blue and beyond equivalents for investing already exist within quant firms. Algorithmic trading has existed for a while. I’m not that close to the hedge fund space, but my understanding is that a pretty high percent of AUM is already being traded algorithmically.  

Now, there are certain things AI just can’t do yet that pertain to investing, such as qualitatively judging a management team or a go-to-market strategy. To be honest, I suspect humans also largely overestimate their ability to do these things - the human mind is generally bad at predicting the future, but systematically overestimates its ability to do just that (read Thinking, Fast and Slow and other work by Kahneman for the research behind this). Still, it’s a potential source of edge that humans have and AI lacks.

I’m sure there are already methods for pairing human intuition about these things with AI-based evaluation. But the real question is, when can AI start to make “non-quant” judgements about investments at super human levels, without human assistance? And can a language model like ChatGPT help us get there? 

Personally, I don’t think it will take that long, and there may already be work like this going on in quant shops. It’s possible we are still far off from models that work in this manner. But I think anyone who thinks we will never get there is wrong.

Human thinking is all algorithmic at some level. Some domains - such as chess - rely heavily on computational ability, and therefore it’s comparatively easy to build AI algorithms that outperform humans in these domains. Other domains - judging a CEO’s leadership ability, for instance - utilize complex heuristic algorithms that the human brain can run only because it has been trained through millions of years of evolution to do so. It’s hard to recreate these types of human intuitions in AI currently, but there’s nothing fundamentally special about them that precludes AI from getting there eventually.

I find that this forum generally overestimates how irreplaceable humans are in finance professions. 

 

For long-term investors on 3+ yr time frames: not today, and likely not for the next few decades. One day it will, I would guess, but I don't see this as a risk affecting Analysts for some time yet. Perhaps associates get affected more as their relatively lower value-add skills (modeling, earnings takeaways, etc.) get automated away, but it's harder for LT Analysts to be affected over N10Y. Even then, if you're among the top 50% of LO Analysts, you've bought yourself another leg of time, as the bottom 50% will be culled first (with of course exceptions for tenure / politics / etc., so it won't be a perfect culling based on quality alone).

Let me put it this way -- I anticipate being able to do this profession until 2050. Anything beyond that is a bonus. Ideally 2055-2060 but I'm not sure if that'll be possible depending on how good exactly AI gets. Now it could be sooner as well, but regardless of profession I think you should try to max earnings (quality of life adjusted) and keep costs moderate / reasonable -- wouldn't go full cheapo and not enjoy life but I'd minimize the vanity purchases. Set yourself up to be as resilient as possible for the future in general but especially AI as we really don't know what that looks like LT

 

To be fair though, wouldn't the day AI can consistently beat decent-quality fundamental investors also spell doom for human jobs in general? Not to hype ourselves up, but I feel like alpha generation through differentiation is one of the most intuition-heavy and creative jobs out there (investing is an art, not a science, after all), and more mundane, process-oriented jobs will be automated long before ours is.

The question is when this AI singularity arrives and what will happen to the human race at that point in history. 

 

AI is a fancy word for 'optimization'. For example, ChatGPT is not an actual 'intelligence' in the sense that it has thoughts and answers us based on those thoughts. No, what it actually does is, given a prompt, calculate the most likely 'response' that should follow your prompt, based on the patterns in the data it was trained on. For an analogy of how this works, imagine that you as a person had no intelligence but you had access to Google. I ask you 'Evaluate ExxonMobil as Warren Buffett would do'. You go to Google and perhaps search things like 'Evaluate like Warren Buffett'. You may get some text about how Buffett has evaluated companies in the past. The text you found may say 'I invested in PepsiCo because it has a P/E ratio of X, which is Y points higher than the industry standard'. Then you would google 'What is the P/E ratio of ExxonMobil' and substitute that value for 'X'. Then you google 'What is the average P/E ratio in the oil industry' and substitute that for 'Y' (after calculating the difference). If you did that, you would produce something that has a very high probability of having been said by Warren Buffett. If instead you were lucky and found Buffett's actual evaluation of ExxonMobil, you might literally just use that as your response. However, it is unlikely that you will have perfect data for all questions.
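
To make that analogy concrete, here's a toy Python version of the lookup-and-substitute process described above. The P/E figures are placeholders, and this obviously isn't how ChatGPT works internally; it's just the analogy in code:

```python
# Toy version of the "skilled googling" analogy: look up a template sentence,
# fill in the numbers, and return it. All figures below are placeholders, not real data.

TEMPLATE = ("I invested in {company} because it has a P/E ratio of {pe:.1f}, "
            "which is {diff:.1f} points {direction} the industry average.")

# Stand-ins for what the analogy calls "googling" each individual fact.
LOOKUP = {
    "ExxonMobil": {"pe": 9.0},
    "oil_industry_avg_pe": 11.0,
}

def buffett_style_sentence(company):
    pe = LOOKUP[company]["pe"]
    industry = LOOKUP["oil_industry_avg_pe"]
    diff = pe - industry
    direction = "above" if diff > 0 else "below"
    return TEMPLATE.format(company=company, pe=pe, diff=abs(diff), direction=direction)

print(buffett_style_sentence("ExxonMobil"))
```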

In other words, ChatGPT optimizes the probability that its response is what follows your prompt. If you think about it, this has nothing to do with investing. When you pitch investment ideas, you don't do this based solely on what the IC expects you to say. ChatGPT would do that, which is insane. However, that is because ChatGPT is optimizing language. If you go inside a hedge fund like Citadel, you may find that they have hundreds of thousands, probably millions, of different optimization algorithms that do optimize for financially relevant information. In other words, AI has been used to beat investors for decades now. What may ChatGPT change? Well, maybe you ask an engineer to 'fuse' ChatGPT with your other optimization algorithms and then ask ChatGPT questions like 'Explain your results' or 'How would your results change if tomorrow the S&P500 dropped by 10%'. But ChatGPT alone is NOT a financial tool. It is a language tool. It just turns out that we use language to communicate financial ideas, and as such ChatGPT appears to be able to provide financial analysis when in reality it is just doing some very skilled googling. 

 

Asked your very same question, and got:

Warren Buffett, the legendary investor and CEO of Berkshire Hathaway, has a well-known investment philosophy that focuses on investing in high-quality businesses with strong and durable competitive advantages, at reasonable prices.

When evaluating a potential investment like ExxonMobil (XOM), the first thing to look at is the company's business model. ExxonMobil is one of the largest integrated oil and gas companies in the world, with operations spanning the entire energy value chain, from upstream exploration and production to refining and marketing. The company has a long history of operations, and is one of the largest publicly traded companies in the world, with a market capitalization of over $200 billion as of my training data cut-off.

Next, it's important to look at the company's financial performance. Over the past few years, ExxonMobil has struggled with declining revenue and profits, due to lower oil and gas prices, increased competition, and investments in new energy technologies. The company's debt levels have also increased, which could be a concern for some investors.

However, ExxonMobil still has a strong competitive advantage in its integrated business model and scale, as well as a well-established brand and a large, diverse asset base. The company has a history of paying dividends to shareholders, and has a relatively stable cash flow, which could make it an attractive option for income-seeking investors.

In terms of valuation, ExxonMobil's stock price has underperformed compared to the overall market in recent years, and the company currently trades at a lower price-to-earnings (P/E) ratio than the S&P 500. This could be seen as a potential opportunity for value-oriented investors, who believe that the company is undervalued and has potential for future growth.

In conclusion, while ExxonMobil faces challenges and headwinds in its industry, the company's strong competitive advantages, established brand, and potential for future growth could make it an attractive investment opportunity for value-oriented investors who are looking for a long-term investment. However, investors should carefully consider the company's financial performance and industry risks before making a decision. Ultimately, the decision to buy or not buy a stock is a personal one and should be based on an individual's investment goals, risk tolerance, and overall financial situation.

 

I don't think AI can beat investors, but in the investment process, AI and various technologies will definitely come in handy. By the way, for those who want to start investing, I can recommend the Koyfin platform. People say it's a great alternative to Bloomberg, and you can compare Koyfin's and Bloomberg's features yourself. For me, Koyfin definitely wins, at least on price. 

 

AI is hokum but will follow the crypto path of being the "next big thing". When building our fintech platform, we modeled various AI platforms for trade and investment analysis, which showed some numbers, very difficult to find, relating to context:

* The top supercomputers in the world can produce 1/240,000th contextual analysis of the human brain

* Quantum computing can produce 1/60th contextual analysis of the human brain

AI is the next level beyond automation for doing repetitive, known tasks, basically replacing inefficient people. What it can't do is understand context very well, which is what drives how money flows through the financial markets, hence why our $5mil platform has context built into the algos using non-compute-intensive methods.

Can AI beat investors? Sure, but it's been proven twice that monkeys can beat professional fund managers. All that will happen is the financial markets will adapt to make the context more complex, which is exactly what happened over the past few years. What does that mean for graduates? We tried training a few up by subsidising $100,000s in costs, but they failed the training, leaving behind a very large subsidy. AI will just replace what people are doing today, pushing them lower. Simple economics.

 

I've done some internal work with machine learning and AI. The problem with AI and ML in their current state is their inability to handle edge cases. Let's take self-driving cars for example. If the light is green, the AI knows it can go, and if it's red, it knows it can't. The problem comes when you have something like a four-way stop where one of the other cars has right of way but is waving the AI car through. In this example, the computer has to make an edge-case decision about whether to follow the traffic laws and wait for the person to go, or to go with permission. The AI has an extremely hard time dealing with these cases, which is why you can gaslight GPT into thinking it was wrong all along; it thinks it could've made a mistake on an edge case. AI in its current state can only really deal with pure facts and facts alone, which would make it good for things like building a DCF or using statistical analysis in investing. The problem comes when it has to make an edge-case decision about something. Say, for example, you have a company like Disney with their foray into streaming. The AI can likely create a model that projects the revenue potential of Disney+ based on potential market size and revenue per consumer (see the rough sketch below). What the AI can't account for are the factors that are more of an edge case. What if people get tired of streaming services? What if Disney loses its reputation as a solid movie producer after pumping out crap movies? There are so many what-ifs when analyzing a single company, and those what-ifs are where an AI will struggle. However, I still believe that an AI specifically trained on these edge cases will be able to outperform many investors, but that's still at least 10 years off. AI should be considered (as of now) a tool, not a replacement. 
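
For what it's worth, here's the kind of mechanical projection being described, as a tiny Python sketch. Every input is a made-up placeholder; the arithmetic is the easy part, and the edge cases live entirely in the assumptions:

```python
# Rough sketch of the mechanical projection described above:
# revenue = addressable market x assumed penetration x revenue per subscriber.
# All inputs are hypothetical placeholders, not Disney+ figures.

def project_revenue(addressable_market, penetration, monthly_arpu, months=12):
    subscribers = addressable_market * penetration
    return subscribers * monthly_arpu * months

# Hypothetical inputs for a generic streaming service
base_case = project_revenue(addressable_market=700e6, penetration=0.20, monthly_arpu=8.0)
bear_case = project_revenue(addressable_market=700e6, penetration=0.12, monthly_arpu=7.0)  # "streaming fatigue"

print(f"Base case annual revenue: ${base_case / 1e9:.1f}bn")
print(f"Bear case annual revenue: ${bear_case / 1e9:.1f}bn")
```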

 

That's an interesting observation, particularly the example of gaslighting.

Reading it, I thought about how some people could intentionally train the AI on misleading thought patterns that consequently give investors bad information or reinforce their biases. This would benefit whoever planted the misleading pattern, because they could take the contrary position to those investors, such as short selling when others are buying, etc.

This may be hard to imagine if those models are strictly used by large HFs, but I'm thinking about a scenario where AI is made available to the general public, which is less sophisticated but, due to its sheer number of users, can move the price of an asset considerably (for example, Robinhood offering a paid AI extension that helps with investing, etc.).

Buy land, 'cause God ain't making more of it.
 

I posted 9mo ago on this; my view today is that nobody really knows, BUT there's a lot of contextual understanding (how much to weight certain factors) that an AI simply can't replicate over a 1+yr time horizon. I'm not even sure it can do it over a 1+qtr time horizon for a while (it can't today); there's too much to process that isn't just 'raw data'.

That said, on the more obvious, academically documented inefficiencies, AI might turn out to replace some of those. Alternatively, you could argue it will still be quantamental, where it surfaces a pool of potentially interesting ideas before a human looks at them fundamentally and picks the best 5-20% (which should outperform AI on its own).

The bigger obvious threat to LO is passive... relative-return active strategies on the LO side are absolute BS in the US (and one could argue similarly for global funds). Where active does shine is international and EM, though those can't absorb nearly as much AUM on the active side as exists today. The passive share of AUM is almost mid-50s% on the domestic side (much less ex-US); that probably rises to 80-90% LT (as only 15% of domestic funds outperform their benchmarks). Ultimately the active side on a global basis probably gets cut in half (in % terms) over time. Would not recommend LO public as a career path knowing what I know today.

 
