How soon will ChatGPT impact IB/ER

I recently ran across an article in the WaPo about two writers who had their careers destroyed by AI. Goldman is predicting that nearly one fifth of white-collar jobs will be taken away by ChatGPT in the next five years, with law, software development, and finance most at risk. Which raises the question: how long do IB and Research departments have left? It seems there will always be a need for senior research analysts to give opinions, and the same goes for MDs in IB, but for the lower-level analysts/associates… is the end near?

No. The end isn’t near. AI today is like computers back in the ’60s: sure, they existed and were functional, but they weren’t good. I doubt you could run any modern OS or program on a computer from the ’60s. It is going to be decades before AI gets good enough to actually replace a meaningful portion of lawyers/bankers/ER. Now, that will definitely happen at some point. Like computers, which went from barely functional to effectively unlimited, AI will too. But there will also be an associated boon somewhere else.

Years, not decades. And only 2-5 more of them, at most.

Automation is always “decades away” until it suddenly isn’t and you’re left in shock (e.g. look at videos of graphic designers reacting to Adobe’s new Photoshop feature).

Banks will certainly invest in AI to conduct research and create pitchbooks error-free (a lot of vc-style pitchbooks can already be created end-to-end by AI, just an fyi).

Here’s the key point: sure, not all Analysts will be replaced by AI. But given that most of the work will be done by AI, at the very least, pay will plummet.

Yes, yes. Just like how Elon is on year 11 of saying next year Tesla will achieve fully autonomous driving.

I am glad you know that automation doesn’t always take decades. Congrats. But like I said, the technology isn’t there and won’t be there for decades. It will be an augmentation tool long before it becomes a tool of replacement in the way the original commenter I was responding to was asking about.

Most Helpful

Not to be too reductive, but designing art in the commercial contexts where art is used (e.g. advertising) is a completely dissimilar economic model and not an appropriate comparison for the point at hand. Consider an advertising executive deciding whether to hire a human artist vs. replacing that artist with an AI (side note: human artists at ad agencies currently use AI to storyboard etc., so this is a false dichotomy, but I digress...). The human may feel better to work with, and maybe the artwork produced is better when made by a human, but are those variables going to outweigh the significant savings to the organization (theoretically $$$ -> $0 at huge scale)?

Now consider a VP of Corp Dev or a Portfolio Manager etc. who is hiring an investment banker to advise on selling a company, structuring a trade, etc. You're not just buying a functional sale or transaction structure (e.g. the artwork), you're buying a team of advisors who have legal (sometimes fiduciary) and ethical responsibilities to you as a client and to their professional self-regulatory organizations as members. You are buying real people who feel obligated and pressured to perform for you. As the deal captain at the client, you are probably also staking your professional reputation and political capital on this advisor performing these duties for you successfully in both an economic and regulatory sense. Remember that this banker is someone selected through a bake off where you probably picked them (or had a significant hand in that decision) so you have to own that decision. 

You may say "but we're talking about analysts", but the same logic as above applies internally on sell-side deal teams as well. Each link from MD on down to analysts needs to trust the link beneath it for things to function optimally. There's not enough time in the day for any one person to do everything with a fine-tooth comb the way it needs to be done.

The question becomes whether you as the deal lead on the sell side would trust ChatGPT to replace your team, and whether you'd pass off its bullshit as your own thoughtful analytical work. In this instance, I mean "bullshit" in a literal sense btw - all ChatGPT is (or any AI, for that matter), at the end of the day, is a bullshit generator. In academic parlance, a machine built to pass the Turing test. A machine that is able to faithfully replicate something (an image, a fact, a point of view, etc.) closely enough to fool you into thinking it's been produced by "the real deal". But it's not real, because all AI does is sift through a corpus of training data consisting of real human inputs and use statistical methods to establish predictive relationships between input and output. The predictions made by this input/output machine (the answers to any question you ask it) may be accurate in a real or facile way, but they're still complete guesses. Likening the "intelligence" exhibited by AI to the "intelligence" of a human being is as naïve as saying that, like fish, submarines can "swim".
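The "statistical guessing" described above can be made concrete with a toy sketch. Real LLMs use neural networks over tokens rather than raw word counts, and the corpus below is made up, but the principle in miniature is the same: the model predicts the most frequent continuation seen in training data, with no notion of whether it is true.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which -- the crudest possible 'language model'."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word -- a guess, not knowledge."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the deal closed and the deal team celebrated and the market moved"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "deal" -- seen most often after "the"
```

Note that the model "says" whatever was statistically common, which is exactly why fluent output and factual accuracy are unrelated properties.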

There are already cases today where lawyers have asked ChatGPT to write the brief for a legal case, and while it made a stunningly compelling argument with 100% accurate citation style, the cases it cited were completely nonexistent drivel. The lawyer didn't double-check them all, it seems - oops, sanctioned.

So you see, in the real world, all this hyperbolic noise about replacing IB/ER with AI is complete fiction. Like all technology, sure, it will change how we do things over time at scale, but we are far away from any kind of world where someone wants to risk the fate of their multi-million/billion dollar LBO exit (upon which their carry $ is riding) on a statistical guessing machine. Not even just a guessing machine, but one that was likely built by some Silicon Valley type who has never worked a day of their life "in the trenches", and can't possibly grasp the subtleties of this work, which is fundamentally about not just knowing your product and your numbers, but also knowing people and transacting with them on the basis of trust that accumulates over the course of many, many deals that span an entire career.

I've had similar debates with other people about AI taking jobs, and I'm somewhere in the middle. I think some people are in denial about how fast AI is going to come for their jobs. Some people believe AI isn't going to be advanced enough to take people's jobs for a long, long time. I don't think it will happen tomorrow, but I don't think it's going to take long. Just think about how much the world has changed from 2003-2023, and imagine what we're going to see by 2043. If you're 18 now, you'll only be 38 in 2043. If you're 30 now, you'll only be 50 in 2043. AI will impact most people's careers unless they're close to retirement. Most people should take AI seriously enough to think about how it will impact their careers in the next ten years, be proactive, and make decisions with AI in mind because every major corporation and industry is making plans around AI, or they will get left behind. It's not a question of if AI will take or impact your job but when, and you don't want to get blindsided.

In regards to your question, personally, I believe analyst jobs are going to look completely different in the next 10-15 years. Analysts are monkeys; if a monkey can do your job, then AI is definitely coming for it. If a bank can save some money by getting rid of a bunch of overpaid monkeys, they will do that as soon as possible.

I’ve been using it to write subtitles, overview pages, and business profiles. The volume of comments I get back on those tasks has gone way down, so ChatGPT already writes the way MDs want. Once we get AI tools that auto-select comps to walk back into the value the MDs want, that’ll remove another big tedious task.
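Mechanically, the "walk the comps back into the value the MDs want" task above is just an optimization over subsets of the peer universe. A deliberately cynical sketch (peer names and multiples below are made up for illustration):

```python
from itertools import combinations
from statistics import median

def pick_comps(peers, target_multiple, k=4):
    """
    Tongue-in-cheek comp 'selection': from a universe of
    (name, EV/EBITDA multiple) pairs, pick the k-company set whose
    median multiple lands closest to the multiple already decided on.
    """
    best = min(
        combinations(peers, k),
        key=lambda combo: abs(median(m for _, m in combo) - target_multiple),
    )
    return [name for name, _ in best]

peers = [("A", 6.5), ("B", 8.0), ("C", 9.5), ("D", 11.0),
         ("E", 12.5), ("F", 14.0), ("G", 15.5)]
print(pick_comps(peers, target_multiple=12.0))
```

Brute-force enumeration is fine at peer-universe scale (a few dozen names), which is part of why this is such an obvious candidate for automation.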

Not too worried about job security though; I'll worry when I see something that can read minds and format PPTs to the liking of each level of seniority.

I am sorry to say that if ChatGPT comes close in quality when selecting peer comps or preparing a deck, maybe ChatGPT is not the problem; the quality of the human work is.

ML/NNs are just regression analysis and statistics at scale, for anyone who has spent proper time learning them.

This is a pre-2012 view of AI.

Just destroy AI man, gosh. Who cares about the "bottom line" when there are no more people left to benefit?

I also think that AI will not replace analysts as a whole class, simply because when mistakes are made, IB teams want someone responsible. You cannot push responsibility onto software; the MDs will look for someone to blame and to fix the mistake.

Nevertheless, AI will be used in so many workflows for productivity purposes, which might decrease the number of analysts being hired in both IB and PE. To that end, what pain points do you have right now that you think AI could easily solve to save you time?

Many in this thread are looking at this problem without the lens of wisdom. Systems and industries collapse all the time. However, there is always a rhythm and cycle to the collapse before an industry is destroyed. It will become polarized between doers and thinkers, and what’s likely to happen is that the middle gets cut out. Analysts will take on some associate responsibilities because they have the bandwidth and feedback generated by AI. Managing directors will be able to clarify their vision and refine communication by asking the AI questions and creating templates. Associates and VPs are likely not to be needed. First the middlemen/intermediaries get cut out; the sales guys and the grunts are the last to go.

Get good at selling and at creating the logical frameworks that serve as the lattice AI builds on.

It is undeniable that advancements in AI technology have the potential to disrupt various industries, including finance and research. While senior-level roles that require critical thinking and expertise may still be in demand, the lower-level positions could face significant changes. It is crucial for individuals in these fields to adapt and acquire new skills to stay relevant in an evolving job market.

