Will robots replace your consulting or financial career?

I was recently reading a couple of books – Rise of the Robots by Martin Ford and Humans Need Not Apply by Jerry Kaplan. Both share insights on the demise of the industrial and knowledge eras, and both offer key themes and stark warnings about the future of automation and its potential impact on humanity as we know it.

Examples cited in the books:
Artificially intelligent systems double in capacity every two years. AI drives cars, runs restaurants, writes articles, creates original art, composes symphonies and builds houses – and that’s just the beginning.

In stock markets, "high-frequency trading" (HFT) systems complete transactions tens of thousands of times faster than humans can, skimming vast, nearly risk-free profits. In fractions of a second, an HFT system buys a stock low and sells it high, pocketing the spread between prices; some systems also analyze market "sentiment," finding subtle, fleeting correlations among diverse stocks. These rapid programs may fuel market chaos, as in the "Flash Crash" of 2010, and while HFT may "smooth" the market, it exacerbates the problem of wealth inequality.
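The spread-skimming arithmetic can be sketched in a few lines of Python – every number below is invented purely for illustration, not taken from the books:

```python
# All numbers are hypothetical, purely for illustration.
spread = 0.01            # a 1-cent bid-ask spread, in dollars per share
shares_per_trade = 100   # shares captured per round trip
trades_per_day = 50_000  # round trips across many symbols

# Each round trip pockets the spread on every share traded.
daily_skim = spread * shares_per_trade * trades_per_day
print(f"${daily_skim:,.0f} per day")
```

Tiny per-trade profits compound into large sums only because the trade count is enormous – which is why speed, not insight, is the edge.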

A near-total replacement of the knowledge worker could happen by 2035, in addition to the extinction of careers in manufacturing, transportation, and agriculture.

Some experts see opportunity – machines enhancing human intelligence and longevity, for example. Others worry about machines taking over.

What do you think?

 

AI will inevitably replace your job, and if you believe the people who should know best (i.e. the ones who build the AI), this change will happen much sooner than most people expect.

Also, your question "What happens if nobody has a job, and to whom will companies sell?" misses the point. The concept of supply and demand requires scarcity of resources. With exponential advances in AI, we are quickly entering a post-scarcity world, so the old concept of making money by selling your skills/products will no longer be valid.

As a result, a short period of great change will most likely come very soon. Say AI takes over the low-skilled jobs in the next ten years (bus/taxi/truck drivers, waiters, call center operators, etc. – it's already happening today, and the pace will only increase as those systems get better and cheaper at an exponential rate). That leaves millions of unemployed people who simply don't have the skills to get any other job – every job they might be suitable for will also be automated within months. Society in the US (or anywhere else, for that matter) is simply not prepared for such an increase in unemployment. If unemployment rises 10-20% in a short period, you can be very sure that no matter what job you currently think is safe, you will be heavily affected.

 
Artificially intelligent systems double in capacity every two years. AI drives cars, runs restaurants, writes articles, creates original art, composes symphonies and builds houses - and that's just the beginning.

This is where the books get it wrong. AI isn't going to follow some Moore's Law kind of BS – it's going to plateau very soon. What will expand are new applications of AI: we'll see automation in lower pay-scale jobs.

That being said, AI cannot be the be-all and end-all. From my observations of the work being done in deep learning at some of the top universities and companies around the world, the world envisioned by most of these engineers is not one in which robots coexist as equals with humanity, but one in which they exist as subservient husks without sentience. Everybody agrees that machines should not be given sentience – every one of them.

That would result in a world one notch below the one depicted in Asimov's I, Robot. I'm guessing that Asimov's three laws will then prove to be the bedrock for AI boundaries, owing to their popularity amongst AI researchers.

I can't recall which book, but one of them gave an account of AI development discoveries at an IBM lab back in the 70s: the programmers saw the potential of this capability and how quickly it could evolve and adapt on its own – so much so that it apparently frightened them. They feared their jobs becoming obsolete, so they purposely created unstructured coding languages, making computer programs very prescriptive and finite in utility at the time. For example, code was written to solve a math problem or to play chess, but the program couldn't do anything else.

This is utter BS. AI research wasn't purposely stalled because some righteous researchers wanted to save humanity – it was stalled because we hadn't developed the tech for the insanely high processing power required. Once people started using GPUs instead of CPUs (only one factor, but a key one), AI research resumed making intense breakthroughs. The difference is that in the 70s and 80s research produced a huge trove of theoretical knowledge, whereas now it produces practical applications. OP, you need to revise your reading list – start with Asimov, Penrose and Minsky.

That being said, I'd be happy, not cautious, if a robot were around to help me when I'm old – you can't really rely on the snot-faced progeny these days. I'm also sure that humans will still be at the top of the pyramid, so you can rest assured that your consulting career is not going to be replaced by some metal hunk named Sonny.

 

What happens when AI creates its own version of drugs and liquor? Right now we're in a low-shit system, but once AI gets drunk and on drugs we'll be in a full blown shitcane. You've been warned...

I AM THE LIQUOR
 

That is the catch 23. Long stories get short, for the gooder of us all, we need to stop the worst case Ontario from happening. Otherwise, it will look like a tropical earthquake blew through here.

Only two sources I trust, Glenn Beck and singing woodland creatures.
 

It's a delicate knife. Used well, you can make a fine meal with it, clean it up, wipe it down, and put it back into the block. Used poorly, you can turn the streets red with mass homicide or end up falling on it yourself.

This is a hot topic that Elon Musk bans from discussion during hot tub parties, along with whether this life is a simulation or not (Neil deGrasse Tyson also thinks this whole game of life is one grand simulation).

For full background, we have to look at how John Maynard Keynes (1883-1946) thought about the future in his time. This Forbes article summarizes it pretty well.

According to Keynes, we are supposed to be doing the following:

"For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!"

Interested individuals can read his entire essay Economic Possibilities for our Grandchildren listed in the reference below.

TL;DR: We're supposed to be chilling on a 15-hour work week, working only because we still need to be amused and challenged. Why the extra 25 hours (a 40-hour week) or 85 hours (the WSO norm) remain is a modern-day mystery – or rather, it isn't: the markets are brutally efficient, and consumption has kept pace with every incremental gain in our production. If consumption were stifled while our production exceeded it, it wouldn't be surprising to get days off from the office and the factory.

School of Life's How to Make a Country Rich explains our present situation pretty well.

What do I think?: We need automation to keep our prices low so that we can be fat, dumb, and happy glued to our phones or VR headsets. I'm leaning towards a WALL-E end game.

Reference: Economic Possibilities for our Grandchildren (1930) - John M. Keynes

 
Best Response

Hi OP,

While a lot of the AI hype nowadays is justified, a lot of it isn't.

Most people don't understand how AI works, which precludes their ability to understand its limitations. Let me explain:

While there are a wide array of AI algorithms out there currently, just about all of them rely upon a combination of a few things:

  1. A "Cost parameter", or parameters;

  2. A set of "tuning variables";

  3. An arbitrary structure (I know this sounds vague, but that's because it is) that determines how numbers are calculated within the model. This can be an optimization problem based on Euclidean distance (as in support vector machines), the result of backpropagation (as in neural networks), or any number of other things; and

  4. A LOT of data.

Without boring you and the readership of WSO with details, these factors are combined into a process whereby a computer essentially throws darts at a metaphorical dart board, first completely at random, and then progressively with more accuracy as it measures the distance between the impact point of each successive throw and the bull's eye - this is a process of "guess and check", which requires (1) a framework for how to make the guess (the prerequisites of which I listed above), (2) a variable for evaluating the relative accuracy of guesses (this is the "cost" parameter), and (3) an answer key, which allows the machine to know the "correct" answers and calculate the distance between these and its guess, using the cost parameter.

The details of how and why the machine makes these adjustments between throws are varied, and that is more detail than is necessary to make my point, but I'm happy to go into it if you want. Suffice it to say that each model contains a methodology which is more or less strictly technical and binary: the machine basically takes random stabs using a set of "training data" for which it has "correct answers," and hones its rules on the fly whenever those rules produce a smaller mean squared error than its previous rules. I have more experience with support vector machines and neural networks than anything else, but the principles apply broadly.
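The guess-and-check loop above can be sketched in a few lines of Python. This is a deliberately crude hill-climbing toy, not any real library's training routine: it fits one "rule" (a slope `w`) to data generated from y = 2x, keeping a perturbed guess only when the mean squared error (the "cost") shrinks:

```python
import random

# Training data with "correct answers": here, exactly y = 2x.
data = [(x, 2.0 * x) for x in range(1, 11)]

def cost(w):
    """Mean squared error: the distance from the bull's eye."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

random.seed(0)
w = random.uniform(-10, 10)            # first dart: completely random
for _ in range(5000):
    guess = w + random.gauss(0, 0.1)   # perturb the current rule slightly
    if cost(guess) < cost(w):          # keep the new rule only if error shrinks
        w = guess

print(round(w, 2))  # converges near the true slope of 2.0
```

Real models adjust their guesses far more cleverly (e.g. by following the gradient of the cost), but the keep-what-lowers-the-error skeleton is the same.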

The reason AI works is that when you have a LOT of data points with "correct answers" covering a wide range of outcomes across a few highly descriptive variables (called "features" in AI parlance), the machine is able to take the rules it established on the "training data" and use them to categorize new "observation data," by dint of the fact that it has essentially established a complex, nuanced framework for comparing this data to data it has already "learned" (i.e., gotten low-error guesses on).
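A minimal sketch of that compare-to-learned-data idea is a nearest-neighbor classifier – far simpler than an SVM or neural net, but the same principle. The features and labels below (income, debt ratio, approve/deny) are entirely made up:

```python
import math

# Hypothetical training examples: (features, correct answer).
# Features: (income in $k, debt-to-income ratio) for a made-up loan screen.
train = [
    ((80, 0.10), "approve"), ((75, 0.20), "approve"), ((90, 0.15), "approve"),
    ((20, 0.80), "deny"),    ((30, 0.90), "deny"),    ((25, 0.70), "deny"),
]

def classify(features):
    """Label a new observation by the closest already-'learned' example."""
    nearest = min(train, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((85, 0.12)))  # resembles the learned approvals
print(classify((22, 0.85)))  # resembles the learned denials
```

New applications get sensible labels only because they fall inside the universe of examples already seen – which is exactly the limitation discussed below.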

This makes AI great at approaching tasks that always follow essentially the same pattern. Credit scoring, screwing pieces of a vehicle together, reading news headlines, etc. are all good examples – sure, there's a little nuance between iterations of these tasks, but nothing outside the previously known universe of observations. You don't come across a loan application with a different set of metrics than any seen before. You don't walk onto an assembly line and see pieces arriving at a station in any form other than "ready for further assembly" or "dud".

The pitfall of any AI is what is known as the "bias-variance tradeoff". When you make a machine better at capturing the relationships between variables in the training data (the data the machine practices "throwing darts" with), you make it more susceptible to ever-larger errors on novel data points – this is called "overfitting". Conversely, when you make a machine better at handling darts it has never seen before, you significantly reduce its ability to recognize and capitalize on the relationships present in the training data.

For purposes of the metaphor (albeit with a little hyperbole for clarity's sake), the bias-variance tradeoff means you can have one of two things with AI:

  1. A machine that can throw 1000 identical darts perfectly, but will utterly fail when we hand it a weighted dart with a different tail fin; or

  2. A machine that can throw 1000 different darts and hit the board somewhere (hopefully, maybe not), but not reliably land on the bull's eye.
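Those two failure modes can be shown with a toy comparison – all numbers here are invented, and the "machines" are deliberately extreme caricatures (a pure memorizer versus a single crude averaged rule):

```python
# Training "darts": inputs x with slightly noisy targets near y = 3x.
train = [(1, 3.1), (2, 5.8), (3, 9.3), (4, 11.9)]

# Machine 1 memorizes every training throw exactly (overfit: high variance).
memorized = dict(train)
def machine1(x):
    return memorized.get(x, 0.0)       # a novel dart gets a wild answer

# Machine 2 learns one crude rule, the average slope (high bias).
avg_slope = sum(y / x for x, y in train) / len(train)
def machine2(x):
    return avg_slope * x

train_error_1 = sum(abs(machine1(x) - y) for x, y in train)
print(train_error_1)                   # 0.0: perfect on the darts it has seen

novel_x, novel_y = 10, 30.0            # a dart neither machine has seen
print(abs(machine1(novel_x) - novel_y))  # utter failure on the new dart
print(abs(machine2(novel_x) - novel_y))  # lands near the bull's eye
```

The memorizer scores perfectly on known darts and fails completely on the weighted one; the crude rule is never exact but never wild. Real models live somewhere between these extremes.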

What does this mean? I think it means AI is surely going to replace people in tasks where there are no new things on the horizon. An assembly line, as mentioned before, isn't a process that fluctuates outside certain rigidly defined parameters, because that is how it is designed to work. If parts arrive at the next station on the line outside a narrowly defined range of states, it's easy to discard them as duds, and you have a smooth (albeit maybe wasteful) system – but no major harm.

Does an M&A transaction follow the same pattern? Not really. Sure, lots of deals are similar to one another, but there is no way to contain the potential for a novel outcome the way you can when you design an assembly line. There's just no way to avoid, for instance, one party to the transaction deciding they hate the other and scuttling the deal. Further, the amount of data required to predict such a complex process would become prohibitive, because the calculations in these models typically center on dot products (a concept from linear algebra), meaning the time it takes to make the calculations grows at least quadratically with the number of points in the data set being considered.
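The scaling concern can be made concrete by counting the multiplications behind those pairwise dot products (the kernel, or Gram, matrix used by methods like support vector machines). The feature count `d` below is a hypothetical stand-in:

```python
def gram_multiplications(n, d):
    """Multiplications to fill the n-by-n matrix of pairwise dot products
    between n data points that each have d features."""
    return n * n * d

d = 50  # hypothetical number of features describing each data point
for n in (1_000, 2_000, 4_000):
    print(n, gram_multiplications(n, d))
# Doubling the data set quadruples the work: growth is quadratic in n.
```

With a few thousand points this is trivial, but the rarer and messier the outcomes you want to cover, the more data you need – and the cost compounds quadratically as you add it.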

I can elaborate more if others want, or provide resources for those seeking to understand these processes in more detail.

Is this a good thing? Depends who you ask, and how. If you are speaking economically, and ask that segment of the population that would rather spend their lives performing easy, repetitive tasks in exchange for a livelihood, the answer is probably "no". These people are akin to the ferrymen that were made obsolete when steam power hit the riverboats of industrial England. The arrival of AI will presage their being pushed into more desperate conditions, and widen the gap between their abilities and those abilities that are rewarded within a capitalist framework.

If, on the other hand, you ask the CEO, his answer is probably "yes", as he is now able to replace labor with more efficient capital in his factors of production mix, and attain higher earnings.

Aside from this, there are massive moral and ethical dilemmas inherent to AI that will become apparent in our lifetime. For instance, if you teach cars to drive themselves, you must also teach them the "right way" to resolve ethical quandaries for which there is no answer - for example, if two cars are on a crash course, one full of convicts, and one full of mothers, and one must swerve and kill its passengers and save the others, how do you decide? MIT and other AI labs are soliciting public opinion on these questions, because there is no correct answer, despite what anyone might say.

I do wonder whether AI will presage the beginning of what Marx was discussing in Capital and the Manifesto. The prerequisite for his world was one of infinite abundance. And are we so far off? In a world where capital and consumer goods, and perhaps professional services, can be produced, distributed, and maintained indefinitely at the push of a button and the whir of a complex AI, it will call into question much of what makes our system stable and worth living in. Will everyone subscribe to the same economic truths we now hold dear when, someday, a fleet of Ferraris can be made essentially at no capital or personal cost to anyone? Will the goods we center our lives around hold the same value when they are divorced from the blood, sweat, and tears that we currently associate with their acquisition? I'm not sure, but it's food for thought.

People tend to forget how much each of us centers our daily striving from cradle to grave around the economy, and our place within it. People derive a lot of momentum and personal meaning from just getting up and doing things they have to do. But what will the narrative become when the system doesn't really need most of us anymore in order to function?

It gets to the core of what, in my opinion, differentiates various economic/moral/political philosophies, which is: if people were not bound by material constraint and necessity, what would they do with themselves? What would they do for (or to) others?

Edited for clarity.
