As per usual, there has been much discussion about the role AI might begin playing in finance and white collar professions at large. I came across a fantastic article about the limitations of these models that I think many of you might find informative.
Here's a somewhat long but interesting excerpt:
In short, deep learning models do not have any understanding of their input, at least not in any human sense. Our own understanding of images, sounds, and language is grounded in our sensorimotor experience as humans--as embodied earthly creatures. Machine learning models have no access to such experiences and thus cannot "understand" their inputs in any human-relatable way. By annotating large numbers of training examples to feed into our models, we get them to learn a geometric transform that maps data to human concepts on this specific set of examples, but this mapping is just a simplistic sketch of the original model in our minds, the one developed from our experience as embodied agents--it is like a dim image in a mirror.
Current machine learning models: like a dim image in a mirror.
As a machine learning practitioner, always be mindful of this, and never fall into the trap of believing that neural networks understand the task they perform--they don't, at least not in a way that would make sense to us. They were trained on a different, far narrower task than the one we wanted to teach them: that of merely mapping training inputs to training targets, point by point. Show them anything that deviates from their training data, and they will break in the most absurd ways.
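That last point--models merely map training inputs to training targets and break when shown anything outside their training data--is easy to demonstrate with any curve-fitting method. The sketch below is a hypothetical illustration of my own (not from the article), using a simple polynomial fit in place of a neural network: the "model" fits its training range well, yet produces absurd outputs just outside it.

```python
import numpy as np

# Illustrative sketch: fit a cubic polynomial to sin(x) on the training
# range [0, 5]. Within that range, the learned "geometric transform" tracks
# the data reasonably well. Far outside it, the polynomial diverges wildly,
# while the true function sin(x) stays bounded in [-1, 1].

x_train = np.linspace(0, 5, 200)
y_train = np.sin(x_train)

# Least-squares cubic fit -- a stand-in for any model trained to map
# training inputs to training targets, point by point.
coeffs = np.polyfit(x_train, y_train, deg=3)
model = np.poly1d(coeffs)

# In-distribution: the fit is reasonable.
in_dist_error = np.max(np.abs(model(x_train) - y_train))

# Out-of-distribution: evaluate at x = 50, far from the training data.
ood_prediction = model(50.0)

print(f"max in-distribution error: {in_dist_error:.3f}")
print(f"prediction at x=50: {ood_prediction:.1f} (true value lies in [-1, 1])")
```

The same failure mode appears in deep networks, just in higher dimensions: the learned mapping is only anchored where training data exists, and nothing constrains its behavior elsewhere.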