Most sought-after skills for a quantitative research position
Hi there,
I did my undergrad at a target/semi-target school (top 10 but not Ivy) and am currently pursuing a master's in operations research (OR) at an Ivy. My plan is to work as a quantitative analyst after graduation.
That said, I am torn between a financial engineering track (MBA courses plus core OR courses such as optimization and stochastic models) and a big data/machine learning track (data analysis and data mining courses). I would love to hear advice from people who know or currently work in the quantitative finance industry (hedge funds, wealth management, etc.).
Thanks in advance!
The majority of real-life quant research is not very mathematically intense, so bear that in mind. Also, requirements will vary depending on what the group does. Here is what I looked for when hiring recently:
- Have at least one programming language that you are very comfortable with; two languages, one for prototyping/research and one for heavy computation, are nice.
- Be comfortable with most statistical concepts; it's surprising how many people aren't.
- Know basic linear algebra and the surrounding data-processing ideas (PCA, etc.).
- Most importantly, have a brain: that means knowing what you don't know.
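To make the PCA point concrete, here's a minimal NumPy sketch of the kind of linear-algebra-plus-data-processing idea I mean. The return matrix is synthetic, purely for illustration:

```python
import numpy as np

# Toy return matrix: 250 days x 5 assets (synthetic data, illustration only)
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 5))

# PCA via eigendecomposition of the covariance matrix
centered = returns - returns.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order; sort largest-variance first
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()   # fraction of variance per component
scores = centered @ eigvecs           # data projected onto the components
print(explained)
```

If you can write something like this from scratch and explain what each line is doing, you're in good shape for most interviews.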
I would say the big data/machine learning track. It's definitely the hot trend right now: any quantitative asset-management conference is pretty much solely big data/ML at this point. I think this will only continue to be the case as the newer datasets get longer histories.
Others are right that day to day you aren't thinking about crazy math, but it's useful to know. As another poster pointed out, it's arguably much more useful to build intuition for how to deal with data: cross-validation, PCA, and other stats techniques.
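A quick sketch of the cross-validation point: with financial time series you want walk-forward (expanding-window) splits rather than shuffled k-fold, so the model never trains on the future. The function name and fold scheme here are illustrative, not from any particular library:

```python
import numpy as np

def walk_forward_splits(n_samples, n_folds):
    """Expanding-window splits: train on everything before each test block.
    Ordinary shuffled k-fold leaks future information with time series."""
    fold = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train_idx = np.arange(0, k * fold)
        test_idx = np.arange(k * fold, (k + 1) * fold)
        yield train_idx, test_idx

# Example: 100 observations, 4 folds
splits = list(walk_forward_splits(100, 4))
for train, test in splits:
    assert train.max() < test.min()  # no look-ahead in any fold
```

Being able to explain *why* you split this way matters more in interviews than knowing any particular library's API.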
Agree and disagree. Depends how ML is being used. It's certainly not a fad though.
ML methods are fantastic for extracting alpha signals from unconventional datasets, which are then fed into standard quantitative models.
The reason ML strategies struggle to get rolled out is that there is still significant pushback from LPs on alpha/factor attribution. When you use complex machine learning models for actual return forecasting, it becomes difficult if not impossible to attribute your alpha to the different variables the model is using. Standard quantitative investing approaches let you do alpha attribution on a variable-by-variable basis; LPs do not trust models that can't, which is why most of the cutting-edge work is done at prop shops and the top quant funds. And anyway, RenTec has clearly been using ML, so it's hard to dispute the results.
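To illustrate the attribution point: with a linear model, every forecast decomposes exactly into per-variable contributions (coefficient times exposure), which is what makes variable-by-variable alpha attribution possible and what a black-box model doesn't give you for free. The factor names and numbers below are made up:

```python
import numpy as np

# Synthetic factor exposures and forward returns (illustration only)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))             # e.g. value, momentum, quality
beta_true = np.array([0.02, 0.01, -0.005])
y = X @ beta_true + rng.normal(0.0, 0.01, 500)

# Fit OLS via closed-form least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Attribution: the forecast for any stock splits exactly into
# per-factor contributions that sum to the total prediction.
x_new = np.array([0.5, -1.0, 2.0])
contrib = coef * x_new
forecast = contrib.sum()
assert np.isclose(forecast, x_new @ coef)
```

That exact decomposition is what an LP means by "show me where the alpha comes from" — a deep net's forecast has no equally clean split.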
Another major reason machine learning has struggled is that most hedge funds are far behind technologically, particularly non-quant firms that have hired ML people just to 'check the box' as something they're looking into. Doing high-quality ML research takes both a large amount of computing power and a large amount of clean financial data, and people underestimate how difficult it is to clean data well enough to feed into an ML system. As I'm sure you know, financial data from vendors is often terrible in quality. Regressions tend to be more robust to messy data: you can do certain things to minimize its impact, and you can interpret the results in a more causal fashion. Not so with ML. ML methods are built to find non-linear patterns, and if you feed them something incorrect they will happily overfit the garbage.
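A toy illustration of that overfitting point: fit polynomials to pure noise and the flexible model looks great in-sample while learning nothing real. All data here is synthetic:

```python
import numpy as np

# Pure-noise "signal": any structure a flexible model finds here is spurious.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = rng.normal(0.0, 1.0, 40)        # garbage data, no real relationship
x_hold = (x[:-1] + x[1:]) / 2        # held-out points between the training x's
y_hold = rng.normal(0.0, 1.0, 39)    # fresh noise at the held-out points

train_err, hold_err = {}, {}
for degree in (1, 8):
    p = np.polyfit(x, y, degree)
    train_err[degree] = np.mean((np.polyval(p, x) - y) ** 2)
    hold_err[degree] = np.mean((np.polyval(p, x_hold) - y_hold) ** 2)

print(train_err)  # the degree-8 fit "explains" the training noise much better
print(hold_err)   # but typically gains nothing (or worse) out of sample
```

Swap "degree-8 polynomial" for any flexible ML model and "noise" for badly cleaned vendor data and you have the failure mode in one picture.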
There's very little reason to think that standard regression-based approaches are the optimal way of forecasting equities, and no reason to think that the many financial datasets out there are linearly related to returns. Regression is the standard approach only because of the interpretability of the linear models that underpin most active quantitative equity strategies.