Friday, December 29, 2017

Data Scientist Skill Set

1         Background

Data science is first and foremost a talent-based discipline and capability. Platforms, tools and IT infrastructure play an important but secondary role. Nevertheless, software and technology companies around the globe spend significant amounts of money talking business managers into buying or licensing their products, which oftentimes results in unsatisfying outcomes that do not come close to realizing the full potential of data science.
Talent is key - but unfortunately very rare and hard to identify. If you are trying to hire a data scientist these days, you face the serious risk of recruiting someone with the wrong or an insufficient skill set. On top of that, talent is even more crucial for small or medium-sized companies, whose data science teams are likely to stay relatively small. Wasting one or two head counts on the wrong profiles might render an entire team inefficient.
The demand for data scientists has risen dramatically in recent years [1, 2, 3, 4, 5]:
  • New technologies have significantly improved our ability to manage and process data, including new types of data as well as large quantities of data.
  • A shift in mindset took place in business environments [6] regarding the utilization of data: from data as a reporting and business-analytics necessity towards a valuable resource enabling smart decision making.
  • Last but not least, exciting new intellectual developments have taken place in related academic disciplines such as machine learning [7, 8] and natural language processing.
Due to high demand, the term ‘data scientist’ has developed into a recruiting buzzword that is widely abused these days. Experienced lead data scientists share a painful experience when trying to fill a vacant position: out of a hundred applicants, typically only a handful meet the requirements to qualify for an interview. Some candidates already feel qualified to call themselves ‘data scientist’ after finishing a six-week online course on a statistical computing language. Unqualified individuals oftentimes end up being hired by managers who themselves lack data science experience - leading to disappointment, frustration and an erosion of the term ‘data science’.

2         Who is a Data Scientist?

The data scientist skill set described in the following rests on three pillars, each representing a set of skills mostly orthogonal to the other two.
Following this idea, a solid data scientist needs to have the following three well-established skill sets:
  1. Technical skills,
  2. Analytical skills and
  3. Business skills.
Although technical skills are oftentimes the focus of data science role descriptions, they represent only the basis of a data scientist’s skill set. Analytical skills are much harder to acquire (and to test) but form the crucial core of a data scientist’s ability to solve business problems using scientific approaches. Business skills enable a data scientist to thrive in corporate environments.

2.1        Technical skills | Basis

Technical skills are the basis of a data scientist’s skill set. They include coding skills in languages such as R or Python, the ability to handle various computational architectures (including different types of databases and operating systems), and further skills such as parallel or high-performance computing.
The ability to handle data is a necessity for data scientists. It includes data management, data consolidation, data cleansing and data modelling, amongst others. As these skills are oftentimes in high demand in corporate environments, there is a risk of data scientists being consumed by data management tasks - distracting them from their actual work.
Almost more important than a candidate’s current technical skill set is their mindset. A key factor is intellectual agility: the ability to adapt to new computational environments in a short amount of time. This includes learning new coding languages, dealing with new types of databases or data structures, and keeping up with current technological developments, such as the move from relational databases to object-analytical approaches.
A data scientist with a static technical skill set will not thrive for long, as the discipline requires constant adaptation and learning. Strong candidates show a healthy appetite for developing their technical skills. When a candidate focuses on a tool discussion during an interview, it can be an indication of a narrow technical comfort zone with firm constraints.
Unfortunately, data science job profiles are oftentimes narrowly focused on technical skills, caused by a) the misperception that a successful data scientist’s secret lies exclusively in the ability to handle a specific set of tools, and b) a lack of knowledge on the hiring manager’s end as to what the right skill set looks like in the first place. Focusing on technical skills when evaluating candidates poses a significant risk.

2.2        Analytical skills | Core

Scientific problem solving is an essential part of data science. Analytical skills represent the ability to succeed at this complex and highly non-linear discipline. Establishing thorough analytical skills requires a great deal of commitment and dedication (a limiting factor contributing to the global shortage of data scientists).
Analytical skills include expertise in academic disciplines like computer science, machine learning, advanced statistics, probability theory, causal inference, artificial intelligence, feature extraction and others (including strong mathematical skills). The list can be extended almost infinitely [9, 10, 11] and has been the subject of many debates.
Covering all potentially useful analytical disciplines is a lifetime achievement for any data scientist and not a requirement for a successful candidate. Rather, a data scientist needs a healthy mix of analytical skills to succeed. For instance, an expert on Markov chains and an expert on Bayesian networks might both be able to develop a solution for the very same business problem, each utilizing their respective strengths and thus fundamentally different methods.
Analytical skills are typically developed by pursuing excellence in a highly quantitative academic field such as computer science, theoretical physics, computational mathematics or bioinformatics. These skills are trained in academic institutions through exposure to hard, unsolved research problems that require a high level of intellectual curiosity and dedication to tackle and eventually solve. This typically happens over the course of a PhD.
Mastering a quantitative research question that nobody has solved before is a non-linear process, inevitably accompanied by failing over and over again. However, this process of scientific problem solving shapes the analytical mind and builds the expertise to later succeed in data science. It typically consists of iterative cycles of
  1. implementing and adapting an analytical approach
  2. applying it and observing it fail, then
  3. investigating the problems and
  4. building an understanding of why it failed and where the limitations of the approach lie, and
  5. coming up with a better, more refined approach.
These iterations are accompanied by key learnings and represent small steps towards the project goal, effectively zig-zagging towards the final solution.
A key requirement for analytical excellence is the right mindset: a data scientist needs an intrinsic, high level of curiosity and a strong appetite for intellectual challenges. Data scientists need to be able to pick up new methods and mathematical techniques in a short amount of time and apply them to the problem at hand - oftentimes within the limited time frame of an ongoing project.
A good way to test analytical skills during an interview process is to provide candidates with a business problem and real data, then ask them to spend a few hours working on it remotely. Discussing the code they wrote, the approach they chose, the solution they built and the insights they generated is a great way to evaluate their potential and, at the same time, give the candidates a first feeling for their potential new tasks.

2.3        Business Skills | Enablement

Business skills enable data scientists to thrive in a corporate environment.
It is important for data scientists to communicate effectively with business users, using business language and avoiding a shift towards an overly technical conversation. Healthy data science projects start and end with the discussion of a business problem supported by a valid business case.
Data scientists need to have a good understanding of business processes, as this is required to make sure the solutions they build can be integrated and ultimately consumed by the respective business users. Careful and smart change management almost always plays a role in data science projects as well. A solid portion of entrepreneurship and out-of-the-box thinking helps data scientists consider business problems from new angles, utilizing analytical methods that their business partners do not know about. Last but not least, many big and successful data science projects that ultimately led to significant impact were achieved through ‘connecting the dots’ by data scientists who built up internal knowledge by working on different projects across departments and functions.
Candidates who come with strong technical and analytical skills are oftentimes highly intelligent individuals looking for intellectual challenges. Even if they have no experience in an industry or in navigating a corporate environment, they can pick up the required business skills in a short amount of time - given a healthy appetite for solving business cases. Building strong analytical or technical skills takes orders of magnitude longer.
When trying to determine whether a candidate has an intrinsic interest in business questions or would rather work in an academic setting, it can help to ask yourself the following questions:
  • How well can the candidate explain data science methods like deep learning to business users?
  • When discussing a business problem can the candidate communicate effectively in business terms while thinking about potential mathematical or technical approaches?
  • Will the business users who collaborate with the data scientist in the future respect him or her as a partner at eye level?
  • Would you feel comfortable sending the candidate on their own to present to your manager?
  • Do you think the candidate will succeed in your business environment?

3         Recruiting

Data science requires a mix of different skills. In the end, this mix needs to be adapted to the requirements and situation at hand, and to the business problems that represent the biggest potential value for your company. ‘Big data’, for instance, is a strong buzzword, but in many companies data is under-utilized to such a degree that a data science team can focus on low-hanging fruit in the form of small, structured data sets for one or two years and already have a strong business impact.
A key characteristic of candidates that has not been mentioned so far, and which can be hard to evaluate, is attitude. Hiring data scientists for business consultant positions requires a different mindset and attitude than hiring for integration into an analytics unit, or to supplement a business team.

4         References

[1] Claire Cain Miller, "Data Science: The Numbers of Our Lives", NY Times. http://nyti.ms/1TfCFmX
[2] TechCrunch, "How To Stem The Global Shortage Of Data Scientists". http://tcrn.ch/1TUIqsB
[3] Bloomberg, "Help Wanted: Black Belts in Data". http://bloom.bg/1Xt8bTO
[4] McKinsey on US opportunities for growth. http://bit.ly/1WAonmD
[5] McKinsey on big data and data science. http://bit.ly/1VXQJHD
[6] Thomas H. Davenport, "Big Data at Work: Dispelling the Myths, Uncovering the Opportunities", Harvard Business Review Press (2014)
[7] Andrew Ng on Deep Learning. http://bit.ly/1Tg3g74
[8] Andrew Ng on Deep Learning Applications. http://bit.ly/1Wza02H
[9] Drew Conway's data scientist Venn diagram. http://bit.ly/1Xd6MAn
[10] Swami Chandrasekaran's data scientist skill map. http://bit.ly/1ZUGUIF
[11] Forbes, "The Best Machine Learning Engineers Have These 9 Traits in Common". http://onforb.es/1VXR9Og

Sunday, December 17, 2017

Guide to machine learning and big data jobs in finance from J.P. Morgan

[Figure: Minimum spanning tree for 31 JPM tradable risk premia indices]
Financial services jobs go in and out of fashion. In 2001 equity research for internet companies was all the rage. In 2006, structuring collateralised debt obligations (CDOs) was the thing. In 2010, credit traders were popular. In 2014, compliance professionals were it. In 2017, it’s all about machine learning and big data. If you can get in here, your future in finance will be assured.
J.P. Morgan’s quantitative investing and derivatives strategy team, led by Marko Kolanovic and Rajesh T. Krishnamachari, has just issued the most comprehensive report ever on big data and machine learning in financial services.
Titled ‘Big Data and AI Strategies’ and subtitled ‘Machine Learning and Alternative Data Approach to Investing’, the report says that machine learning will become crucial to the future functioning of markets. Analysts, portfolio managers, traders and chief investment officers all need to become familiar with machine learning techniques. If they don’t, they’ll be left behind: traditional data sources like quarterly earnings and GDP figures will become increasingly irrelevant, as managers using newer datasets and methods will be able to predict them in advance and trade ahead of their release.
At 280 pages, the report is too long to cover in detail, but we’ve pulled out the most salient points for you below.

1. Banks will need to hire excellent data scientists who also understand how markets work

J.P. Morgan cautions against the current fashion among banks and finance firms of prioritizing data analysis skills over market knowledge. Doing so is dangerous: understanding the economics behind the data and the signals is more important than developing complex technical solutions.

2. Machines are best equipped to make trading decisions in the short and medium term

J.P. Morgan notes that human beings are already all but excluded from high frequency trading. In the future, the bank says, machines will become increasingly prevalent over the medium term too: “Machines have the ability to quickly analyze news feeds and tweets, process earnings statements, scrape websites, and trade on these instantaneously.” This will help erode demand for fundamental analysts, equity long-short managers and macro investors.
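To make the news-reading claim concrete, here is a deliberately naive sketch of lexicon-based headline scoring in Python. The word lists are illustrative assumptions, not a production sentiment lexicon; real systems use far richer models.
```python
# A minimal sketch of lexicon-based headline scoring, the simplest form of
# the news-feed analysis described above. The word lists are illustrative
# assumptions only.
POSITIVE = {"beat", "beats", "upgrade", "record", "growth", "strong"}
NEGATIVE = {"miss", "misses", "downgrade", "loss", "weak", "probe"}

def headline_score(headline: str) -> int:
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(headline_score("Company beats estimates on strong growth"))   # 3
print(headline_score("Regulator opens probe after earnings miss"))  # -2
```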
In the long term, however, humans will retain an advantage: “Machines will likely not do well in assessing regime changes (market turning points) and forecasts which involve interpreting more complicated human responses such as those of politicians and central bankers, understanding client positioning, or anticipating crowding,” says J.P. Morgan. If you want to survive as a human investor, this is where you will need to make your niche.

3. An army of people will be needed to acquire, clean, and assess the data

Before machine learning strategies can be implemented, data scientists and quantitative researchers need to acquire and analyze the data with the aim of deriving tradable signals and insights.
J.P. Morgan notes that data analysis is complex. Today’s datasets are often bigger than yesterday’s. They can include anything from data generated by individuals (social media posts, product reviews, search trends, etc.), to data generated by business processes (company exhaust data, commercial transactions, credit card data, etc.) and data generated by sensors (satellite image data, foot and car traffic, ship locations, etc.). These new forms of data need to be analyzed before they can be used in a trading strategy. They also need to be assessed for ‘alpha content’ – their ability to generate alpha. Alpha content will be partially dependent upon the cost of the data, the amount of processing required, and how widely used the dataset already is.
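As a hedged illustration of what a first ‘alpha content’ check might look like in practice (the data and the weak embedded signal below are synthetic assumptions), one common step is measuring the rank correlation, or information coefficient, between a candidate signal and subsequent returns:
```python
# A sketch of a basic 'alpha content' check: the rank correlation
# (information coefficient) between a candidate signal and forward returns.
# The data here is synthetic and purely illustrative.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500
signal = rng.normal(size=n)                 # e.g. a foot-traffic signal
noise = rng.normal(size=n)
forward_returns = 0.05 * signal + noise     # weak alpha buried in noise

ic, p_value = spearmanr(signal, forward_returns)
print(f"information coefficient: {ic:.3f} (p={p_value:.3f})")
```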
[Figure: JPMorgan big data]

4. There are different kinds of machine learning. And they are used for different purposes

Machine learning comes in several varieties, including supervised learning, unsupervised learning, deep learning and reinforcement learning.
The purpose of supervised learning is to establish a relationship between two datasets and to use one dataset to forecast the other. The purpose of unsupervised learning is to try to understand the structure of data and to identify the main drivers behind it. The purpose of deep learning is to use multi-layered neural networks to analyze a trend, while reinforcement learning encourages algorithms to explore and find the most profitable trading strategies.
[Figure: JPMorgan machine learning classification]

5. Supervised learning will be used to make trend-based predictions using sample data

In a finance context, J.P. Morgan says supervised learning algorithms are provided with historical data and asked to find the relationship that has the best predictive power. Supervised learning algorithms come in two varieties: regression and classification methods.
Regression-based supervised learning methods try to predict outputs based on input variables. For example, they might look at how the market will move if inflation spikes.
Classification methods try to identify which of a set of categories an observation belongs to.
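As a minimal sketch of the two flavours (synthetic data and illustrative variable names, not from the report), the snippet below fits a regression that maps inflation to a continuous market move and a classifier that maps it to an up/down label:
```python
# Regression vs classification on synthetic data with scikit-learn.
# The inflation/market relationship below is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
inflation = rng.normal(2.0, 0.5, size=200)                     # input variable
market_move = -1.5 * (inflation - 2.0) + rng.normal(0, 0.3, size=200)

# Regression: predict a continuous market move from inflation.
reg = LinearRegression().fit(inflation.reshape(-1, 1), market_move)
print("predicted move if inflation spikes to 4%:", reg.predict([[4.0]])[0])

# Classification: predict a discrete label (market up vs down).
label = (market_move > 0).astype(int)
clf = LogisticRegression().fit(inflation.reshape(-1, 1), label)
print("P(market up | inflation = 4%):", clf.predict_proba([[4.0]])[0, 1])
```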

6. Unsupervised learning will be used to identify relationships between a large number of variables

In unsupervised learning, a machine is given an entire set of asset returns and doesn’t know which are the dependent and which are the independent variables. At a high level, unsupervised learning methods are categorized as clustering or factor analyses.
Clustering involves splitting a dataset into smaller groups based on some notion of similarity. For example, it can involve identifying historical regimes with high and low volatility, rising and falling rates, or rising and falling inflation.
Factor analyses aim to identify the main drivers of the data or to identify the best representation of the data. For example, yield curve movements can be described by the parallel shift of yields, steepening of the curve, and convexity of the curve. In a multi-asset portfolio, factor analysis will identify the main drivers such as momentum, value, carry, volatility, or liquidity.
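A small sketch of the yield-curve example (the synthetic curve construction below is an illustrative assumption): principal component analysis on simulated curve moves recovers a few dominant factors of the level/slope/curvature kind.
```python
# Factor analysis via PCA on synthetic yield-curve changes, illustrating how
# a handful of factors explain most of the variance. The curve construction
# is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
maturities = np.array([1, 2, 5, 10, 30], dtype=float)

# Build 1000 synthetic daily curve changes from three latent factors.
level = rng.normal(0, 1.0, size=(1000, 1)) * np.ones_like(maturities)
slope = rng.normal(0, 0.5, size=(1000, 1)) * (maturities / 30.0)
curve = rng.normal(0, 0.2, size=(1000, 1)) * ((maturities - 10) / 10.0) ** 2
changes = level + slope + curve + rng.normal(0, 0.05, size=(1000, 5))

# PCA: eigendecomposition of the covariance matrix of curve changes.
cov = np.cov(changes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
explained = eigvals[::-1] / eigvals.sum()
print("variance explained by top 3 factors:", explained[:3].round(3))
```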

7. Deep learning systems will undertake tasks that are hard for people to define but easy to perform

Deep learning is effectively an attempt to artificially recreate human intelligence. J.P. Morgan says deep learning is particularly well suited to the pre-processing of unstructured big data sets (for instance, it can be used to count cars in satellite images or to identify sentiment in a press release). A deep learning model could use a hypothetical financial data series to estimate the probability of a market correction.
Deep learning methods are based on neural networks, which are loosely inspired by the workings of the human brain. In a network, each neuron receives inputs from other neurons and ‘computes’ a weighted average of these inputs. The relative weighting of the different inputs is guided by past experience.
[Figure: JPMorgan neural network]
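As a minimal sketch of the single neuron described above (the weights and inputs are arbitrary example values; in practice they would be learned from data):
```python
# One artificial neuron: a weighted sum of inputs passed through a
# nonlinearity. Values below are arbitrary illustrative examples.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Weighted average of inputs, then a sigmoid activation."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, -0.5, 1.0])   # inputs from upstream neurons
w = np.array([0.8, 0.1, -0.4])   # learned relative weighting
print(neuron(x, w, bias=0.05))
```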

8. Reinforcement learning will be used to choose a course of successive actions to maximize the final reward

The goal of reinforcement learning is to choose a course of successive actions in order to maximize the final (or cumulative) reward. Unlike supervised learning (which is typically a one step process), the reinforcement learning model doesn’t know the correct action at each step.
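A toy sketch of the idea, not J.P. Morgan's model: tabular Q-learning on a five-state chain in which only the final state pays a reward, so the algorithm must learn a multi-step course of actions. The environment is an illustrative assumption.
```python
# Tabular Q-learning on a tiny chain: the agent must take several forward
# steps before any reward arrives, illustrating learning from a delayed,
# cumulative payoff. Environment and parameters are illustrative.
import numpy as np

n_states, n_actions = 5, 2                     # actions: 0 = stay, 1 = step
rewards = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # reward only in the end state
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

for _ in range(3000):                          # episodes, random exploration
    s = 0
    for _ in range(10):                        # cap the episode length
        a = int(rng.integers(n_actions))
        s_next = min(s + a, n_states - 1)
        r = rewards[s_next]
        # Q-update: immediate reward plus discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(Q.round(2))   # argmax per row: the learned policy steps forward
```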
J.P. Morgan’s electronic trading group has already developed algorithms using reinforcement learning. The diagram below shows the bank’s machine learning model (we suspect it’s blurry on purpose).
[Figure: JPMorgan algorithmic trading architecture]

9. You won’t need to be a machine learning expert, but you will need to be an excellent quant and an excellent programmer

J.P. Morgan says the skill set for the role of data scientist is virtually the same as for any other quantitative researcher. Existing buy-side and sell-side quants with backgrounds in computer science, statistics, maths, financial engineering, econometrics and natural sciences should therefore be able to reinvent themselves. Expertise in quantitative trading strategies will be the crucial skill. “It is much easier for a quant researcher to change the format/size of a dataset, and employ better statistical and Machine Learning tools, than for an IT expert, Silicon Valley entrepreneur, or academic to learn how to design a viable trading strategy,” say Kolanovic and Krishnamachari.
By comparison, J.P. Morgan notes that you won’t need to know about machine learning in any great detail: most machine learning methods are already coded (e.g. in R); you just need to apply the existing models. As a start, the authors suggest looking at small datasets using GUI-based software like Weka. Python also has extensive libraries like Keras (keras.io), and there are open-source machine learning libraries like TensorFlow and Theano.
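In that spirit, here is a minimal sketch of applying an existing model with Keras. The synthetic dataset, layer sizes and training settings are illustrative assumptions, not taken from the report.
```python
# 'Applying an existing model' rather than coding one from scratch:
# a tiny Keras classifier on synthetic data. All settings are illustrative.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype("float32")      # ten candidate signals
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype("float32")   # hidden rule to learn

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {acc:.2f}")
```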
[Figure: JPMorgan machine learning]

10. These are the coding languages and data analysis packages you’ll need to know

If you’re only planning to learn one coding language for machine learning, J.P. Morgan suggests you choose R, along with the related packages listed below. However, C++, Python and Java also have machine learning applications.
[Figures: machine learning tables and R packages]

11. And these are some examples of popular machine learning code in Python

[Figures: Python machine learning code examples]
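The original post showed these examples as images; as a stand-in of the same flavour, here is a short, self-contained scikit-learn snippet (toy dataset; the model choice is an illustrative assumption):
```python
# A representative 'popular machine learning code in Python': fit, predict
# and score a random forest on a standard toy dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```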

12. Support functions are going to need to understand big data too

Lastly, J.P. Morgan notes that support functions need to know about big data too. The report says that too many recruiters and hiring managers are incapable of distinguishing between an ability to talk broadly about artificial intelligence and an ability to actually design a tradeable strategy. At the same time, compliance teams will need to be able to vet machine learning models and to ensure that data is properly anonymized and doesn’t contain private information. The age of machine learning in finance is upon us.

Friday, November 24, 2017

Free Deep Learning Book (MIT Press)

The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free.
The book is also available in print on Amazon and from MIT Press.

Lectures

We plan to offer lecture slides accompanying all chapters of this book. We currently offer slides for only some chapters. If you are a course instructor and have your own lecture slides that are relevant, feel free to contact us if you would like to have your slides linked or mirrored from this site.
  1. Introduction
    • Presentation of Chapter 1, based on figures from the book [.key] [.pdf]
    • Video of lecture by Ian and discussion of Chapter 1 at a reading group in San Francisco organized by Alena Kruchkova
  2. Linear Algebra [.key][.pdf]
  3. Probability and Information Theory [.key][.pdf]
  4. Numerical Computation [.key] [.pdf] [youtube]
  5. Machine Learning Basics [.key] [.pdf]
  6. Deep Feedforward Networks [.key] [.pdf]
    • Video (.flv) of a presentation by Ian and a group discussion at a reading group at Google organized by Chintan Kaur.
  7. Regularization for Deep Learning [.pdf] [.key]
  8. Optimization for Training Deep Models
    • Gradient Descent and Structure of Neural Network Cost Functions [.key] [.pdf]
      These slides describe how gradient descent behaves on different kinds of cost function surfaces. Intuition for the structure of the cost function can be built by examining a second-order Taylor series approximation of the cost function. This quadratic function can give rise to issues such as poor conditioning and saddle points. Visualization of neural network cost functions shows how these and some other geometric features of neural network cost functions affect the performance of gradient descent (a sketch of this quadratic approximation is given after this list).
    • Tutorial on Optimization for Deep Networks [.key] [.pdf]
      Ian's presentation at the 2016 Re-Work Deep Learning Summit. Covers Google Brain research on optimization, including visualization of neural network cost functions, Net2Net, and batch normalization.
    • Batch Normalization [.key] [.pdf]
    • Video of lecture / discussion: This video covers a presentation by Ian and group discussion on the end of Chapter 8 and entirety of Chapter 9 at a reading group in San Francisco organized by Taro-Shigenori Chiba.
  9. Convolutional Networks
    • Convolutional Networks [.key][.pdf]
      A presentation summarizing Chapter 9, based directly on the textbook itself.
    • Video of lecture / discussion: This video covers a presentation by Ian and group discussion on the end of Chapter 8 and entirety of Chapter 9 at a reading group in San Francisco organized by Taro-Shigenori Chiba.
  10. Sequence Modeling: Recurrent and Recursive Networks
    • Sequence Modeling [.pdf] [.key]
      A presentation summarizing Chapter 10, based directly on the textbook itself.
    • Video of lecture / discussion. This video covers a presentation by Ian and a group discussion of Chapter 10 at a reading group in San Francisco organized by Alena Kruchkova.
  11. Practical Methodology [.key][.pdf] [youtube]
  12. Applications [.key][.pdf]
  13. Linear Factors [.key][.pdf]
  14. Autoencoders [.key][.pdf]
  15. Representation Learning [.key][.pdf]
  16. Structured Probabilistic Models for Deep Learning[.key][.pdf]
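For readers without the Chapter 8 slides, the second-order Taylor approximation they build intuition from can be sketched as follows (standard notation, not copied from the slides): expanding the cost J around a point θ₀,
```latex
% Second-order Taylor approximation of the cost J around \theta_0,
% with gradient g and Hessian H evaluated at \theta_0.
J(\theta) \approx J(\theta_0)
            + (\theta - \theta_0)^{\top} g
            + \tfrac{1}{2}\,(\theta - \theta_0)^{\top} H \,(\theta - \theta_0)
```
Poor conditioning corresponds to a large spread between the largest and smallest eigenvalues of H, and saddle points to H having eigenvalues of both signs, which is how this quadratic picture explains the gradient descent behavior discussed in the slides.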

Friday, November 17, 2017

Explainable Artificial Intelligence (XAI)


Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines’ current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:
  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. Our strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.
 
[Figure 1: XAI Concept]
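As a hedged sketch of one end of the performance-versus-explainability trade space (toy dataset; the model choice is an illustrative assumption, not a program deliverable), a shallow decision tree's rationale can be printed directly as human-readable rules:
```python
# An inherently explainable model: a shallow decision tree whose learned
# rules double as an explanation of its decisions. Toy data for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)  # small, inspectable

# Print the decision rules the model actually uses.
print(export_text(tree, feature_names=[
    "sepal length", "sepal width", "petal length", "petal width"]))
```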
The XAI program will focus the development of multiple systems on addressing challenge problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions. These two challenge problem areas were chosen to represent the intersection of two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for the Department of Defense (intelligence analysis and autonomous systems).
XAI research prototypes will be tested and continually evaluated throughout the course of the program. At the end of the program, the final delivery will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems. After the program is complete, these toolkits would be available for further refinement and transition into defense or commercial applications.