The research team at Refinitiv Labs is looking at applying causal inference to machine learning problems. How can a causality model be applied to data and data management use cases that test for causes and identify paths to performance gains?
- Narrow AI — artificial intelligence that focuses on a singular or limited task — struggles to differentiate between actions or states that appear in proximity (correlation), and actions that actually affect each other (causation).
- To address this challenge, Refinitiv Labs teamed up with the MIT-IBM Watson AI Lab to apply causal inference to address machine learning problems.
- Applying causal inference will not only create more robust models and predictions, but will also allow for higher-level causal reasoning and explainability of Refinitiv data.
Causal inference can be explained with the following analogy: Does eating caviar make a person healthy — or even wealthy?
As humans, we intuitively understand that consuming fish eggs doesn’t confer well-being or riches. If anything, the opposite is true.
Those who can afford to eat delicacies often do, while also likely having access to superior healthcare. But such logical leaps are generally beyond the capabilities of today’s narrow AI systems.
Narrow AI refers to systems that perform specific tasks at superhuman levels, in domains ranging from chess, Jeopardy!, and Go to voice assistants, debate, language translation, and image classification.
Today, narrow AI struggles to differentiate between actions or states that appear in proximity (correlation), and actions that actually affect each other (causation).
Whether treating patients or trying to predict future stock movements, it’s not enough to identify or even to predict symptoms. The path to addressing health problems or forecasting future stock movements involves understanding the complex chain from instigation to outcome.
We need to get to root causes.
Refinitiv Labs is helping to build AI solutions that can distinguish between causation and correlation. As capital markets increasingly look to AI to make decisions, current methods that merely identify correlated variables will become obsolete.
How does causal inference work?
Correlation versus causality is a classic challenge which — once addressed — can offer profound strategic value in decision making.
The team aims to develop a transformative learning theory of cause-and-effect networks, building on established work in complex, high-dimensional learning tasks, including the learning of network structure.
Causality is used by the scientific community when trying to identify relationships between different entities, variables and concepts. It’s also a concept that is applied to historical data, and when testing out different scenarios.
Driving better investment decisions
By analyzing enough data sets, you will inevitably identify a range of correlations. You may find that historic fuel prices correlated strongly with a portfolio of airline equities, for instance.
You might also notice that the number of sunspots correlated with the performance of a portfolio of municipal bonds.
The first is an example of a causal relationship. Airline performance is directly linked to fuel prices. You can expect this relationship to hold in the future, and can therefore use this information to support your investment decisions.
The second is a case of spurious correlation. In this instance, the two factors are completely unrelated, and you should not base your municipal bond investment decisions on the number of recorded sunspots.
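The distinction can be made concrete with simulated data. The sketch below is purely illustrative and uses invented synthetic series, not real fuel, airline, sunspot, or bond data: a genuinely causal pair stays correlated, while two independent series that merely share a time trend also look correlated, until the trend is removed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Causal toy model: fuel price shocks directly drive airline returns.
fuel = rng.normal(size=n)
airline = -0.8 * fuel + 0.3 * rng.normal(size=n)

# Spurious pair: two independent series that both drift upward over time.
# The shared trend (a confounder) makes them look related.
t = np.arange(n)
sunspots = 0.01 * t + rng.normal(size=n)
bonds = 0.01 * t + rng.normal(size=n)

corr_causal = np.corrcoef(fuel, airline)[0, 1]
corr_spurious = np.corrcoef(sunspots, bonds)[0, 1]

# First-differencing removes the shared trend; the spurious
# correlation collapses toward zero, exposing the lack of causation.
d_corr_spurious = np.corrcoef(np.diff(sunspots), np.diff(bonds))[0, 1]

print(f"fuel vs airline (causal):       r = {corr_causal:.2f}")
print(f"sunspots vs bonds (spurious):   r = {corr_spurious:.2f}")
print(f"same pair, detrended:           r = {d_corr_spurious:.2f}")
```

Both raw correlations are strong in-sample, which is exactly why naive correlation mining misleads: only the causal relationship survives a change in conditions, here simulated by detrending.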
Applying causal inference in financial services
We are building solutions that apply causal inference concepts to important machine learning problems.
There are three key elements involved in the project:
- Learn causal graphs from existing data
- Design new experiments to learn the graph
- Apply causal graph discovery to financial services use cases
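Constraint-based discovery methods, one common approach to the first element, infer graph structure from conditional-independence patterns in the data. The sketch below uses a synthetic three-variable chain X → Y → Z and a hypothetical `pcorr` helper; it illustrates the general idea only, not the project's actual methods: X and Z are correlated, but become independent once Y is accounted for, which is the statistical signature that lets an algorithm rule out a direct X → Z edge.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic structural causal model: X -> Y -> Z
x = rng.normal(size=n)
y = 0.9 * x + rng.normal(size=n)
z = 0.9 * y + rng.normal(size=n)

def pcorr(a, b, c):
    """Partial correlation of a and b given c,
    computed from the residuals of linear fits on c."""
    res_a = a - np.polyval(np.polyfit(c, a, 1), c)
    res_b = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(res_a, res_b)[0, 1]

marginal = np.corrcoef(x, z)[0, 1]  # nonzero: x and z are dependent
given_y = pcorr(x, z, y)            # near zero: y screens off x from z

print(f"corr(x, z)       = {marginal:.2f}")
print(f"corr(x, z | y)   = {given_y:.2f}")
```

A discovery algorithm runs many such tests across variable pairs and conditioning sets, then keeps only the edges no test can eliminate, yielding a candidate causal graph rather than a bare correlation matrix.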
This will not only yield more robust models and predictions, but will also allow for higher-level causal reasoning and explainability of Refinitiv data.
Collaborating with the MIT-IBM Watson AI Lab
Founded in September 2017, the MIT-IBM Watson AI Lab is an industrial-academic collaboration focused on developing AI tools and technologies for broad scientific and societal impact. Other members include Samsung, Nexplore, Woodside, Boston Scientific and Wells Fargo.
Data is transforming global finance, and as AI techniques evolve, the disruptive potential of data will reach an exciting new phase. The MIT-IBM Watson AI Lab is at the cutting edge of fundamental research in the field.
Refinitiv is working together with MIT-IBM Watson AI Lab researchers to build AI solutions that can differentiate between causation and correlation.
Research areas include explainability, understanding relationships in graph data through applied AI, and using AI to extract more insights from unlabeled data.