Artificial Intelligence (AI) is influencing every aspect of the global economy, including investment banking. Ryan Roser, Director of Data Science and Text Analytics, and Sofia Spencer, Head of Digital for Investment Banking, explore how AI can be used to predict opportunity in M&A.
- Our labs in San Francisco have developed a quantitative prediction of M&A targets by analyzing text, patent and fundamental content.
- The team overcame data quality challenges as they made use of our deals database.
- A recent webinar — the first in our Investment Banking Digitalization series — explores whether AI can help uncover hidden M&A opportunities and targets.
We believe that we’re on the cusp of a technological transformation in which investment banks are making significant investments in technology. These investments can transform workflows to drive efficiency, unlock growth opportunities, automate certain processes and potentially create new business models.
The use of artificial intelligence (AI) tools to support idea generation is one of the themes associated with this technological shift.
I recently sat down with Ryan Roser, Director of Data Science and Text Analytics and Sofia Spencer, Head of Digital for Investment Banking, to discuss this topic and how fintech can help find M&A opportunities.
The following Q&A and accompanying webinar — the first in our Investment Banking Digitalization series — explored this space.
Banking digitalization tools
Sofia Spencer: What are some of the technical milestones that you’ve seen in the last few years?
Ryan Roser: We have experienced a very rapid 10-year period with three pivotal events:
- Amazon Web Services opened up the cloud for thousands of companies in 2006.
- Apache Hadoop reached its 1.0 release in 2011, bringing open-source big data processing into the mainstream.
- TensorFlow, an open-source deep learning framework that leverages GPU processors to rapidly accelerate the development of AI models, was released in 2015.
Sofia Spencer: What do you think it means for investment banks, especially at a time when competition for talent is high?
Ryan Roser: The competition for good talent is definitely intensifying. Investment banks are competing with hedge funds and tech companies for that talent. But there is also competition around the development of new tools, and this also impacts AI.
Deloitte’s recent 2018 M&A Trends survey, for example, revealed that 63 percent of companies surveyed are using new M&A technology tools.
Another key finding is that technology acquisition is the number one driver of M&A pursuits across the companies surveyed.
Sofia Spencer: What are the key investment banking themes in relation to finding opportunity in M&A?
Ryan Roser: The progression from cloud to big data to AI plays out along four different themes:
- The co-mingling of proprietary information.
- Connecting disparate data sets.
- Using predictive tools to help originate new deals and new insights.
- The adoption of automation tools to support better workflows.
Bringing third party and proprietary information together, or connecting disparate data sets, for example, requires big data tools to host all the different types of content.
Identifying opportunity in M&A
Sofia Spencer: Can you tell us about the M&A prediction model that you and the team have been working on?
Ryan Roser: Our Labs team in San Francisco has worked on a new quantitative prediction of M&A targets by analyzing text, patent and fundamental content.
The model provides a daily estimate of the probability of acquisition for more than 25,000 public companies globally. The model breaks down into three main components:
There’s fundamental analysis, where we bring together M&A history from our deals database, pricing data and a company’s financials.
Then we look at insights from patents. There are millions of historical filings that we can analyze and harness to really drive new insights.
And then there’s the text mining component.
Back in 2011, our StarMine research team released the text mining credit risk model.
It was the first of its kind to analyze news, StreetEvents conference call transcripts, filings and research from entitled contributors to create a probability of default score for publicly traded companies.
For this work, we’re really harnessing that technology and then building on top of it with some of the latest deep learning approaches.
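As a rough illustration of how scores from the three components might feed a single acquisition probability, here is a minimal sketch that combines hypothetical fundamental, patent and text scores through a logistic function. The function name, weights and bias are invented for illustration; the actual model is not public.

```python
import math

def acquisition_probability(fundamental_score, patent_score, text_score,
                            weights=(1.0, 1.0, 1.0), bias=-3.0):
    """Combine per-component scores into a probability with a logistic link.

    The weights and bias here are hypothetical placeholders; a real model
    would learn them from labeled M&A history.
    """
    scores = (fundamental_score, patent_score, text_score)
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# A company scoring high on all three components:
p = acquisition_probability(1.2, 0.8, 1.5)
```

The logistic link keeps the output interpretable as a probability between 0 and 1, which is what a daily acquisition estimate requires.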
Big data tools
Ryan Roser: If you look across the board, there are a lot of big data tools that help power this model.
We have QA Direct, a scalable platform for integrating quantitative analysis and investment data, as well as Hadoop and Spark, which manage the patent and text data.
And there’s TensorFlow for deep learning.
This requires technology to make the common mappings between all these different content sets.
We use Intelligent Tagging to identify entities such as companies in text in order to uniquely map these entities into an information model.
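Intelligent Tagging itself exposes a far richer API than this; the toy sketch below, with invented company names and IDs, only illustrates the underlying idea of resolving entity mentions in text to unique identifiers so that different content sets can be joined on a common key.

```python
# Toy lookup table; a production system resolves aliases against a
# curated knowledge base. All names and IDs here are invented.
COMPANY_IDS = {
    "acme corp": "C-1001",
    "acme": "C-1001",
    "globex": "C-2002",
}

def tag_companies(text):
    """Return the unique IDs of known companies mentioned in the text."""
    lowered = text.lower()
    return {cid for alias, cid in COMPANY_IDS.items() if alias in lowered}

ids = tag_companies("Acme Corp is rumoured to be in talks with Globex.")
```

Because both aliases of the first company map to the same ID, the mapping de-duplicates mentions, which is exactly what makes joining news, patents and financials on one identifier possible.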
Sofia Spencer: So once you have your data lined up and you have your tools available, what happens next?
Ryan Roser: This isn’t a process where the magic insights just happen. In fact, we don’t favor a purely automated data mining approach. Predictive analytics, in my opinion, succeeds by modeling real-world explainable and repeatable phenomena. AI is a very powerful tool for predictive analytics, but it needs to be applied in the right way to avoid spurious conclusions.
Sofia Spencer: Is that a challenge of purely relying on the automated data mining approach?
Ryan Roser: A purely automated data mining approach may lead to spurious correlations.
In our view, data science is science. So, we use a scientific method and are hypothesis-driven.
We start with a question, do background research, construct a hypothesis, test it with experiments, then analyze the results and draw conclusions, and repeat.
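A small simulation shows why a purely automated search is dangerous: even purely random features will look predictive if you test enough of them. Everything below is synthetic toy data, and the thresholds are arbitrary.

```python
import random

random.seed(0)  # deterministic toy data

N = 100  # toy universe of 100 companies
# Random binary outcome: roughly 10% of companies get "acquired".
outcomes = [random.random() < 0.1 for _ in range(N)]
base_rate = sum(outcomes) / N

def random_feature():
    """A binary feature with no real relationship to the outcome."""
    return [random.random() < 0.5 for _ in range(N)]

def acquisition_rate(feature):
    """Acquisition rate among companies where the feature is true."""
    flagged = [o for o, f in zip(outcomes, feature) if f]
    return sum(flagged) / len(flagged) if flagged else 0.0

# Mine 500 random features and count how many look at least 50% more
# "predictive" than the base rate -- purely by chance.
spurious = sum(
    1 for _ in range(500) if acquisition_rate(random_feature()) >= 1.5 * base_rate
)
```

Some of the 500 noise features will clear the bar by luck alone, which is why starting from a hypothesis, rather than letting the miner pick winners, guards against spurious conclusions.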
Using quality data
Sofia Spencer: Once you put together your hypothesis and develop the model, you need training data. How important is that data for this type of modeling?
Ryan Roser: In my view, for AI, data is like water. AI requires a lot of data sets to thrive.
One major development in AI techniques, especially with deep learning, is that you can try to overcome data quality problems by making use of massive amounts of data.
In other words, by looking at the text and patent data across millions of patents and documents, we can draw the best conclusions possible.
But time and time again we’ve seen that the easiest way to improve your predictive algorithm is to fundamentally improve the quality of data.
For this project, we developed a deal forecast by labeling our training data to predict the likelihood that a company may become a deal target. We addressed the labeling challenge by working with our deals database and its rich history of M&A events.
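One way such labeling could work (the field names, the 365-day horizon and the data below are all hypothetical; the article does not describe the exact scheme) is to mark a company as a positive training example when a deal announcement follows within a fixed window of the observation date.

```python
from datetime import date, timedelta

# Hypothetical deal history: company ID -> date it was announced
# as an acquisition target. IDs and dates are invented.
deals = {
    "C-1001": date(2018, 6, 15),
}

def label(company_id, as_of, horizon_days=365):
    """1 if the company became a deal target within the horizon, else 0."""
    announced = deals.get(company_id)
    if announced is None:
        return 0
    window_end = as_of + timedelta(days=horizon_days)
    return 1 if as_of <= announced <= window_end else 0
```

Labels built this way give a supervised model clean positive and negative examples, which is where a rich, accurate deals history directly improves the quality of the training data.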
Data you can rely on
Our deals data allows you to monitor deal flow, identify market trends and gain insight into your competitive positioning within any region, asset class or industry vertical with flexible levels of granularity.
Our industry-leading League Table rankings of investment banks and law firms — published by dozens of leading media outlets worldwide — leverage our global news and sourcing capabilities along with our strong relationships with the deal making community to ensure data accuracy and completeness.