What is the future of financial modeling in an AI-powered world? I have been thinking about this a lot lately and wanted to share my thoughts.
Large private equity firms have already created software to automate the standard LBO models they rely on. Blackstone has BX Atlas, for example. Per the Financial Times, it provides the deal team with “a near-instantaneous readout of a deal’s feasibility and whether it merits a more detailed study.”
The last part is what I am most curious about. I don’t doubt that software can build a model so long as the inputs are properly arranged. Excel templates and purpose-built software can both meet that challenge. But can AI provide the detailed study that follows?
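To make the "near-instantaneous readout" idea concrete, here is a minimal sketch of the kind of feasibility screen such software might run: a back-of-envelope MOIC and IRR check against a return hurdle. Every number and the function itself are hypothetical placeholders for illustration; this reflects nothing about BX Atlas or any real tool.

```python
# Toy, single-scenario LBO feasibility screen. All inputs are
# hypothetical placeholders, not real deal figures.

def lbo_readout(ebitda, entry_multiple, exit_multiple,
                debt_pct, ebitda_growth, years, hurdle_irr=0.20):
    """Return (MOIC, IRR, passes_hurdle) for a simplified LBO.

    Assumes debt is repaid in full from exit proceeds and ignores
    interim cash flows, fees, taxes, and amortization.
    """
    entry_ev = ebitda * entry_multiple          # enterprise value at entry
    debt = entry_ev * debt_pct                  # leverage at close
    equity_in = entry_ev - debt                 # sponsor equity check

    exit_ebitda = ebitda * (1 + ebitda_growth) ** years
    exit_ev = exit_ebitda * exit_multiple
    equity_out = exit_ev - debt                 # debt repaid at exit

    moic = equity_out / equity_in
    # With a single cash flow in and out, IRR has a closed form.
    irr = moic ** (1 / years) - 1
    return moic, irr, irr >= hurdle_irr

moic, irr, go = lbo_readout(
    ebitda=100, entry_multiple=10, exit_multiple=10,
    debt_pct=0.6, ebitda_growth=0.08, years=5)
print(f"MOIC {moic:.2f}x, IRR {irr:.1%}, merits deeper study: {go}")
```

The readout is trivially automatable precisely because the inputs are already arranged: the hard part is everything the function ignores, which is where the detailed study begins.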
It helps to understand how these AI models are trained. Alexandr Wang, the founder and CEO of Scale AI—a major supplier of AI training and annotation data—has reportedly become the world’s youngest self-made billionaire. Scale AI outsources its annotation effort for training AI to thousands of remote workers around the world.
As to the number of people doing this work globally, an article in The Verge notes that “A recent Google Research paper gave an order-of-magnitude figure of ‘millions’ with the potential to become ‘billions.’”
These workers categorize the emotions being expressed by people in videos, label offensive social media content, judge how sexy different advertisements are, and even identify images of corn so that automated tractors can learn to harvest it. In many cases, the wages are as little as $1.20 an hour, though workers in the U.S. and those with needed expertise can earn significantly more.
Will that level of talent work for more detailed models? Will the economics work if the talent recruited demands a much higher hourly wage?
In addition to the cost-benefit analysis, finance is also interesting because it’s so tempting to break or bend the rules. I don’t see that changing as the tools and technology available to automate the process of generating a company’s financial data continue to expand.
The challenge is that you cannot fully automate what humans are so heavily incentivized to manipulate. As long as fortunes are tied to financial performance, there will be examples of exaggerated financial performance. With that in mind, how should models be monitored to avoid manipulation?
I’ll be the first to admit that I am loving AI as it increasingly appears in software that I use on a regular basis (especially for content creation). But the massive amount of labor required to develop these tools, and the degree to which they can be influenced, makes me question where they will be most useful.
When I build a financial model from scratch, it rarely resembles anything I have built previously. And I am frequently pulling from multiple data sources and developing new assumptions to make the information seamless. So, at least at the moment, I have a hard time picturing how this will work.
There is so much to figure out here. I plan to continue exploring this topic, and I am posting this video with the intention of connecting with people who share the same interest. The best thing about exploring a new frontier is that very few people know what they are talking about – myself included :). Advantages come from a willingness to jump in (and potentially be wrong, at least initially).