Everyone in finance is looking at AI. Every bank, every broker is trying to figure out how AI can help manage wealth. Not just for the rich, but for everyone.
AI is actually a threat to firms that don’t catch up fast enough. Managing assets requires more than just moving money around.
It’s about choosing the right mix of investments and adjusting it as things change. Right now, human advisers handle that. But could AI do it better? Spoiler: probably not.
But wealth management is expensive, and the high fees put it out of reach for most people. This is one area where AI can help.
AI-powered systems can offer tailored advice at a lower cost, giving access to people who were previously left out because their wealth wasn’t “enough” to justify the price of human advice.
But here’s the catch. Robo-advisers haven’t exactly been popular. Even when AI offers the best mix of stocks, bonds, or funds, making suggestions isn’t enough.
What’s missing? Communication, according to Juan Luis Perez, former Global Head of Research at Morgan Stanley. That’s the real problem AI has to solve.
AI can analyze thousands of financial instruments in seconds. It knows the numbers, past returns, and risks. But understanding people? That’s a different story.
AI can’t capture the personal narratives or the shifts in expectations that define who we are as investors. Because you see, human investing, even at the institutional level, isn’t just about data.
It’s about emotions, decisions to save, spend, or invest, and long-term planning. These things are deeply personal, and even human advisers sometimes struggle to understand them.
So how is a robo-adviser supposed to? It’s no wonder that most clients end up with the same old 60/40 equity-bond portfolio. That’s the default. AI isn’t needed to figure that out.
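To see how low that bar is, here’s a minimal sketch of the kind of logic behind a default 60/40 portfolio: a fixed target plus a simple drift-band rebalance. The 5% band and the dollar figures are illustrative assumptions, not any firm’s actual rules.

```python
# Minimal sketch of the default a robo-adviser falls back on: a static
# 60/40 equity/bond split, rebalanced when either sleeve drifts too far.
# The 5% drift band and the dollar figures are illustrative assumptions.

TARGET = {"equities": 0.60, "bonds": 0.40}
DRIFT_BAND = 0.05  # rebalance only when a weight strays 5+ points

def current_weights(holdings: dict[str, float]) -> dict[str, float]:
    total = sum(holdings.values())
    return {asset: value / total for asset, value in holdings.items()}

def rebalance_trades(holdings: dict[str, float]) -> dict[str, float]:
    """Return the dollar trades needed to restore the target mix."""
    weights = current_weights(holdings)
    if all(abs(weights[a] - TARGET[a]) <= DRIFT_BAND for a in TARGET):
        return {}  # still inside the band: do nothing
    total = sum(holdings.values())
    return {a: TARGET[a] * total - holdings[a] for a in TARGET}

# After a rally, equities have drifted to 67% of a $100k portfolio:
print(rebalance_trades({"equities": 67_000.0, "bonds": 33_000.0}))
# -> {'equities': -7000.0, 'bonds': 7000.0}
```

That’s the whole trick; nothing in it needs machine learning.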
To make real progress, AI needs to be smarter. It needs to understand how advisers actually work, not throw out generic recommendations or push the same products over and over again.
AI needs to learn from interactions with clients. If the AI can’t explain a portfolio in simple terms, no one will ever actually trust it.
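Even the “simple terms” part can start small. Here’s a toy sketch with hypothetical asset names and canned phrasing; a real system would tailor the wording to the client rather than use a fixed template.

```python
# Toy sketch: translating portfolio weights into plain language.
# Asset names and phrasing are illustrative, not any firm's actual copy.

def explain_portfolio(weights: dict[str, float]) -> str:
    parts = [f"{round(w * 100)}% in {asset}" for asset, w in weights.items()]
    return (
        "Your money is split " + ", ".join(parts) + ". "
        "The stock portion is there for long-term growth; "
        "the bond portion cushions the bumps along the way."
    )

print(explain_portfolio({"stocks": 0.60, "bonds": 0.40}))
# -> Your money is split 60% in stocks, 40% in bonds. ...
```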
Asset managers are now at a crossroads. For AI to be truly useful, it must give power to both the adviser and the client.
That means decentralizing the process and letting advisers use AI tools to make better decisions. It’s not about following a centralized plan laid out by some Chief Investment Officer (CIO) who’s trying to push high-margin products.
In fact, decentralizing decisions could complicate things for firms trying to sell those products, and compliance and risk management pose challenges of their own.
The future might see conversations with AI that feel almost human. Large language models (LLMs) and AI agents could change the game by learning from our digital footprints.
These AI systems would have enough context from our lives to predict what we want as things change. Theoretically, this could make wealth management more efficient.
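As a rough sketch of what that context could look like in practice, here is one way a digital footprint might be condensed into a profile and handed to a model. Everything below is hypothetical: the profile fields are assumptions, and complete() is a stub standing in for whichever LLM API a firm actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: condensing a client's digital footprint into
# context for an LLM agent. The profile fields are assumptions, and
# complete() is a stub standing in for a real model call.

@dataclass
class ClientProfile:
    age: int
    goal: str                       # e.g. "retire at 62"
    risk_notes: str                 # distilled from past conversations
    recent_events: list[str] = field(default_factory=list)

def build_context(p: ClientProfile) -> str:
    events = "; ".join(p.recent_events) or "none reported"
    return (
        f"Client, age {p.age}. Goal: {p.goal}. "
        f"Risk attitude: {p.risk_notes}. Recent life events: {events}."
    )

def complete(prompt: str) -> str:
    # Stub: swap in your LLM provider's completion call here.
    return f"[model response to a {len(prompt)}-character prompt]"

def advise(p: ClientProfile, question: str) -> str:
    prompt = (
        "You are a wealth-management assistant. Answer in plain language "
        "and flag anything a human adviser should review.\n\n"
        f"Context: {build_context(p)}\nQuestion: {question}"
    )
    return complete(prompt)

profile = ClientProfile(
    age=41,
    goal="retire at 62",
    risk_notes="uneasy about drawdowns after 2022",
    recent_events=["new child", "switched jobs"],
)
print(advise(profile, "Should I hold more bonds this year?"))
```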
But who’s really going to hand over their most personal information to a machine? The level of trust required is simply enormous.
Though if Silicon Valley keeps pushing AI to new heights, we might soon see robo-agents that can have fluid, real conversations with clients. And when that happens, it would change everything.
Now BlackRock, the largest asset manager on the planet, has already been using AI for years. They’ve employed machine learning and large language models to power their investment strategies.
They even use AI to streamline thematic investing. Their tool, called Thematic Robot, combines AI with human expertise to build equity baskets around emerging investment themes.
It reportedly speeds up the process of finding investment opportunities across different sectors, which means more efficiency and less wasted time.
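BlackRock hasn’t published how Thematic Robot works under the hood, so the sketch below only guesses at the general pattern: score a universe of companies against a theme and surface the top names for a human to vet. The keyword-overlap scoring, tickers, and descriptions are all illustrative; a real system would more likely rely on text embeddings and far richer data.

```python
# Guess at the general shape of theme-based screening: rank companies
# by how well their descriptions match a theme, then keep the top names
# for a human portfolio manager to review. This toy version uses keyword
# overlap; production systems would typically use text embeddings.
# All tickers and descriptions are illustrative.

THEME = {"battery", "storage", "grid", "renewable", "solar"}

UNIVERSE = {
    "AAA": "utility-scale battery storage and grid software",
    "BBB": "regional retail bank with branch network",
    "CCC": "solar panel manufacturing and renewable project development",
}

def theme_score(description: str) -> float:
    """Fraction of theme keywords appearing in the description."""
    words = set(description.lower().split())
    return len(words & THEME) / len(THEME)

def build_basket(universe: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank the universe by theme relevance; humans vet the output."""
    ranked = sorted(universe, key=lambda t: theme_score(universe[t]),
                    reverse=True)
    return ranked[:top_n]

print(build_basket(UNIVERSE))  # -> ['AAA', 'CCC']
```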
But AI is not infallible. Human oversight is still essential, because again, these robots don’t have the judgment or the nuanced understanding of a seasoned portfolio manager like Larry Fink.
If AI makes a mistake, someone needs to catch it. Errors in AI outputs happen, and without human intervention, they could lead to serious consequences. The ideal setup? A mix of human expertise and AI-driven efficiency.