AI in Finance: Quick Interview Prep

Medical Professionals Magazine - 30 min interview

Key themes: AI augments human expertise (not replacement), balanced view (benefits + limitations), medical analogies, mole detection story shows AI-human collaboration
Question 1
Tell us something about your background working in the field of AI and finance. Are you an AI person first and foremost, or is your background financial? What's your view on the connection between these two? And by the way: do you have any connection with health care?

Honestly, it's never been one or the other for me. I'm an Associate Professor at the University of Twente and also teach at Bern Business School, but before that I spent years at Goldman Sachs, Credit Suisse, and other major financial firms where AI and quantitative methods were already transforming everything. I also coordinated a European research network with 400 academics across 51 countries looking at AI in finance, plus work with the ECB and Bank for International Settlements.

As for healthcare, I've got a great story. A few years ago I was frustrated waiting to see a dermatologist about skin moles, so I built my own ML app to analyze them. When I finally saw the dermatologist, their expert judgment was actually better than my model. That taught me what I see in finance too: AI is a powerful assistant, but human expertise matters. The dermatologist integrated context and subtle cues my algorithm couldn't catch. Same in finance – best results come from combining AI power with human judgment.

Question 2
More specific: What's the impact of artificial intelligence on investing and wealth management nowadays?

AI is definitely changing wealth management, but it's more evolution than revolution. The big wins are in processing speed – AI can crunch through mountains of data that would take humans weeks – and in making sophisticated strategies accessible to regular investors, not just the ultra-wealthy. Think of it like having a very fast, very thorough research assistant. It's also great for continuous risk monitoring and automating routine tasks like portfolio rebalancing, which frees up wealth managers to focus on the relationship and strategic advice.
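The rebalancing automation mentioned above can be made concrete with a minimal sketch. This is an illustration, not a production system: the 5% drift band, the asset names, and the holdings are invented assumptions.

```python
# Threshold-based rebalancing sketch: generate buy/sell amounts for any
# asset whose portfolio weight has drifted outside a tolerance band.
# All numbers below are hypothetical.

def rebalance_orders(holdings, prices, targets, band=0.05):
    """Return currency amounts (positive = buy, negative = sell) that
    restore target weights for assets drifted beyond `band`."""
    values = {a: holdings[a] * prices[a] for a in holdings}
    total = sum(values.values())
    orders = {}
    for asset, target in targets.items():
        weight = values[asset] / total
        if abs(weight - target) > band:
            orders[asset] = round(target * total - values[asset], 2)
    return orders

holdings = {"stocks": 70, "bonds": 40}       # units held (hypothetical)
prices = {"stocks": 110.0, "bonds": 95.0}    # current prices
targets = {"stocks": 0.60, "bonds": 0.40}    # target weights
print(rebalance_orders(holdings, prices, targets))
```

Here stocks have drifted to roughly 67% of the portfolio, so the sketch sells 800 of stocks and buys 800 of bonds to restore the 60/40 split; a human advisor would still review the result before execution.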

Question 3
To what extent is AI fundamentally changing the investment process, and what does this mean for risk, returns, and the role of human judgement?

I'd say AI is enhancing the process, not fundamentally changing it. The core questions are still the same. On risk, AI helps identify patterns we'd miss but also introduces new risks – like what if everyone's using the same AI and they all make the same mistake? On returns, the evidence is mixed. Some areas like high-frequency trading show clear AI advantages, but for traditional stock picking the jury's still out.

Human judgment remains essential for things AI just can't do: understanding context, dealing with unprecedented situations like the 2008 crisis or COVID, making ethical decisions about where to invest, and building trust with clients. It's like evidence-based medicine – protocols help, but you still need clinical judgment when a patient presents atypically.

Question 4
How does it actually work? Do computers decide what the best investment choices are at any given moment? What data do they use? What kind of information is involved in the process?

Most systems don't actually decide – they recommend. Think of it like a medical decision support system that flags drug interactions but the doctor makes the call. There are three levels: basic decision support (most common), supervised automation where the system can execute pre-approved actions, and fully autonomous systems (rare, mostly in high-frequency trading). Even those autonomous systems have humans who can override or shut them down.

The data is fascinating – traditional stuff like stock prices and earnings, but also alternative data like satellite images of retail parking lots to gauge consumer traffic, social media sentiment, supply chain data. The AI finds patterns like "when these three indicators rise together, tech stocks typically underperform three months later." But humans still review and make the final call in most cases.
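The kind of rule described above can be sketched in a toy example. The indicator names and the "all rising" condition are invented for illustration; a real system would learn such patterns from data, and the output here is a flag for human review, not a trade.

```python
# Toy "pattern flag": if every (hypothetical) indicator rose over the
# lookback window, emit a recommendation for a human to review.

def flag_signal(indicator_history, lookback=3):
    """indicator_history: dict mapping indicator name -> list of values.
    Returns a recommendation string if every indicator rose strictly
    over the last `lookback` observations, else None."""
    for name, series in indicator_history.items():
        window = series[-lookback:]
        if not all(a < b for a, b in zip(window, window[1:])):
            return None  # at least one indicator did not rise
    return "REVIEW: all indicators rising; model expects tech to underperform"

history = {                       # invented series for illustration
    "rates": [1.0, 1.2, 1.5],
    "inventories": [80, 85, 92],
    "shipping_costs": [200, 210, 230],
}
print(flag_signal(history))
```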

Question 5
This could be particularly relevant in today's market environment, where geopolitical tensions and sometimes unexpected policy shifts, particularly in the United States, can quickly influence sentiment and financial markets.

You've hit on exactly where AI shows both its strength and its limits. On the plus side, AI is incredibly fast – when a tariff announcement drops, it can scan thousands of news sources and identify which sectors are exposed within seconds. It's also great at tracking sentiment and modeling multiple scenarios.

But here's the limitation: AI learns from historical data, and today's geopolitical environment is often unprecedented. A trade war via Twitter wasn't in the training data. It's like COVID in medicine – when a completely new disease emerges, historical knowledge has limits. You need human experts to adapt and reason through situations the algorithm has never seen. The best approach combines AI's speed with human understanding of political context.

Question 6
Is there practical experience already? Has AI shown real added value in the investment process yet? If so, in which fields? Or is it still mainly a promise?

There's absolutely real value in specific areas. Credit risk assessment is probably the clearest win – AI models consistently beat traditional methods at predicting loan defaults. Fraud detection is another success story, and high-frequency trading firms have profited consistently from AI. Robo-advisors are providing institutional-quality portfolio management to regular people at low cost.

The mixed bag is traditional stock picking – some AI funds outperform, others don't. And macroeconomic forecasting is still mostly promise – AI hasn't cracked predicting recessions or major market turns. The pattern: AI works best with lots of data, stable patterns, and rapid feedback. It struggles with rare events and changing environments.

Question 7
Isn't it difficult to measure this value?

Absolutely – it's similar to evaluating medical treatments. How do you know what would have happened without the AI? If a portfolio returns 8%, would it have been 6% or 10% without AI? You need careful controls, like A/B testing identical portfolios with and without AI. There's also selection bias – successful applications get publicized, failures stay quiet.

Measurement is easier for things with clear outcomes and rapid feedback, like did the loan default or was fraud detected. It's much harder for long-term portfolio management where many factors are at play. Healthy skepticism is appropriate – demand rigorous evidence over multiple market cycles, not just one favorable period.
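The A/B-style comparison mentioned above can be sketched as a paired evaluation of two otherwise-identical portfolios. The return series below are invented purely to make the sketch runnable; as the answer notes, a real evaluation would need many periods across multiple market cycles.

```python
# Paired comparison sketch: mean per-period excess return of an
# AI-assisted portfolio over its baseline twin, plus a crude paired
# t-statistic. Data are hypothetical.
import statistics

def paired_comparison(returns_ai, returns_base):
    diffs = [a - b for a, b in zip(returns_ai, returns_base)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    t_stat = mean_diff / (sd / len(diffs) ** 0.5) if sd > 0 else float("inf")
    return mean_diff, t_stat

returns_ai   = [0.021, -0.004, 0.015, 0.008, 0.012]  # hypothetical
returns_base = [0.018, -0.007, 0.011, 0.009, 0.010]  # hypothetical
mean_diff, t = paired_comparison(returns_ai, returns_base)
print(f"mean excess per period: {mean_diff:.4%}, t-statistic: {t:.2f}")
```

Five periods is of course far too few for any statistical claim; the point is only the structure of the test, holding everything constant except the AI.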

Question 8
Which new risks and vulnerabilities does AI introduce in wealth management, such as model risk and the black box problem?

AI introduces some genuinely new risk categories. Model risk is when the AI is simply wrong – it learned patterns that don't actually predict the future. Many models failed spectacularly in 2008 because they'd never seen a crisis like that. The black box problem is that many AI systems can't explain their reasoning, which creates trust issues and makes debugging difficult when things go wrong.

There's also concentration risk – if everyone uses similar AI, they all make the same trades at the same time, which can create market instability like the 2010 Flash Crash. And there's cybersecurity – AI systems can potentially be manipulated by adversarial attacks. We manage these through human oversight, using multiple different models, stress testing, and requiring circuit breakers that halt systems if behavior becomes erratic.
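The circuit-breaker safeguard mentioned above could look something like this minimal sketch. The thresholds and the definition of "erratic" (order volume and price-move size) are invented assumptions; real exchange and firm-level breakers are far more elaborate.

```python
# Circuit-breaker sketch: halt automated trading when recent activity
# looks erratic, and stay halted until a human resets the system.

class CircuitBreaker:
    def __init__(self, max_orders_per_window=100, max_abs_move=0.05):
        self.max_orders = max_orders_per_window  # hypothetical limit
        self.max_abs_move = max_abs_move         # e.g. 5% price swing
        self.halted = False

    def check(self, orders_in_window, recent_price_move):
        """Returns True if trading may continue. Once tripped, the
        breaker stays halted until a human calls reset()."""
        if orders_in_window > self.max_orders or abs(recent_price_move) > self.max_abs_move:
            self.halted = True
        return not self.halted

    def reset(self):
        # Human override, per the point that people can shut systems down
        self.halted = False

cb = CircuitBreaker()
print(cb.check(orders_in_window=20, recent_price_move=0.01))   # normal
print(cb.check(orders_in_window=500, recent_price_move=0.01))  # erratic
```

The design choice worth noting is the latch: the breaker does not quietly resume on the next calm reading, because resuming is exactly the judgment call reserved for a human.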

Question 9
Would investors trust an 'automatic' investment system? I can imagine they want to make the final decisions themselves, especially in volatile markets, and consider AI mainly as a help.

Trust is really context-dependent. Younger tech-savvy investors are pretty comfortable with robo-advisors for routine stuff. High-net-worth individuals generally prefer human advisors who use AI tools. And here's an interesting paradox: people are most skeptical of AI exactly when it might be most valuable – during market crashes when emotions run high.

The winning model is hybrid – AI efficiency combined with human touchpoints. Some robo-advisors now offer access to human advisors for complex questions, and traditional advisors use AI behind the scenes. It's like healthcare moving toward AI-assisted diagnosis where the algorithm helps the doctor but the doctor remains responsible. Your intuition is spot on – investors want AI as help, not replacement.

Question 10
So personal contact between investor and wealth manager would remain indispensable.

Exactly, and I'd argue personal relationships are becoming more valuable, not less. Life events that require financial decisions – retirement, inheritance, health crises – are complex and unique. They need someone who understands your complete situation, not just your numbers. One of the most valuable things advisors do is behavioral coaching – keeping clients from panic selling during crashes or chasing bubbles.

What's changing is how wealth managers spend their time. Less time on routine calculations and paperwork, more time on relationships, complex planning, and judgment calls. It's the "centaur model" from chess – human plus computer beats either alone. The best outcomes come from combining AI's data analysis with human context, creativity, and accountability.

Question 11
This means that human judgement and responsibility will remain the norm for the foreseeable future, and that the combination of AI and professional human decision-making is likely to create the greatest added value for investors.

That's exactly right, and I don't think this is a temporary phase – it's the fundamental model going forward. You can't delegate accountability to an algorithm. You can't sue a neural network. When something goes wrong, a human needs to be responsible. Plus, unprecedented events will keep happening, and AI only learns from history.

The medical parallel is instructive. Studies show AI-assisted physicians outperform both unaided physicians and AI alone. The combination is more powerful because AI catches patterns the doctor might miss, but the doctor provides context and catches AI errors. Finance is heading the same direction. Even in 20-30 years, I expect this hybrid model to persist – the balance may shift but the need for human oversight and judgment will remain.

Question 12
I can imagine the opposite influence is also possible: AI-driven investments affect the markets themselves (e.g. extra demand from AI could push prices higher). Isn't that an additional problem?

This is a really sophisticated question – you're identifying feedback loops. If many AI systems spot the same undervalued stock and buy it, their buying drives the price up and it's no longer undervalued. The prediction becomes self-fulfilling. During the 2010 Flash Crash, automated systems interacted in unexpected ways causing a 9% drop in minutes.

Regulators worry about herding behavior – if everyone uses similar AI, they all exit together during stress, amplifying crashes. We partially manage this through diversity of approaches, circuit breakers, and human overseers who can recognize when feedback loops are becoming dangerous. But you've identified a genuine unsolved challenge. It's one reason purely algorithmic markets would likely be unstable – you need human judgment to spot when things are going sideways.
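The feedback loop in this answer can be illustrated with a toy simulation: identical strategies all buy the same "undervalued" asset, and their combined demand erases the mispricing. The number of bots, the price-impact coefficient, and all prices are invented assumptions.

```python
# Toy herding simulation: every bot buys one unit whenever price is
# below fair value; aggregate demand moves the price up each step,
# making the original "undervalued" signal self-fulfilling.

def simulate_herding(price, fair_value, n_bots=50, impact=0.002, steps=10):
    """Return the price path over `steps` periods. Each unit of demand
    pushes the price up by `impact` (a stylized linear price impact)."""
    path = [price]
    for _ in range(steps):
        buyers = n_bots if price < fair_value else 0
        price = round(price + buyers * impact, 4)
        path.append(price)
    return path

print(simulate_herding(price=9.0, fair_value=10.0))
```

In this stylized run the price marches straight from 9.0 to the shared "fair value" of 10.0 and stops, because every bot uses the same rule; with heterogeneous strategies the path would be far messier, which is the diversity argument in the answer above.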

Question 13
Finally: what do you think wealth management may look like in the next five to ten years?

I think we'll see AI tools become universal – no wealth manager will work without them, just like no doctor practices without access to lab tests and imaging. Sophisticated strategies currently only available to the ultra-wealthy will become accessible to regular people through AI. The market will likely split: automated robo-advisors for straightforward situations at low cost, and high-touch human advisors using AI tools for complex cases at higher cost.

Advisors will increasingly be valued for behavioral coaching, complex problem-solving, and relationships rather than pure investment selection, which becomes commoditized. Portfolio management will shift from quarterly reviews to continuous real-time monitoring. But what won't change: the need for human judgment in unprecedented situations, the importance of trust and accountability, and behavioral challenges – people will still make emotional mistakes and need coaching.

In 5-10 years, the question won't be "AI or human?" any more than we ask "diagnostic tools or doctors?" The winners will be those who master the centaur model – combining algorithmic power with human wisdom.