Interview Preparation: AI in Finance

For Medical Professionals Magazine

Interview Context

Preparation Notes

This document contains all 13 potential questions with suggested talking points, along with the key themes to emphasize in each answer.

Question 1
Tell us something about your background working in the field of AI and finance. Are you an AI person in the first place, or is your background financial? What's your view of the connection between these two? And by the way, do you have any connection with healthcare?

Suggested Answer

That's a great starting question, and it's actually not an either-or situation. My background has always been at the intersection of both AI and finance from the very beginning. I'm currently an Associate Professor at the University of Twente in the Netherlands and also a Professor at Bern Business School in Switzerland, where my work specifically focuses on Finance and AI together.

I spent years working in the financial industry – at firms like Goldman Sachs, Credit Suisse, Man Investments, and Bank of America Merrill Lynch. During that time, I saw firsthand how quantitative methods and increasingly sophisticated algorithms were transforming how markets operate. My academic work has continued that focus, looking at how AI can improve portfolio optimization, credit risk modeling, fraud detection, and even regulatory compliance.

Beyond my industry and academic positions, I've had the opportunity to coordinate large-scale international research efforts in this space. From 2020 to 2024, I chaired a European COST Action that brought together 400 academics across 51 countries to study AI applications in finance. I also coordinate an EU-funded Marie Skłodowska-Curie research network focused on Digital Finance. This work has involved collaborations with major institutions including the European Central Bank, the Bank for International Settlements, ING, Deutsche Börse, and Quoniam Asset Management – giving me a broad perspective on how AI is being applied and evaluated at the highest levels of the financial system.

Healthcare Connection - Personal Story:

Since you asked about healthcare – I actually have a very personal story that illustrates the AI-human expertise dynamic perfectly. A few years ago, I was frustrated by long wait times to see a dermatologist about skin moles. So I did what many researchers might do: I built my own machine learning application to analyze mole images and assess potential risks.

Here's the interesting part: when I finally did see the dermatologist, their expert assessment was ultimately better than my ML model. The dermatologist could integrate context, patient history, subtle visual cues, and years of pattern recognition in ways the algorithm couldn't.

This experience taught me something crucial that applies equally to finance and medicine: AI is a powerful tool that augments human expertise, but it doesn't replace it. The best outcomes come from combining algorithmic power with human judgment, experience, and contextual understanding. Just as a doctor uses diagnostic tools but makes the final clinical decision, financial professionals use AI systems but retain ultimate responsibility for investment decisions.

Key Talking Points

  • Not "AI person" vs "finance person" – intersection has been my focus throughout
  • Industry experience at major financial institutions + academic research
  • Leadership: COST Action (400 academics, 51 countries), EU Marie Curie network, ECB/BIS collaborations
  • Personal healthcare ML story: dermatologist example shows AI augmentation, not replacement
  • This lesson applies equally to finance: human expertise + AI tools = best outcomes

Question 2
More specific: What's the impact of artificial intelligence on investing and wealth management nowadays?

Suggested Answer

AI is having a significant and multifaceted impact on wealth management today, but it's important to understand that this is an evolution, not a revolution. The impact shows up in several key areas:

1. Processing Scale and Speed: AI systems can analyze vast amounts of data – financial reports, market movements, news sentiment, economic indicators – far faster and more comprehensively than human analysts alone. Think of it like medical imaging: a radiologist can review X-rays much faster with AI-assisted detection systems, but the radiologist still makes the diagnosis.

2. Pattern Recognition: Machine learning excels at identifying complex patterns in market behavior, price movements, and risk factors. This is particularly valuable in areas like credit risk assessment – similar to how AI helps identify patterns in medical diagnostics, but with financial data.

3. Personalization at Scale: AI enables wealth managers to provide more customized portfolio recommendations tailored to individual client risk profiles, time horizons, and goals. Previously, this level of personalization was only available to the ultra-wealthy. Now it can be scaled to a broader client base.

4. Risk Management: AI systems can monitor portfolios continuously and identify emerging risks in real-time. They can also help with stress testing – running thousands of "what-if" scenarios to understand how portfolios might perform under different market conditions.

5. Operational Efficiency: Many routine tasks – portfolio rebalancing, compliance checks, reporting – can be automated, freeing wealth managers to focus on relationship building and strategic advice.

Key Talking Points

  • Evolution not revolution – AI augments existing processes
  • Scale and speed: analyzing vast data sets rapidly
  • Personalization at scale (democratizing high-end wealth management)
  • Enhanced risk management and scenario analysis
  • Medical parallel: AI-assisted diagnosis vs. AI replacing doctors

Question 3
To what extent is AI fundamentally changing the investment process, and what does this mean for risk, returns, and the role of human judgment?

Suggested Answer

This is really the crucial question. AI is enhancing the investment process in meaningful ways, but "fundamental change" is too strong a characterization. Let me break this down:

Changes to the Investment Process:

  • Data integration: Traditional investment analysis relied heavily on financial statements and analyst reports. AI now incorporates alternative data sources – satellite imagery of retail parking lots, social media sentiment, supply chain data, credit card transactions. This provides a more complete picture, but the core question remains: "What's this asset worth?"
  • Speed of reaction: AI systems can identify and respond to market movements in milliseconds. This matters most in high-frequency trading, but even for longer-term investing, faster information processing helps.
  • Systematic decision-making: AI removes some emotional biases from investment decisions – fear, greed, herd mentality. This is genuinely valuable.

Impact on Risk:

This is nuanced. AI can help identify and measure existing risks better – finding correlations and patterns humans might miss. However, AI also introduces new risks: model risk (what if the algorithm is wrong?), concentration risk (many firms using similar AI systems could all make the same trades), and the "black box" problem (difficulty understanding why an AI made a particular decision).

Think of it like medical treatment protocols: evidence-based guidelines improve care consistency and reduce errors, but they don't eliminate the need for clinical judgment when a patient presents atypically.

Impact on Returns:

The evidence here is mixed. AI has shown clear value in specific areas – high-frequency trading, credit scoring, fraud detection. For traditional asset management, the jury is still out. Some AI-driven funds have outperformed, others haven't. There's also a paradox: if everyone uses similar AI systems, the advantage disappears. It's like a medical diagnostic tool – it's most valuable when you have it and others don't, but once it's universally adopted, it becomes table stakes rather than an advantage.

The Role of Human Judgment:

This is where the parallels to medicine are strongest. Human judgment remains essential for several reasons:

  • Context and interpretation: AI can identify patterns, but humans understand why those patterns matter and when they might break down
  • Unprecedented situations: AI learns from historical data. In truly novel situations (think 2008 financial crisis, COVID-19 pandemic, recent geopolitical shocks), historical patterns may not apply
  • Ethical and value judgments: Should we invest in this sector? How do we balance returns with ESG considerations? These are human decisions
  • Client relationships: Understanding a client's real risk tolerance, life circumstances, and behavioral tendencies requires human empathy and communication
  • Accountability: Ultimately, someone needs to be responsible for investment decisions – and that needs to be a human, not an algorithm

Key Talking Points

  • Enhancement not replacement of investment process
  • Risk: AI identifies existing risks better but introduces new ones (model risk, concentration, black box)
  • Returns: mixed evidence; advantage disappears if everyone uses same tools
  • Human judgment essential for: context, unprecedented events, ethics, client relationships, accountability
  • Medical parallel: evidence-based protocols improve care but don't replace clinical judgment

Question 4
How does it actually work? Do computers decide what the best investment choices are at any given moment? What data do they use? What kind of information is involved in the process?

Suggested Answer

Let me demystify this a bit. It's helpful to think about different levels of automation, because not all AI in finance operates the same way:

Level 1: Decision Support (Most Common):
The computer doesn't decide – it analyzes and recommends. It's like a medical decision support system that flags potential drug interactions or suggests diagnoses based on symptoms, but the doctor makes the final call. The wealth manager reviews AI recommendations, adds their own judgment, and makes the decision.

Level 2: Supervised Automation (Increasingly Common):
The system can execute certain pre-approved actions automatically – like rebalancing a portfolio when allocations drift, or selling a position if it drops below a certain threshold. But these rules are set by humans, and there are guardrails. Think of it like automated medication dispensing in hospitals: the system follows protocols, but within carefully defined boundaries.
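To make this "Level 2" idea concrete, a drift-band rebalancing rule can be sketched in a few lines of Python. The 60/40 target, the 5% band, and the dollar amounts are all illustrative, human-set parameters, not from any real system:

```python
# Hypothetical sketch of a supervised-automation rule: rebalance only
# when an asset's weight drifts past a human-set band around its target.

def rebalance_orders(holdings, targets, band=0.05):
    """Return {asset: trade amount} restoring target weights,
    but only if some weight has drifted more than `band` from target."""
    total = sum(holdings.values())
    weights = {a: v / total for a, v in holdings.items()}
    drifted = any(abs(weights[a] - targets[a]) > band for a in targets)
    if not drifted:
        return {}  # within the guardrails: do nothing
    # Trade each asset back to its target dollar amount
    return {a: round(targets[a] * total - holdings[a], 2) for a in targets}

# Stocks have rallied to 70% of a 60/40 portfolio, past the 5-point band,
# so the rule proposes selling stocks and buying bonds.
orders = rebalance_orders({"stocks": 70_000, "bonds": 30_000},
                          {"stocks": 0.60, "bonds": 0.40})
```

The key point for the interview: the machine executes, but every number in this rule was chosen by a human, and a small drift produces no trades at all.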

Level 3: Autonomous Systems (Rare, Specialized):
Some hedge funds use fully automated trading systems, particularly in high-frequency trading. But even here, humans design the system, set the parameters, and monitor performance. They can override or shut down the system if needed.

What Data Do These Systems Use?

This is where AI really shines – its ability to integrate diverse data sources:

  • Traditional financial data: Stock prices, trading volumes, company financial statements, earnings reports, interest rates, currency exchange rates
  • Economic indicators: GDP growth, unemployment rates, inflation, manufacturing data
  • News and sentiment: AI can analyze news articles, earnings call transcripts, social media to gauge market sentiment about companies or sectors
  • Alternative data: Satellite imagery (counting cars in retail parking lots), credit card transactions (consumer spending patterns), web traffic, supply chain data, weather patterns (affecting agriculture, energy)
  • Market microstructure: Order flow, bid-ask spreads, trading patterns

The AI systems use machine learning algorithms to find patterns in all this data. For example: "When these three economic indicators rise together, tech stocks typically underperform three months later" or "Companies with this pattern of insider trading activity tend to report earnings surprises."

The Process (Simplified):

  1. Data collection and cleaning (ensuring quality)
  2. Pattern recognition using machine learning models trained on historical data
  3. Generation of signals or recommendations
  4. Risk assessment and portfolio optimization
  5. Human review and decision (for most systems)
  6. Execution and monitoring
  7. Continuous learning and model refinement
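The steps above can be compressed into a toy sketch. The "model" here is a deliberately simple momentum rule standing in for a trained machine learning system, and every threshold is invented for illustration:

```python
# Toy end-to-end sketch of steps 1-5 of the simplified pipeline.

def clean(prices):
    """Step 1: drop missing observations (data cleaning)."""
    return [p for p in prices if p is not None]

def momentum_signal(prices, window=3):
    """Steps 2-3: a stand-in 'pattern' - recent average vs. older average."""
    recent = sum(prices[-window:]) / window
    older = sum(prices[:-window]) / len(prices[:-window])
    return "buy" if recent > older else "sell"

def recommend(prices, max_position=0.05):
    """Step 4: attach a risk limit; step 5: flag for human review."""
    signal = momentum_signal(clean(prices))
    return {"signal": signal, "max_weight": max_position,
            "requires_human_approval": True}

rec = recommend([100, None, 101, 102, 104, 107, 110])
```

Even in this caricature, the output is a recommendation with a risk limit attached and a human-approval flag set, not an executed trade.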

Key Talking Points

  • Three levels: decision support (most common), supervised automation, autonomous (rare)
  • Medical analogy: clinical decision support systems – suggest, don't decide
  • Data sources: traditional financial + economic + sentiment + alternative data
  • Process: collect data → find patterns → generate recommendations → human review → execute
  • Even autonomous systems have human oversight and kill switches

Question 5
This could be particularly relevant in today's market environment, where geopolitical tensions and sometimes unexpected policy shifts, particularly in the United States, can quickly influence sentiment and financial markets.

Suggested Answer

You've identified exactly where AI shows both its strengths and limitations. Current geopolitical volatility – trade tensions, policy unpredictability, shifting international relationships – creates a fascinating test case.

Where AI Helps:

  • Speed of reaction: When a policy announcement happens (a new tariff, a regulatory change, a central bank decision), AI systems can analyze the immediate market reaction, scan thousands of news sources, and identify which sectors and companies are most exposed – all within minutes or seconds. That's genuinely valuable.
  • Sentiment tracking: AI can gauge how market sentiment is shifting in real-time by analyzing news flow, social media, options pricing, and trading patterns. This early warning system helps identify when panic or euphoria is building.
  • Scenario analysis: AI can rapidly model multiple scenarios: "What if tariffs increase by 20%? What if the Fed cuts rates? What if both happen?" This helps prepare for multiple possible futures.
  • Cross-market connections: Geopolitical events ripple through markets in complex ways. AI can identify non-obvious connections – how US-China tensions affect European exporters or emerging market currencies.

Where AI Struggles:

Here's the critical limitation: AI learns from historical data. But today's geopolitical environment is often unprecedented. A trade war via Twitter, pandemic lockdowns, coordinated sanctions at this scale – these aren't in the historical training data.

Think about medical research: clinical trials establish what treatments work based on past data. But when a completely new disease emerges (like COVID-19), that historical knowledge has limits. You need human experts to adapt, reason by analogy, and make judgments in uncertainty.

Similarly in finance: An AI trained on decades of relatively stable geopolitical relations may not correctly interpret today's policy volatility. The patterns it learned may not apply. This is where human expertise becomes crucial – understanding political motivations, historical context, and being able to reason about situations the algorithm has never seen before.

The Combination is Powerful:

The best approach combines AI's speed and pattern recognition with human judgment about context and unprecedented situations. The AI might flag: "Manufacturing stocks are selling off hard, this pattern usually continues for 3 days." But the human analyst adds: "However, this is a policy announcement that might be reversed or negotiated, unlike past events that drove similar patterns. Let's not overreact."

Key Talking Points

  • AI strengths: speed, sentiment tracking, scenario modeling, cross-market connections
  • AI limitations: unprecedented events not in training data, policy unpredictability
  • Medical parallel: clinical trials vs. novel disease – need expert judgment for unprecedented situations
  • Best outcomes: AI rapid analysis + human contextual understanding
  • Geopolitical volatility = perfect test case for AI-human collaboration

Question 6
Is there practical experience already? Has AI shown real added value in the investment process? If yes, in which fields? Or is it still mainly a promise?

Suggested Answer

This is a healthy skepticism, and the answer is: yes, there's real demonstrated value in specific areas, but it's not universal and not a panacea. Let me give you concrete examples:

Proven, Measurable Value:

1. Credit Risk Assessment:
This is probably the clearest success story. AI models for credit scoring and loan default prediction have consistently outperformed traditional methods. They can incorporate many more variables and identify complex patterns in borrower behavior. Major banks and fintech lenders use these systems extensively. The impact is measurable: lower default rates, better pricing of risk.
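If the interviewer wants a flavor of how such a scoring model works, a logistic-style sketch is easy to describe. The coefficients below are invented purely for demonstration and bear no relation to any production model:

```python
# Illustrative logistic-style credit score: a few borrower features are
# combined into a default probability. All weights are hypothetical.
import math

def default_probability(debt_to_income, late_payments, years_employed):
    # Invented coefficients: higher leverage and delinquency raise risk,
    # longer employment history lowers it.
    z = -2.0 + 3.0 * debt_to_income + 0.8 * late_payments - 0.1 * years_employed
    return 1 / (1 + math.exp(-z))  # logistic link maps the score into (0, 1)

low_risk = default_probability(0.20, 0, 10)   # modest debt, clean history
high_risk = default_probability(0.60, 3, 1)   # leveraged, delinquent, new job
```

Real models use hundreds of features and learned (not hand-picked) weights, but the structure is the same: features in, calibrated probability out.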

2. Fraud Detection:
AI excels at identifying anomalous patterns – unusual transactions, suspicious trading behavior, money laundering patterns. False positive rates have dropped significantly while catching more actual fraud. This is similar to AI in medical imaging: detecting anomalies is a natural fit for machine learning.
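A minimal flavor of anomaly-based flagging: score each transaction by how far it sits from the account's typical amount, using a robust (median-based) distance. Real systems use far richer features; the threshold and amounts here are illustrative:

```python
# Toy anomaly detector: flag amounts far from the account's median,
# scaled by the median absolute deviation (robust to the outlier itself).
import statistics

def flag_anomalies(amounts, threshold=5.0):
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    # Assumes mad > 0, i.e. the amounts are not nearly all identical.
    return [a for a in amounts if abs(a - med) / mad > threshold]

# Eight everyday card transactions and one $950 outlier
flagged = flag_anomalies([20, 25, 22, 18, 24, 21, 19, 23, 950])
```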

3. High-Frequency Trading and Market Making:
In very short timeframes (milliseconds to minutes), AI-driven systems have clearly demonstrated value. They can identify tiny pricing inefficiencies and execute trades faster than humans. Major market makers and some hedge funds have profited consistently from these systems.

4. Portfolio Optimization and Rebalancing:
AI can efficiently manage portfolios with many assets, maintaining target allocations while minimizing trading costs and tax impact. This has been particularly valuable for robo-advisors serving retail clients – providing institutional-quality portfolio management at low cost.

5. Regulatory Compliance (RegTech):
AI helps financial institutions monitor trades for regulatory violations, file required reports, and manage compliance obligations. This reduces costs and errors – similar to how AI helps with medical coding and billing.

Mixed Evidence:

6. Active Asset Management (Stock Picking):
Here the picture is less clear. Some AI-driven funds have outperformed, others haven't. There's no consistent evidence that AI provides a persistent advantage in selecting stocks or timing markets over longer horizons. Part of the challenge: if AI finds a profitable pattern, that information gets traded away quickly as others adopt similar approaches.

Still Mostly Promise:

7. Macroeconomic Forecasting:
Despite enormous effort, AI hasn't revolutionized prediction of recessions, inflation, or major market turning points. These events are relatively rare and influenced by complex, changing factors. Similar to long-term weather forecasting or predicting disease outbreaks – difficult even with sophisticated models.

The Pattern:
AI works best where: (1) there's lots of data, (2) patterns are relatively stable, (3) the task is well-defined, (4) feedback is rapid. It struggles where: (1) data is scarce, (2) the environment keeps changing, (3) unprecedented events matter, (4) long time horizons are involved.

Key Talking Points

  • Clear proven value: credit risk, fraud detection, high-frequency trading, portfolio optimization, RegTech
  • Mixed evidence: active stock picking, market timing
  • Still promise: macroeconomic forecasting, predicting major turning points
  • AI succeeds with: lots of data, stable patterns, well-defined tasks, rapid feedback
  • AI struggles with: scarce data, changing environments, unprecedented events, long horizons
  • Not hype, not panacea – demonstrable value in specific domains

Question 7
Isn't it difficult to measure this value?

Suggested Answer

Excellent question – you're absolutely right that measurement is challenging. This is actually very similar to challenges in medical research when evaluating new treatments or diagnostic tools.

The Measurement Challenges:

1. The Counterfactual Problem:
How do you know what would have happened without the AI? If an AI-assisted portfolio returns 8%, would it have returned 6% or 10% without AI? This is like asking whether a patient would have recovered anyway without a particular treatment. You need careful controls.

2. Time Horizon Matters:
An AI system might perform well for months or even years, then fail spectacularly when market conditions change. Short-term success doesn't guarantee long-term value. In medicine, we see similar issues with treatments that seem effective initially but show problems in long-term follow-up studies.

3. Attribution is Complex:
Financial outcomes result from multiple factors – skill, luck, market conditions, risk-taking. Separating AI's contribution from these other factors is difficult. Did the portfolio outperform because of AI or because it took more risk? Or just got lucky?

4. Selection Bias:
Successful AI applications get publicized; failures often remain quiet. This publication bias exists in medical research too – negative results are less likely to be published, creating an overly optimistic picture.

5. The Data Snooping Problem:
If you test 100 different AI models and publish the one that worked best, that success may be random chance rather than genuine predictive power. This is similar to multiple testing problems in clinical trials.

How We Can Measure More Rigorously:

  • A/B testing: Run identical portfolios with and without AI, compare results. Similar to randomized controlled trials in medicine.
  • Out-of-sample testing: Test the AI on data it wasn't trained on, from time periods it hasn't seen. Like testing a diagnostic tool on a new patient population.
  • Specific, measurable objectives: Rather than vague "better performance," measure specific things: "reduced loan defaults by X%," "lowered fraud losses by Y%," "decreased trading costs by Z%"
  • Risk-adjusted returns: Don't just measure returns, but returns relative to risk taken. A system that makes 20% but risks bankruptcy isn't better than one making 10% safely.
  • Long-term track records: Require evidence over multiple market cycles, not just one favorable period.
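Risk adjustment, in particular, is easy to demonstrate concretely. In this sketch with invented monthly return streams, the steadier strategy has the slightly lower average return but the far better Sharpe ratio:

```python
# Annualized Sharpe ratio on two hypothetical monthly return streams.
# All return figures are invented for illustration.
import statistics

def sharpe(monthly_returns, risk_free_monthly=0.0, periods=12):
    """Mean excess return divided by its volatility, annualized."""
    excess = [r - risk_free_monthly for r in monthly_returns]
    return statistics.mean(excess) / statistics.stdev(excess) * periods ** 0.5

steady = [0.008, 0.010, 0.009, 0.011, 0.010, 0.009]      # ~1%/month, low risk
erratic = [0.060, -0.040, 0.055, -0.035, 0.050, -0.030]  # higher swings

# The steadier stream wins decisively on a risk-adjusted basis,
# despite the erratic one having a marginally higher raw mean.
```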

Where Measurement is Easier vs. Harder:

Easier: Credit default prediction (clear outcome: did the loan default?), fraud detection (did fraud occur?), execution quality (were trades executed at favorable prices?)

Harder: Long-term portfolio management (many confounding factors), strategic asset allocation (long feedback cycles), market timing (rare events)

Key Talking Points

  • Yes, measurement is difficult – similar challenges to evaluating medical interventions
  • Key problems: counterfactual, time horizon, attribution, selection bias, data snooping
  • Better measurement: A/B testing, out-of-sample validation, specific objectives, risk adjustment, long track records
  • Easier to measure: clear outcomes with rapid feedback (credit defaults, fraud)
  • Harder to measure: long-term outcomes with multiple confounding factors (portfolio returns)
  • Healthy skepticism is appropriate – demand rigorous evidence

Question 8
Which new risks and vulnerabilities does AI introduce in wealth management, such as model risk and the black box problem?

Suggested Answer

This is crucial – AI isn't just adding capabilities, it's adding new categories of risk that need to be managed. Your medical audience will recognize parallels with risks in medical AI systems.

1. Model Risk:
This is the risk that the AI model is simply wrong – that it has learned patterns that don't actually predict the future, or that it makes systematic errors in certain conditions. In medicine, this would be like a diagnostic algorithm that performs well on the patient population it was trained on but fails on different demographics. Financial markets change over time, so a model trained on past data may not work in future conditions.

Real example: Many quantitative models failed spectacularly in 2008 because they were trained on data from relatively calm market periods and didn't anticipate extreme correlation breakdowns during a crisis.

2. The Black Box Problem:
Many AI systems (especially deep neural networks) are very difficult to interpret. They make predictions, but we can't easily understand why. This creates several issues:

  • Trust: Would you trust a medical diagnosis if the AI couldn't explain its reasoning? Similarly, investors and regulators are uncomfortable with investment decisions they can't understand.
  • Debugging: When the model makes a bad prediction, how do you figure out what went wrong and fix it?
  • Bias detection: Hidden biases (like discriminating based on protected characteristics) may be embedded in the model without anyone realizing it.
  • Regulatory compliance: Regulations often require explainable decision-making. A black box creates compliance challenges.

There's active research in "explainable AI" trying to address this – similar to efforts in medical AI to make diagnostic systems more interpretable.

3. Data Quality and Bias Risk:
AI models are only as good as the data they're trained on. If the training data is biased, incomplete, or unrepresentative, the model will be too. In medicine, we worry about algorithms trained predominantly on one demographic group failing for others. In finance, models trained during bull markets may fail in bear markets, or models may perpetuate historical biases in lending.

4. Concentration and Systemic Risk:
If many market participants use similar AI systems, they may all make the same decisions at the same time, amplifying market movements and creating instability. This is like many doctors following the same flawed protocol – individual errors become systemic. The 2010 "Flash Crash" showed how automated systems can interact in unexpected ways.

5. Cybersecurity Vulnerabilities:
AI systems can be hacked or manipulated. "Adversarial attacks" can fool AI models by subtly altering inputs. Imagine malicious actors manipulating news articles or data feeds to trick AI systems into making bad investment decisions. Similar to concerns about hacking medical devices or electronic health records.

6. Over-Reliance and Deskilling:
If professionals become too dependent on AI systems, they may lose the skills to recognize when the system is wrong. Medical professionals worry about this too – over-reliance on decision support systems may erode clinical judgment. In finance, if a generation of analysts grows up trusting AI recommendations without questioning them, who will catch the errors?

7. Feedback Loops and Self-Fulfilling Prophecies:
AI models can influence the markets they're trying to predict. If many AI systems predict a stock will fall and sell it, that selling pressure makes the stock fall – not because of fundamental value, but because of the AI predictions themselves. This creates unstable feedback loops.

How We Manage These Risks:

  • Human oversight: Keep humans in the loop for critical decisions
  • Diverse models: Use multiple different AI approaches rather than relying on one
  • Stress testing: Test models in extreme scenarios they haven't seen
  • Explainability requirements: Prioritize interpretable models where possible
  • Continuous monitoring: Track model performance and intervene when it degrades
  • Circuit breakers: Automatic safeguards that halt systems if behavior becomes erratic
  • Regulatory oversight: Develop standards and requirements for AI in finance
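Two of these safeguards, continuous monitoring and a circuit breaker, can be sketched together. The window size and hit-rate floor are hypothetical, human-set parameters:

```python
# Sketch of a monitoring circuit breaker: halt trading and escalate to a
# human when the model's recent hit rate falls below a chosen floor.
from collections import deque

class CircuitBreaker:
    def __init__(self, window=50, min_hit_rate=0.45):
        self.outcomes = deque(maxlen=window)  # 1 = prediction was correct
        self.min_hit_rate = min_hit_rate
        self.halted = False

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) == self.outcomes.maxlen:
            hit_rate = sum(self.outcomes) / len(self.outcomes)
            if hit_rate < self.min_hit_rate:
                self.halted = True  # stop trading, hand control to a human

breaker = CircuitBreaker(window=10, min_hit_rate=0.5)
for correct in [True] * 4 + [False] * 6:  # recent accuracy drops to 40%
    breaker.record(correct)
# breaker.halted is now True: the system has taken itself offline
```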

Key Talking Points

  • Model risk: AI may learn wrong patterns or fail in new conditions
  • Black box: can't understand AI reasoning – trust, debugging, bias, compliance issues
  • Data quality/bias: garbage in, garbage out
  • Systemic risk: everyone using same AI creates herd behavior
  • Cybersecurity: AI systems can be manipulated or hacked
  • Over-reliance: erosion of human judgment and expertise
  • Feedback loops: AI predictions influence what they predict
  • Medical parallels: diagnostic AI faces similar challenges
  • Mitigation: human oversight, diverse models, stress testing, explainability, monitoring

Question 9
Would investors trust an 'automatic' investment system? I can imagine they want to make the final decisions themselves, especially in volatile markets, and consider AI mainly as a help.

Suggested Answer

You've touched on a fascinating behavioral and psychological dimension. The research on trust in AI systems shows patterns very similar to trust in medical AI – it's complex and context-dependent.

Current Evidence on Trust:

Segmented by investor type:

  • Younger, tech-savvy investors: Generally more comfortable with automated systems, especially for routine tasks like rebalancing or tax-loss harvesting. Robo-advisors have attracted billions in assets, primarily from this demographic.
  • High-net-worth individuals: Generally prefer human advisors with AI assistance. They want the relationship, the customization, the ability to discuss complex situations. They're comfortable with AI as a tool their advisor uses, but not as a replacement.
  • Retail investors: Mixed – some are drawn to low-cost automation, others are skeptical and prefer human guidance.

Context Matters Enormously:

Trust varies by situation, similar to medical contexts:

  • Routine tasks: High trust. People are comfortable letting AI handle routine portfolio rebalancing, just as they're comfortable with automated medication dispensing for routine prescriptions.
  • Stable markets: Moderate trust. When markets are calm, people are more willing to let systems operate automatically.
  • Volatile markets or major decisions: Low trust. When markets crash or when making life-changing financial decisions, people want human guidance. This parallels medicine: patients may use symptom-checker apps for minor issues but want a real doctor for serious diagnoses or major treatment decisions.

The "Automation Paradox":
Research shows an interesting pattern: people are most skeptical of AI precisely when it might be most valuable. During market crashes, human investors often panic and make emotion-driven mistakes – this is exactly when AI's lack of emotions could be beneficial. But it's also when people are least willing to trust automation.

Similarly in medicine: a patient experiencing frightening symptoms may benefit most from a calm, systematic diagnostic algorithm, but that's exactly when they most want to see a human doctor.

Building Trust:
The financial industry is learning that trust requires:

  • Transparency: Explain what the AI is doing and why (the explainability problem we discussed earlier)
  • Control: Let investors set parameters, override decisions, or adjust automation levels. Like medical patients wanting shared decision-making, not just being told what to do.
  • Track record: Trust builds gradually through consistent, good performance
  • Human backstop: Knowing a human expert is available if needed provides psychological comfort
  • Appropriate framing: Position AI as "assistant" or "tool" rather than "replacement" for human judgment

The Hybrid Model is Winning:
The most successful approaches combine AI efficiency with human touchpoints. Some robo-advisors now offer access to human advisors for complex questions. Traditional advisors use AI tools behind the scenes. This hybrid model – which your question anticipates – seems to match investor preferences.

It's exactly like the direction healthcare is moving: AI-assisted diagnosis where the algorithm helps the doctor, but the doctor remains responsible and patients can discuss concerns with a human professional.

Key Talking Points

  • Trust varies by investor type, context, and task
  • High trust for routine tasks in stable markets; low trust for major decisions in volatility
  • Automation paradox: skepticism highest precisely when AI might be most valuable
  • Building trust requires: transparency, control, track record, human backstop
  • Hybrid model (AI + human) winning in both finance and medicine
  • Medical parallel: AI-assisted diagnosis with physician responsibility
  • Your intuition is correct: investors want AI as help, not replacement
Question 10
This would mean that personal contact between investor and wealth manager remains indispensable.

Suggested Answer

Yes, exactly – and I'd argue this personal relationship isn't just surviving despite AI, it's actually becoming more valuable because of AI. Let me explain why.

Why Personal Contact Remains Indispensable:

1. Complex, Context-Dependent Situations:
Life events that require financial decisions – retirement, inheritance, divorce, career changes, health crises, family obligations – are complex and unique. They require understanding someone's complete situation, their values, their fears, their goals. AI can analyze scenarios and crunch numbers, but it can't fully grasp context the way another human can.

This is exactly like medicine: an algorithm can analyze symptoms and lab values, but a good doctor also considers the patient's living situation, support system, ability to follow treatment plans, and personal values about quality of life versus longevity.

2. Behavioral Coaching:
Research consistently shows that one of the most valuable things financial advisors do is keep clients from making emotional mistakes – panic selling in crashes, chasing performance in bubbles, abandoning strategies prematurely. This behavioral coaching requires human understanding, empathy, and persuasion. An algorithm can't talk a panicked investor off the ledge during a market crash.

3. Trust and Accountability:
People need someone to be responsible. When things go wrong or when difficult decisions need to be made, clients want a person they can talk to, who will explain what happened and take responsibility. You can't build that kind of relationship with an algorithm.

4. Customization and Judgment:
While AI can personalize at scale, truly bespoke solutions for high-net-worth or complex situations still require human creativity and judgment. Tax planning, estate planning, coordinating multiple goals and constraints – these benefit from experienced human insight.

5. Ethical and Value-Based Decisions:
Many investors care about ESG factors, ethical investing, supporting certain causes, or avoiding certain industries. These values-based decisions require discussion and understanding – they're not just optimization problems.

How the Role is Evolving (Not Disappearing):
What's changing is how wealth managers spend their time:

  • Less time on: Routine calculations, paperwork, simple rebalancing, basic compliance tasks → these get automated
  • More time on: Relationship building, complex planning, behavioral coaching, education, communication → these become the core value proposition

This mirrors medicine: technology handles more routine tasks (automated test ordering, flagging drug interactions, tracking vitals), freeing doctors to focus on complex diagnoses, difficult conversations, patient relationships, and judgment calls.

The "Centaur Model":
There's a term from chess that applies here: "centaur" players (human + computer) consistently beat both pure humans and pure computers. The combination is better than either alone. In wealth management, this means:

  • AI handles data analysis, pattern recognition, calculations, monitoring, routine optimization
  • Humans handle relationships, context, judgment, creativity, communication, behavioral coaching, accountability

What This Means for the Industry:
Wealth managers who embrace this model – using AI to enhance their capabilities while focusing on the irreplaceably human elements – will thrive. Those who try to compete with AI on purely computational tasks, or those who resist using AI tools entirely, will struggle.

It's similar to radiology: AI is very good at spotting anomalies in images, but the best outcomes come from AI-assisted radiologists who use these tools to be more efficient and accurate while applying their expertise to complex cases and communicating with patients and referring physicians.

Key Talking Points

  • Personal contact not just surviving but becoming MORE valuable
  • Human essential for: complex context, behavioral coaching, trust, accountability, customization, values-based decisions
  • Role evolving: less routine work, more relationship and judgment
  • "Centaur model": human + AI better than either alone
  • Medical parallel: doctors focusing on complex cases and patient relationships while AI handles routine analysis
  • Winners will be wealth managers who embrace AI tools while excelling at human elements
Question 11
This means that human judgment and responsibility will remain the norm for the foreseeable future, and that the combination of AI and professional human decision-making is likely to create the greatest added value for investors.

Suggested Answer

Yes, that's exactly right – and I'd say this isn't a temporary transitional phase, it's the fundamental model going forward. Let me explain why human judgment and responsibility aren't just remaining, but are actually non-negotiable.

Why Human Judgment Remains Central:

1. Accountability Cannot Be Delegated to Algorithms:
Legally, ethically, and practically, someone needs to be responsible for investment decisions. You can't sue an algorithm. You can't hold a neural network accountable. When things go wrong – and sometimes they will – there needs to be a human who made the decision and can explain the reasoning.

This is identical to medicine: even if an AI suggests a treatment, the physician is ultimately responsible for prescribing it. The doctor can't say "the algorithm told me to do it" if something goes wrong.

2. Novel Situations Require Human Reasoning:
Financial markets will continue to produce unprecedented events. AI learns from historical patterns, but history doesn't repeat exactly. When something truly new happens – the next financial crisis, the next technological disruption, the next geopolitical shock – human reasoning, creativity, and judgment become essential.

Think about COVID-19 in medicine: no algorithm trained on pre-2020 data could have guided the response to a novel pandemic. Human medical experts had to reason by analogy, make decisions with incomplete information, and adapt rapidly. Finance faces similar challenges.

3. Ethical Dimensions Require Human Values:
Investment decisions often involve competing values: returns versus sustainability, growth versus stability, personal gain versus social impact. These are fundamentally human value judgments, not optimization problems with objectively correct answers. AI can inform these decisions, but it can't make them.

4. Trust Requires Human Relationships:
As we discussed, investors – especially when facing uncertainty or major life decisions – need human relationships. This isn't just sentiment; it's practical. Humans can understand and communicate context, build trust over time, adapt to changing circumstances in ways algorithms cannot.

Why the Combination Creates Greatest Value:

The evidence increasingly shows that human+AI outperforms either alone:

  • AI provides: Speed, scale, consistency, pattern recognition, continuous monitoring, freedom from emotional biases (fear, greed)
  • Humans provide: Context, creativity, ethical judgment, accountability, relationship, communication, adaptation to novel situations

These capabilities are complementary, not competing. AI is a powerful tool that makes human experts more effective, not a replacement for human expertise.

The Medical Parallel is Instructive:
Healthcare has grappled with this question longer than finance has. The consensus that's emerging is clear: AI augments medical practice but doesn't replace physicians. The doctor remains responsible, but uses AI tools to be faster, more accurate, more consistent.

Studies show that AI-assisted physicians outperform both unaided physicians and AI alone. The combination is more powerful because:

  • AI catches patterns the physician might miss
  • The physician provides context and catches AI errors
  • The physician communicates with the patient and makes final judgments

Wealth management is heading toward exactly the same model: AI-assisted advisors providing superior outcomes by combining algorithmic power with human judgment.

What "Professional Human Decision-Making" Means Going Forward:
The skill set for financial professionals is evolving. Tomorrow's successful wealth managers need to:

  • Understand AI capabilities and limitations (not necessarily program them, but use them intelligently)
  • Excel at the irreplaceably human skills: empathy, communication, complex judgment, behavioral coaching
  • Know when to trust AI recommendations and when to override them
  • Explain AI-driven insights to clients in accessible language
  • Take responsibility for decisions even when aided by algorithms

This is similar to how modern physicians need both medical knowledge and the ability to interpret diagnostic technologies, communicate with patients, and integrate multiple information sources into clinical judgment.

A Long-Term View:
Even in the long term – 20, 30 years out – I expect this hybrid model to persist. AI will become more sophisticated, but so will markets, so will human expectations, and so will regulatory requirements for accountability. The specific balance may shift (AI handling more complex tasks), but the fundamental need for human oversight, judgment, and responsibility is likely to remain.

Key Talking Points

  • Human judgment isn't temporary transitional phase – it's the fundamental model
  • Accountability cannot be delegated to algorithms (legal, ethical, practical)
  • Novel situations require human reasoning beyond historical patterns
  • Ethical dimensions require human values
  • Human+AI outperforms either alone (complementary capabilities)
  • Medical consensus instructive: AI-assisted physicians outperform both AI alone and unaided physicians
  • Professional skills evolving: understand AI, excel at human elements, know when to override
  • Long-term view: hybrid model likely to persist indefinitely
Question 12
I can imagine the opposite influence is also possible: AI investments themselves affect the markets (e.g. extra demand caused by AI could push prices higher). Isn't that an additional problem?

Suggested Answer

This is a really sophisticated observation – you're identifying what economists call "endogeneity" or feedback loops. Yes, this is a genuine concern, and it creates some fascinating and challenging dynamics.

The Feedback Loop Problem:

Here's the issue: AI systems are both observing and influencing the markets simultaneously. If many AI systems identify an opportunity and act on it, their collective action changes the market, potentially invalidating the original insight.

Simple example: Many AI systems identify that a particular stock is undervalued. They all buy it. Their buying drives the price up. The stock is no longer undervalued – the AI predictions became self-fulfilling.

This is similar to medical screening programs: if you screen a population for a disease and treat those you find, you change the underlying prevalence of the disease in the population, which can affect how future screening performs.
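
To make the crowded-trade dynamic concrete, here is a deliberately simple sketch – the numbers and the linear price-impact model are entirely hypothetical, chosen only to illustrate the mechanism, not to model any real trading system:

```python
# Toy illustration of a self-fulfilling AI trade (all numbers hypothetical).
# Many agents detect the same "undervalued" signal; their combined buying
# pushes the price toward fair value, erasing the edge they all found.

def simulate_crowded_trade(price, fair_value, n_agents, impact_per_agent, steps):
    """Each step, every agent buys while price < fair_value.
    Aggregate buying pressure nudges the price up (linear impact model)."""
    history = [price]
    for _ in range(steps):
        mispricing = fair_value - price
        if mispricing <= 0:  # the edge is gone; agents stop buying
            break
        # Price impact grows with the number of agents acting on the signal
        price += min(mispricing, n_agents * impact_per_agent)
        history.append(price)
    return history

path = simulate_crowded_trade(price=90.0, fair_value=100.0,
                              n_agents=50, impact_per_agent=0.05, steps=20)
print(path)  # price converges toward fair value as crowded buying closes the gap
```

The more agents act on the same signal, the faster the mispricing disappears – the prediction consumes itself.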

Specific Manifestations:

1. Momentum Amplification:
If AI systems detect rising prices and buy based on momentum, their buying amplifies the momentum further. This can create bubbles – prices rising not because of fundamental value but because AI systems are chasing trends. And the reversal can be equally dramatic when the systems all try to exit at once.
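
The amplification mechanism can be sketched in a few lines – again with purely hypothetical parameters. The only thing that matters is the feedback coefficient: below one, an initial shock damps out; at or above one, each wave of trend-chasing buying triggers a larger one:

```python
# Toy momentum-feedback loop (hypothetical parameters). Trend-followers buy
# after the price rises, and their buying pushes the price up further,
# so a small initial shock can be amplified into a much larger move.

def momentum_loop(price, shock, feedback, steps):
    """feedback < 1: the loop damps out; feedback >= 1: it runs away."""
    move = shock
    history = [price]
    for _ in range(steps):
        price += move   # last period's buying moves the price...
        move *= feedback  # ...which triggers proportional new buying
        history.append(price)
    return history

damped = momentum_loop(100.0, shock=1.0, feedback=0.5, steps=10)
runaway = momentum_loop(100.0, shock=1.0, feedback=1.2, steps=10)
print(damped[-1])   # settles just below 102 (geometric series with ratio 0.5)
print(runaway[-1])  # a far larger move: each wave of buying amplifies the last
```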

2. Liquidity Evaporation:
During market stress, if many AI systems are programmed to reduce risk or stop trading under certain conditions, liquidity can vanish suddenly. We saw this in the 2010 "Flash Crash" when automated systems interacted in unexpected ways, causing a 9% market drop in minutes.

3. Crowding and Convergence:
If many institutions use similar AI approaches (same data sources, similar algorithms, trained on same history), they'll make similar trades. This creates "crowding" – everyone trying to execute the same strategy simultaneously. When it works, profit opportunities disappear quickly. When it fails, everyone tries to exit together, amplifying losses.

4. Self-Fulfilling Prophecies:
AI systems predicting market movements can cause those movements. If AI predicts volatility will increase and therefore reduces exposure, that selling creates the volatility it predicted. The prediction influenced the outcome.

5. Breaking Historical Patterns:
Paradoxically, as AI systems learn to exploit historical patterns, those patterns may disappear. If an AI discovers that stocks tend to rise after certain news events, and trades on that pattern, the trading itself changes the pattern. The market adapts, and the AI-discovered edge vanishes.

This is similar to the Hawthorne effect in medical research: the act of observing or measuring something can change the behavior being observed.

Systemic Risk Concerns:

Regulators are increasingly worried about systemic risks from AI in markets:

  • Herding behavior: Synchronized AI responses creating market instability
  • Flash crashes: Algorithms interacting in ways that create extreme volatility
  • Procyclicality: AI amplifying market cycles rather than stabilizing them
  • Opacity: Difficulty understanding why markets moved when multiple AI systems are involved

Is This a Solvable Problem?

Partially, but not entirely. Here's what helps:

  • Diversity of approaches: If different firms use genuinely different AI systems, not all move together. This requires avoiding convergence on similar methods.
  • Circuit breakers and guardrails: Automatic pauses when markets move too fast, limits on algorithm behavior
  • Human oversight: Humans can recognize when AI is contributing to unstable dynamics and intervene
  • Regulatory coordination: Rules designed to prevent dangerous interactions between automated systems
  • Position limits: Restricting how much any one algorithm can influence markets
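
The circuit-breaker idea from the list above can be sketched very simply. The tiered thresholds below are illustrative (they loosely echo the 7%/13%/20% levels used in US equity markets, but this is not an implementation of any actual exchange rule):

```python
# Illustrative tiered circuit-breaker logic (thresholds are for the sketch
# only, not a faithful implementation of any exchange's rules). Trading
# halts escalate as the price falls further below a reference level.

def circuit_breaker(reference_price, current_price,
                    drop_thresholds=(0.07, 0.13, 0.20)):
    """Return the halt level triggered (1, 2, 3) or 0 if trading may continue."""
    drop = (reference_price - current_price) / reference_price
    level = 0
    for i, threshold in enumerate(drop_thresholds, start=1):
        if drop >= threshold:
            level = i
    return level

print(circuit_breaker(100.0, 95.0))  # 0: within normal range
print(circuit_breaker(100.0, 92.0))  # 1: first halt level triggered
print(circuit_breaker(100.0, 79.0))  # 3: severe drop, highest halt level
```

The point of such guardrails is precisely to interrupt the feedback loops described earlier: a forced pause gives humans time to assess whether algorithms are feeding on each other.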

The Deeper Issue – Markets as Complex Adaptive Systems:
Your question touches on something fundamental: financial markets are "complex adaptive systems" where participants learn and adapt to each other. This creates inherent instability and unpredictability. AI doesn't create this problem – markets have always had feedback loops and crowd behavior. But AI can amplify these dynamics and accelerate them.

The medical analogy here might be antibiotic resistance: using antibiotics changes bacterial populations, which adapt, requiring new antibiotics. There's an ongoing evolutionary arms race. Similarly, markets adapt to AI strategies, requiring AI to adapt, in an endless cycle.

Bottom Line:
Yes, this is a real problem. No, we don't have complete solutions. It's one reason why human oversight, regulatory guardrails, and diversity of approaches remain important. It's also why purely algorithmic markets would likely be unstable – you need human judgment to recognize when feedback loops are becoming dangerous.

Key Talking Points

  • Yes, feedback loops are a real concern – AI observes AND influences markets
  • Manifestations: momentum amplification, liquidity evaporation, crowding, self-fulfilling prophecies
  • Systemic risks: herding, flash crashes, procyclicality, opacity
  • Mitigation: diversity of approaches, circuit breakers, human oversight, regulation
  • Medical analogy: screening changes disease prevalence; antibiotic resistance arms race
  • Markets as complex adaptive systems – AI amplifies existing feedback dynamics
  • Sophisticated question identifying genuine unsolved challenge
Question 13
Finally: what do you think wealth management may look like in the next five to ten years?

Suggested Answer

Let me paint a picture of what I see coming, based on current trajectories and emerging capabilities – with the caveat that predicting technology is always uncertain!

The Likely Evolution (5-10 Years):

1. Ubiquitous AI Assistance:
Just as no doctor today would practice without access to lab tests, imaging, and databases, no wealth manager will work without AI tools. These will include:

  • Real-time portfolio analysis and optimization
  • Continuous market monitoring and opportunity identification
  • Automated compliance and regulatory reporting
  • Sophisticated scenario modeling
  • Client behavior prediction and coaching prompts

The difference from today: these tools will be more integrated, more accurate, more interpretable, and more accessible.

2. Democratization of Sophisticated Strategies:
Investment strategies that today are only available to institutional investors or the ultra-wealthy will become accessible to mass affluent clients through AI. Think about how telemedicine has made specialist consultations more accessible – similar dynamics in finance. This includes:

  • Advanced portfolio optimization with multiple constraints
  • Sophisticated tax optimization strategies
  • Dynamic asset allocation responding to changing conditions
  • Customized ESG portfolios matching individual values

3. Bifurcation of the Market:
I expect we'll see two distinct segments:

Mass Market (Lower Complexity):
Primarily automated robo-advisors with optional human support. Very low cost. Good enough for straightforward situations – young professionals accumulating wealth, simple retirement planning. Similar to how routine medical care might increasingly happen via telemedicine and apps.

High-Touch Advisory (Higher Complexity):
Human advisors using sophisticated AI tools for complex situations – high net worth individuals, business owners, complex tax situations, multi-generational planning, behavioral challenges. Higher cost, but justified by the complexity and customization. Similar to how complex medical cases still require specialist physicians.

4. Shift in Advisor Value Proposition:
Wealth managers will increasingly be valued for:

  • Behavioral coaching: Keeping clients disciplined during volatility
  • Comprehensive planning: Integrating investments with tax, estate, insurance, life goals
  • Complex problem solving: Situations requiring creativity and judgment
  • Relationship and communication: Understanding clients deeply, explaining complex situations clearly
  • AI interpretation: Helping clients understand what AI tools are saying and whether to trust them

Pure investment selection and portfolio construction will be increasingly commoditized.

5. Enhanced Personalization:
AI will enable much more granular personalization:

  • Portfolios optimized for each individual's tax situation, not just their risk profile
  • Integration with overall financial life – spending patterns, income volatility, upcoming expenses
  • Behavioral customization – strategies adapted to each client's psychology and decision-making patterns
  • Values alignment – portfolios that genuinely reflect individual priorities beyond simple ESG screens

Think "precision medicine" – treatments tailored to individual genetics and biomarkers – applied to finance.

6. Real-Time, Continuous Management:
Rather than quarterly reviews, portfolios will be monitored and adjusted continuously. Rebalancing, tax-loss harvesting, risk adjustments will happen automatically in response to changing conditions. Similar to continuous glucose monitors in diabetes care versus periodic blood tests.
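
A minimal sketch of what drift-based continuous rebalancing means in practice – the 60/40 target and the 5% drift band are hypothetical examples, not a recommendation:

```python
# Minimal sketch of continuous, drift-based rebalancing (all numbers
# hypothetical). Whenever any asset's weight drifts more than `band`
# from its target, the portfolio is traded back to target weights.

def rebalance_if_drifted(values, targets, band=0.05):
    """values: current market value per asset; targets: target weights
    (summing to 1). Returns (new_values, rebalanced_flag)."""
    total = sum(values)
    weights = [v / total for v in values]
    drifted = any(abs(w - t) > band for w, t in zip(weights, targets))
    if drifted:
        values = [total * t for t in targets]  # trade back to target allocation
    return values, drifted

# 60/40 target; equities have rallied to ~69% of the portfolio
values, did_rebalance = rebalance_if_drifted([69_000, 31_000], [0.60, 0.40])
print(did_rebalance)  # True: drift exceeded the band
print(values)         # holdings restored to the 60/40 target
```

Run continuously rather than quarterly, this kind of check is what turns rebalancing from a scheduled event into a background process.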

7. Better Integration of Alternative Data:
Investment decisions will incorporate much broader information:

  • Climate risk and environmental sustainability data
  • Social and governance metrics
  • Supply chain and operational data
  • Consumer behavior and sentiment
  • Geopolitical risk indicators

This provides a more complete picture of investment risks and opportunities.

8. Enhanced Regulatory Technology (RegTech):
Compliance will be largely automated, with AI continuously monitoring for regulatory violations, conflicts of interest, and suitability issues. This makes wealth management safer and reduces costs.

What Probably WON'T Change:

  • Need for human judgment in complex, novel, or values-based decisions
  • Importance of trust and relationships, especially for major decisions
  • Regulatory requirement for human accountability
  • Behavioral challenges – people will still make emotional mistakes, still need coaching
  • Market unpredictability – AI won't solve the fundamental challenge that the future is uncertain

The Healthcare Parallel – Looking Forward:
The trajectory in wealth management mirrors what we're seeing in healthcare:

  • Routine tasks increasingly automated
  • Sophisticated tools available to practitioners
  • Personalization at scale
  • But human expertise remains central for complex cases, judgment calls, patient relationships

In both fields, technology is empowering professionals to be more effective, extending their reach to serve more people better – not replacing them.

My Prediction:
In 5-10 years, the question won't be "AI or human in wealth management?" any more than we ask "diagnostic tools or doctors in healthcare?" The question will be: "How effectively is this wealth manager using AI tools while providing the human judgment, relationship, and accountability that clients need?"

The winners will be those who master the centaur model – combining algorithmic power with human wisdom.

Key Talking Points

  • Ubiquitous AI assistance – table stakes for all wealth managers
  • Democratization of sophisticated strategies (precision medicine parallel)
  • Market bifurcation: automated mass market vs. high-touch advisory
  • Advisor value proposition shifts to behavioral coaching, complex planning, relationships
  • Real-time continuous management vs. periodic reviews
  • What won't change: need for human judgment, trust, accountability, behavioral coaching
  • Healthcare trajectory mirrors finance: tools empowering professionals, not replacing them
  • Winners will master centaur model: AI tools + human wisdom