
DIALOGUE DISTILLED

Dialogue Private Office's publication on private wealth, family, and the questions that matter. Published periodically, each edition brings considered perspectives designed to inform, provoke reflection, and support confident decision-making.

Less noise. More meaning.

Issue 03 · Spring 2026

AI, Wealth, and the Questions We Are Not Asking

Featuring Katherine Maslova on AI risk and Dion Bailey on AI literacy.

Previous Editions

Issue 02 · October 2025

Considered Conversation

On the value of human conversation in an age of large language models.


Issue 01 · January 2026

AI, Longevity, and the Practical Reality of Planning for Longer Lives

How AI is already embedded in health and wealth planning decisions.

Clarity. Opportunity. Security.
Navigating Your Wealth Ecosystem
Foreword
James Badcock
CEO & Founder
Dialogue Private Office
Beverley Wedderburn
COO & Head of Strategic Innovation
Dialogue Private Office

AI, Wealth, and the Questions We Are Not Asking

There is no shortage of commentary on AI and the wealth management industry. Reports from major institutions tell us that adoption is accelerating, that efficiency gains are real, and that those who move fastest will be best positioned.

We think that framing is missing something important.

This month we held our first Dialogue Circle, bringing together senior professionals to sit with questions that are rarely asked directly, without reaching for easy answers.

Dialogue Circle exists because we believe the most consequential questions are not being asked loudly enough. Not how quickly we can adopt, but what we are actually running, and whether we understand it. Not what AI can do for our process, but who is accountable when it shapes a decision that goes wrong. Not whether AI serves our clients, but whether it reflects the values, cultures, and complexities of the families whose futures depend on getting this right.

In this edition of Dialogue Distilled, we have asked two of the voices who helped shape that evening to go deeper. Dion Bailey writes on literacy, drawing on a rare vantage point across news media, insurance, finance, and healthcare, watching AI land inside organisations that often have little clarity about what their systems are actually trained on. Katherine Maslova writes on risk, bringing the hard-won operational perspective of someone who has built wealth management infrastructure at scale across EMEA and navigated the regulatory complexity that most commentary in this space glosses over.

Accountability, the question of who ultimately owns the decisions that algorithms increasingly inform, runs through both pieces as an unresolved challenge. It is a question we intend to keep asking.

The next Dialogue Circle takes place in June.

James Badcock and Beverley Wedderburn
Dialogue Private Office
Contributor Perspectives
Katherine Maslova
Family office advisor, founding partner of Bourgeois Bohème Fintech
On Risk

AI Risks: The Questions to Ask

Most conversations about AI risk in wealth management focus on the obvious: algorithmic bias, regulatory exposure, and data leaks. These matter. But having built and tested agentic AI infrastructure from the ground up, I want to flag a quieter set of risks that family offices are significantly underestimating.

The first is data residency. A 2025 survey found that fewer than 30% of wealth management firms could identify the precise jurisdiction in which their AI vendor’s model was hosted. When a family office connects client data to an AI tool, most principals have no idea where that data sits or who has contractual access to it. The question every family should be asking every vendor is simple: where is the model hosted, and who can see what? In Europe and the Middle East, that is not a theoretical concern. It is a regulatory and reputational exposure hiding in plain sight.

The second is what I would call governance by assumption. Many offices are using tools quietly integrated into daily workflows with no oversight structure, no accountability framework, and no human override protocol. When something goes wrong, nobody knows who is responsible. And things do go wrong. I have a documented case: one autonomous agent breached an internal corporate chatbot in under two hours, accessing tens of millions of confidential messages through a single unpatched API. Not a sophisticated nation-state attack. Just one AI agent.

The third is the absence of practical boundaries. NVIDIA's internal security team applies what it calls a Rule of Two: an AI agent may access files, reach the internet, or execute code, but it must never hold all three capabilities at once. A simple boundary that most family offices have not thought about at all.
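The Rule of Two described above can be made mechanical rather than left to habit. A minimal sketch in Python, assuming illustrative capability names of our own invention (nothing here reflects NVIDIA's actual tooling or any vendor's API):

```python
# Illustrative sketch: enforce "at most two of three" restricted
# capabilities before an agent is allowed to run.
FILE_ACCESS = "file_access"
INTERNET_ACCESS = "internet_access"
CODE_EXECUTION = "code_execution"

RESTRICTED = {FILE_ACCESS, INTERNET_ACCESS, CODE_EXECUTION}

def violates_rule_of_two(capabilities: set[str]) -> bool:
    """True if the agent would hold all three restricted capabilities at once."""
    return len(RESTRICTED & capabilities) >= 3

# An agent that reads files and browses the web stays within the boundary;
# adding code execution to the same agent crosses it.
assert not violates_rule_of_two({FILE_ACCESS, INTERNET_ACCESS})
assert violates_rule_of_two({FILE_ACCESS, INTERNET_ACCESS, CODE_EXECUTION})
```

The value of writing the rule down, even this crudely, is that it turns an implicit habit into a check a reviewer can point to before a new tool is switched on.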

Before any firm-wide rollout, start small. Use ready-made tools to solve your own pain points first. Understand what access they require, what they touch, and how they behave before bringing in consultants who are often more motivated to build than to protect. The organisations that navigate this well are not those who block everything or chase everything. They are those who learn early by making small, deliberate steps, drawing smart boundaries, and staying curious without losing judgment.

One rule to start with today: never paste private client information into a public AI tool. It sounds obvious. It is not yet standard practice.

Katherine Maslova is an investor and entrepreneur across wealthtech and family office software, advising families and family offices across Europe and MENA. She works at the intersection of human networks and modern family office infrastructure, helping principals navigate complexity with the right people, not just the right tools.

Dion Bailey
Co-founder & Chief Product and Technology Officer, Caliber
On Literacy

Do You Actually Understand What You Have Already Adopted?

I sit across multiple industries simultaneously, and I keep bumping into the same conversation, dressed differently depending on the room. Everyone is talking about AI adoption: boards, advisers, executives. I understand why. But adoption was the conversation of a few years ago. The question that actually matters now is whether you understand what you have already adopted.

Here is what I mean. Open your Gmail. AI is in there. Your CRM, your compliance screening, your portfolio analytics. AI is already in there too. These were not decisions most people made consciously. They came bundled with the product, turned on by default, and now they are informing the work. That is not a criticism. That is just where we are. The real question is whether anyone has actually looked at what these systems are trained on, or what assumptions they are carrying.

Because here is the part that does not get said enough. These are prediction engines. They sound like they understand things. They do not. They have no grasp of context, culture, or consequence. They reflect the data they were trained on, and that data has blind spots baked in.

In 2024, Goldman Sachs faced regulatory penalties over the Apple Card credit algorithm. The system was found to have produced discriminatory outcomes across applicant profiles. Goldman could not clearly explain why, even when the decisions were technically within the rules. The institution did not know what it was running. You cannot challenge something you cannot see.

Scale that into multigenerational wealth decisions across jurisdictions and cultures, and the stakes feel different. These models are overwhelmingly built on Western, English language data. There is documented evidence that the same tool gives materially different answers depending on what language you ask it in. That is not a footnote for a family operating across geographies. That is a structural problem sitting in the middle of your process.

Literacy is not about knowing what AI is. It is about knowing when it is wrong, when it has drifted, when it is confidently telling you something that does not hold.

The system you are trusting most right now was built by someone, trained on something, and designed to see certain things clearly and others not at all. The question worth asking, whether you are a principal or the adviser beside one, is whether you actually know which is which.

Dion Bailey is an entrepreneur and Adjunct Professor who helps businesses grow through context, insight, and collaboration. He is Co-founder and Chief Product and Technology Officer of Caliber, a media company with newsrooms in New York and London building a new era for journalism across more than 20 countries.

The Next Dialogue Circle

The next Dialogue Circle takes place in June.

Get in touch
Dialogue Private Office

We work with families and advisers who value clarity over noise. If you would like to explore whether we might be useful to you, the conversation starts here.

dialogueprivateoffice.com
London
1 Warwick Street, London W1B 5LR
+44 204 604 1416

Geneva
Rue du Port 3, 1204 Geneva, Switzerland

Confidential. For intended recipients only.
© 2026 Dialogue Private Office Ltd
James Badcock

AI, Longevity, and the Practical Reality of Planning for Longer Lives

AI is often framed as a disruptive breakthrough. In the context of longevity planning, what matters most is its practical impact on outcomes, not the attention it attracts. 

AI already works in the background, embedded in diagnostics, dashboards, and monitoring tools. It enables insight without demanding your attention. 
