Participants: Max, Maha
Duration: ~31 min
Video: YouTube
Context
A short working call between Max and Maha; Reiner did not join this one. The meeting was cut short by Maha's next appointment, and they agreed to continue later.
The call picks up two threads from the previous day's session: a fresh lead from a Phoenix Consultancy partner, and a concept Max sketched out the night before around a GDPR-compliant AI tool for coaches.
Phoenix Consultancy lead
Maha opened with news from the day before: a Phoenix Consultancy partner (they're expanding in the Arab world and elsewhere) reached out. They are looking for something like Amy (Coach Hub's AI coach) but cheaper — a way for their leaders to practise difficult conversations before real meetings. Amy is "too expensive" for them.
Their contact already had an intro to the Zurich setup; Maha and Reiner had a 90-minute call with her. She wants a pilot.
Maha and Reiner sent her the lab-for-pilots landing page that morning with an invitation to register for a pilot; each pilot is designed per client.
The framing Maha intends to use: "What was happening with Amy is not what we're doing. Today we can do a lot more with AI — we can create characters to have conversations with — but first we need to know where you are and what you want."
The "regulated AI for coaches" concept
Max walked Maha through the documents he prepared overnight from their previous call. Core problem: European AI regulation is unclear, coaches are scared to use AI on real client data, so they either avoid it (losing the learning) or use it at their own risk without compliance.
Validation approach
Max's proposal: don't build it first. Show the concept as a "we are working on this" prop, see what questions and objections come back. Objections map the real requirements. Then consult an AI-regulation legal specialist — possibly for free, via Ukraine's state AI+blockchain sandbox, which provides free legal support to qualifying projects on EU AI compliance. This sidesteps what would otherwise be a fortune in European legal fees.
Technical shape (sketch, not a build)
- Option A — Zoom integration.
- Option B — separate platform. Max noted from personal experience with a language-learning platform that dedicated platforms tend to lose out to Zoom for practical reasons; sharing materials etc. breaks down.
- Consent flow: explicit digital signature / checkbox on terms before any AI processing of recordings.
- Storage: recordings held in secure storage that does not leak to third parties. AI processing runs on local / closed servers, not public LLM APIs.
Maha's counter-image, brainstormed live: "a little bag" — a personal container issued per user. You talk to your bag, the bag doesn't talk to the outside world. Every so often we plug into the bag and update it with new LLM capability. Max agreed that lines up with the direction — host-it-yourself storage where the user has maximum control, but the product still has a surface to work with.
Why this might actually be a gap
Maha pushed hard on "if this is so simple, why isn't it already done?" — multiple times. Max's quick ChatGPT research surfaced:
- Major coaching platforms (Coach Hub / BetterUp / Sounding Board type) — serious clients, but rated weak on AI compliance.
- AI note-takers / assistants — not compliant by design, just technical solutions.
- Mental health / therapy tools — closer to compliant, but different domain (US HIPAA-style medical-data regulation, not coaching).
- Generic GDPR tools — store things compliantly, but no coaching workflow.
Conclusion: fragmented solutions, no specific coaching-workflow-plus-compliance-plus-AI-analysis tool. Max flagged that this was one ChatGPT query and needs deeper work.
Maha's concern about the big incumbents: even if we build something good, they can copy it. Max agreed — monopoly risk is real, either from a regulator body or from an incumbent platform.
Whether AI is worth the risk at all for a coach
Max argued: regulators everywhere recommend AI as an extension, never as the sole method, and the same applies to coaching supervision. Maha pushed back sharply: "Do we want to encourage psychologists, doctors, coaches to practise every single minute they want, or tell them that everything they do will be held against them?" Max conceded — of course we want them using every tool available.
Maha also flagged the political dimension: ICF could decide AI use isn't allowed in supervision. She called it short-sighted but real. She gave a personal signal: she has stopped some practices because the bureaucratic load of getting signatures and storage right — for her reputation — isn't worth it. Stopping is easier than being compliant. That's the market signal.
Supervision angle
The coaching world has grown two more legs — mentoring and supervision. Maha already sends her mentees off with recordings of their practice clients and tells them "go ask AI how you did, tell it to be an assessor." They don't think of it themselves, but once told, they self-teach. Coaches are passionate spenders on their own development — "coaches don't think of money on coaching products." Guaranteed client base, and specifically scared of using AI on client data right now.
Monetisation sketch
Max floated a low price of entry — EUR 5–10/month or ~EUR 20/year — not as the main revenue line, but as the gateway. Base service: safe recording storage + basic AI processing. Upsell layer: advanced analytics, courses on how to use AI for self-supervision, smarter models for deeper analysis. Recurring base, expansion on top.
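The recurring-base-plus-expansion shape can be put into a back-of-envelope formula. The figures below are purely illustrative (1,000 coaches, EUR 5/month, a 10% upsell attach rate at EUR 100/year are assumptions for the example, not numbers from the call):

```python
def annual_revenue(base_users: int, monthly_fee: float,
                   upsell_rate: float, upsell_annual: float) -> float:
    """Recurring base subscriptions plus expansion (upsell) revenue per year."""
    base = base_users * monthly_fee * 12          # gateway tier, recurring
    expansion = base_users * upsell_rate * upsell_annual  # analytics, courses, etc.
    return base + expansion


# Illustrative: 1,000 coaches at EUR 5/month, 10% buying a EUR 100/year upsell
# → 60,000 + 10,000 = EUR 70,000/year
print(annual_revenue(1000, 5.0, 0.10, 100.0))
```

Even at the low entry price, the model's leverage is in the expansion layer, which is why the base tier is framed as a gateway rather than the main revenue line.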
Maha again: "it sounds so simple and easy — I'm still surprised not many people are doing it." Next step she asked for: check who else is doing something similar, and why nobody has stepped in.
Max's read on why the gap exists: AI regulation is new everywhere, EU went first and is arguably premature, standards aren't established, the field is huge and fragmented. Medical was bigger and got served first. Coaching is smaller, still open.
Business model questions parked
Max framed the stages: problem–solution fit → product–market fit → scale. Right now: problem–solution fit. Before legal/technical investment, confirm people would actually pay. If yes, market size calculation gates how much to spend on legal + tech.
Maha agreed: test market need first.
Close
Maha had a hard stop for her next meeting. Both agreed to continue.
Action items
| Owner | Action |
|---|---|
| Max | Deeper competitive research — is anyone actually building a coaching-workflow-plus-compliance-plus-AI-analysis tool? Dig past the one-query ChatGPT result. |
| Maha | Follow up with the Phoenix Consultancy partner on pilot scoping via the lab page. |
| Max | Explore the Ukraine AI+blockchain sandbox for free EU-compliance legal support. |
| Both | Validate problem–solution fit with a small number of coaches before any build — surface their objections and willingness-to-pay. |
| Both | Reconvene to continue the conversation (cut short by Maha's next call). |