Bringing working AI to your finance function.
What finance leaders from 22 multi-entity companies told us at the Keboola Finance Breakfast — and what it means for any CFO trying to make AI actually work on financial data.
Five things to take away
If you read nothing else.
The question changed
Almost 70% named "Start utilizing AI for financial workflows" as a top-3 priority for the next 6 months. The question is no longer if — it's how.
The accuracy gap
A CFO showed us an AI year-end cash answer that was 30% off — because the data sat in OLAP cubes built for humans, not LLMs. The most common failure mode we heard.
Excel isn't the enemy
"Still mostly Excel-driven" remains the #1 reality. The pain isn't Excel — it's that numbers don't match across ERP, BI, and the board pack.
CFOs are quietly building
Roughly a third of registrants are personally using Claude, Claude Code, or Cursor — bypassing IT entirely. "I used to connect Excel to BI. Now Claude builds it itself."
Working AI > AI
The breakfast theme landed because most attendees had already tried AI on finance data and watched it not work. They came to find out why.
The room
Composition of the audience.
26 finance leaders from 22 multi-entity companies — a mix of long-time Keboola customers, active prospects in evaluation, and finance leaders who came to sense the market. Anonymised by role and industry — never by name or company.
Statistics
The numbers behind the conversation.
The "data quality" number is lower than it should be — because most teams frame it as a symptom, not a root cause. Almost every conversation eventually returned to: "…but the data isn't trustworthy enough yet."
Five themes
What kept coming up — across almost every conversation.
The accuracy gap: when AI confidently lies to your CFO
"I asked Claude what the cash position would be at year end. The data is in cubes. The answer was about 30% off. I'd love to know how to connect the data better — or how to restructure it so AI can actually use it."
CFO, Czech multi-entity media group
That story landed in the room because almost everyone had a version of it. OLAP cubes, BI dashboards, and consolidated packs are built for humans — they assume the reader brings context the data doesn't carry. LLMs don't bring that context. They take what you give them at face value. If your dimensions are inconsistent across granularities, the model will confidently combine them into a wrong answer.
The Keboola take
The missing layer is what we call a semantic layer — a governed, AI-readable description of what each metric means, where it comes from, how it joins, and what its grain is. Without it, every AI tool is a coin toss.
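One way to picture what a single entry in such a semantic layer might contain is a plain structured record. This is a minimal sketch, not Keboola's actual schema; every table, column, and metric name below is illustrative.

```python
# A minimal sketch of one semantic-layer entry: what the metric means,
# where it comes from, its grain, and how it joins. All names are invented.
net_cash_position = {
    "metric": "net_cash_position",
    "definition": "Sum of closing balances across cash and cash-equivalent accounts.",
    "source": "erp.gl_balances",               # where the number comes from
    "grain": ["entity", "currency", "month"],  # the level the data is valid at
    "joins": {
        "entity": "master.entities.entity_id",   # links to the entity master
        "currency": "master.fx_rates.currency",  # links to FX rates
    },
    "aggregation": "sum",  # the only safe way to roll this metric up
    "caveats": "Entity figures must be FX-converted before group roll-up.",
}
```

An AI tool reading this record knows the grain before it aggregates — which is exactly the context an OLAP cube assumes a human already has.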
The lone CFO problem
"I'm the only one on the finance team playing with this."
CFO, multi-entity media group
A pattern more widespread than we expected: the CFO is the only person on the finance team experimenting with AI. Not the CTO. Not a "data team." The CFO themselves, in evenings and weekends, with a personal Claude or Cursor licence. Two failure modes follow: experiments don't compound (each is rebuilt from scratch), and the moment that person changes role, the practice dies.
Recommendation
The second person to learn the tool should be in operations or controlling — not in IT. The person who knows what a "trusted number" looks like is also the person who should verify what AI produces.
"Not yet kissed by AI" — the real starting line
"We are a company not yet kissed by AI. I'd like to change that."
CFO, mid-market Czech retailer
He described his current reporting flow with disarming honesty: data downloaded from the ERP, run through Excel-built converters, copy-pasted into PowerPoint, mailed to the board. He had not tried Claude inside Excel. He called himself "a blank slate." Roughly a third of the companies in the room were at this same starting line. They didn't come for a vendor pitch — they came for a sequence of steps.
The honest sequence (from Home Credit and Creditinfo)
1. Get the data into one place (consolidate across ERPs). 2. Make it trustworthy (define metrics centrally, reconcile entities, set up audit trails). 3. Layer AI on top (agents and copilots that work on governed data, not on dashboard exports).
People want to start at step 3 because it's the visible part. But step 3 only works if 1 and 2 are real. Home Credit's 70% reduction in FP&A reporting time came from doing 1+2 first.
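Step 2's "make it trustworthy" can start as something very small: a check that the same number agrees at entity level and group level. A toy sketch, with invented entities and figures:

```python
# Toy reconciliation check: does entity-level cash roll up to the
# consolidated group figure? Entity names and amounts are invented.
entity_cash = {"CZ": 1_200_000, "SK": 450_000, "DE": 310_000}
group_cash = 1_960_000  # as reported in the consolidated pack

diff = sum(entity_cash.values()) - group_cash
tolerance = 1_000  # acceptable rounding drift

if abs(diff) > tolerance:
    # Fail loudly before any AI tool ever sees the data
    raise ValueError(f"Entity and group cash disagree by {diff}")
```

The point is not the code — it's that "trustworthy" becomes a test that runs, rather than an opinion.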
ERP migrations are the forcing function
"We're moving to Helios. The question is: what do we put on top of it? Agents? Automation? Power BI? I came here to find out what's possible."
Senior finance leader, engineering software distributor
By our count, eight companies in the room are mid-migration on their ERP. Helios → Abra. Abra → SAP S/4 HANA. Old SAP → modern SAP. Custom CRM as the system of record. ERP migrations were the most common reason to attend.
Recommendation
Treat the ERP migration as the chance to add a thin governed layer between the ERP and your reporting / AI tools — not to overload the ERP itself. ERPs are still optimised for transactions, not AI workloads. The governed layer is what makes the AI investment durable through the next migration.
The build-vs-buy collapse
"I used to connect Excel to BI. Now I know Claude can build it itself. So why would I pay six figures for a tool?"
CFO, Central European VC firm overseeing healthcare and tech investments
The bar a finance platform must clear is now much higher. If a platform doesn't deliver governed, AI-ready data with traceability within ~8 weeks, a CFO with a Claude licence will simply build the first useful version themselves — and stay in that local maximum.
The new benchmark
Not "6-month implementation." The benchmark is "faster than the CFO can build it on a weekend." That's why the Keboola FI line of "8 weeks to first value alongside existing ERPs" got the attention it did.
Direct quotes
The room, in its own words.
All quotes are real, anonymised by role and industry — never by name or company.
"We are a company not yet kissed by AI."
"I used to connect Excel to BI. Now I know Claude can build it itself."
"I asked it what cash position would be at year end. It was 30% off."
"Abra and Recap are fine for accounting, but unusable for running the business."
"I'm the only one on the finance team playing with this."
"My biggest problem isn't the tool that connects on top. It's how to guarantee the underlying data is correct."
"I have Claude as a plug-in in Excel and PowerPoint. We've started building a database of our forecast models — the ambition is to use it for automated forecasting."
"We used to fiddle around in Excel for everything. We grew from 20 to 150 people and that stopped working."
"Reporting and results are processed in Excel which is uploaded to the German parent into some online system. We have an accounting programme but plan to replace it next year."
"Looker Studio is too limited. We want Power BI. But we don't want to configure it inside Abra directly."
"Our company has a list of 100 pre-approved AI tools. We pick from the catalogue."
"It currently takes us about a month and a half to post the numbers. The business commentary alone takes two weeks. I'd love AI to write that."
"They are an existing customer, but they only see Keboola as a line item in accounts. They'd like to see what else it can be used for."
"We're growing fast. Our biggest worry is data credibility for reporting systems — the tool on top doesn't matter."
Best practices
What's working — reported by attendees.
Not Keboola talking points. Practices reported by people in the room as something concrete that is working in their organisation.
Connect AI to the data model — not the dashboard.
The CFOs getting reliable answers had connected the model to a data source designed for AI consumption — and given it a dictionary of what each table means. MCP into a governed layer beats MCP into a BI export.
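The "dictionary" part can be as simple as prepending table descriptions to every question the model receives, so it never guesses what a column means. A hedged sketch — the table names and descriptions are illustrative, and real setups would pass this via an MCP resource rather than a raw prompt:

```python
# Sketch: give the model a dictionary of what each table means
# before it sees any data. Table names and descriptions are invented.
table_dictionary = {
    "gl_balances": "Month-end general-ledger balances per entity, in local currency.",
    "fx_rates": "End-of-month FX rates used for group consolidation.",
}

def build_context(question: str) -> str:
    """Prepend the table dictionary so the model works from definitions, not guesses."""
    glossary = "\n".join(f"- {name}: {desc}" for name, desc in table_dictionary.items())
    return f"Table dictionary:\n{glossary}\n\nQuestion: {question}"
```

A question assembled this way arrives with the grain and currency caveats attached, instead of relying on the model to infer them from column names.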
Move document processing first.
At least four companies told us their highest-confidence early use case was document processing — invoices, contracts, supplier paperwork. Mature, ROI-countable in hours saved, failure modes are visible.
Use AI for the commentary, not the number.
The airport finance director described two weeks of monthly time spent writing variance commentary. AI is good at explaining a number when given the underlying data — and unreliable at deciding what the number is. Split the jobs.
Treat AI tooling like security tooling.
The insurance group's "100 pre-approved tools" catalogue sounds restrictive — but the principle is right. Finance data is sensitive. The answer to "which AI" should never be made by a single team in isolation.
Measure latency, not just accuracy.
The right metric is "correct AND delivered within the close window." A 95%-accurate forecast that arrives three days after the board meeting is worth less than 80% on the morning of.
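The combined metric is easy to make explicit. A sketch, with the 80% accuracy floor and five-day close window taken from the example above as assumed thresholds:

```python
# Sketch: a forecast is "useful" only if it is accurate enough AND
# delivered inside the close window. Thresholds are illustrative.
def useful(accuracy: float, delivered_day: int, close_window_end: int = 5) -> bool:
    """Correct AND on time -- neither alone counts."""
    return accuracy >= 0.8 and delivered_day <= close_window_end
```

Under these assumptions, a 95%-accurate forecast arriving on day 8 fails the test, while an 80%-accurate one on day 3 passes it.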
Run AI in shadow mode for one cycle.
Before promoting AI output, run it alongside the human version for one close cycle. Compare. Correct. Then promote. Trust is earned through visible parallel runs.
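The parallel run above reduces to a mechanical comparison. A minimal sketch, with invented variance figures and an arbitrary 0.5-point tolerance:

```python
# Shadow-mode sketch: compare AI-generated variance figures against the
# human close for one cycle. All figures and the tolerance are invented.
human_variances = {"revenue": -0.04, "opex": 0.02, "cash": -0.01}
ai_variances = {"revenue": -0.04, "opex": 0.03, "cash": -0.01}

mismatches = {
    k: (human_variances[k], ai_variances[k])
    for k in human_variances
    if abs(human_variances[k] - ai_variances[k]) > 0.005
}
promote = not mismatches  # only promote after a clean parallel run
```

Here the opex figure disagrees, so the AI output stays in shadow for another cycle — the disagreement itself becomes the correction backlog.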
Pitfalls
What's failing — and what to do instead.
Six anti-patterns we heard repeatedly in the room. Each pairs with a corrective practice you can apply this week.
Ask AI questions your data model can't answer.
If your model splits cash flow across legal entities, currencies, and time grains, an LLM silently combines them wrong. The 30% accuracy story is canonical.
Build a semantic layer first.
Definitions, grain, joins, lineage — written once, used by every AI tool you connect.
Buy a tool to replace a person you haven't understood.
AI doesn't replace controllers. It removes the admin part of a controller's job.
Replace admin with AI; redirect humans to judgment.
If your seniors spend 80% on admin, the question isn't "can AI replace them" — it's "why are seniors doing admin?"
Trust an ERP-native add-on for cross-entity reporting.
"Fine for accounting, unusable for running the business." The ERP is for transactions, not for AI workloads.
Add a thin governed layer next to the ERP.
Live next to the ERP — not inside it. The layer survives the next migration.
Try a shiny tool, abandon it, never ask why.
Tool bolted onto dashboard data, answers looked right but couldn't be audited, trust evaporated. The pattern repeats.
Diagnose the data foundation, not the tool.
The right question was never the tool. It was what the tool was pointing at.
Run a one-person AI practice.
If exactly one person in finance knows how the AI workflows work, you have a single point of failure — not an AI practice.
Pair the CFO with a controller, not with IT.
The second person to learn the tool should be the one who knows what a "trusted number" looks like.
Report through PowerPoint.
ERP → Excel converters → PowerPoint → email to board. Four manual handoffs. Four chances for numbers to drift.
Move boards onto governed live dashboards.
Dashboards the AI can both read from and annotate. Kill the export-paste loop.
Tips & tricks
Ten things to try this week.
- Start the AI conversation with the dictionary, not the model. Write a one-page glossary for your top 5 board metrics: definition, source, grain. The model becomes 10× more useful.
- Pick one close-cycle artefact to automate first. Variance commentary is a strong candidate — high effort, low judgment, easy to audit.
- Connect AI to a governed data layer, not a dashboard or a spreadsheet export. The dashboard is not your data. The export is not your data.
- Run AI suggestions in shadow mode for one close cycle before trusting them. Generate alongside the human commentary. Compare. Correct. Promote.
- Track which AI tools are in use across finance — not just in your seat. Three islands ≠ a practice.
- Use ERP migrations as the moment to insert a governed layer. The exec air-cover and the budget are already there.
- Make data quality a measurable, named project. "Reduce reconciliation effort between entity-level and group-level cash by 50% by Q3" beats "improve data quality."
- For multi-entity groups: reconcile entity definitions before you reconcile data. Half of "data quality" problems are entity-definition problems in disguise.
- Don't replace humans with AI. Replace admin with AI, then redirect humans to judgment.
- If you cannot audit how the AI got the answer, do not ship the answer. Auditability is the deal-breaker, not raw accuracy.
Make your finance data
ready for AI.
Keboola is the governed data foundation that turns multi-entity ERP transactions into a layer your AI can actually trust. Built around the systems you already run — no rip-and-replace, first use case in eight weeks. The themes from the breakfast map one-for-one onto what we're built to solve.
Financial Intelligence Architecture
Five layers that turn fragmented ERP data into a foundation your AI can act on — exactly as it lives on keboola.com/solutions/financial-intelligence.
Source Data Layer
Connect any combination of systems.
ERPs, CRMs, HRIS, POS, billing systems, and Excel files. 700+ pre-built connectors. No system replacement required.

Multi-Entity Harmonization Layer
Raw data becomes finance-grade data.
Every record validated, scored, and reconciled. Your business rules and relationships applied — chart of accounts, close calendars, intercompany eliminations.

Semantic & Business Glossary
The layer your AI can actually read.
Every entity linked. Every account mapped. Every reclassification, exception, and manual adjustment remembered — with full lineage and timestamps. When you acquire a new subsidiary, Keboola already knows how to fold it into the group chart of accounts.

FP&A / Controlling AI Agents
Agents that act on governed data.
With full audit trails, scoped permissions, and human-in-the-loop for critical decisions. The reason the breakfast's "30% accuracy gap" disappears — agents work on a foundation, not a dashboard export.

Consumption Layer
Your team keeps the tools they already use.
Excel, Power BI, board packs, AI agents, ERP write-back. Every figure stays traceable to the source journal entry — from spreadsheet cell back to the original transaction.

How the breakfast themes map to Keboola FI
A side-by-side translation: what we heard in the room → what the platform actually does.
About
Why we host these breakfasts.
Keboola has spent 15 years building enterprise data infrastructure across regulated and complex multi-entity organisations — credit, insurance, retail, consumer finance, and more. The lessons surfaced at this breakfast are the lessons we've been building against the whole time.
The breakfasts exist for one reason: to compound the practice across the room. Most CFOs are working through these problems alone. They shouldn't have to.
Compiled by Martin Lepka, CMO, Keboola — anonymised by design. Direct quotes are real and unedited; only names of individuals and companies have been removed.