EBTHub LAB 01 — Senior Contributors Onboarded
European Business Transformation Hub completes onboarding of the senior practitioner cohort. The first independent enterprise GenAI study built on paid enterprise tools, real anonymized workflows, and senior decision-makers is now in the field.
Why this matters to a C-level reader at a European SME
Most European SMEs are already at what George Westerman calls Level 1 in his GenAI Risk Slope: individual employees using GenAI tools for single tasks across siloed teams. A marketing manager drafting a campaign brief. A finance controller summarizing a board pack. A project lead generating status updates. A salesperson preparing a customer reply.
Level 1 looks low-risk because each task looks small. The hidden problem is that the time spent checking, correcting, and re-checking AI output is rarely measured. Employees who do not have full subject-matter depth in the topic they are checking cannot reliably spot hallucinations — fluent output is not the same as correct output. The cost shows up later: a wrong number in a forecast, a misattributed quote in a press release, a customer commitment that nobody authorized.
Level 1 is where the ROI of GenAI gets quietly destroyed. Lab 01 exists to measure where, why, and what to do about it.
Why a research association — and not a university study
Most public research on enterprise GenAI today has at least one of three structural weaknesses:
→ It is conducted with students, not senior practitioners.
→ It uses free, rate-limited consumer GenAI versions — not the paid enterprise tiers that organizations actually deploy.
→ It runs on synthetic tasks, not real functional tasks with real stakes and quality assumptions.
None of those conditions describes what a CFO, CMO, Head of Sales, or PMO Lead at a European mid-market company is actually working with. A graduate student running a free model on a hypothetical case study cannot tell a Vienna-based mid-cap CFO whether the paid enterprise GenAI tier, on a real cash flow forecast with a real auditor's deadline, saves time or costs it.
Lab 01 closes that gap. The Lab tests paid enterprise GenAI tiers on real, anonymized workflows executed by senior practitioners — Sales, Marketing, Finance, and Project Management leaders with 10+ years of operational responsibility. The methodology is double-blind, pre-registered on the Open Science Framework before any data is collected, and built to academic publication standards while staying directly relevant to enterprise decision-makers.
The Official Kick-Off
On 12 May 2026, at the end of research week 2, EBTHub held the official Lab 01 launch in Vienna. Onboarding of the Spoke Contributor cohort is complete and all four research strands — Sales, Marketing, Finance, Project Management — are now operational.
The Hub Team
The internal team running the 20-week research cycle, with first-name credit on every Lab 01 output:
Pauline — Ruth Pauline Wachter, Research Lead and CEO. Initiator of the Lab 01 research design.
Verena — Verena Mersmann, Research Operations Lead and COO. Partner company coordination and research execution.
Bernd — Bernd Walzer, R&D Quality Lead. Methodological standards across all Lab outputs.
David — David Leroy, PMO and Project Management.
Chris — Christoph Allstadt, Technical Environment Lead. Owner of the double-blind testing environment in which GenAI models are evaluated.
Lara — Lara Marenich, Lab Assistant. Meeting documentation and time-capture integrity.
Sophie — Education Lead. Translation of Lab findings into partner and member learning formats.
The Spoke Contributors
The senior practitioner cohort is joining for the field phase. Each contributor brings a minimum of ten years of operational seniority in their function — and is named on every Lab 01 publication they contribute to.
Sales — Maximilian Schappelwein, Iris Clauss, Leif Jürgensen
Marketing — Nicole Mayr, Markus Puschacher, Leif Jürgensen
Finance — Ferit Yildiz, Jürgen Schneider
Project Management — David Leroy, Bernd Walzer
These nine Spoke Contributors create more than 100 real functional tasks drawn from their own experience, each specified with the time frame for non-AI execution and the quality standards assumed for the task's outcome.
What the Lab will give partner organizations
Three proprietary tools, developed from the research findings, will be made available to partner companies ahead of public release:
→ The GenAI Verification ROI Calculator — a parameterized financial model that tells an organization whether GenAI is actually saving time on a given function and task type, or merely relocating the work into verification. Designed to identify which tasks should be automated, which should be assisted, and which should be left alone; a minimal sketch of the underlying logic follows this list.
→ The Enterprise GenAI Quality Rubric (Q-Matrix) — a standardized grading framework that allows any employee to assess AI output quality consistently, even when they do not have full subject-matter depth in the underlying content. Built to reduce the risk of unrecognized hallucinations being passed downstream; a hypothetical rubric sketch also follows this list.
→ The Trust Infrastructure Diagnostic — a maturity assessment that identifies whether the governance architecture around the tools is ready to scale GenAI reliably, or whether structural gaps will undermine adoption regardless of tool quality.
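To make the calculator's core logic concrete, here is a minimal Python sketch of a net-time-saved model in which verification and correction time is counted against the AI draft. All names, numbers, and the classification threshold are illustrative assumptions, not the Lab's actual model:

```python
from dataclasses import dataclass

@dataclass
class TaskTiming:
    """Timing inputs for one functional task, all in minutes (hypothetical)."""
    baseline_minutes: float      # time for a practitioner working without GenAI
    ai_draft_minutes: float      # time to prompt and generate the AI draft
    verification_minutes: float  # time to check, correct, and re-check the output

def net_minutes_saved(t: TaskTiming) -> float:
    """GenAI saves time only if drafting plus verification beats the baseline."""
    return t.baseline_minutes - (t.ai_draft_minutes + t.verification_minutes)

def classify(t: TaskTiming, automate_threshold: float = 0.5) -> str:
    """Hypothetical three-way recommendation: automate, assist, or leave alone."""
    saved = net_minutes_saved(t)
    if saved <= 0:
        return "leave alone"  # verification relocated the work; no real saving
    if saved / t.baseline_minutes >= automate_threshold:
        return "automate"     # large net saving relative to manual execution
    return "assist"           # modest saving: use GenAI, keep a human in the loop

# Example: a board-pack summary that looks fast but is verification-heavy.
task = TaskTiming(baseline_minutes=45, ai_draft_minutes=5, verification_minutes=50)
print(net_minutes_saved(task))  # -10.0: the apparent time saving is negative
print(classify(task))           # leave alone
```

The negative example is exactly the Level 1 failure mode described above: the draft arrives in minutes, but the unmeasured verification time turns the saving negative.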
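In the same spirit, a weighted rubric can be sketched as a small data structure plus one scoring function. The criteria, weights, and 0-5 scale below are illustrative assumptions, not the actual Q-Matrix:

```python
# Hypothetical rubric: criterion -> weight (weights sum to 1.0).
RUBRIC = {
    "factual_accuracy": 0.4,  # are claims, numbers, and names verifiable?
    "completeness": 0.2,      # does the output cover the full task brief?
    "tone_and_format": 0.2,   # does it match house style and audience?
    "traceability": 0.2,      # are sources and assumptions stated?
}

def score_output(grades: dict[str, int]) -> float:
    """Combine per-criterion grades (0-5 scale) into one weighted score."""
    return sum(RUBRIC[criterion] * grade for criterion, grade in grades.items())

# A grader without deep subject-matter expertise can still grade each
# criterion independently against written anchor descriptions.
print(score_output({"factual_accuracy": 2, "completeness": 4,
                    "tone_and_format": 5, "traceability": 1}))  # ~2.8 of 5
```

A fixed, written rubric of this kind is what lets two different employees grade the same output and land on comparable scores, which is the failure point the Q-Matrix targets.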
Pre-Registration and Open Science
All Lab 01 instruments — the trust survey, the interview protocol, the digital time-capture telemetry, and the embedded multiple case study design — were pre-registered on the Open Science Framework before data collection began.
OSF pre-registration: 10.17605/OSF.IO/TZNF8
Pre-registration means the methodology is locked in writing, public, and timestamped before any data is collected. No retrospective reframing of the analysis. No selective publication. This is the academic standard — and, in our view, the only honest way to do enterprise GenAI research in 2026.
“C-level executives ask us regularly whether the GenAI productivity numbers they read in vendor materials apply to their organization. The honest answer today is: nobody knows yet, because most of the published research has been done with the wrong people, on the wrong tools, on the wrong tasks. Lab 01 is built to give European SMEs a number they can actually trust — and a method they can replicate inside their own four functions.”
What Comes Next
The 20-week research cycle now runs through to November 2026, with no R&D weeks scheduled in August. Interim findings will be shared with partner organizations on an ongoing basis. Final results will be submitted to top-tier Information Systems and Human-Computer Interaction journals — and made openly available on the EBTHub site for partner companies and Members.
Partner company applications remain open for organizations wishing to participate in the study and receive early access to all three tools.