How to Put Numbers on the Table
- Kaisa Vaittinen

You're in a pitch meeting. The methodology has been explained. References shared. The client is nodding along.
Then the CFO asks: "What results can we expect? In euros?"
Many consultants feel the ground shifting beneath their feet.
The numbers tell a stark story. Only a small minority of organizations progress to systematic impact measurement (LinkedIn, 2024). According to vendor reports, up to 94% of executives expect ROI evidence from learning investments, but only about one in ten L&D teams measures it systematically (Continu, 2025).
The problem isn't you – it's the entire industry.
But it also means: a consultant who can answer with numbers stands out from the crowd.
Why testimonials alone aren't enough
Testimonials are valuable. They show that real people experienced real benefits.
But in B2B decision-making, they have their limits.
They're selective: the enthusiastic participant speaks up, the uncertain one stays silent. Buyers know this.
They're subjective: "it changed my perspective" means different things to different people. Comparing vendors becomes difficult.
And they're backward-looking: a testimonial tells what happened to someone else. It doesn't predict what will happen to this buyer.
Numbers address these limitations. Not by replacing testimonials, but by complementing them.
What the buyer really wants to know
When a buyer asks for numbers, they're usually asking one of three questions:
Did something actually change? This is the most fundamental question. Participants said they learned – but did it show up somewhere concrete? In time spent, errors made, process speed?
How much did it change? This shifts the conversation from qualitative to quantitative. Not "improved" but "improved by 15%". Not "faster" but "3 days faster".
Was the investment worthwhile? This is the ROI question. How much did it cost, how much did it return? When does the investment pay for itself?
ROI doesn't tell the whole story
An ROI calculation tells you whether the investment was worthwhile. It doesn't tell you why.
This is a crucial distinction. If you only know the ROI was 300%, you don't know what to do next. You don't know which mechanism produced the result – and therefore you can't strengthen it.
That's why financial metrics need to be accompanied by measures of what you can't see directly on the balance sheet: perceived benefit, competency growth, engagement, behavior change. These are latent variables – things that can't be directly observed but that influence results.
Research has shown that these "soft" factors predict hard outcomes. Psychological safety is associated with team learning behavior and performance (Edmondson, 1999). Google's Project Aristotle research identified psychological safety as the most important predictor of team effectiveness. And according to Gallup, 70% of the variance in team engagement is explained by manager quality.
When you measure both financial results and the mechanisms behind them, you can answer two questions:
For leadership: "The payback period was 4 months."
For developers: "The result occurred because managers' sense of their own capability increased and they started using the practices they learned in their daily work."
| Metric | Type | What does it tell you? |
| --- | --- | --- |
| ROI / Payback period | Financial | Was the investment worthwhile? |
| Turnover / Errors | Operational | Did behavior actually change? |
| Psychological safety | Latent variable | Why did the result occur (root cause)? |
| Perceived competency | Latent variable | Is the change sustainable? |
Where the numbers come from
Demonstrating impact doesn't require complex research designs. It requires planning.
Baseline measurement. Before the training, measure what you want to change. Without a baseline, demonstrating change is impossible.
Follow-up measurement. First follow-up at 3–6 months, with additional measurement if needed to identify delayed effects.
Connecting to organizational data. The observed change is converted to euros using the organization's own figures: hourly rates, turnover costs, error costs.
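The three steps above can be sketched as a small calculation. All names and figures here are illustrative placeholders, not part of any particular methodology:

```python
def change_in_euros(baseline_value, followup_value, unit_cost):
    """Convert an observed change between baseline and follow-up
    into euros using the organization's own unit cost."""
    change = baseline_value - followup_value  # how much the metric improved
    return change * unit_cost                 # priced with the client's figures

# Hypothetical example: errors drop from 12 to 8 per month,
# and the client estimates each error costs them 400 euros.
monthly_saving = change_in_euros(12, 8, 400)
```

The point of the sketch is that the consultant supplies the structure (baseline, follow-up, conversion), while the unit cost comes from the client's own data.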
A concrete example: Leadership development
Let's look at what this looks like in practice.
Financial metrics:
Turnover is one of the most common foundations for ROI calculations because its costs are well documented. According to SHRM, replacing one employee typically costs 6–9 months' salary. Gallup estimates the cost at 50–200% of annual salary depending on the role. Replacing a mid-level employee can cost 125–150% of annual salary.
If leadership development reduces turnover from 18% to 12% in a 100-person organization, that means 6 fewer employees leaving. If the average salary is €50,000 and replacement cost is 100% of salary, the savings are €300,000. If the training cost €30,000, the ROI is (€300,000 − €30,000) / €30,000 = 900%.
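The arithmetic above can be written out as a short function. The figures are the hypothetical ones from the example, not client data:

```python
def turnover_roi(headcount, baseline_rate, new_rate, avg_salary,
                 replacement_cost_pct, program_cost):
    """Return (annual_savings, roi_pct, payback_months) for a turnover reduction."""
    fewer_leavers = headcount * (baseline_rate - new_rate)              # 100 * 0.06 = 6
    annual_savings = fewer_leavers * avg_salary * replacement_cost_pct # 6 * 50,000
    roi_pct = (annual_savings - program_cost) / program_cost * 100     # net benefit / cost
    payback_months = program_cost / annual_savings * 12                # months to break even
    return annual_savings, roi_pct, payback_months

savings, roi, payback = turnover_roi(100, 0.18, 0.12, 50_000, 1.0, 30_000)
```

With these inputs the savings come to about €300,000 and the ROI to 900%, matching the worked example; the payback period naturally varies with each client's own figures.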
Latent variables – where the result came from:
ROI tells you the investment was worthwhile. But why did turnover decrease? This is where latent variables come in:
Managers' perceived leadership competency: 4.2 → 5.8 (on a 1–7 scale)
Team psychological safety: 4.8 → 5.9
Frequency of giving feedback: 45% → 78% (report giving feedback weekly)
Employees' experience of manager support: 5.1 → 6.2
These metrics reveal the mechanism: managers feel more capable of leading, they give more feedback, teams feel safer, and therefore people stay.
How to get participants to commit to follow-up measurements
One of the most common questions is: "How do we get people to respond to a follow-up survey 3–6 months later?"
The answer lies in how the measurement is designed from the start.
Dialogic goal-setting creates ownership. When metrics are developed together with participants, they don't experience measurement as an external requirement but as tracking their own development. In evaluoi.ai's methodology, goals and metrics are defined together with the target group – which is why they're also experienced as meaningful.
Self-efficacy predicts engagement. Bandura's (1977, 1997) self-efficacy theory shows that people commit to tasks where they believe they can succeed. That's why follow-up measurement needs to be short, clear, and accessible, not a 45-minute questionnaire. When a participant feels "this is easy and useful," the threshold for responding drops.
Scaffolding and progressive steps. Vygotsky's (1978) zone of proximal development and scaffolding (a term first applied to pedagogy by Wood, Bruner & Ross, 1976) also apply to measurement: it should be built progressively. Baseline measurement creates the foundation, interim check-ins keep the process active, and follow-up measurement feels like a natural continuation, not a surprising extra task.
Social commitment mechanisms. Research shows that public commitment and peer group support increase follow-up participation. In practice, this means:
Participants share at the start of training what they're aiming for (public commitment)
The team tracks progress together (social support)
Follow-up measurement results are shared with the group in aggregate (meaningful feedback)
Measurement as intervention. Perhaps the most important insight: measurement is not a separate action done after training. It's part of the intervention. When a participant answers the question "How often do you give feedback to your team?", they're simultaneously reflecting on their own behavior. This self-assessment is already learning.
How to get started
Measuring impact doesn't require a massive research project. It requires three things:
A clear goal. What should the training change? Not general "competency" but a concrete phenomenon: reduction in errors, faster processes, decision-making quality, perceived quality of management.
Baseline data. What's the situation before training? If there's no measured baseline, an estimated baseline is better than nothing.
Organizational figures. How much does an hour cost? How much does an error cost? How much does turnover cost? These figures belong to the client, not the consultant.
When these three are in place, numbers emerge naturally.
Dialogical measurement
Impact measurement isn't something the consultant does to the client. It's something done together.
The client knows their organization's figures. The client knows what matters. The client knows what change would be meaningful.
The consultant's role is to bring structure: what to measure, how to measure, how to interpret. But the content emerges through dialogue.
This makes measurement an intervention in itself. When an organization has to define what it wants to achieve and how it would recognize it – that's already a step toward change.
Making the invisible visible – then into euros
A consultant's work changes people and organizations. Much of that change is invisible: competencies, attitudes, behaviors, culture.
Measuring these isn't easy. But it's possible, and it separates the consultant who promises from the consultant who proves.
When you can make the invisible visible and translate it into euros, you have the answer to the CFO's question.
evaluoi.ai makes impact measurement systematic, without requiring consultants to be statisticians. The platform builds metrics based on goals, conducts baseline and follow-up measurements, analyzes change statistically, and produces reports for different audiences.
References
State of measurement:
LinkedIn. (2024). Workplace Learning Report 2024. LinkedIn Learning. [Systematic impact measurement is the exception, not the norm]
Continu. (2025). Measuring Enterprise LMS ROI. [Vendor report: executive ROI expectations vs. L&D team measurement capability]
Turnover costs:
SHRM. Employee Turnover Cost Reports. Society for Human Resource Management. [6–9 months' salary]
Gallup. (2024). The Hidden Cost of Employee Turnover. [50–200% of annual salary]
Latent variables and business outcomes:
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383.
Edmondson, A. C. (2018). The Fearless Organization. Wiley.
Google. Project Aristotle. [Psychological safety #1 predictor of team effectiveness]
Gallup. State of the Global Workplace 2024. [70% of engagement variance explained by manager quality]
Self-efficacy and engagement:
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.
Bandura, A. (1997). Self-efficacy: The exercise of control. W. H. Freeman.
Scaffolding and learner support:
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100.
ROI methodology:
Phillips, J. J., & Phillips, P. P. (2007). Show Me the Money: How to Determine ROI in People, Projects, and Programs. Berrett-Koehler.