The number of organisations using coaching is steadily rising, yet its true value is still not being assessed. The CIPD’s John McGurk shares his practitioner guide to real-world coaching evaluation
I was amazed when a colleague told me that the energy company she works for doesn’t use coaching. After all, it’s now part of normal management practices for most organisations, as a string of Chartered Institute of Personnel and Development (CIPD) surveys have shown.
Coaching and mentoring are powerful, enabling tools for raising performance, aligning people and their goals with the organisation, and embedding learning and skills [1]. Coaching is also a powerful agent for driving cultural change and agility, and organisations often use it in conjunction with organisational development [2].
However, coaching has an Achilles heel. Evaluation is largely neglected and this mustn’t continue, particularly in the current climate when every item and line of expenditure is being acutely scrutinised.
While reported use of coaching is steadily rising, its real value is still not being captured, as we established in our 2010 Learning & Talent Development survey.
Make the link
Well under half of respondents linked coaching outcomes to key performance indicators (KPIs) or used quantitative and mixed-method approaches such as return on expectation (ROE) and return on investment (ROI). Few linked coaching evaluation to performance, only 13 per cent frequently discussed evaluation at management meetings, and only about 20 per cent frequently collected and analysed data on the impact of coaching.
Clear and present danger
In difficult times, anything that cannot prove its value will be increasingly vulnerable. Coaching cannot claim a unique contribution to organisational performance and impact if its practitioners and champions assume its value rather than prove it. We need to build a convincing evaluation narrative, yet many organisations are failing to do this.
We need to move towards a systematic approach based on a thorough review of the coaching process. The CIPD sees evaluation as a cornerstone of effective coaching and we want to assist practitioners in developing best evaluation practice. So what’s getting in the way? Perhaps it’s our focus on delivery.
Delivery focus
Many practitioners think that developing and delivering coaching is what they are there to do. This can lead people to believe that simply introducing coaching is enough. As Jarvis et al pointed out in The Case for Coaching (2006), there is often an assumption that time spent in any learning activity, such as coaching, always has a positive payback. The authors also suggest that evaluation may be avoided because it might uncover negative results that could threaten coaching. Although we know from our surveys that coaching is used primarily for performance management and leadership development, we know less about its impact in those areas.
There are other issues too:
- An overuse of the Kirkpatrick model, even in its augmented versions. Despite valiant attempts by Kirkpatrick and his successors to update the model, many organisations use only its least sophisticated level, based on reactions and anecdote. We need to look for broader and richer approaches to evaluation.
- An obsession with a very narrow view of ROI. This generally subtracts the costs of a coaching assignment from its benefits and expresses the result as a percentage of the costs. Such a figure is meaningless without the context of the coaching and the evaluation of other activities. Phillips and Phillips (2008) provide a much more robust and systematic ROI approach, which is detailed in the report. As evaluation expert Paul Kearns argues, ROI without a baseline is next to useless.
- Evaluation is not a favourite activity of many L&TD practitioners. We explain in the report that the MBTI type ENFP is well over-represented in the coaching and L&TD community, so we have to be more mindful about developing evaluation capability. L&TD practitioners work best with delivery and collaboration on learning issues; getting down to the data may not be their favourite task.
- Our use of the softer data around coaching is not systematic. For example, coaching conversations are a source of rich data about the progress of coaching. With an appropriate and proportionate approach to confidentiality, we can use basic tools to capture the nature of those conversations.
- A lack of systematic collation of the sources of data that could inform coaching. For example, pre-employment psychometric tests, manager reports, 360-degree feedback and employee engagement scores all provide valuable data.
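The crude ROI calculation criticised above, and a baseline-adjusted variant in the spirit of Kearns's point, can be sketched in a few lines. The figures here are purely illustrative assumptions, not survey data:

```python
def simple_roi(benefits: float, costs: float) -> float:
    """Crude ROI: net benefit expressed as a percentage of cost."""
    return (benefits - costs) / costs * 100


def baseline_adjusted_roi(benefits: float, costs: float,
                          baseline_benefits: float) -> float:
    """ROI counting only the benefit above the pre-coaching trend."""
    return (benefits - baseline_benefits - costs) / costs * 100


# Illustrative figures only: an assignment costing 10,000 credited
# with 14,000 of benefit looks like a 40 per cent return.
print(simple_roi(14_000, 10_000))  # 40.0

# But if 5,000 of that benefit would have accrued anyway (the
# baseline), the same assignment actually shows a negative return.
print(baseline_adjusted_roi(14_000, 10_000, 5_000))  # -10.0
```

The contrast between the two figures is the point: without a baseline, the headline percentage tells us nothing about what the coaching itself contributed.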
Knowing and doing
So what should we do? Evaluation starts with delivery and what people want from us.
Stakeholders won’t expect us to produce a spreadsheet with scenario forecasts for coaching and ROI. They are more likely to be convinced if we can tell them how many people are coached, how much we spend on external coaches, the length of assignments and data on impact: perhaps engagement scores before and after coaching, or anonymous 360 feedback on people’s ability to complete projects.
If we are also generous about interventions introduced by other departments and we can apportion some of the effect to coaching, we will have compelling evidence. That’s not happening enough.
In the report we use the “lift conversation” with the finance director to illustrate how poor evaluation can undermine the resources available for coaching in testing times. Judging by our survey findings, roughly seven out of every ten such conversations would not go well.
What next?
The CIPD believes that the best method of evaluation for any L&TD intervention is a holistic one – the Value of Learning approach we set out in our research with Portsmouth University (2007). We should shift from narrow ROI towards ROE. What did we expect the coaching intervention to deliver? Which behaviours or skills do we wish to see? What improvements?
This raises the issue of alignment. Aligning coaching interventions to the goals of the business is key. The basics are simple. Make it relevant, align what you are doing and measure it.
We provide a simplified graphic model of the approach in Figure 2 above. The RAM (relevance, alignment and measurement) approach is useful for all learning and talent interventions and keeps us focused on the outcome, not the process.
Finally, an integrated approach is vital. If, for example, we are unaware of the sponsorship and ownership issues within the organisation, we won’t get a clear view. If we are not conversant with the positioning and purpose of coaching we cannot design evaluation effectively. If we haven’t given a great deal of thought to how coaching is resourced and paid for, including issues like the role of external support and consultancy, the use of internal coaches and the training of line managers, we will not be able to evaluate effectively from the start.
The CIPD has developed the OPRA model (Figure 1), which helps practitioners think about coaching from the point of view of Ownership, Positioning, Resourcing, Procurement, Assessment and Evaluation. This thinking tool can help us to provide an effective space for evaluation.
What the CIPD surveys revealed
- Coaching is not being effectively evaluated (CIPD Learning & Talent Development surveys)
- Only 36 per cent of organisations evaluate coaching (CIPD L&D Survey 2010)
- The minority that do evaluate focus on qualitative assessment, such as reactions, stories and testimony
- Where quantitative evaluation is used, it is often a crude use of return on investment (ROI)
- Evaluation is not being grounded in a capability perspective linking it to the organisation and people plan
- Good practice is out there and a more systematic mindset would deliver a step change in evaluation performance
- A wide range of data can be used for coaching evaluation, but practitioners need to access these data streams
Source: CIPD’s Real World Coaching Evaluation Project (autumn 2010)
Figure 1 The OPRA model
Figure 2 The RAM model
References and further information
- 1 CIPD 2011 Learning & Talent Development survey
- 2 CIPD Sustainable Organisation Performance: What Really Makes the Difference? Shaping the Future final report (January 2011)
- CIPD, Taking the Temperature of Coaching, 2009
- CIPD, Developing Coaching Capability in Organisations, 2008
- CIPD, Learning & Talent Development and Training and Development surveys, 2005–2010
- CIPD, Value of Learning: From Return on Investment to Return on Expectation, 2007
- J Jarvis, D Lane and A Fillery-Travis, The Case for Coaching: Making Evidence-Based Decisions, CIPD, 2006
- P Kearns, How Accurate or Necessary is ROI for Learning and Talent Development? (presentation to the CIPD HRD Conference, 6–7 April 2011)
- J Passmore (ed), Psychometrics in Coaching, London: Kogan Page, 2009
- J Phillips and P Phillips, ROI in Action Case Book, Pfeiffer, 2008
- J Phillips and P Phillips, Show Me the Money: How to Determine ROI in People, Projects and Programs, San Francisco: Berrett Koehler, 2007
- J Pfeffer and R Sutton, The Knowing-Doing Gap, Harvard Business School, 2004
Coaching at Work, Volume 6, Issue 4