What are the implications of using artificial intelligence in coaching and supervision? We need to take its future impact seriously, argue Peter Duffell and Natalia de Estevan Ubeda
In a world where Artificial Intelligence (AI) is grabbing headlines and becoming increasingly present in people’s daily lives, we need to consider seriously the implications of its use in coaching and supervision.
ChatGPT was launched in November 2022. Since then we’ve seen significant media interest in the predicted impact of AI on a range of professional disciplines. From a coaching perspective, there has been an increase in suggestions about how AI could transform the way coaches work. For example, there have been claims that we can now create empathetic coaching chatbots and that in the future we could use an AI to monitor client emotion and analyse coaching sessions in real time.
Indeed, a recent study suggested that ChatGPT is more empathetic and offers “higher quality” responses to patients’ questions than doctors do (Ayers, Poliak & Dredze, 2023). However, suggesting an AI is empathetic means we attribute capabilities of thinking, reflection and concern for others to a machine.
There are a number of challenges to these ideas. First, we have a natural predisposition to attribute human-like cognition and emotions to objects that appear human-like. We anthropomorphise. In a recent online academic publication (Saunders, 2023), US psychologist Gary Marcus suggested we stop treating AI models like people: “…using emotive words like ‘empathy’ for an AI predisposes us to grant it the capabilities of thinking, reflecting and of genuine concern for others – which it doesn’t have.”
Just an algorithm?
It’s widely accepted that human interaction relies on ‘theory of mind’: our capacity to understand other people by ascribing mental states to them. This is something current AI technology is a long way from achieving – if it is achievable at all – not least because our theory of mind is grounded partly in our lived experience. It is also well documented that AIs ‘hallucinate’, confidently generating plausible but false information.
In reality, ChatGPT, rather than being some semi-sentient entity, is a probabilistic algorithm which analyses what you’ve asked and constructs responses based on word probabilities. It has no capacity to understand what it has written, or the motivation of the human asking the question in the first place. Indeed, repeatedly asking ChatGPT the same question will produce different answers, highlighting an absence of understanding. A further issue is that ChatGPT can fabricate references in academic-style writing, referring plausibly to articles that don’t exist. You have to ask whether the technology is currently anywhere near safe enough to be used without causing harm to clients. Indeed, what would a ‘safe’ application look like, and who would define it?
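To make this concrete, here is a deliberately toy Python sketch of the kind of next-word sampling that underpins large language models. The vocabulary and probabilities are invented purely for illustration – a real model learns a distribution over tens of thousands of tokens from vast amounts of text – but the principle is the same: each word is drawn from a probability distribution, which is also why the same prompt can produce different answers on different runs.

```python
import random

# Invented next-word probabilities for the prompt "Coaching is ..."
# A real language model would produce a distribution like this over
# its entire vocabulary at every step.
next_word_probs = {
    "helpful": 0.35,
    "a": 0.25,
    "about": 0.20,
    "not": 0.12,
    "expensive": 0.08,
}

def sample_next_word(probs):
    """Pick the next word at random, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Asking the same 'question' three times can give three different
# continuations, because each answer is a fresh draw from the
# distribution - not a sign of reflection or a changing 'opinion'.
for _ in range(3):
    print("Coaching is", sample_next_word(next_word_probs))
```

Nothing in this process requires the system to understand the sentence it is completing; it is statistics over words, scaled up enormously.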
Benefits and drawbacks
While chatbots have potential benefits – for example, in a medical study patients were more likely to meet physical activity goals when encouraged to do so via a chatbot accessed through an Amazon Alexa device (Hassoon et al, 2021) – there are also serious concerns. We already have the case of the Belgian man who took his own life at the behest of the chatbot he had ‘befriended’ (El Atillah, 2023).
This is disturbing, given that we have yet to see AI penetrate society deeply in the way that many observers suggest it will.
There are also well-known concerns about bias in AI, and mathematicians who work in the field highlight this as a significant issue. In practice there is a trade-off: an algorithm can be tuned for fairness or for accuracy, but generally not fully for both at once. They call for a societal debate to establish the levels of fairness and accuracy we would be prepared to support, noting that an ‘accurate’ AI might unfairly cause an innocent person to be jailed, as has already happened multiple times in the US.
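To give a flavour of the trade-off the mathematicians are pointing to, the sketch below uses invented figures for two groups with different underlying rates of reoffending, loosely in the spirit of well-known results in algorithmic fairness research. Even when the error rates are identical for both groups, a ‘high risk’ flag ends up being right far less often for one group than the other – so at least one common definition of fairness has to give way, and someone has to decide which.

```python
# Toy illustration (invented numbers) of why several fairness criteria
# cannot all hold at once when two groups have different base rates.

def flag_stats(total, reoffenders, miss_rate=0.2, false_alarm_rate=0.2):
    """Apply the same error rates to a group; return flag count and precision."""
    true_pos = reoffenders * (1 - miss_rate)               # correctly flagged
    false_pos = (total - reoffenders) * false_alarm_rate   # wrongly flagged
    flagged = true_pos + false_pos
    precision = true_pos / flagged   # how often a 'high risk' flag is right
    return flagged, precision

# Identical error rates applied to both groups...
for name, total, reoffenders in [("Group A", 100, 50), ("Group B", 100, 10)]:
    flagged, precision = flag_stats(total, reoffenders)
    print(f"{name}: {flagged:.0f} people flagged; "
          f"a flag is correct {precision:.0%} of the time")

# ...yet a flag is right about 80% of the time in Group A and only about
# 31% of the time in Group B. Equalising that instead would force the
# error rates apart - hence the call for a societal debate on trade-offs.
```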
Do your research
We also have the challenge of transparency around the technology companies that create AIs (Big Tech). Much of what happens between the human and the AI is treated as a ‘black box’ that cannot be explained, for ‘commercial reasons’.
For example, Google reCAPTCHA is often used to verify website logins: you are shown a grid of nine images and have to select the squares that contain a particular object, such as a bicycle. In the past, Google has used people’s responses to train its image-recognition AIs – were you aware that your answers were going to be used in this way? Regulation is being planned to address this type of issue (The White House & EU, 2023), but critics suggest the proposed laws are too friendly to Big Tech.
To date, coaching literature has had very little to say about AI, yet we seem to be developing an ‘embrace the technology’ attitude. In contrast, there is significant research in the AI ethics space (Russell, 2019, gives a good general overview), and in medicine and therapy.
Coaches seem to be reluctant to look outside their field, in contrast with mainstream academia, which is now emphasising interdisciplinary work. For example, the Oxford Institute for Ethics in AI has philosophers, mathematicians and technologists, among others, working together. Coaching would certainly benefit from engaging with such research, such as Hatherley’s study of how the use of AI affects trust between patient and doctor (Hatherley, 2022).
Much of the coverage of AI in coaching publications extols its virtues in lengthy prose that ends, almost as an afterthought, with ‘of course, we need to do this ethically’.
We contend that ethics needs to be in the driving seat in human-based interventions such as coaching.
Paying more attention to the wider literature would enable us to assess sensibly where an AI may or may not add value to coaching. For example, analysing client emotion from facial expressions is one of the claims made about potential AI capability. Unfortunately, despite the billions of research dollars invested by companies such as Google and Apple, researchers conclude that, with current technology, all you can say is that a particular facial muscle or muscles have been activated; you cannot reliably deduce the intention or emotion behind that movement. Barrett (2021) is particularly well known for this research.
Conclusions
These concerns are very much the tip of the iceberg. For example, there are ethical issues around how low-paid workers are employed to tag internet content for machine learning. We also have to consider the climate impact of the massive amounts of computing power and energy needed to create and maintain AIs.
We’re not against AI – we do have to embrace it at some point and there are some great benefits to these technologies in many fields – however, we need to consider more holistically how we might use AI in coaching.
As a parting thought, we spoke about some of these issues at the Oxford Brookes University Supervision Conference in early May 2023. One of our main points was to explore who will supervise coach AIs. At the moment the answer is no one. So our challenge was, if an AI coach doesn’t need a supervisor, why does a human coach need one?
This underlines some of the big existential questions we need to consider before we rush into using AIs, perhaps revisiting whether coaching itself needs to be regulated and whether supervision should be mandatory. And that is before we enter the realms of how we would train a coaching AI, or indeed what training coaches themselves would need to deal with the fast-paced technological change AI appears to be driving.
References
- J W Ayers, A Poliak, M Dredze, et al, ‘Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum’, in JAMA Intern Med, 183(6), 589-596, 2023: https://doi.org/10.1001/jamainternmed.2023.1838
- L F Barrett, ‘Debate about universal facial expressions goes big’, in Nature, 589, 14 January 2021, pp200-1
- J J Hatherley, ‘Limits of Trust in Medical AI’, in Journal of Medical Ethics, 46(7), 478-81, 2022
- N Saunders, ‘Evolution is making us treat AI like a human, and we need to kick the habit’, in The Conversation, online article, 16 May 2023: https://bit.ly/3pGqJYg
- I El Atillah, ‘Man ends his life after an AI chatbot “encouraged” him to sacrifice himself to stop climate change’, in Euronews, online article, 31 May 2023: https://bit.ly/3XK4mhk
- A Hassoon, Y Baig, D Q Naiman, et al, ‘Randomized trial of two artificial intelligence coaching interventions to increase physical activity in cancer survivors’, in npj Digit. Med., 4, 168, 2021: https://doi.org/10.1038/s41746-021-00539-9
- The White House & EU, 2023: the US has set out a blueprint for an AI Bill of Rights (https://bit.ly/3NTSIfo) and the EU is in the process of implementing the EU AI Act (https://bit.ly/3D9dN0g)
- S Russell, Human Compatible: AI and the Problem of Control, Allen Lane, 2019
About the authors
- Peter Duffell is managing director & principal coach at Westwood Coaching Associates
- Natalia de Estevan Ubeda is a coach and coaching supervisor, trained at Oxford Brookes University and author of research on supervisor development, supervision of supervision, generational differences, mental health and AI in coaching. She is the director of advisory services in a multinational company headquartered in Madrid and splits her time between the UK and Spain.