Meeting planners should not be fooled: AI is not 'just a tool'
Opinion

Until very recently, the hype surrounding generative AI – specifically ChatGPT – was white-hot. It was going to steal our jobs, take over the planet, and then, quite possibly, kill every single one of us because…look how quickly it wrote that freaking blog post!
The astonishing capabilities of large language models (LLMs) caught most of us by surprise. For anyone whose previous experience of chatbots was a futile attempt to change an insurance policy or complain about a delayed flight, this stuff looked like witchery!
Generative AI’s ability to rapidly create convincing audio, video, and image-based content provoked existential angst amongst creatives, and raised questions about the meaning of original content, copyright, and the importance of humans in the production of art.
And yet, despite the fuss, some experts think this generation of generative AI may already have peaked in terms of what it can do.
Were we just dazzled? Time will tell, but as we’ve started to experiment and integrate these products into our working lives, the initial shock and awe has morphed into something a little more circumspect. These apps may be quick, but they’re far from perfect.
On the Gartner Hype Cycle, generative AI may already be transitioning from the ‘Peak of Inflated Expectations’ to the ‘Trough of Disillusionment’.
As if to underscore the point, people have started referring to AI as a ‘tool’. In some cases, ‘just a tool’. This was evident at The Meetings Show last week, in several sessions that addressed the potential impact of AI on the events industry. The conclusions were similar in each: AI can help you perform certain tasks – with breathtaking alacrity – but it can’t do everything. AI can speed up processes, it can organise and prioritise information, it can even add context and colour when asked to do so. But it can’t think for itself. Therefore, ‘just a tool’.
While it might comfort event professionals to mentally hang ChatGPT next to a pair of scissors, doing so may be a category error. A pair of scissors, like most tools, is basically an extension of our hands. Beyond its primary function it doesn’t change how we behave, interact, work, or think. A rapidly growing body of evidence suggests that the internet, for example, may have effectively rewired our brains, changing (not always negatively) our ability to multi-task, remember ‘facts’ (semantic memory), and make decisions. Even our ability to make friends. AI, in its various forms, clearly has the potential to do the same.
To refer to AI as ‘just a tool’ is to miss something important about the nature of technology, too. Tools have been around ever since humans started digging holes with sticks, but until relatively recently (the word ‘technology’ only came into common usage during the 20th century) there was no sense that all the handy devices we’d been inventing over the millennia were any more than the sum of their parts.
Today technology means the systematic use of knowledge to practical ends. It is more than just a collective noun for tools. It has come to represent an idea, an approach to doing things that has social and environmental impacts. For some people technology is the force that drives history and the engine of social change. For these technological determinists, technology is often seen as value-neutral: people may use it to do bad things, but the technology itself can never be held to blame. For them the word ‘tool’, when applied to social media for example, is helpful, because it removes any sense of moral responsibility towards the end user.
But ‘tool’ is surely an oversimplification of the digital technologies in use today. Twitter might be a tool, in the most general terms, but that’s where the similarities with a can opener or a corkscrew come to an end. European Digital Rights (EDRi), an association of human and civil rights organisations based in Europe, challenges the notion that modern ‘tech’ is ‘just a tool’.
In a 2019 blog post it states:
‘Technology is not just a tool, but a social product. It is not intrinsically good or bad, but it is embedded with the views and biases of its makers. It uses flawed data to make assumptions about who you are, which can impact the world that you see.’
One passage, in particular, should give meeting planners, especially those organising international conferences, pause for thought:
‘Many well-meaning people have fallen into the trap of thinking that tech…removes humans’ messy bias, and allows us to make better, fairer decisions. Yet technology is made by humans, and we unconsciously build our world views into the technology that we produce. This encodes and amplifies underlying biases, whilst outwardly giving the appearance of being “neutral”. Even the data that is used to train algorithms or to make decisions reflects a particular social history. And if that history is racist, or sexist, or ableist? You guessed it: this past discrimination will continue to impact the decisions that are made today.’
Any event professional who cares about diversity, equity and inclusion might think about where the data that ChatGPT uses to generate content comes from in the first place, before leaning on it too heavily to come up with session titles or programme notes, for example.
In a 2016 article for The New York Times called ‘Artificial Intelligence’s White Guy Problem’, Kate Crawford argues that AI reflects the values of its creators in ways which may already be exacerbating inequality at home and in the workplace. ‘Histories of discrimination can live on in digital platforms,’ she writes, ‘and if they go unquestioned, they become part of the logic of everyday algorithmic systems.’
As alluded to earlier, the other problem with generative AI for meeting planners is the question of originality. In an age where delegates increasingly demand ‘unique’ experiences, relying on a machine to produce content may prove fatal. It’s not yet clear to what extent programs like ChatGPT can improve, but it’s safe to say that they can only put out a version of what has gone in. While the algorithm creates something that could be mistaken for original thought, the lack of any distinct tone of voice is usually a giveaway. If content producers get too lazy with AI, the quest for striking, unique content could go into reverse. We could end up with conference programmes that look and sound a bit samey.
ChatGPT generates text using natural language processing rather than copying it wholesale, so whether it can technically plagiarise content is a moot point – it is more likely to plagiarise ideas – but it goes without saying that reproducing large chunks of text without reference to an original source is a risky business, especially with niche subject matter. If there are only a handful of experts in the world on a given subject, LLMs are more likely to repeat phrases that sit close to the original source. If someone recognises several of their own original ideas in a presentation, without attribution, they may attempt to build a case for plagiarism. Unlikely, but running speeches through an AI checker might become commonplace in future.
Of course, it would be silly to ignore AI. Used solely as a time-saving device – to build programme schedules, for example – it could give you a significant competitive advantage at a time when budgets are tight and resources scarce. And, as mentioned earlier, it’s great for organising information and brainstorming ideas. If you approach it with caution, bearing in mind all its inherent flaws, you might even argue that, in fact, it is just a tool.
But that ‘just’ is doing a lot of heavy lifting. For good or bad, it’s probably more accurate to say that AI is more than just a tool. It’s an imperfect reflection of who we are.