Don’t worry about Generative AI. It’s just part of our cyborg-like future.

Artificial Intelligence (AI) was once seen as something that humans made and controlled. Now, some fear that Generative AI might outperform its creators. According to a new study, however, both perspectives – of AI as a tool we command, or a force we should fear – are missing the point.

Writing in the journal Systems Research and Behavioral Science, Dr Steve Watson – a researcher at the University of Cambridge specialising in AI and education – suggests that Generative AI applications should not be seen as external technologies we interact with, but as systems with which we are “co-evolving”.

As they adapt to our usage patterns, Watson argues, these systems will become increasingly entangled with our lives. His paper anticipates a “cyborg-like” future, “in which the distinction between organism and machine becomes further blurred”.

Watson does not claim that humans will become machine-like, or be replaced by machines, but suggests that we need to understand ourselves as shaping Generative AI, while being shaped by it in return.

In education, AI is being positioned both as a tool to support tasks like lesson-planning and marking, and as a threat to academic rigour.

The new study calls for a more nuanced understanding, arguing that AI is already transforming how we write and think, while our interactions simultaneously prompt the technology to fine-tune itself.

In this sense, Watson writes, humans are already part of the system – whether we like it or not.

"We are entering a future where the boundary between humans and machines will be less clear, even though their distinctiveness will remain."

“A lot of questions are being asked about what AI will do to society and how to mitigate its effects,” Watson said. “A sharper question might be how it becomes integrated into societal evolution.”

“We are entering a future where the boundary between humans and machines will be less clear, even though their distinctiveness will remain. Instead of simply trying to control AI, we need to see it as perhaps the most visible example of our deepening entanglement with technological systems.”

The paper challenges conventional views of technology either as “instrumental” – a tool we use – or “deterministic” – a force that shapes us.

Smartphones, for example, are seen as instrumental because they enhance our ability to communicate, but deterministic because they leave many people struggling to switch off from work.

Watson proposes a third possibility: that technologies are “autopoietic” systems.

This term – which requires a bit of philosophical agility – was originally coined by biologists and later developed by the sociologist Niklas Luhmann. It refers to a system that processes information and evolves according to its own internal logic.

The system interacts with the world around it, but not indiscriminately: it only responds to inputs that are meaningful to its internal code.

Luhmann applied this to social systems. A legal system, for instance, interprets actions using the binary “legal” or “illegal”. As a system, this is all the law cares about – it does not, for example, concern itself with how people feel about rulings. It is, however, responsive to changes taking place around it. Over time, therefore, its logic about what is legal or illegal may evolve, perhaps in response to public outrage, or unintended consequences.

Watson suggests that technologies like Generative AI are similar. ChatGPT, for instance, does not actually understand human inputs, but filters them through a binary of “work” or “fail”. This determines whether a prompt can be processed, and what kind of output is generated.

For example, if a user types, “Hello, how are you?” into a chatbot, the system recognises a valid input, matches it to patterns on which it has been trained, and produces a fluent reply. If, however, the user asks “What is happening over there?” – directing the system to something it cannot ‘see’ – the system will generate a response based on its inability to match the input to a pattern that “works”.
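The “work”/“fail” binary can be made concrete with a toy sketch. The Python snippet below is purely illustrative, assuming a hypothetical pattern store and response function that stand in for the idea; it in no way reflects how ChatGPT or any real language model is actually built.

```python
# Toy illustration only: a stand-in for the "work"/"fail" binary Watson
# describes, not how ChatGPT or any real language model is implemented.

# The system's hypothetical "internal code": patterns it can process.
KNOWN_PATTERNS = {
    "hello": "Hello! How can I help you today?",
    "how are you": "I'm doing well, thank you for asking.",
}

def respond(prompt: str) -> str:
    """Reply only if the input is meaningful to the internal code."""
    text = prompt.lower()
    for pattern, reply in KNOWN_PATTERNS.items():
        if pattern in text:
            return reply  # "work": the input matched a known pattern
    # "fail": the system cannot 'see' what the prompt refers to, so the
    # output is shaped by its inability to match the input to a pattern.
    return "I can't observe that, so I can't say what is happening there."

print(respond("Hello, how are you?"))            # matched -> fluent reply
print(respond("What is happening over there?"))  # unmatched -> fallback
```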

Crucially, Watson argues, these systems are autopoietic because – like legal systems – they evolve. New, unexpected inputs will lead developers to adjust the programming, which is the filter that determines what counts as meaningful. These unanticipated “perturbations” might come from users – for example if they use new phrases or idioms, or try to “jailbreak” the system. They may also arise from broader shifts, such as changes in hardware or regulation.
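That adaptive loop can be sketched in the same toy terms. Everything below is hypothetical, a self-contained illustration of a filter that evolves in response to logged perturbations, not a description of any real development pipeline.

```python
# Self-contained toy sketch of autopoietic adaptation: inputs that "fail"
# are logged as perturbations, and the filter defining what counts as
# meaningful is then adjusted. All names here are hypothetical.

patterns = {"hello": "Hello! How can I help?"}  # the system's current code
perturbations: list[str] = []                   # inputs the code rejected

def respond(prompt: str) -> str:
    for pattern, reply in patterns.items():
        if pattern in prompt.lower():
            return reply                        # "work"
    perturbations.append(prompt)                # record the perturbation
    return "Sorry, I can't process that."       # "fail"

def adapt() -> None:
    # Stand-in for developers reviewing failures (new idioms, jailbreak
    # attempts) and widening or tightening the filter in response.
    for prompt in perturbations:
        patterns[prompt.lower()] = "Thanks, I can handle that now."
    perturbations.clear()

respond("wyd rn?")         # a new idiom fails and is logged
adapt()                    # the internal code itself changes
print(respond("wyd rn?"))  # the same input now "works"
```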

"In every field, Generative AI should be seen as more than a tool. It is an active participant in our own transformation."

In this sense, Watson suggests, AI is continually “observing” and adapting to its environment. These adaptations then feed back into human behaviour – often in unpredictable ways.

Recent developments in visual content generation illustrate the dynamic. In response to user demand, chatbots can increasingly interpret and generate images. This, however, has generated fresh concerns about ethical use and deepfakes. Developers have responded with safeguards designed to limit ethical violations, but these have in turn led to unintended consequences, such as good-faith requests on sensitive subject matter being rejected. Further adaptations are therefore likely.

Watson’s paper characterises the unexpected consequences of this continual, mutual adjustment – such as chatbots refusing to discuss morally sensitive subjects – as AI “asserting its identity”.

He likens its overall development to advances in genetic engineering, or technologies such as prosthetics and pacemakers. Watson says that our relationship with AI will be similarly hybrid: it is a technology that will change and perhaps enhance us, but it will also adapt and respond as we use it, in order to sustain itself.

In his own field, education, Watson has previously argued for less focus on generic “AI solutions” to specific problems, and more on helping teachers and learners to become reflexive AI users. This means designing ways to use AI in context, and understanding how the technology is shifting practice. Policy actors and researchers, he adds, should track the larger patterns emerging from these grass-roots developments.

“In every field, Generative AI should be seen as more than a tool,” Watson said. “It is an active participant in our own transformation.”

Image: Steve Johnson, Unsplash.