As generative AI tools become more powerful and widely available, researchers are turning their attention to a new and often overlooked driver of digital risk: the Undersphere.
Coined by Milica Stilinovic, A/Prof Jonathon Hutchinson and Dr Francesco Bailo, Deputy Director of the University of Sydney's Centre for AI, Trust and Governance (CAITG), the Undersphere describes a decentralised, networked community of practice that forms around the creative and experimental use of generative AI, often well beyond what the technology was originally designed to do.
Unlike regulated AI development within companies or institutions, the Undersphere is informal, fast-moving and hard to trace. It thrives on open-source tools, online platforms and collaborative subcultures. And while many outputs are benign or artistically motivated, others carry serious risks.
"Unlike outputs that solely aim to reimagine technologies," says Dr Bailo, "many outputs emerging from the Undersphere possess both intentional and unintentional societal ramifications."
Democratic risks in the shadows
The dangers emerging from the Undersphere are not hypothetical. One prominent example is the creation of deepfake pornography using publicly available AI models.
These images and videos, often made without consent, violate individual privacy, erode human dignity, and undermine trust in digital information.
This kind of content is more than a breach of ethics; it's a direct threat to democratic values. And because it often circulates on fringe platforms or in closed networks, it's difficult to monitor, let alone prevent.
"Underspheric outcomes may be multifaceted," explains Dr Bailo. "Some serve benign artistic purposes, while others may pose risks and negatively impact individuals and societies."
Why existing regulation falls short
Most AI regulations today, such as the EU's AI Act, are built around managing known risks associated with specific, intended uses of technology. These frameworks tend to assume a linear path from development to deployment.
But the Undersphere shows how generative AI can quickly take on new forms and applications once it leaves the lab. Users who are unaffiliated with developers, and often untraceable, can adapt tools in ways that were never anticipated. As a result, many harmful outcomes fall outside the scope of current regulation.
"Underspheres illustrate the various ways in which deviation from intended use can occur," says Dr Bailo, pointing to a growing gap between how AI is governed and how it is actually used.
Dr Bailo and colleagues argue that we need to shift how we think about AI risk: from something that is linear and controllable to something more complex, distributed and evolving.
Rather than treating AI like a conventional product with clearly defined risks, we should approach it more like climate change: a dynamic challenge that emerges across systems, scales and actors. That means building governance models that are adaptive, responsive and able to address harm as it arises, especially in decentralised spaces like the Undersphere.
"We argue for a governance framework that is more fluid and adaptive," he says, "one that can address risks as they emerge across distributed digital systems."
Making the invisible visible
The Undersphere reveals how generative AI is shaped not just by its creators but by its users, many of whom operate outside regulatory or institutional view.
As Dr Bailo's research shows, understanding and responding to these emergent practices is essential for building a future where AI can support, rather than undermine, the values of a democratic society.
Header image: Elise Racine & The Bigger Picture