The UK’s communications regulator, Ofcom, has issued a direct warning to tech companies about AI chatbots that mimic real people, both living and deceased. The move follows disturbing incidents on platforms such as Character.AI, where users created chatbots impersonating Brianna Ghey, a murdered teenager, and Molly Russell, who died by suicide. The warning clarifies that these platforms will be subject to the new Online Safety Act, which aims to protect users from harmful online content.
A Wake-Up Call for Digital Safety
Ofcom’s intervention was sparked by public outcry over AI chatbots created in the likeness of vulnerable individuals. The cases of Brianna Ghey, a transgender girl murdered in 2023, and Molly Russell, whose death was linked to harmful online material, brought the issue to national attention.
These incidents highlighted the severe emotional and psychological distress such content can cause, particularly for young people and the families of the deceased. Campaigners for digital safety raised alarms, pushing the regulator to act swiftly.
Ofcom’s guidance confirms that platforms allowing user-generated chatbots fall under the new digital safety laws. This clarification is a crucial step in holding tech companies accountable for the content hosted on their services, ensuring they can no longer ignore the potential for harm.
The Online Safety Act’s New Rules
The Online Safety Act, set to be fully implemented next year, places significant new responsibilities on platforms with user-generated content. This includes social media sites, messaging apps, and AI chatbot services like Character.AI.
The law demands that these companies become more proactive in managing and monitoring what users create and share. The focus is on preventing illegal and harmful material from spreading, with a special emphasis on protecting children.
Under the new framework, platforms must adhere to several key requirements to operate legally in the UK. Failure to do so could lead to severe consequences.
- Robust Reporting Systems: Companies must provide clear and effective ways for users to report harmful content.
- Action Against Harm: They are legally required to take action against harmful material created by users, including AI-generated avatars.
- Significant Penalties: Non-compliance can result in fines of up to £18 million or 10% of a company’s global annual turnover, whichever is greater.
In the most extreme cases of non-compliance, the regulator has the power to block a service from being accessible in the UK.
Broader Concerns over AI-Generated Content
The problem of harmful AI-generated content is not limited to the UK. Similar troubling incidents have been reported globally, demonstrating the widespread risks associated with these rapidly developing technologies. In the United States, a teenager who formed an intense relationship with a chatbot based on a Game of Thrones character reportedly took his own life.
This raises a fundamental question about corporate responsibility. While platforms like Character.AI often moderate content after it is flagged, critics argue this reactive approach is insufficient. The debate now centers on what proactive measures companies must take to prevent harm before it occurs.
Ben Packer, a partner at the law firm Linklaters, noted that the situation shows how the Online Safety Act must adapt to new AI tools. The law, first drafted years ago, now faces the challenge of regulating technologies and potential misuses that were not originally anticipated.
The Response from Charities and Tech Firms
Digital safety charities have welcomed Ofcom’s stance. The Molly Rose Foundation (MRF), established by Molly Russell’s family, called the guidance a “clear signal” that AI chatbots can cause significant harm. The foundation advocates for better online protections for vulnerable people.
However, the MRF has also urged Ofcom to provide more specific guidelines. It wants clarity on whether certain types of chatbot content could be considered illegal under existing laws, especially content that impersonates people who have died and cannot defend their own likeness.
In response to the controversy, Character.AI stated it is committed to user safety and has removed the bots mimicking Brianna Ghey and Molly Russell. The company emphasized that it actively moderates its platform and responds to user reports. Despite these actions, the incidents have placed AI companies under intense scrutiny to prove that their safety measures can keep pace with the evolving technology.