A Dutch court in Amsterdam ruled on Thursday that Elon Musk's xAI company must stop its Grok artificial intelligence tool, and the X platform that hosts it, from generating and distributing nude images of people without their explicit consent. The court imposed a potential fine of 100,000 euros ($115,350) per day for noncompliance, marking a significant legal intervention amid growing global concerns over AI-generated sexual content. The ruling in this civil suit is among the first judicial decisions addressing xAI's responsibility for tools that can create sexualized imagery, as complaints and investigations into Grok have surged across the Americas, Europe, Asia, and Australia.
The case was brought by Offlimits, a Dutch center monitoring online violence, in cooperation with the non-profit Victims Support Fund, and focused on a Grok feature that allowed users to create hyper-realistic deepfake montages of naked women and children from real photos. According to the court website, the judge determined that Offlimits had raised reasonable doubt about the effectiveness of xAI's existing measures, citing an instance in which Offlimits produced a video of a nude person using Grok shortly before the hearing. This undermined xAI's claims that it had taken sufficient steps to prevent abuse, such as restricting image creation features to paid subscribers in January.
Offlimits director Robbert Hoving emphasized that "the burden is on the company" to ensure its tools are not used to create and distribute nonconsensual sexual images, including those of children. The ruling coincided with the European Parliament's approval of a ban on artificial intelligence systems generating sexualized deepfakes earlier on Thursday, a move prompted by global outrage over non-consensual nudes produced by Grok and other AI tools. This dual development highlights the escalating regulatory and societal pressures on tech companies to address the ethical implications of AI, particularly in the context of digital violence and privacy violations.
During the hearing this month, xAI lawyers argued that it was impossible to guarantee the prevention of abuse on its platform and that the company should not be penalized for the actions of malicious users. They pointed to measures implemented in January aimed at curbing the editing of images of real people in revealing clothing. However, the court found these efforts insufficient, siding with Offlimits' evidence of ongoing vulnerabilities. The decision underscores the challenges in balancing innovation with accountability in the rapidly evolving AI landscape, as authorities in Europe and beyond seek to clamp down on harmful applications while grappling with enforcement complexities.
Source: www.aljazeera.com