class-action lawsuit

AppWizard
September 19, 2025
Kian Brose, a developer and content creator, has launched a crowdfunding campaign raising 0,000 for a class-action lawsuit against Mojang and Microsoft, claiming violations of European consumer protection laws. He alleges that Mojang modified its End User License Agreement (EULA) 47 times without proper notification, enforced hidden internal rules, and coerced players into migrating accounts to Microsoft under the threat of losing access, which may violate EU consumer law and GDPR requirements for consent. The lawsuit is opt-in, allowing affected players to join and submit evidence. Community reactions are mixed, with some supporting the initiative and others expressing skepticism about its viability against a large corporation. The lawsuit is set against the backdrop of updated EU collective-redress directives and could lead to various outcomes, including policy changes, dismissal, regulatory scrutiny, or a symbolic victory for digital rights.
TrendTechie
September 8, 2025
Developers of the Claude chatbot have proposed a settlement of $1.5 billion to compensate journalists and authors whose works were allegedly used without permission to train their neural networks. The proposal aims to resolve legal disputes over the use of pirated books for AI training and is awaiting approval from a California judge. The legal action was initiated by journalists who filed a class-action lawsuit against Anthropic, alleging copyright infringement under 17 USC § 501 and claiming that the company built a multi-billion-dollar enterprise by training its models on pirated texts. The plaintiffs are seeking compensatory damages, restitution, and a court order barring Anthropic from further infringing conduct. Anthropic's settlement proposal includes establishing a non-repayable Settlement Fund of at least $1.5 billion, with payments made on claims submitted by authors within 120 days of the fund's creation, and removing texts sourced from pirated libraries from its databases; in exchange, the plaintiffs would waive their claims, while retaining the right to sue again if the developers are found to have downloaded books from torrent sites once more. The case is presided over by Senior U.S. District Judge William Alsup in the Northern District of California.
AppWizard
December 23, 2024
The landscape of digital game ownership has come under scrutiny as digital distribution grows, leading gamers to question whether they truly own the titles they purchase. Many high-profile games have disappeared from digital platforms, including Sony's Concord, which was discontinued just 11 days after launch. The Stop Killing Games campaign, initiated by YouTuber Ross Scott, gained traction after Ubisoft shut down The Crew, a decade-old racing game, rendering it unplayable due to server and licensing issues. The campaign advocates for classifying video games as "goods" rather than "services," arguing that purchased games should not be rendered inoperable. The petition has gathered over 400,000 signatures, with a goal of one million by July 2025 to prompt the EU to consider banning the practice of making multiplayer games unplayable. Steam has updated its disclaimers regarding ownership in response to these issues, influenced by a new Californian law requiring retailers to inform consumers that digital purchases can be revoked. GOG has positioned itself as a champion of consumer rights, ensuring that purchased games remain with the buyer indefinitely and allowing users to bequeath their game libraries. Industry figures, including Michael Douse of Larian Studios, have voiced concerns about what the erosion of ownership means for developers. While some publishers recognize the value of preserving older titles, others pay it little attention. The conversation around ownership and preservation is intensifying, underscoring that players do not truly own their games on platforms like Steam, where access can be revoked at any time.
Winsage
September 28, 2024
Windows 10 was introduced with features such as the new Edge browser, virtual desktops, and a reintroduced Start menu, but it faced significant backlash over aggressive update policies that disrupted users' work and caused data loss. This led to lawsuits against Microsoft, including a $10,000 settlement in a case where a forced update rendered a user's computer unusable. In response to the criticism, Microsoft made changes in April 2019 to improve the update experience, allowing users to pause updates for up to 35 days, introducing Active Hours to prevent automatic installations at critical times, and increasing the transparency of the update process.
AppWizard
September 20, 2024
A federal judge has refused to dismiss a class-action complaint against Meta Platforms alleging the unlawful collection of biometric data from Illinois residents through the Messenger and Messenger Kids apps. The ruling, by U.S. District Judge Nancy Rosenstengel, allows claims to proceed that Meta violated the Illinois Biometric Information Privacy Act (BIPA) by failing to obtain proper consent before collecting biometric data, underscoring broader concerns over data privacy. Meta intends to defend its practices, asserting that it complies with laws governing data collection and privacy. The case could set a precedent for how tech companies handle user data and may encourage stricter privacy legislation in other states.