AI training

AppWizard
December 4, 2025
Opera has integrated new artificial intelligence features into its Android application, following earlier updates to its desktop browsers. The latest release adds contextual AI, which lets users include content from their current tab in prompts, and visual intelligence, which allows images and documents to be attached for AI analysis directly within the search bar. A new AI access point beside the search box makes it easy to switch between traditional search and the Ask AI feature. Contextual AI reduces typing by drawing on the current tab's content, and Opera states that only relevant information is used and that user data is not used for AI training or advertising. The update is available through the Play Store.
AppWizard
November 22, 2025
The ongoing dialogue surrounding data collection highlights the relationship between users and their digital footprints, particularly during events like Black Friday. Recent claims about a new Gmail setting allowing Google to use user message data for AI training have been clarified by Google, which states it does not use Workspace data for this purpose. Instead, data is anonymized to improve features like spam detection. Users often become the product in a digital economy where their data is exchanged for free services, leading to the creation of detailed profiles that can be sold to advertisers. This results in practices like price discrimination and hyper-targeted messaging. Data breaches pose a threat to companies holding extensive data, and automated decision-making can lead to biased outcomes. Consumers are encouraged to engage with privacy policies and question the necessity of data requests to better manage their data and maintain ownership of their digital identity.
Winsage
November 18, 2025
Microsoft's president announced the evolution of Windows into an "agentic OS," integrating AI capabilities for autonomous operation. A new tool, Copilot Actions, is being rolled out to Insiders globally via the Microsoft Store, allowing AI to interact with local files to assist users with tasks like organizing photos and managing files. Microsoft emphasizes its commitment to security and privacy, referencing its Privacy Report and Responsible AI Standard, although specifics on data handling by AI agents remain unclear.
Winsage
October 22, 2025
Microsoft's Windows 11 is the leading global operating system, used on various devices, including desktops, laptops, and handheld gaming devices. Despite its widespread adoption, user satisfaction is low, with many complaints about technical workarounds being blocked and concerns over data privacy related to AI features like Copilot. Users with outdated hardware, particularly lacking a TPM 2.0 chip, face challenges upgrading to Windows 11, and Microsoft's suggestion to buy new devices has been criticized. While some features are praised, the overall user experience is mixed, with ongoing issues related to the user interface and nostalgia for previous Windows versions.
AppWizard
October 14, 2025
Ask Jerry is a column addressing readers' questions about smart technology, written by an author with a background in engineering and over 15 years of experience covering Android and Google. It emphasizes the importance of filtering out harmful online images, particularly those that exploit vulnerable individuals. In July 2025, it was revealed that Meta had trained its AI on every photo uploaded to its platforms since 2007, although the company claims to have stopped this practice for current AI models. Meta's photo analysis applies only to uploaded images, and it offers an opt-in service for analyzing random selections from users' device libraries. In contrast, Google Photos asserts that it does not use personal photos for AI training unless they are shared with third parties. The data privacy policies of tech companies are often complex and can change without clear notification, so users should assume that anything shared online may not remain entirely private, as AI systems may use user-generated content for training. Data privacy regulations also vary by region, with the EU imposing stricter rules than the U.S., making caution when sharing online all the more important.
Winsage
October 14, 2025
Microsoft is introducing an AI-driven feature in OneDrive that allows users to group photos based on recognized faces. Currently in a mobile preview for select users, the feature requires manual identification of faces and has limitations on user control, allowing toggling of the People section only three times a year. Microsoft has not clarified the reasoning behind this limitation. The company acknowledges that certain regions require user consent for photo processing, which may lead to regulatory scrutiny. If users disable the facial grouping feature, associated data will be erased within 30 days. Microsoft claims it does not use facial scans or biometric data for AI training, but concerns about privacy remain.
Winsage
October 5, 2025
Microsoft Edge is introducing significant enhancements to its Copilot mode, which features a reimagined New Tab Page (NTP) with a Copilot compose box that allows users to switch between traditional address bar functionality, Bing searches, and AI-generated responses. Users can toggle between AI models like GPT-5 and o3, and a new plus icon on the NTP enables users to “Add tabs,” allowing Copilot to identify open tabs and provide context for inquiries. This integration leads users to copilot.microsoft.com, where they can engage in conversations that include their open tabs, enhancing interactivity through a feature called “tab tagging.” Additionally, Microsoft Edge is set to introduce a feature called “Journeys” in 2025, which will summarize users' browsing history in a structured format as cards on the NTP. Clicking on a Journeys card will redirect users to a page summarizing their browsing activity. This feature will require personal Microsoft accounts and access to the last seven days of browsing history, although it raises privacy concerns. Microsoft assures users that browsing history will not be shared with third parties or used for AI training or advertising, but sensitive browsing history cannot currently be filtered out.
TrendTechie
September 8, 2025
Developers of the Claude chatbot have proposed a settlement of $1.5 billion to compensate journalists and authors whose works were allegedly used without permission to train their neural networks. The proposal aims to resolve legal disputes over the use of pirated books for AI training and awaits approval from a California judge. The class-action lawsuit, filed by journalists Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson on behalf of all authors whose texts may have been copied, alleges copyright infringement under 17 USC § 501, claiming that Anthropic built a multi-billion-dollar enterprise by training its models on pirated texts drawn from torrent libraries such as Books3 and The Pile. The plaintiffs seek compensatory damages, restitution, attorney fees, and a court order prohibiting Anthropic from further infringing conduct; a ruling could set a precedent for future litigation against other AI developers. Anthropic's pre-trial settlement proposal avoids admitting liability and instead commits to establishing a non-repayable Settlement Fund of no less than $1.5 billion, with payments made on claims submitted by authors within 120 days of the fund's establishment, and to removing texts from pirated libraries from its databases. In exchange, the plaintiffs would waive their claims, while retaining the right to pursue further legal action if the developers are found to have downloaded books from torrent sites again. The case is presided over by Senior U.S. District Judge William Alsup in the Northern District of California.