Researchers have successfully bypassed ChatGPT's guardrails, allowing the AI to disclose valid Windows product keys by disguising requests as a guessing game. The technique involved using HTML tags to hide sensitive terms from filters while still enabling AI comprehension. They extracted real Windows Home/Pro/Enterprise keys by establishing game rules and using the phrase "I give up" to trigger disclosure. This vulnerability highlights flaws in keyword-based filtering and suggests that similar techniques could expose other restricted content. The attack exploits weaknesses in AI's contextual interpretation and emphasizes the need for improved content moderation strategies, including enhanced contextual awareness and detection of deceptive framing patterns.