A second vulnerability in AI systems has been disclosed, deepening concern among cybersecurity experts about command injection. With multiple AI-related vulnerabilities now reported, including flaws tied to GitHub Copilot and Azure OpenAI, organizations are being prompted to reassess their AI strategies. Each organization needs a clear picture of its AI usage: which services it employs, and how it will respond when vulnerabilities in those services are disclosed. Discussions tend to center on data residency, retention, and ownership, but the security measures and policies of the AI service providers themselves are just as crucial. Chief Security Officers are encouraged to revisit their risk assessment methods: vulnerabilities arrive with severity ratings, yet the reliability of those ratings is increasingly questioned, so organizations are urged to establish an internal framework for measuring risk on their own terms.
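To make the command-injection concern concrete, here is a minimal, hypothetical sketch (not tied to the specific Copilot or Azure OpenAI flaws mentioned above) of how AI-assisted tooling can become injectable: model output interpolated into a shell string is parsed as shell syntax, while passing it as a plain argument is not.

```python
import subprocess

def run_unsafe(model_output: str) -> str:
    # DANGEROUS: model output is interpolated into a shell command,
    # so a response like "README.md; rm -rf ~" injects a second command.
    return subprocess.run(f"cat {model_output}", shell=True,
                          capture_output=True, text=True).stdout

def run_safe(model_output: str) -> str:
    # Safer: pass arguments as a list with no shell, so the output is
    # treated as a single filename and never parsed as shell syntax.
    return subprocess.run(["cat", model_output],
                          capture_output=True, text=True).stdout

# A crafted model response that smuggles in an extra command:
malicious = "README.md; echo INJECTED"
unsafe_out = run_unsafe(malicious)  # the echo actually runs
safe_out = run_safe(malicious)      # cat just fails to find an odd filename
```

The pattern generalizes: any place where untrusted model output reaches a shell, an SQL query, or an eval-style API is a candidate injection point, which is why provider-side and in-house controls both matter.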