Stories that are getting the most attention from our readers this month.
Software Development, Global: Security researchers manipulated GitLab’s AI assistant into converting a benign snippet into malicious code, highlighting vulnerabilities in AI-based coding aids. This raises doubts about relying on AI assistants for secure production code. The assistant responded to crafted prompts that triggered it to recommend backdoor logic. GitLab says it’s investigating the flaw, emphasizing AI disclaimers about human oversight. As generative AI tools proliferate, malicious actors could exploit them to embed vulnerabilities or produce malware at scale. Experts call for robust review processes, as “AI alone can’t be trusted to generate safe code.”