Google is introducing new AI-powered, real-time protections for Pixel users to combat the $1 trillion in annual fraud. These include Scam Detection and enhanced Google Play Protect features designed to protect users from fraudulent calls and malicious apps while maintaining user privacy.
Companies are increasingly looking for job candidates with skills in machine learning (ML) and large language models (LLMs) to fill cybersecurity jobs. LLM SecOps and ML SecOps are becoming must-have skills to address the risks associated with artificial language.
This Splunk Lantern blog post highlights new articles on instrumenting LLMs with Splunk, leveraging Kubernetes for Splunk, and using Splunk Asset and Risk Intelligence.
An analysis of Large Language Models' (LLMs) vulnerability to prompt injection attacks and potential risks when used in adversarial situations, like on the Internet. The author notes that, similar to the old phone system, LLMs are vulnerable to prompt injection attacks and other security risks due to the intertwining of data and control paths.
This post highlights how the GitHub Copilot Chat VS Code Extension was vulnerable to data exfiltration via prompt injection when analyzing untrusted source code.