This article examines the dual nature of Generative AI in cybersecurity, detailing how it can be exploited by cybercriminals and simultaneously used to enhance defenses. It covers the history of AI, the emergence of GenAI, potential threats, and mitigation strategies.
NIST has chosen HQC as a backup algorithm for post-quantum encryption, providing an additional layer of defense alongside ML-KEM. HQC uses different mathematical principles and is expected to be finalized in 2027.
Interrupt is a small, open-source gadget designed to teach and practice cybersecurity skills. It runs Linux and features a built-in display, making it ideal for learning and experimenting with hacking tools and techniques.
Strong passwords aren't enough; protecting your email address, which acts as your digital passport, is crucial for online security. Learn how email aliases can help safeguard your online identity.
Zero trust is a cybersecurity model that assumes no entity is trustworthy by default, whether inside or outside the network, focusing on continuous verification and least privilege access.
| Tenet | Description |
|---|---|
| Never Trust, Always Verify | No person or computing entity is inherently trustworthy, regardless of their location inside or outside the network. |
| Principle of Least Privilege | Systems and data are locked down by default; access is granted only to the extent necessary to meet defined goals. |
| Multifactor Authentication | Requires a credential beyond the password to ensure someone is who they say they are. |
| Microsegmentation | Divides the corporate network into smaller zones, each requiring authentication to enter. |
| Continuous Monitoring | Constantly monitors network activity, verifies users, and collects information to spot anomalies. |
These tenets form the core principles of a zero trust architecture, which aims to minimize the exposure of sensitive data and applications, and to limit the "blast radius" of a successful cyberattack.
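The tenets above can be illustrated with a minimal sketch of a zero trust access check. All names here (`User`, `Resource`, `check_access`) are hypothetical, not drawn from any specific framework; the point is that every request is re-verified and access is denied by default.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    mfa_verified: bool                                  # multifactor authentication passed
    permissions: set = field(default_factory=set)       # least privilege: empty by default

@dataclass
class Resource:
    name: str
    zone: str                                           # microsegment the resource lives in
    required_permission: str

def check_access(user: User, resource: Resource, authenticated_zones: set) -> bool:
    """Never trust, always verify: each request re-checks identity,
    zone authentication, and an explicit permission grant."""
    if not user.mfa_verified:
        return False                                    # a credential beyond the password is required
    if resource.zone not in authenticated_zones:
        return False                                    # each zone requires its own authentication
    return resource.required_permission in user.permissions  # default deny

# Usage: access succeeds only when every check passes.
alice = User("alice", mfa_verified=True, permissions={"payroll:read"})
payroll = Resource("payroll-db", zone="finance", required_permission="payroll:read")
print(check_access(alice, payroll, authenticated_zones={"finance"}))  # True
print(check_access(alice, payroll, authenticated_zones=set()))        # False: zone not authenticated
```

Note the default-deny posture: a user with no explicit permission grant, or a request arriving from an unauthenticated zone, is rejected even if the credential check passed, which is what limits the blast radius of a compromised account.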
A list of the top 25 most dangerous software weaknesses according to CWE in 2024, including ranks, scores, and changes from the previous year.
The OpenSSF is a community of software developers, security engineers, and more working together to secure open source software.
The article explores the concept of Large Language Model (LLM) red teaming, a practice where practitioners provide inputs to LLMs to test their boundaries and assess risks. It discusses the characteristics of LLM red teaming, including its manual, collaborative, and exploratory nature. The article also delves into the motivations behind red teaming, the strategies employed, and how the findings contribute to model security and safety.
The areas of research associated with Yinglian Xie, based on the dblp dataset, primarily focus on computer science domains such as cybersecurity, network analysis, and systems security. Key research topics include the detection and analysis of spamming botnets, anonymization techniques on the internet, and privacy protection in search systems. There is also significant work on network-level spam detection, botnet signatures, and web security. Yinglian Xie's publications span various conferences like IEEE Symposium on Security and Privacy, ACM SIGCOMM, and NDSS, highlighting a strong emphasis on both theoretical and practical aspects of security and privacy in distributed systems. Additionally, Xie has explored topics related to graph mining and anomaly detection in large networks.
In the wake of the Salt Typhoon hacks, US government agencies have reversed course on encryption, urging the use of end-to-end encryption after decades of advocating against it. This is a major turnaround from their previous demands for law enforcement backdoors.