Large language models (LLMs) that power chatbots can be used to scam humans, but these AI models are also susceptible to being scammed themselves, with varying degrees of gullibility among different models.
"We present a systematic review of some of the popular machine learning based email spam filtering approaches."
"Our review covers survey of the important concepts, attempts, efficiency, and the research trend in spam filtering."
"...a feature that activates when Claude reads a scam email (this presumably supports the model’s ability to recognize such emails and warn you not to respond to them). Normally, if one asks Claude to generate a scam email, it will refuse to do so. But when we ask the same question with the feature artificially activated sufficiently strongly, this overcomes Claude's harmlessness training and it responds by drafting a scam email."