Scammers are increasingly using artificial intelligence (AI) to make their scams more believable and effective, making them harder to detect, according to CyberSecurity Malaysia (CSM). Common AI-driven scam techniques include deepfakes, voice cloning, phishing emails and text messages, and social media manipulation. With these tools, scammers can impersonate trusted individuals, create synthetic voices that mimic a specific person, generate emails that appear to come from a victim's bank or credit card company, and target specific groups on social media with personalized messages.

Federal Commercial Crime Investigation Department (CCID) director Comm Datuk Seri Ramli Mohamed Yoosuf has warned that deepfakes, voice spoofing, and financial market manipulation could become the future of crime. The public is urged to remain cautious and sceptical of suspicious communications, to verify the identity of anyone contacting them, and to watch for red flags in AI-generated messages.

CyberSecurity Malaysia is proactively monitoring and investigating fraudulent activities facilitated by AI technologies and is collaborating with other organizations to raise awareness of and combat AI-focused crimes. The cybersecurity industry has also been leveraging AI to counter these threats, and education and awareness campaigns have been launched to inform the public about the risks of AI-powered scams.