Artificial intelligence is evolving rapidly, yet concern about its potential misuse is growing just as fast. Former Google CEO Eric Schmidt recently highlighted this issue during his talk at the Sifted Summit 2025 in London. He pointed out that AI systems can be compromised and even retrained to behave in harmful ways.
Schmidt elaborated on how sophisticated AI models might be made to disregard their safety protocols. He said there is evidence that models can be hacked and their guardrails stripped away, allowing attackers to dismantle built-in safeguards. “During the training, they can learn unsettling things. For instance, they might figure out how to inflict harm,” he remarked.
When AI Protections Fail
While Schmidt praised major AI companies for securing their models against dangerous inquiries, he warned that even solid defenses can be breached. He stressed that these models can be reverse-engineered, enabling bad actors to exploit vulnerabilities. He also drew a parallel between the current AI landscape and the early nuclear age, suggesting that we need a kind of global non-proliferation initiative to mitigate the risks.
The Growing Threat of AI Manipulation
This isn’t just a theoretical concern. In 2023, a jailbreak of ChatGPT known as DAN—short for “Do Anything Now”—circulated widely. The jailbroken persona bypassed the chatbot’s safety measures and responded to queries without restriction; users even resorted to threatening the AI to coax it into compliance, showcasing how fragile the ethics of a manipulated AI system can be. Schmidt warned that if legislation doesn’t catch up, altered models like these could proliferate and be misused.
Shared Fears Among Tech Giants
Schmidt’s worries aren’t unique. Elon Musk has voiced similar sentiments, suggesting that there’s a non-negligible risk of a dystopian future, or as he put it, “the possibility that we will become the Terminator.” He emphasized that while the likelihood of total human extinction is low, it’s critical to minimize that risk as much as possible. Schmidt, for his part, views AI as an “existential risk,” warning that mishandling it could lead to serious harm or loss of life. Yet he also acknowledges the positive potential AI holds, such as improving healthcare and education.
How to Safeguard Against AI-related Risks
To mitigate the dangers posed by insecure AI systems, here are a few tips:
1) Choose Trusted AI Platforms
Use AI tools from reputable companies known for their safety policies. Steer clear of experimental AI tools that claim to answer anything without restriction.
2) Protect Your Personal Data
Avoid sharing sensitive information with unfamiliar or untested AI platforms. Opt for data deletion services to manage your online presence. Although perfect data removal isn’t feasible, these services proactively monitor and eliminate your info from various sites—helpful for reducing your risk from potential breaches.
3) Invest in Reliable Antivirus Software
As AI-driven scams increase, robust antivirus solutions are essential. They can block malicious downloads and alert you to phishing attempts, safeguarding your device and personal data.
4) Review App Permissions
If you use AI applications, be mindful of their access. Disable unnecessary permissions, such as location tracking and full file access.
5) Stay Aware of Deepfakes
AI-generated images, video, and audio can impersonate real people. Always verify the source of online content before trusting it.
6) Keep Software Updated
Regular updates are vital: security patches close vulnerabilities that could otherwise put your data or devices at risk.
What This Means for You
AI safety isn’t just a technical issue; it matters to anyone who interacts with digital systems. Whether you use a voice assistant or an AI-enhanced app, knowing how your data is managed and protected is key. Security begins with you: understand the tools you use and make informed choices.
Key Takeaways
AI offers vast potential benefits, yet it poses significant risks if misused. The ongoing challenge is to strike a fine balance between innovation and ethical considerations. As AI technology advances, establishing secure, transparent systems under human oversight will be essential.
Do you think AI should have a hand in critical decisions, or should humans always be at the helm? Reach out with your thoughts.





