Big Tech Pledges to Make AI Safer

Companies like Google, Meta, OpenAI and more have voluntarily agreed to implement changes that make AI safer, according to the Biden administration.

The Pledge

AI companies including OpenAI, Google, and Meta have made a voluntary commitment to the White House to implement strategies like watermarking to make AI technology safer.
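Watermarking, in this context, generally means statistically marking AI-generated content so a detector can later flag it. The toy sketch below is loosely inspired by published "green-list" text watermarking proposals and is purely illustrative: the word-level scheme and function names are assumptions, and real systems bias token choices inside the model during generation rather than working on finished text.

```python
import hashlib

def green_list(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary, seeded by a
    hash of the previous word (a toy stand-in for a keyed pseudorandom function)."""
    scored = sorted(
        vocab,
        key=lambda w: hashlib.sha256((prev_word + "|" + w).encode()).hexdigest(),
    )
    cutoff = int(len(scored) * fraction)
    return set(scored[:cutoff])

def green_fraction(text: str, vocab: list[str]) -> float:
    """Detector side: fraction of words that fall in the green list of their
    predecessor. Text generated while favoring green words would score well
    above the ~0.5 expected by chance; ordinary human text would not."""
    words = text.lower().split()
    hits = 0
    pairs = 0
    for prev, cur in zip(words, words[1:]):
        if cur in vocab:
            pairs += 1
            if cur in green_list(prev, vocab):
                hits += 1
    return hits / pairs if pairs else 0.0
```

The appeal of this kind of scheme is that detection only needs the secret key, not access to the model, which is part of why watermarking features so prominently in the pledge.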

These companies also pledged to thoroughly test systems before releasing them and to share educational resources about cybersecurity risks.

The companies also pledged to protect users' privacy as AI develops and to work toward keeping AI free of bias.

The Biden administration views this as a win; it has been looking to regulate the technology since it surged in popularity.

Congress is considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.

The EU already has a head start: it has drafted rules under which systems like ChatGPT would have to disclose AI-generated content.

Will It Work?

How much would regulation actually do to keep people from getting scammed?

AI is making it easier for hackers to trick users. Phishing emails, for example, usually contain grammatical and spelling errors, but AI chatbots can easily clean up the mistakes that trip spam filters or tip off users.

And with the right prompts, AI chatbots can write perfectly crafted phishing emails with no effort.
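To see why clean writing matters, here's a minimal sketch of the kind of spelling-error heuristic a simple filter might rely on. The word list, scoring, and example messages are made up for illustration; real spam filters combine many more signals than this.

```python
# Toy spelling-error heuristic: one signal a simple spam filter might use.
KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please", "verify",
    "password", "click", "the", "link", "below", "to", "restore", "access",
}

def misspelling_score(message: str) -> float:
    """Fraction of words not found in the dictionary (higher = more suspicious)."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    words = [w for w in words if w]
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words) if words else 0.0

sloppy = "Yuor acount has ben suspnded, please verfiy yuor pasword below"
polished = "Your account has been suspended. Please verify your password below."

print(misspelling_score(sloppy))    # high score: likely flagged
print(misspelling_score(polished))  # low score: slips past this check
```

A polished, AI-written message sails past a check like this, which is exactly why error-based filtering alone is no longer enough.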

AI also helps hackers craft "spear-phishing" emails: phishing messages tailored to trick a specific target into handing over a password or other sensitive information.

A hacker could scrape a target's social media profiles and feed that information into a chatbot like ChatGPT, which could then produce a highly believable email tailored to the target's posts, comments, and interests.

A watermark may just be a Band-Aid on a much larger problem: if the average user won't take the time to educate themselves, they'll probably skip the privacy and cybersecurity tutorials too.

Wrapping Up

The barrier to entry for AI chatbots is extremely low, which makes it easier than ever for your privacy and cybersecurity to be put at risk.

When you're checking email, verify every sender. If you're unsure whether a link or attachment is safe, skip it and log into your account directly in a separate tab.
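If you want to make "verify every sender" more systematic, one simple approach is to compare the sender's domain against a short allowlist and flag near-misses. The domains and similarity threshold below are examples only, not a vetted security tool.

```python
from difflib import SequenceMatcher

# Example allowlist; substitute the domains you actually deal with.
TRUSTED_DOMAINS = {"paypal.com", "google.com", "mybank.com"}

def check_sender(address: str, threshold: float = 0.8) -> str:
    """Classify a sender domain as trusted, a suspicious lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return f"lookalike of {trusted} - do not click, log in directly"
    return "unknown sender - verify before opening links or attachments"

print(check_sender("support@paypal.com"))    # trusted
print(check_sender("support@paypa1.com"))    # lookalike of paypal.com
print(check_sender("deals@randomshop.biz"))  # unknown sender
```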

Deepfake videos often look slightly blurry or show distorted details in hair, skin, or the background.

The longer you're online, the more data there is out there to be monitored. Try not to funnel all your internet activity through a single account or service; split up the traffic. And consider adding a VPN to your browser to make things more private.

New AI tools are created every day, so it's also a good idea to stay informed with the latest tech and cybersecurity news!
