Home Cyber Security Google Presents Bug Bounties for Generative AI Safety Vulnerabilities

Google Presents Bug Bounties for Generative AI Safety Vulnerabilities

0
Google Presents Bug Bounties for Generative AI Safety Vulnerabilities

[ad_1]

Google’s Vulnerability Reward Program provides as much as $31,337 for locating potential hazards. Google joins OpenAI and Microsoft in rewarding AI bug hunts.

Google logo at Googleplex Silicon Valley Mountain View in California.
Picture: Markus Mainka/Adobe Inventory

Google expanded its Vulnerability Rewards Program to incorporate bugs and vulnerabilities that could possibly be present in generative AI. Particularly, Google is searching for bug hunters for its personal generative AI, merchandise akin to Google Bard, which is accessible in lots of international locations, or Google Cloud’s Contact Middle AI, Agent Help.

“We imagine it will incentivize analysis round AI security and safety, and convey potential points to mild that may finally make AI safer for everybody,” Google’s Vice President of Belief and Security Laurie Richardson and Vice President of Privateness, Security and Safety Engineering Royal Hansen wrote in an Oct. 26 weblog submit. “We’re additionally increasing our open supply safety work to make details about AI provide chain safety universally discoverable and verifiable.”

Leap to:

Google’s bug bounty program: Limitations and rewards

There are limitations as to what counts as a vulnerability in generative AI; a whole listing of what vulnerabilities Google considers in scope or out of scope for the Vulnerability Rewards Program is in this Google safety weblog.

Generative AI introduces dangers conventional computing doesn’t; these dangers embody unfair bias, mannequin manipulation and misinterpretations of information, Richardson and Hansen wrote. Notably, AI “hallucinations” — misinformation generated inside a personal shopping session — don’t depend as vulnerabilities for the needs of the Vulnerability Rewards Program. Assaults that expose delicate info, change the state of a Google consumer’s account with out their consent or present backdoors right into a generative AI mannequin are inside scope.

In the end, anybody collaborating within the bug bounty must show that the vulnerability they uncover may “pose a compelling assault situation or possible path to Google or consumer hurt,” based on the Google safety weblog.

Attainable Google AI bug bounty rewards

Rewards for the Vulnerability Rewards Program vary from $100 to $31,337, relying on the kind of vulnerability. Particulars on rewards, payouts might be discovered on Google’s Bug Hunters web site.

Different bug bounties and customary assault sorts in generative AI

OpenAI, Microsoft and different organizations supply bug bounties for white hat hackers who discover vulnerabilities in generative AI techniques. Microsoft provides between $2,000 and $15,000 for qualifying bugs. OpenAI’s bug bounty program will give between $200 and $20,000.

SEE: IBM X-Power researchers discovered phishing emails written by persons are barely extra prone to get clicks than these written by ChatGPT. (TechRepublic)

In an October 26 report, HackerOne and OWASP discovered that the most typical vulnerability in generative AI was immediate injection (i.e., utilizing prompts to make the AI mannequin do one thing it was not meant to do), adopted by insecure output dealing with (i.e., when LLM output is accepted with out scrutiny) and the manipulation of coaching information.

Learn how to study to make use of generative AI

Builders and safety researchers simply beginning out with generative AI have loads of choices in relation to studying the best way to use it, from experimenting with free functions akin to ChatGPT to taking skilled programs. DeepLearning.AI has programs at each newbie and superior ranges for professionals who wish to learn to use and develop for synthetic intelligence and machine studying.

[ad_2]

LEAVE A REPLY

Please enter your comment!
Please enter your name here