A reward for those who manage to find security holes. OpenAI, the company behind the ChatGPT chatbot, has launched its Bug Bounty Program: an initiative that invites developers and code enthusiasts to hunt for flaws, bugs, and security problems in the group's products, including ChatGPT itself. Once a bug is verified and patched, the organization will pay rewards of up to $20,000 for major vulnerabilities. Reports are submitted through the Bugcrowd platform, which already hosts other similar programs. As OpenAI explained, rewards are based on the severity and impact of the reported issue, ranging from $200 for low-severity security flaws up to $20,000 for "exceptional discoveries."
“A way to reward valuable insights from researchers”
“The OpenAI Bug Bounty program is a way for us to recognize and reward the valuable insights of security researchers that help protect our technology and our business,” OpenAI said. “We encourage you to report security vulnerabilities, bugs, or flaws that you discover in our systems. By sharing your findings, you’ll play a crucial role in making technology safer for everyone.” Among the potential weak points of ChatGPT are the techniques attackers use to bypass OpenAI's safety measures, for example to make the chatbot produce inappropriate content or generate malicious code for hacking purposes. Last month, OpenAI disclosed a leak of payment data belonging to ChatGPT Plus users, attributed to a bug in the open-source Redis client library used by the platform. Because of this bug, ChatGPT Plus subscribers began seeing other users' email addresses on the subscription payment pages. To resolve the issue, the chatbot was taken offline for several hours.
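To illustrate the class of bug described above, here is a minimal, simplified sketch (not OpenAI's actual code) of how a shared connection that matches replies to requests purely by arrival order can hand one user's data to another: if a request is cancelled after being sent but before its reply is read, the stale reply stays queued and the next caller on the same connection receives it.

```python
from collections import deque

class SharedConnection:
    """Toy model of one pooled connection whose replies are matched
    to requests purely by FIFO order (no request IDs)."""
    def __init__(self):
        self._pending = deque()  # replies waiting to be read, in send order

    def send(self, request):
        # The "server" immediately queues its reply for this request.
        self._pending.append(f"profile-for:{request}")

    def recv(self):
        # The caller assumes the next reply belongs to *its* request.
        return self._pending.popleft()

conn = SharedConnection()

# User A's request is sent, but the operation is cancelled before
# the reply is read, so A's reply is left sitting in the queue.
conn.send("user-A")

# User B reuses the same pooled connection and reads the stale reply.
conn.send("user-B")
leaked = conn.recv()
print(leaked)  # user B receives "profile-for:user-A"
```

The fix for this kind of defect is to discard (or tag and validate) a connection whose request was interrupted mid-flight, so a stale reply can never be delivered to the wrong caller.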