Google parent company Alphabet has warned its employees against feeding any confidential information to the tech giant’s AI chatbot Bard, even as it markets the product globally, Reuters reported on Thursday, citing sources.
The company issued the guidance in line with its long-standing policy of keeping internal information secure.
Chatbots, including OpenAI’s ChatGPT and Google’s Bard, are human-sounding programs that use generative artificial intelligence to hold conversations with users and respond to a wide range of prompts.
Human reviewers may read the chats, and researchers have found that similar AI systems can reproduce data absorbed during training, creating a risk of information leaks.
Reuters also reported that Alphabet has cautioned its engineers to avoid direct use of computer code generated by chatbots.
The Google parent said Bard may make unwanted code suggestions but still helps programmers, adding that it aims to be transparent about the limitations of its technology.
The concerns illustrate how the technology giant is trying to avoid damage to its business from software it launched to compete with ChatGPT.
As of February, Google had told staff testing Bard before its launch not to give it internal information, Insider reported.
Google is now rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to the chatbot’s code suggestions.
Google told Reuters it had held detailed talks with Ireland’s Data Protection Commission and was addressing regulators’ questions, after Politico reported on Tuesday that the company was postponing Bard’s EU launch this week pending more information about the chatbot’s impact on privacy.
Information leaks
A Google privacy notice updated on June 1 also says: “Don’t include confidential or sensitive information in your Bard conversations.”
Some companies have developed software to address such concerns. For example, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing the ability for businesses to tag certain data and restrict it from flowing externally.
Google and Microsoft are also offering conversational tools to commercial customers that will come with a higher price tag but avoid absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ chat history, which users can choose to delete.
Yusuf Mehdi, Microsoft’s chief consumer marketing officer, said: “It makes sense that companies don’t want their staff to use public chatbots for work.”
“Companies are taking a fairly conservative view,” he said of how Microsoft’s free Bing chatbot compares to its enterprise software. “Over there, our policies are much more stringent.”