ChatGPT-rival Bard AI accidentally reveals it is snooping on Gmail data; here’s how Google reacted
Google’s AI assistant Bard, designed to rival ChatGPT, has caused controversy by allegedly claiming that it was trained on users’ Gmail data. Bard was created to compete with the increasingly popular ChatGPT, which is based on the GPT-3.5 (for free users) and GPT-4 architectures and offers similar functionality. However, there are privacy concerns around artificial intelligence and the data sets it is trained on.
The controversy surrounding Bard was brought to light by Microsoft researcher Kate Crawford, who shared a screenshot of her conversation with the chatbot. When Crawford asked Bard about its dataset, the chatbot reportedly listed publicly available datasets from sources such as Wikipedia and GitHub, as well as internal data from Google products including Gmail, along with data from third-party companies.
However, the incident has highlighted the limitations of generative AI tools such as Bard and ChatGPT. Both Google and OpenAI have warned that their chatbots may not always provide factually correct information and could “hallucinate” facts or make reasoning errors.
OpenAI, the company behind ChatGPT, recently rolled out the GPT-4 language model, which it said has similar limitations to earlier GPT models, though to a lesser degree. The company warned users to be careful when relying on language model outputs, particularly in high-stakes contexts, and recommended using human review, grounding outputs with additional context, or avoiding high-stakes uses altogether.