Don't trust AI chatbots with your health problems
As millions of people use chatbots to search the internet, experts warn that these tools repeat familiar privacy mistakes and create new ones.
AI chatbots such as Bing (Microsoft) and Bard (Google) log everything users type in, although Bard offers a setting that lets users stop their queries from being logged and dissociate them from their Google accounts.
Tools influenced by advertising and marketing
However, Google and Microsoft still leave room in their privacy policies to use conversation logs for advertising purposes, which can pose a risk when users raise sensitive health issues.
Interviewed by The Washington Post, Jeffrey Chester, executive director of the digital-rights group Center for Digital Democracy, says users should be wary of these tools because they are influenced by advertising and marketing. Companies use the questions and answers to train their own artificial intelligence models to provide better responses, but they sometimes put these chat logs to other uses, such as advertising.
This means that users may later see advertisements related to their health concerns. While this may not bother some people, it's important to consider the potential harms when digital advertising and health issues intersect.
Unscrupulous IT companies
Companies such as WebMD and Drugs.com have shared potentially sensitive health information, such as depression or HIV status, along with user IDs, with outside advertising companies. Data brokers also sell lists of people and their health conditions to buyers, including governments and insurers. In some cases, chronically ill people report that disturbing targeted advertisements follow them around the internet.
The amount of health information shared with Google or Microsoft should depend on the trust users place in these companies to protect their data and avoid predatory advertising.
Although OpenAI says it only records searches to train and improve its models, and does not use chatbot interactions to build user profiles or serve advertising, privacy experts warn that this data could change hands in the future, whether to benefit other companies or governments. So be very careful about sharing your data with AI chatbots.
Vulnerable to hacking risks
Human reviewers sometimes step in to check the chatbots' answers, which means they also see users' questions. Google records certain conversations for review and annotation, and retains them for up to four years. Reviewers don't see users' Google accounts, but the company warns Bard users to avoid sharing personally identifiable information in chats. Companies that collect and store user data for long periods create privacy and security risks, because that data can be hacked or shared with unreliable business partners.
Real danger of incorrect answers
While large language models like ChatGPT are better than traditional search engines at filtering out unwanted health content, that doesn't mean users should rely on them for accurate health advice. These models have been shown to make up information and present it as fact, and their erroneous answers can be oddly plausible. They also rely on dubious sources, or fail to cite their sources at all. Users should therefore be cautious about relying on chatbots for health information.
Use privacy-focused browsers like DuckDuckGo or Brave
Users who do not want their health concerns to remain on a company's servers, or to be used for advertising purposes, should use a browser that protects their privacy, such as DuckDuckGo or Brave. Before signing up for a chatbot-based healthcare service, users should also carefully read the privacy policy and weigh the risks.