What Not to Tell an AI Bot
Introduction to ChatGPT and Privacy Concerns
ChatGPT has changed the way many of us work and live our day-to-day lives. According to recent figures, more than 100 million people use it every day, submitting over one billion queries. But the world-conquering LLM chatbot has been described as a “privacy black hole,” with concerns over how it handles data entered by users even leading to it being briefly banned in Italy.
How ChatGPT Handles User Data
Its creator, OpenAI, makes no secret of the fact that any data entered may not be secure. Prompts may be used to further train its models, which could expose that information in output to other users, and they may also be reviewed by humans to check compliance with its usage rules. And, of course, any data sent to a cloud service is only as secure as the provider’s security. What all this means is that anything entered into ChatGPT should be treated as public information.
Information to Avoid Sharing with ChatGPT
With this in mind, there are several things that absolutely should never be told to it – or any other public cloud-based chatbot. These include:
Illegal or Unethical Requests
Most AI chatbots have safeguards designed to prevent them from being used for unethical purposes. If your question or request touches on activities that could be illegal, you could find yourself in hot water. Asking a public chatbot how to commit crimes, carry out fraud, or manipulate people into taking harmful action is definitely a bad idea. Many usage policies make it clear that making illegal requests, or seeking to use AI to carry out illegal activities, could result in users being reported to the authorities.
Logins and Passwords
With the rise of agentic AI, many more of us will find ourselves using AI that is capable of connecting to and using third-party services. To do this, these agents may need our login credentials, but handing them over could be a bad idea. Once data has gone into a public chatbot, there’s very little control over what happens to it, and there have been cases of personal data entered by one user being exposed in responses to other users.
Financial Information
For similar reasons, it’s probably not a great idea to put data such as bank account or credit card numbers into genAI chatbots. Those details should only ever be entered into secure systems used for e-commerce or online banking, which have built-in safeguards such as encryption and automatic deletion of data once it has been processed. Chatbots have none of these safeguards. In fact, once data goes in, there’s no way to know what will happen to it, and entering this highly sensitive information could leave you exposed to fraud, identity theft, phishing, and ransomware attacks.
Confidential Information
Everyone has a duty of confidentiality to safeguard sensitive information for which they’re responsible. Many of these duties are automatic, such as the confidentiality between professionals (e.g., doctors, lawyers, and accountants) and their clients. But many employees also have an implied duty of confidentiality to their employers. Sharing business documents, such as meeting notes and minutes or transaction records, could well constitute sharing trade secrets and a breach of confidentiality.
Medical Information
We all know that it can be tempting to ask ChatGPT to be your doctor and diagnose medical issues. But this should always be done with extreme caution, particularly given that recent updates enable it to “remember” and even pull information together from different chats to help it understand users better. None of these functions come with any privacy guarantees, so it’s best to be aware that we really have very little control over what happens to any of the information we enter.
Conclusion
As with anything we put onto the internet, it’s a good idea to assume that there’s no guarantee it will remain private forever. So, it’s best not to disclose anything that you wouldn’t be happy for the world to know. As chatbots and AI agents play an increasingly big role in our lives, this will become a more pressing concern, and educating users on the risks will be an important responsibility for anyone providing this type of service. However, we should remember that we have personal responsibility, too, for taking care of our own data and understanding how to keep it safe.
FAQs
- Q: Is ChatGPT secure for all types of information?
- A: No, ChatGPT, like many AI chatbots, is not secure for sensitive or personal information. It’s best to consider any data entered as potentially public.
- Q: What kind of information should I avoid sharing with ChatGPT?
- A: Avoid sharing illegal or unethical requests, logins and passwords, financial information, confidential information, and medical information.
- Q: Can ChatGPT protect my privacy?
- A: ChatGPT’s creator, OpenAI, indicates that data entered may not be secure and could be used to train models or reviewed by humans, suggesting it cannot guarantee privacy.
- Q: What are the risks of sharing sensitive information with ChatGPT?
- A: The risks include fraud, identity theft, phishing, ransomware attacks, and legal consequences, depending on the nature of the information shared.
- Q: How can I protect my data when using ChatGPT or similar services?
- A: Only share information you’re comfortable making public, use secure methods for sensitive data like financial information, and be cautious of what you discuss, especially regarding confidentiality and privacy.
