Artificial intelligence and a machine’s ability to mimic human speech have always fascinated humankind. Recently, OpenAI’s AI language model, ChatGPT, has grown in prominence for its ability to learn and generate human-like responses to a wide range of inputs. While this technology has proven incredibly useful, there are also concerns about the potential for sensitive data to be stored within the model. Studies have found that roughly 11% of the data employees paste into ChatGPT is sensitive. This blog post will first examine these concerns before discussing how RSM recommends leveraging Microsoft Defender for Endpoint (MDE) and Microsoft Purview Data Loss Prevention (DLP) to prevent sensitive data from being copied into ChatGPT.
The Concerns Over Sensitive Data Stored in ChatGPT
One of the primary concerns over sensitive data being stored in ChatGPT is the potential for that information to be accessed by unauthorized users. Given the vast amount of data fed into the model, there is a risk that confidential information could be retained within the system without the knowledge or consent of the people or organizations that provided it. This could include client information, unreleased organization-specific presentations, or other sensitive proprietary data that could be used maliciously if it fell into the wrong hands.
OpenAI’s CEO, Sam Altman, recently confirmed that a bug in ChatGPT caused some users to see the titles of other users’ conversations. While the bug was quickly patched, the incident raised concerns over the privacy and security of the conversations that take place within the AI language model. Because there are no limits on what information an employee can enter into the chatbot, this bug highlights the need for robust security measures to protect against sensitive data being leaked through the use of ChatGPT.
Preventing Sensitive Data from Being Copied into ChatGPT with Microsoft Defender for Endpoint and Microsoft Purview DLP
To address these concerns, RSM recommends leveraging Microsoft’s security suite – specifically, Microsoft Defender for Endpoint and Microsoft Purview’s Endpoint DLP. Designed to provide advanced threat protection and data loss prevention capabilities, these services offer the necessary tools to prevent sensitive data from being copied into ChatGPT.
Microsoft Defender for Endpoint is a cloud-based endpoint security solution that uses AI and machine learning to provide real-time protection against advanced threats. Microsoft Purview Endpoint DLP complements it by monitoring the actions users take on sensitive content directly on their devices, such as copying it into a browser-based tool like ChatGPT, and restricting those actions where policy requires.
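To make the endpoint-side enforcement concrete, the following is a minimal sketch of how an agent might gate a paste or upload based on its destination. The domain list and function names here are illustrative assumptions for this post, not the actual Defender for Endpoint or Purview implementation; in practice, restricted service domains are configured in the Microsoft Purview compliance portal.

```python
# Illustrative sketch only: models how an endpoint DLP agent might treat
# certain web destinations as restricted "service domains". The domains
# and API below are hypothetical, not Microsoft's implementation.

RESTRICTED_SERVICE_DOMAINS = {
    "chat.openai.com",  # ChatGPT's web interface (example entry)
}

def is_restricted_destination(url_host: str) -> bool:
    """Return True if a paste/upload target matches a restricted service domain."""
    host = url_host.lower().strip()
    return any(host == d or host.endswith("." + d)
               for d in RESTRICTED_SERVICE_DOMAINS)

if __name__ == "__main__":
    for host in ("chat.openai.com", "intranet.contoso.com"):
        status = "restricted" if is_restricted_destination(host) else "allowed"
        print(f"{host}: {status}")
```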
Used in conjunction, these security solutions can identify and classify sensitive data within the organization by utilizing trainable classifiers and other sensitive information detection capabilities, such as custom regular expressions. Once that data is identified, MDE and Endpoint DLP can apply policies and controls that prevent it from being copied or shared inappropriately. Together, these services help ensure that confidential information is protected at all times and keep users from inputting sensitive data into ChatGPT.
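As a rough illustration of the classify-then-enforce flow described above, the sketch below pairs custom regular expressions (simplified stand-ins for Purview sensitive information types, which also use keywords, checksums, and confidence levels) with a basic policy decision. The patterns, action names, and host check are assumptions for illustration; real DLP policies are configured in the Purview compliance portal rather than written as application code.

```python
import re

# Simplified stand-ins for custom sensitive information types. Real Purview
# classifiers also use keyword proximity, checksums, and confidence levels.
SENSITIVE_PATTERNS = {
    "U.S. Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card number (loose)": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

# Hypothetical restricted destination for this example (ChatGPT's web UI).
RESTRICTED_HOSTS = {"chat.openai.com"}

def evaluate_paste(text: str, destination_host: str) -> str:
    """Classify pasted text and return a policy action for the paste event."""
    matches = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]
    if matches and destination_host.lower() in RESTRICTED_HOSTS:
        # A real Endpoint DLP policy could also surface a policy tip to the user.
        return "Block (matched: " + ", ".join(matches) + ")"
    return "Audit" if matches else "Allow"

if __name__ == "__main__":
    sample = "Client SSN is 123-45-6789; please summarize the attached contract."
    print(evaluate_paste(sample, "chat.openai.com"))       # expected: Block (...)
    print(evaluate_paste(sample, "intranet.contoso.com"))  # expected: Audit
```

In a deployment, this kind of decision would be made by the Endpoint DLP agent itself, with the classification and actions defined centrally in policy rather than in code.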
Conclusion
While concerns over sensitive information being stored in ChatGPT continue to rise, powerful tools are available to prevent it from getting there in the first place. By leveraging solutions such as Microsoft Defender for Endpoint and Microsoft Purview DLP, organizations can keep confidential information protected and prevent it from being copied into AI chatbots. As AI continues to evolve and become more prevalent in our daily lives, it will be important to remain vigilant and proactive in protecting sensitive data from potential threats.