Leveraging AI: The Risks of Oversharing and How to Handle Them Like a Pro

By Maddy Dahl - March 18, 2025

Originally posted to MaddyDahl.com.

With tailored recommendations, task automation, and precise analytics, AI is reimagining workplace productivity. Alongside its undeniable benefits, however, comes a pressing challenge: safeguarding data privacy. In an era where sensitive information flows seamlessly between users and AI platforms, the risk of oversharing data remains a top concern for end users and stakeholders alike.

Every interaction with AI carries the potential to inadvertently expose confidential information. Pasting sensitive or proprietary details into prompts, or sharing organizational data carelessly while completing a task, can lead to serious privacy breaches. The implications of such exposures extend beyond personal inconvenience, affecting enterprise-level systems, trust, and compliance with regulations.

This article unpacks the privacy challenges associated with AI and explores actionable strategies for secure, responsible utilization across workplaces and industries. 

Creating a Foundational Understanding of AI and Generative AI 

To ensure we are all aligned on what AI is and what it might be capable of, let’s start with a basic definition. AI, or artificial intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. These machines can perform tasks such as problem-solving, decision-making, language understanding, and even visual perception. When thinking about AI, there are typically two sides of the coin: those for and those against its adoption. But those against may not realize all the places AI is already integrated: facial recognition (Face ID), GPS navigation like Google Maps, and suggested text on your smartphone are all examples.

So how do we get from Google Maps to Copilot? Great question. Copilot, ChatGPT, Claude, Grok, Gemini, and DeepSeek are all forms of generative AI, a subset of artificial intelligence that focuses on creating new content, such as text, images, music, or even code. Unlike traditional AI, which might classify data or make predictions based on existing information, generative AI can produce original outputs based on patterns and examples it has learned.
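
To make this concrete, here is a minimal sketch of what interacting with a generative AI model looks like in code, using the OpenAI Python SDK as a stand-in for any chat-style tool; the model name and sample prompt are illustrative assumptions. Notice that everything placed in the prompt leaves your machine, which is exactly where the oversharing risk discussed below begins.

```python
# A minimal sketch of a generative AI call (assumes the openai package is
# installed and OPENAI_API_KEY is set in the environment).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        # Everything in this prompt is sent to the vendor's servers.
        {"role": "user", "content": "Draft a two-sentence summary of our remote work policy update."},
    ],
)
print(response.choices[0].message.content)
```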

Copilot, which we will focus on primarily in this article, learns from interactions through reinforcement learning: feedback from user interactions is incorporated, and Microsoft applies periodic updates to the underlying model, GPT-4o at the time of this writing. The publicly available version, Copilot Chat, can be used within an organizational setting with Enterprise Data Protection, giving stakeholders peace of mind if proprietary or sensitive data is entered in prompts.

Additionally, the Copilot for M365 license not only processes the prompt but also integrates data across your organization, including Teams chats, emails, SharePoint sites, and more. As you can imagine, this integration comes with its own set of concerns regarding the extensive amount of data organizations store in Microsoft ecosystems and how to best protect that information. 

My colleague and fellow MVP Seth Bacon has a great article on capturing the value from generative AI here.

Privacy Risks of AI in the Workplace 

When users unknowingly provide too much information or utilize AI tools improperly, they expose themselves—and their organizations—to significant risks. Understanding these risks is crucial to developing a comprehensive AI deployment strategy that prioritizes data protection. Let’s look into the key mechanisms through which oversharing can happen and how we can mitigate them. 

How Does Oversharing Occur? 

  1. Unintentional Disclosures Through Prompts:
    Users may inadvertently include sensitive data in their queries while seeking task-specific help. For instance, asking ChatGPT or DeepSeek (where prompt data can be used for search optimization) to analyze proprietary files, such as client contracts or financial records, might compromise that data by uploading it to external servers without adequate safeguards. Questions about ownership, misuse, and compliance arise when third-party vendors handle such data.

    Example: “Draft a summary based on this client proposal” could result in sensitive business strategies being uploaded to vendor platforms and potentially surfacing in responses to other users’ queries about that client.

  2. Persistent Memory Features:
    Certain AI platforms retain user interactions over time to enhance performance. Without user awareness, these retained histories could store sensitive details, increasing the likelihood of data misuse or unauthorized retrieval.
  3. Overshared sites and files across M365 (Copilot-specific):
    When adopting Copilot, our recommendation is for organizations to conduct a thorough review of their security and governance frameworks. For instance, many organizations utilizing SharePoint may not have updated their link permissions or realized how many links have been created as “People in my organization.” While this scope protects against external sharing, Copilot searches intensively and will surface any information relevant to the user’s query, including content behind “People in my organization” links. It is therefore crucial to use the tools unlocked with your Copilot licenses, such as SharePoint Advanced Management, or Microsoft Purview tools like sensitivity labels. The sketch below shows one way to inventory these links programmatically.
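
As a hedged illustration of that review, this sketch uses Microsoft Graph to flag files in a document library shared via “People in my organization” links. The ACCESS_TOKEN and DRIVE_ID values are placeholders you would supply (for example, via MSAL with Files.Read.All permission), and pagination and throttling handling are omitted for brevity; SharePoint Advanced Management provides equivalent reporting without custom code.

```python
# A sketch of auditing organization-scoped sharing links via Microsoft Graph.
# Placeholders: ACCESS_TOKEN (a token with Files.Read.All) and DRIVE_ID
# (the document library's drive ID). Pagination omitted for brevity.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token acquired via MSAL or similar>"  # placeholder
DRIVE_ID = "<target drive id>"                         # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def org_scoped_links(drive_id: str):
    """Yield (file name, link URL) for org-wide sharing links in a drive."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json().get("value", [])
        for perm in perms:
            link = perm.get("link", {})
            # scope "organization" is what renders as "People in <org>"
            if link.get("scope") == "organization":
                yield item["name"], link.get("webUrl")

for name, url in org_scoped_links(DRIVE_ID):
    print(f"{name}: {url}")
```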

Best Practices for Data Privacy with AI 

Adopting AI responsibly requires an informed, strategic approach focused on secure practices. No matter the tool your organization moves forward with, the following are my top tips for ensuring secure adoption. 

For Organizations: 

  1. Promote Employee Education and Awareness
    Practical training sessions build understanding of safe practices, such as distinguishing shareable from sensitive data, while encouraging adoption of a tool that can further improve efficiency. Sharing anonymization techniques and effective prompts identified by pilot users can significantly reduce risk (see the sanitization sketch after this list).

    Example: Training users to input prompts such as “I have a client in the healthcare sector” instead of “my client, XYZ”.

  2. Choose Secure Platforms Consciously
    Selecting a platform that provides enterprise data protection gives organizations greater control over data management. While budget constraints may limit an organization’s choices, proper training can ensure that security remains a priority.
  3. Protect Against Data Loss and Insider Risk
    Establish company-wide acceptable use policies specific to AI. Regularly review reports on sensitive data and unprotected files in Copilot interactions, set up alerts for risky AI use (E5 licensing required), and configure sensitivity labels so that Copilot responses and created documents inherit these protections.  
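
As a companion to the anonymization training in tip #1, here is a minimal sanitization sketch that scrubs prompts before they leave the organization. The patterns and the “Contoso” client name are illustrative assumptions; a real deployment would lean on a DLP or PII-detection service rather than hand-rolled regexes.

```python
# A minimal pre-prompt sanitizer. Patterns run in order, so broader
# identifiers (emails) are redacted before client-name substitutions.
import re

REPLACEMENTS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[REDACTED EMAIL]",       # email addresses
    r"\b\d{3}-\d{2}-\d{4}\b": "[REDACTED SSN]",           # US SSN pattern
    r"\bContoso\b": "a client in the healthcare sector",  # assumed client name
}

def sanitize(prompt: str) -> str:
    """Replace known sensitive terms and patterns before a prompt is sent."""
    for pattern, stand_in in REPLACEMENTS.items():
        prompt = re.sub(pattern, stand_in, prompt, flags=re.IGNORECASE)
    return prompt

print(sanitize("Summarize the renewal terms for Contoso (contact: jane@contoso.com)."))
# -> Summarize the renewal terms for a client in the healthcare sector
#    (contact: [REDACTED EMAIL]).
```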

Microsoft has also established key principles that organizations can use as a jumping-off point for adopting AI. 

For Individuals: 

  1. Only use the dedicated AI tools as determined by your organization
    As you are now aware, there are numerous factors to consider when selecting the appropriate tool. Honor the choice made by your organization’s stakeholders, and if you are still interested in a tool that wasn’t selected, check out tip #3.
  2. Stay informed about the latest security practices
    To safeguard against potential AI risks, stay current on the latest security practices and updates. Regularly participate in training sessions provided by your organization and apply those strategies to ensure safe and responsible AI usage.
  3. Curious about all that AI can do? Have your own personal accounts
    If you are enthusiastic about exploring the latest features of these tools, use a personal account to test them. Stay compliant with your company’s security protocols while comparing the tools, and share your own pros and cons with us!
  4. Monitor and report any unusual AI behavior
    Do your part: if you see something, say something! Keep an eye out for any irregularities in AI interactions that might indicate risks. Immediately report any suspicious behavior or data exposures to your IT team to prevent data breaches and keep the AI environment secure. 

The rise of AI is accelerating productivity across sectors—from finance and healthcare to retail and marketing. However, these advancements come with significant challenges, especially regarding user privacy and data security. Ensuring proper governance can help unlock AI’s potential while mitigating risks of oversharing sensitive information. 

As workplaces become more reliant on AI, maintaining a balanced approach focused on innovation and security will define the leaders of tomorrow’s digital era. The key question is not whether you’ll adopt these tools—but how effectively and responsibly you’ll use them for long-term impact. 

Want to host a live training for your organization on the capabilities of AI or Copilot? Reach out to me at maddy.dahl@rsmus.com, or learn more from my bio.

Maddy is a Senior Associate in the Modern Work Practice and leads the End User Training and Adoption workstream. As a Microsoft Certified Trainer, Maddy has helped clients not only enable the various technologies within the Microsoft 365 suite, but ensure end users are eager to adopt the change. Maddy has spoken at community events in the Twin Cities area and seeks to bring collaboration and connection to the communities she is a part of.

