AI Privacy

Is ChatGPT Safe for Confidential Work? What You Need to Know

Artificial intelligence is rapidly changing the way people work, study, and communicate. From drafting reports and analysing data to summarising complex documents, tools like ChatGPT help people move faster and think more clearly. Ignoring AI altogether may feel like the safest option — but refusing to engage with it can quickly become a disadvantage. Many of the people around you are already using AI to improve efficiency and sharpen their thinking.

At the same time, much of everyday life involves confidential information — whether that is client data, legal documents, medical records, personal finances, research notes, or sensitive communications. This creates a natural tension. AI tools are powerful and convenient — but how do they handle the information you provide?

This article explores how ChatGPT works from a data perspective, where the genuine risks lie, and how you can use AI responsibly. The goal isn't to discourage adoption, but to help you protect yourself and the people you work with — while still gaining the full benefits that modern AI tools can offer.

What “Confidential Work” Actually Means

Before asking whether ChatGPT is safe for confidential work, it's important to define what “confidential” really means. It's not just dramatic “top secret” material. In practice, it often includes everyday information that is regulated, contractually protected, or simply private.

  • Personal data. Under GDPR, this includes any information that can identify an individual, from names and addresses to financial or employee records, and handling it carries legal obligations.
  • Commercially sensitive information. Pricing models, intellectual property, supplier agreements, and internal financial data may not always be regulated, but they are often critical to competitive advantage.
  • Legally protected material. Legal privilege, medical records, internal strategy documents, and client contracts are routinely protected by strict legal, regulatory, or contractual duties.

For many people, confidential information is not exceptional — it is routine. That's why understanding how AI tools handle information isn't about fear; it's about taking responsibility for your own data.

A Note on Data Retention and Legal Proceedings

In late 2023, The New York Times and other major publishers filed a copyright lawsuit against OpenAI, which led to significant rulings on data preservation. In May 2025, a federal court ordered OpenAI to preserve all ChatGPT output log data, including conversations users had already deleted, for potential use in legal discovery. The court ultimately required OpenAI to produce 20 million anonymised chat logs.

This illustrates a broader principle: when you submit information to a third-party AI platform, that information becomes subject to legal and regulatory frameworks beyond your control. It may be retained, disclosed, or used for purposes you did not intend. This creates a meaningful risk when using consumer-grade AI products.

What Is an Enterprise AI Offering?

An enterprise AI offering is a version of an AI platform designed specifically for organisations that operate under regulatory, legal, or contractual obligations. It goes beyond a standard consumer subscription and introduces formal governance, security controls, and contractual protections.

While consumer AI tools focus on accessibility and ease of use, enterprise offerings focus on accountability and compliance.

Why Enterprise AI Exists

Large organisations cannot rely solely on consumer-grade terms of service. Those in regulated industries must be able to answer specific questions when adopting AI tools:

  • Where is the data processed and stored?
  • Is it retained by the provider, and for how long?
  • Who can access it, and under what conditions?
  • Can we demonstrate compliance if audited or challenged by a regulator?

Enterprise AI offerings are designed to provide these answers through formal mechanisms: data processing agreements, contractual compliance commitments, regional data residency options, and — at the highest level of control — zero data retention.

What Is Zero Data Retention (ZDR)?

Zero Data Retention (ZDR) is a data-handling model in which an AI provider processes your request but does not store the content of that request after it has been completed.

In simple terms:

  • Your prompt is sent for processing.
  • The model generates a response.
  • The request and response are not retained by the provider once the transaction is finished.

There is no persistent storage of the conversation for training, logging, or later reuse. ZDR shifts the risk model from “stored externally” to “processed transiently.”
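To make this concrete, here is a minimal TypeScript sketch of what a ZDR-style relay handler might look like. The provider URL, header names, and request shape are illustrative assumptions rather than any real vendor's API; the point is simply that the handler forwards the request and returns the response without writing either to a database or a log.

```typescript
// Minimal sketch of a ZDR-style relay handler (illustrative only).
// The provider URL, header names, and request shape are assumptions,
// not any specific vendor's API.

interface ChatRequest {
  prompt: string;
}

interface ChatResponse {
  output: string;
}

// Hypothetical provider endpoint covered by a zero-data-retention agreement.
const PROVIDER_URL = "https://api.example-provider.com/v1/chat";

async function relayChat(req: ChatRequest, apiKey: string): Promise<ChatResponse> {
  // Forward the prompt directly; nothing is written to a database or log.
  const res = await fetch(PROVIDER_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ prompt: req.prompt }),
  });

  if (!res.ok) {
    // Report only the status code; never echo request content into an error.
    throw new Error(`Provider request failed with status ${res.status}`);
  }

  const data = (await res.json()) as { output: string };

  // Once this function returns, the prompt and response exist only in the
  // caller's hands; the relay keeps no copy.
  return { output: data.output };
}
```

What matters in this design is what is absent: the handler has no persistence layer to write to, so a breach of the relay, or a discovery request aimed at it, finds no conversation content.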

The Limitation for Smaller Organisations and Individuals

The challenge is that enterprise agreements are typically structured for larger organisations with procurement teams and negotiated contracts. Individuals and small businesses often cannot access these arrangements directly.

This is why many people find themselves using consumer tools for sensitive work — even when the governance model was not designed with their privacy needs in mind.

Understanding this distinction is central to deciding whether a given AI tool is appropriate for confidential work.

Bringing Enterprise-Level Privacy to Everyone

Chapman AI Ltd operates under an enterprise agreement that includes Zero Data Retention access. To make this architecture accessible beyond large corporations, the company developed CloakAI.

CloakAI is a privacy-first web application designed for anyone handling sensitive information. Its structure is intentionally simple:

  • Conversations are stored in an encrypted vault to which only the user holds the key (see the sketch below)
  • The CloakAI relay does not store chat content
  • Requests are processed under Zero Data Retention terms
  • Data is not retained for model training
  • Users maintain direct control over their history

This approach brings enterprise-grade privacy architecture to individuals and small organisations that would otherwise lack access to negotiated enterprise terms.

The result is not just “using AI more carefully,” but changing the architecture entirely — from shared consumer infrastructure to privacy-by-design deployment.
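As a concrete illustration of the encrypted-vault pattern, the sketch below shows how a client-side vault of this kind can be built on the browser's standard Web Crypto API. It is a simplified assumption about how such a vault might work, not CloakAI's actual implementation: the key is derived from a passphrase only the user knows, so whatever sits behind the vault only ever sees ciphertext.

```typescript
// Simplified sketch of a client-side encrypted conversation vault built on
// the standard Web Crypto API. This illustrates the general pattern only;
// it is not CloakAI's actual implementation.

// Derive an AES-GCM key from the user's passphrase. Only the user knows
// the passphrase, so only the user can ever decrypt the vault.
async function deriveVaultKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(passphrase),
    "PBKDF2",
    false,
    ["deriveKey"],
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
}

// Encrypt a conversation before it is persisted. Whatever stores the
// result, local storage or a server, only ever sees ciphertext.
async function encryptConversation(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext: new Uint8Array(ciphertext) };
}

// Decrypt a stored conversation with the same derived key.
async function decryptConversation(
  key: CryptoKey,
  iv: Uint8Array,
  ciphertext: Uint8Array,
): Promise<string> {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return new TextDecoder().decode(plaintext);
}
```

The trade-off is deliberate: because the key never leaves the user, losing the passphrase means losing the vault, and that is precisely what keeps control of the history with the user rather than the provider.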

Conclusion

Understanding the differences between consumer and enterprise AI offerings is essential for anyone working with sensitive information. The issue is not whether AI should be used — it is how it should be used. By recognising the distinctions in data handling, contractual safeguards, and governance, you can make informed decisions about when a tool is appropriate, and when a more privacy-focused alternative should be considered.

Ready to use AI with confidence?

CloakAI brings enterprise-grade privacy to anyone handling sensitive work.