Using AI to Help with Marketing Materials: Is it a Breach of Your NDA?
- natalia6323
- Jan 26

As artificial intelligence tools become more powerful, faster, and easier to use, this question is coming up constantly in the M&A and capital-raising world:
“I’m working on a live deal under NDA. Can I upload information into ChatGPT or Grok to help draft marketing materials, CIM language, or investor outreach?”
It’s a fair question — and one we’ve heard directly from our affiliated investment bankers, placement agents, and M&A advisors. AI can be an incredible productivity tool. But when NDAs, confidential deal data, and regulated information are involved, how you use AI matters just as much as whether you use it at all.
Below is the clear compliance answer — followed by practical mitigation strategies that allow you to use AI tools more safely and responsibly.
The Short Answer
Uploading confidential information protected by a Non-Disclosure Agreement (NDA) into public AI models like ChatGPT or Grok is generally considered a breach of confidentiality and a significant security risk.
Even when the intention is efficiency or better marketing language, transferring sensitive deal data to third-party AI platforms means you lose control over that information.
Let’s break down why.
What Actually Happens When You Upload NDA-Restricted Data
1. Direct Breach of the NDA
Most NDAs prohibit disclosure of confidential information to any third party.
AI platforms are third-party service providers. When you input:
Company financials
Deal structure or valuation
Strategic plans
Customer, supplier, or employee data
…you are disclosing that information outside the protected deal ecosystem.
That exposure can create:
Contractual liability
Legal disputes
Reputational damage
Loss of trust with counterparties
Even if the disclosure feels “invisible,” from a legal standpoint, it still counts.
2. Information May Be Used to Train AI Models
By default, many AI platforms — including consumer versions of tools operated by OpenAI and xAI — may use user inputs to improve or train future models.
This means:
Proprietary language
Unique deal structures
Strategic positioning
…could theoretically become part of the broader training corpus and resurface in responses to other users.
Even if that risk feels remote, it is incompatible with the confidentiality obligations most NDAs impose.
3. Risk of Data Leaks (Even Without Training)
Confidential data entered into AI tools can be exposed through:
Third-party data breaches
If an AI platform’s servers are compromised, stored conversations may be accessed by attackers.
Platform bugs or human error
There have been documented cases where chat histories, titles, or shared links were exposed unintentionally.
Persistent storage
Data may be retained on external servers even after a user deletes a chat, creating long-term security risk outside your control.
Once sensitive information leaves your environment, you cannot fully claw it back.
4. Regulatory and Compliance Violations
If deal materials contain:
Personally Identifiable Information (PII)
Financial account data
Health or employment records
Uploading that information into public AI tools can trigger violations of privacy and data-protection laws such as the Gramm-Leach-Bliley Act (financial account data), HIPAA (health records), and state privacy statutes like the CCPA, as well as the GDPR where EU personal data is involved.
For regulated professionals, this raises not just legal risk — but supervisory and enforcement risk as well.
FINRA's Stance
As compliance professionals here at Britehorn Securities, we're not here to offer up legal advice, but to make sure our representatives operate within the bounds of SEC, FINRA, and state regulations. So what's FINRA's view?
FINRA’s rules are technology-neutral — meaning that any technology you use as part of business activity is subject to the same rules you’d apply to traditional tools. That includes GenAI tools like ChatGPT or Grok.
However, FINRA’s latest oversight reports explicitly mention AI as an emerging risk area. FINRA expects firms not only to comply with existing rules when using AI, but to anticipate risks such as cybersecurity, data privacy, outsourcing risk, and new communications methods driven by AI. While not specific to NDA-protected data or generative AI model training, this shows regulators are actively watching how firms implement AI — including how data governance and compliance obligations are applied.
FINRA emphasizes that using third-party technology — including AI platforms — does not outsource responsibility. Broker-dealers and their registered representatives remain responsible for the security, integrity, and compliance of any technology they use, even if it’s cloud-based or provided by an external vendor.
Mitigation: How to Use AI More Safely on Live Deals
The answer is not “never use AI.”
The answer is use it correctly.
1. Sanitize Data Before Input
Before using AI to assist with marketing materials (a minimal redaction sketch follows the lists below):
Remove company names
Remove financial figures
Remove transaction structure details
Remove dates, locations, and identifiers
Instead, use:
Generic placeholders
Hypothetical scenarios
High-level descriptions
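If your team wants to make this sanitization step repeatable, a lightweight script can apply the placeholders before anything is pasted into an AI tool. The following is a minimal, hypothetical Python sketch: the watchlist names, regex patterns, and placeholder labels are illustrative assumptions only, and an automated pass like this supplements, rather than replaces, a human check.

```python
import re

# Hypothetical watchlist of names known to appear in the deal materials.
WATCHLIST = ["Acme Holdings", "Jane Doe"]

def sanitize(text: str) -> str:
    """Swap obvious identifiers for generic placeholders before any AI-assisted drafting."""
    # Dollar amounts, e.g. "$12.5M" or "$4,300,000"
    text = re.sub(r"\$\s?\d[\d,\.]*(?:\s?(?:[MBK]\b|million|billion))?",
                  "[FINANCIAL FIGURE]", text, flags=re.IGNORECASE)
    # Percentages, e.g. "35%"
    text = re.sub(r"\b\d{1,3}(?:\.\d+)?\s?%", "[PERCENTAGE]", text)
    # Dates, e.g. "March 14, 2025" or "03/14/2025"
    text = re.sub(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2},?\s+\d{4}\b",
                  "[DATE]", text)
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    # Company and personal names from the watchlist
    for name in WATCHLIST:
        text = re.sub(re.escape(name), "[COMPANY OR PERSON]", text, flags=re.IGNORECASE)
    return text

print(sanitize("Acme Holdings posted $12.5M EBITDA (35% margin) as of March 14, 2025."))
# -> "[COMPANY OR PERSON] posted [FINANCIAL FIGURE] EBITDA ([PERCENTAGE] margin) as of [DATE]."
```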
2. Check and Adjust Privacy Settings
If AI tools are approved for limited use, ensure privacy controls are enabled:
ChatGPT
Disable chat history and training in settings to prevent conversations from being used for model improvement.
Grok
Use Private Chat mode (ghost icon), which prevents data from being used for training.
These steps do not eliminate risk, but they meaningfully reduce it.
3. Opt for Enterprise or Team Versions
The best option is to use zero-retention, enterprise versions of AI tools. Enterprise-grade AI tools are fundamentally different from consumer versions. Options such as ChatGPT Team and ChatGPT Enterprise typically offer:
Zero-retention or limited-retention policies
No training on customer data
Enhanced security controls
For firms working on live M&A transactions, enterprise environments are the minimum acceptable baseline if AI is to be used at all.
4. Always Do a Thorough Human Review
Anything that an AI tool creates should be reviewed by a qualified professional before it is published or made available to potential buyers or investors.
It's not the AI's responsibility to ensure that you follow FINRA Rule 2210 (Communications with the Public) — representatives must still ensure communications are accurate, balanced, and compliant via human review. As a broker-dealer, we are certainly here to help with that — but the initial responsibility lies with each individual registered representative.
5. Use AI for Structure, Not Substance
A best-practice rule of thumb:
Let AI help with format, tone, and structure, not with confidential content (a simple pre-flight check is sketched after the lists below).
Safe uses include:
Rewriting generic marketing language
Improving readability
Creating outlines or templates
Editing already-approved, non-confidential text
Unsafe uses include:
Feeding raw CIM drafts
Uploading financial models
Sharing NDA-protected deal summaries
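To make the "structure, not substance" rule operational, some teams put a simple pre-flight check in front of any AI-assisted drafting. The sketch below is a hypothetical illustration, not a prescribed compliance control: it scans a draft for red-flag patterns (dollar figures, long digit runs, NDA-protected code names) and blocks submission rather than attempting to clean the text. The pattern list is an assumption you would tailor to your own deals and written supervisory procedures.

```python
import re

# Hypothetical red-flag patterns; tailor these to your own deal documents and policies.
RED_FLAGS = {
    "financial figure": r"\$\s?\d[\d,\.]*",
    "long digit run (account or tax ID)": r"\b\d{9,}\b",
    "NDA-protected name or code name": r"\b(?:Acme Holdings|Project Falcon)\b",  # illustrative placeholders
}

def preflight_check(text: str) -> list[str]:
    """Return the red flags found in the text; an empty list means it may proceed to human review."""
    return [label for label, pattern in RED_FLAGS.items()
            if re.search(pattern, text, re.IGNORECASE)]

draft = "Project Falcon generated $8.2M in revenue last fiscal year."
issues = preflight_check(draft)
if issues:
    print("Blocked before AI submission:", "; ".join(issues))
else:
    print("No obvious identifiers detected; human review is still required.")
```

Blocking at this stage, rather than auto-redacting, keeps the judgment call with a person, which is consistent with the human-review expectations discussed above.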
6. Consider Amending Your NDA
Classic NDAs weren't written with GenAI in mind and need to be updated to address model training, data retention, and third-party processing. This isn't as simple as adding "service providers" to the permitted information recipients, however. You should spell out the right protections (non-public models, no training on your inputs, zero retention, deletion commitments, absolutely no use of sensitive personal information, etc.).
However, two big caveats:
Permission ≠ immunity. Even if the NDA allows it, you can still create problems under privacy laws, trade secret handling, or contractual obligations to other parties. (The NDA just answers, “Is this disclosure permitted under this contract?”)
“Allowed” clauses are often conditional. Many clauses effectively say, “You may use tools only if they meet X controls.” If you use a consumer/public AI tool whose terms allow retention/training, you may still be outside the permission.
To reiterate, this is not legal advice. If you are using AI to help with marketing materials, we do recommend raising these NDA questions with your attorney.
The Bottom Line
AI is a powerful tool — but in M&A and capital-raising work, speed can never come at the expense of confidentiality.
Uploading NDA-protected information into public AI platforms:
Is typically a breach of the NDA
Creates lasting data security risk
Can trigger regulatory and compliance exposure
With proper sanitization, controls, and enterprise-grade tools, AI can still be used responsibly — but only within clearly defined guardrails.
Here at Britehorn Securities, we encourage human innovation with compliance built in from the start — and using the appropriate tools to boost effectiveness and productivity within proper guidelines.
If you have questions about acceptable AI use, NDA boundaries, or how to structure internal policies around emerging tools, our compliance team is always available to help! Contact us today to learn more.
This article is provided for general informational and educational purposes only and does not constitute legal, tax, or regulatory advice. The discussion herein is based on current interpretations of applicable laws, rules, regulations, and guidance, including FINRA rules, SEC guidance, and relevant judicial decisions, which are subject to change. Individual facts and circumstances may materially affect the analysis or outcome.
Registered representatives and associated persons should consult with their own legal, tax, and compliance advisors regarding the appropriateness of any AI tool, data-handling practice, or NDA provision discussed here. Nothing in this article should be construed as a recommendation to rely on any specific regulatory interpretation, no-action letter, or legal structure without independent professional review.
Britehorn Securities does not provide legal or tax advice and reserves the right to require additional documentation, supervisory controls, or structural modifications to ensure compliance with applicable federal and state securities laws, FINRA rules, and internal supervisory procedures.