The Question Nobody Wants to Answer Directly

Should you tell people when you use AI in your work?

The question makes people uncomfortable because the honest answer is nuanced in a world that prefers simple rules. Some people insist that all AI use should be disclosed. Others argue that AI is just a tool, and that nobody discloses using a calculator or spell check.

Both positions are wrong. The right answer depends on the context, the expectations of the other party, and the nature of the work. Here is a practical framework for making that decision.

Why This Matters More Than You Think

The ethics of AI disclosure are not academic. They have real consequences:

Trust. When someone discovers you used AI without disclosure in a context where they expected human effort, trust erodes. And trust, once lost, is expensive to rebuild.

Legal liability. In some contexts — legal documents, medical advice, financial analysis — undisclosed AI use creates legal exposure. The AI might be wrong, and if the consumer expected human expertise, you bear responsibility.

Market dynamics. If everyone uses AI without disclosure, the market cannot accurately price human expertise versus AI-assisted work. Clients paying for expert analysis deserve to know if part of that analysis was AI-generated.

Professional reputation. Being known as someone who uses AI thoughtfully and transparently is an asset. Being caught using AI deceptively is a career risk.

The Disclosure Framework

I evaluate every AI-assisted task on two dimensions:

Dimension 1: Expectations

What does the other party expect? This varies by context:

High expectation of human work:

  • Original creative writing published under your name
  • Expert analysis that clients pay for specifically because of your expertise
  • Academic or educational submissions
  • Legal, medical, or financial advice
  • Personal communications (heartfelt emails, handwritten notes)

Lower expectation of human work:

  • Marketing copy and content
  • Internal documentation and reports
  • Code written for your own projects
  • Email drafts and business communication
  • Data analysis and research summaries

Dimension 2: Stakes

What happens if the AI is wrong and the error goes undetected?

High stakes:

  • Decisions affecting people's health, finances, or legal standing
  • Published information that readers will rely on as factual
  • Contractual deliverables where accuracy is guaranteed
  • Content that represents your expertise and reputation

Low stakes:

  • Internal brainstorming and ideation
  • Draft content that goes through human review
  • Personal productivity tasks
  • Exploratory research and analysis

The Decision Matrix

Combine both dimensions:

High expectations + High stakes = Always disclose. This is non-negotiable. If someone is paying for your expertise and the stakes are high, they need to know AI was involved. Period.

High expectations + Low stakes = Disclose unless the norm has shifted. If you are writing a blog post and your audience expects human-written content, disclose. But if AI-assisted content is the industry norm and your audience is aware, disclosure is optional.

Low expectations + High stakes = Disclose and verify. Even if nobody expects purely human work, high-stakes output requires transparency and human verification. "This analysis was AI-assisted and reviewed by [human expert]" is the right framing.

Low expectations + Low stakes = Disclosure optional. Internal efficiency, personal productivity, and routine tasks do not require disclosure. Nobody needs to know you used AI to draft your meeting agenda.
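The matrix above is mechanical enough to express as code. Here is a minimal sketch in Python; the function name, inputs, and recommendation strings are illustrative labels I chose, not part of any standard or library:

```python
# Hypothetical sketch: the two-dimensional disclosure matrix as a lookup.
# "expectation" = how strongly the other party expects human work;
# "stakes" = the cost if an undetected AI error ships.

def disclosure_guidance(expectation: str, stakes: str) -> str:
    """Map (expectation, stakes), each "high" or "low",
    to a disclosure recommendation from the matrix."""
    matrix = {
        ("high", "high"): "always disclose",
        ("high", "low"): "disclose unless the norm has shifted",
        ("low", "high"): "disclose and verify",
        ("low", "low"): "disclosure optional",
    }
    key = (expectation.lower(), stakes.lower())
    if key not in matrix:
        raise ValueError("expectation and stakes must each be 'high' or 'low'")
    return matrix[key]
```

For example, an expert report a client relies on (`disclosure_guidance("high", "high")`) lands on "always disclose", while a meeting agenda draft (`disclosure_guidance("low", "low")`) lands on "disclosure optional". The hard judgment, of course, is classifying the inputs, not applying the table.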

Specific Scenarios

Content Creation

If you publish content under your name, should you disclose AI assistance?

My approach: I use AI extensively in content creation. I do not claim that every word was typed by my fingers, but I ensure that every idea, every opinion, and every recommendation comes from my experience and judgment. The AI is a drafting tool. The thinking is mine.

I do not add "written with AI" disclaimers to every article. But if someone asks, I am honest about my process. And I never publish AI output without substantial editing and fact-checking.

The standard I hold myself to: if the AI disappeared tomorrow, could I write this same article from my own knowledge, just slower? If yes, AI assistance is a productivity tool. If no, I am passing off AI knowledge as my own, which is dishonest.

Client Work

If you are a consultant or service provider using AI:

  • Disclose if the client is paying for your time. If the engagement is billed hourly and AI reduces the hours, the client should know. Charging forty hours for ten hours of AI-assisted work is fraud.
  • Frame it as a capability if the client is paying for outcomes. "I use AI tools to deliver higher quality work faster" is honest and positions AI use as a benefit.
  • Never deliver raw AI output as your own expert analysis. Clients pay for your judgment. AI-generated analysis without your review and refinement is not what they hired you for.

Code and Software

AI-assisted coding is already the norm. Most developers use AI tools daily. Disclosure expectations vary:

  • Open source contributions — disclose if the project's contribution guidelines require it.
  • Professional work — disclosure is generally not expected. You are hired for the outcome, not the method.
  • Code you sell as a product — no disclosure needed. Users care about the product quality, not how it was written.
  • Code review context — your team should know if AI generated the code so they can review accordingly.

Hiring and Job Applications

This is a gray area that is evolving rapidly:

  • Resume writing — AI-polished resumes are common and generally acceptable. AI-fabricated experience is fraud.
  • Take-home assignments — using AI without disclosure feels dishonest if the purpose is evaluating your individual skills. Using it when the role involves AI-assisted work is arguably demonstrating a relevant skill.
  • Cover letters — AI-generated cover letters are easy to spot and signal low effort. If you use AI, make sure the output is genuinely personalized.

The Honesty Principle

When in doubt, apply this test: would the other party feel deceived if they learned how this was produced?

If the answer is yes, disclose. If the answer is no, use your judgment. If the answer is "I am not sure," lean toward disclosure. The cost of unnecessary disclosure is near zero. The cost of discovered deception is high.

The Evolving Norm

AI disclosure norms are changing fast. What felt like cheating two years ago is standard practice today. The direction is toward greater acceptance of AI-assisted work, which means disclosure will become less necessary over time for routine tasks.

But the core principle remains: when someone's expectations, decisions, or trust depend on knowing how work was produced, transparency is not optional.

How I Handle It Personally

I am transparent about my AI use without making a performance of it:

  • I tell clients that I use AI tools as part of my workflow
  • I use AI extensively for first drafts and research but ensure every output reflects my thinking and judgment
  • I verify facts and claims independently, regardless of whether I or an AI produced them
  • I do not disclose AI use on routine communications, internal documents, or personal productivity
  • When asked directly, I am completely honest about my process

This approach has never caused a problem. In fact, most people are curious about the process and impressed by the output quality.

FAQ

If AI use is a productivity tool like email or spreadsheets, why disclose at all?

Because the output characteristics are different. A spreadsheet computes what you tell it to compute. AI generates novel text and analysis that might contain errors, biases, or fabricated information. The risk profile is different, which means the disclosure calculus is different.

Will disclosure norms stabilize or keep changing?

They will stabilize, but not yet. We are in a transitional period where expectations vary widely. Being more transparent than strictly necessary is the safe strategy during transitions.

What about competitors who do not disclose AI use?

Competitors who use AI without disclosure and produce good work are making a reasonable choice in many contexts. Competitors who use AI to produce misleading or low-quality work without disclosure are creating risk for themselves. Focus on your own standards.

Should companies have AI disclosure policies?

Yes. Clear policies protect both the company and individual employees from making inconsistent decisions. The policy should address client work, public content, internal communications, and hiring.

Written by Atticus Li

Revenue & experimentation leader — behavioral economics, CRO, and AI. CXL & Mindworx certified. $30M+ in verified impact.