Introductory note from Anthony Baruffi, Founder, Baruffi Private Wealth…
Over the past year, I’ve reviewed and used generative AI tools for tasks ranging from portfolio research to day-to-day firm operations. They’re astonishingly powerful, and, if used carelessly, astonishingly dangerous. The CFA Institute article summarized below cuts through the hype and shows exactly where hidden model biases lurk and how to neutralize them. I’m sharing the key points, along with a few prompt “shields” and workflow tips, so you, your business, and even your personal life can enjoy the upside of AI while steering clear of costly missteps.
The following article is based on “AI Bias by Design: What the Claude Prompt Leak Reveals for Investment Professionals” by Dan Philps, PhD, CFA, and Ram Gopal, CFA Institute Enterprising Investor, 14 May 2025.
Generative AI tools such as ChatGPT, Claude, Gemini, and Copilot can digest 50-page contracts before lunch and brainstorm a marketing plan on the drive home. But a leaked 24,000-token “system prompt” (the invisible rule book Claude follows) shows these tools also hard-wire human biases into their answers.
Below is a plain-English tour of the risks, plus simple prompts and habits you can use—whether you’re valuing a company, running a shop, or planning next weekend.
Why this matters beyond Wall Street
| If you are a… | Hidden AI risk | Real-world impact |
|---|---|---|
| Investor | Over-confident summaries skip footnotes | Mispriced risk in your portfolio |
| Business owner | AI clings to first framing (“launch ASAP!”) | Blind spots in market or compliance checks |
| Everyday user | Tool favors newest over best sources | Advice that ignores durable facts (tax rules, safety standards) |
The seven biases revealed—and how to disarm them
Quick fix: Copy each Mitigation Prompt into chat before your real question.
| Bias | What the leak showed | Mitigation Prompt |
|---|---|---|
| Confirmation | Model echoes your wording, even if wrong | “If my framing is inaccurate, correct it before answering.” |
| Anchoring | Clings to first impression | “Challenge my assumptions and offer alternative views.” |
| Availability | Overweights recent documents | “Rank sources by evidential strength, not recency.” |
| Fluency | Smooth tone hides uncertainty | “Include probability ranges or confidence levels.” |
| Simulated reasoning | Neat logic that’s really post-hoc | “Show only reasoning actually used, no decoration.” |
| Temporal gap | Implies it knows events after Oct 2024 | “State your knowledge-cutoff date clearly.” |
| Truncation | Trims nuance to stay short | “Be comprehensive unless I ask for a summary.” |
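If you reach these models through an API rather than the chat window, the same mitigation prompts from the table above can be stacked into a single system preamble so every question starts with them. The sketch below is a minimal illustration using the Anthropic Python SDK; the model ID is a placeholder, and the helper name `ask_with_mitigations` is ours, not something from the article.

```python
# Minimal sketch: stack the seven mitigation prompts into one system preamble.
# Assumptions: the `anthropic` package is installed, ANTHROPIC_API_KEY is set
# in your environment, and the model ID below is a placeholder to replace
# with whichever model you actually use.
import anthropic

MITIGATION_PROMPTS = [
    "If my framing is inaccurate, correct it before answering.",
    "Challenge my assumptions and offer alternative views.",
    "Rank sources by evidential strength, not recency.",
    "Include probability ranges or confidence levels.",
    "Show only reasoning actually used, no decoration.",
    "State your knowledge-cutoff date clearly.",
    "Be comprehensive unless I ask for a summary.",
]

def ask_with_mitigations(question: str) -> str:
    """Send `question` with all seven mitigation prompts prepended as system rules."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder: substitute your model ID
        max_tokens=1024,
        system="\n".join(MITIGATION_PROMPTS),
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

print(ask_with_mitigations(
    "What are the biggest risks of concentrating a portfolio in a single large-cap tech stock?"
))
```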
Four habits that keep AI helpful—and honest
- Double-loop your queries. Run the same question twice, once without mitigation prompts and once with them, then compare; gaps highlight hidden bias. (API users: see the sketch after this list.)
- Log the entire exchange. Export the chat or copy it into your CRM/notes. The paper trail matters if auditors, regulators, or future-you asks, “Where did this number come from?”
- Mix human and machine insight.
  - Investors: pair AI earnings-call recaps with primary filings.
  - Owners: let a colleague vet AI-drafted contracts.
  - Consumers: cross-check AI health or legal tips with professionals.
- Run quarterly “bias drills.” Feed the model a historical case (a stock that imploded, a product recall) and see whether it surfaces the key red flags. Adjust prompts or tool choice accordingly.
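For API users, the first two habits can even be scripted. The sketch below is a rough illustration, again assuming the Anthropic Python SDK: it asks the same question twice, once plain and once behind a mitigation preamble, and appends both answers to a simple text log. The preamble wording, model ID, and log file name are all placeholders to adapt.

```python
# Rough sketch of the "double-loop" and "log the exchange" habits:
# run the same question with and without a mitigation preamble,
# then append both transcripts to a dated text log for your records.
# Assumptions: `anthropic` SDK installed, ANTHROPIC_API_KEY set; the model ID,
# preamble wording, and log file name are placeholders.
import datetime
import anthropic

PREAMBLE = (
    "Correct inaccurate framing. Rank evidence by strength, not recency. "
    "Quantify uncertainty. Be comprehensive. State your knowledge-cutoff date."
)

def ask(question: str, system: str | None = None) -> str:
    """Send one question; optionally prepend a system preamble."""
    client = anthropic.Anthropic()
    extra = {"system": system} if system else {}
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder: substitute your model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
        **extra,
    )
    return response.content[0].text

def double_loop(question: str, log_path: str = "ai_bias_log.txt") -> None:
    """Ask twice (plain vs. shielded) and log both answers for later review."""
    plain = ask(question)                      # loop 1: no mitigation prompts
    shielded = ask(question, system=PREAMBLE)  # loop 2: with the preamble
    stamp = datetime.datetime.now().isoformat(timespec="minutes")
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(
            f"\n=== {stamp} ===\nQ: {question}\n"
            f"--- Without preamble ---\n{plain}\n"
            f"--- With preamble ---\n{shielded}\n"
        )

double_loop("What are the main downside risks for small regional banks this quarter?")
```

Gaps between the two logged answers are exactly the hidden-bias signals the double-loop habit is meant to surface, and the log file doubles as the paper trail described above.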
Copy-ready “Master Prompt”
“Use an analytical tone. Correct inaccurate framing. Present dissenting as well as consensus views. Rank evidence by relevance, not recency. Quantify uncertainty with ranges or probabilities. Be comprehensive—do not truncate unless asked. State your knowledge-cutoff date and avoid simulating events after it.”
Paste this at the top of a new chat; you’ll feel the rigor immediately.
Big picture: Scale ≠ wisdom
Today’s AIs are optimized for usability—short, fluent answers that make us feel smart—not for truth or completeness. Bigger data alone won’t fix that. Progress requires sharper human oversight—the same qualities that separate disciplined investors, savvy owners, and smart everyday users from the pack.
Final takeaway
AI is like a power tool: enormous leverage in skilled hands, dangerous shortcuts in careless ones. Layer a few targeted prompts and disciplined review habits, and you’ll harvest AI’s speed without paying the price of hidden bias.
Source: Dan Philps, PhD, CFA, and Ram Gopal, “AI Bias by Design: What the Claude Prompt Leak Reveals for Investment Professionals,” CFA Institute Enterprising Investor, 14 May 2025.