
AI transforms both how we build products and how products work. As a product professional, you may use AI tools like ChatGPT for writing specs, analyzing user feedback, or generating ideas. Or you may design AI-powered features like recommendations, search, or automated support. Both uses raise ethical questions. When using AI tools, you might accidentally share sensitive user data or rely on biased suggestions. When building AI features, you might create systems that discriminate or manipulate.

This means there are always two sides to consider: ethical use of AI in your PM workflow (data privacy, over-reliance, verification) and ethical design of AI features (transparency, fairness, control). Understanding when AI helps versus when it hinders, how to validate AI-generated insights, and how to ensure AI features serve all users fairly helps you leverage AI's power while avoiding its pitfalls.

Exercise #1

Using AI tools responsibly

ChatGPT or similar AI tools can write user stories, create PRDs, and draft stakeholder emails in seconds. But using AI tools in your development process requires careful boundaries to maintain quality and accountability. Never let AI tools make product decisions. Use them to generate options, explore ideas, and speed up documentation, but ensure product judgment remains human.

When using AI to draft product requirements, always verify technical feasibility and business logic. The brilliant features it suggests might be technically impossible or might violate regulations. Protect your company's confidential information: don't paste customer data, revenue figures, or strategic plans into public AI tools. Many teams learned this lesson when Samsung engineers accidentally leaked proprietary code through ChatGPT.[1] Use enterprise versions with data protection agreements or anonymize sensitive information before using AI assistance.

Pro Tip: Create a team policy on what information can and cannot be shared with AI tools.

Exercise #2

Protecting data during user research synthesis

AI has made analyzing hundreds of user interview transcripts and extracting insights incredibly easy. Yes, it can spot patterns humans might miss, but those transcripts could contain personal information, medical details, and private opinions users shared in confidence.

To protect this data, always anonymize before uploading. Replace real names with User1, User2. Remove company names, locations, and any identifying details. Strip out sensitive information like health conditions or financial situations. What seems like harmless context to you could be personally identifiable when combined with other data.
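
As a rough illustration of this anonymization pass, here is a minimal Python sketch assuming plain-text transcripts. The name map and regular expressions are hypothetical placeholders, and real PII scrubbing usually deserves a dedicated tool or a manual review step.

```python
import re

# Hypothetical example: scrub a transcript before sharing it with an AI tool.
# The name map and patterns are assumptions; adapt them to your own research data.
KNOWN_NAMES = {"Maria Lopez": "User1", "James Chen": "User2"}

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(transcript: str) -> str:
    """Replace known names and strip obvious identifiers from a transcript."""
    for name, alias in KNOWN_NAMES.items():
        transcript = transcript.replace(name, alias)
    transcript = EMAIL_PATTERN.sub("[email removed]", transcript)
    transcript = PHONE_PATTERN.sub("[phone removed]", transcript)
    return transcript

raw = "Maria Lopez (maria@acme.io, +1 415 555 0100) said the onboarding felt slow."
print(anonymize(raw))
# -> "User1 ([email removed], [phone removed]) said the onboarding felt slow."
```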

Use enterprise AI tools with data agreements when handling sensitive research. Tools like Microsoft 365 Copilot[2] or Claude for Business[3] provide contractual guarantees about data handling. For especially sensitive research (healthcare, financial services), consider on-premise AI solutions so the data never leaves your infrastructure.

Exercise #3

Validating AI-generated insights

Believe it or not, AI can confidently state that "73% of millennials prefer voice interfaces over touch." This insight could reshape your entire roadmap, if it's actually true. AI tools often hallucinate statistics and misrepresent research, which is why you should always demand sources for AI-generated insights. When AI cites statistics, research papers, or market trends, verify them independently. Check whether quoted studies actually exist, whether numbers match original sources, and whether interpretations are accurate. Many teams have been burned by compelling but completely fabricated AI insights.

Cross-reference AI analysis with multiple sources. If an AI tool says your competitor raised $50M, check Crunchbase, press releases, and SEC filings. Use AI insights as starting points for investigation, not endpoints for decision-making. Build verification into your workflow — assign someone to fact-check before insights reach leadership.

Exercise #4

Build AI features users can trust

Your team is adding AI-powered features to your product — maybe automated recommendations, content generation, or predictive actions. But users are increasingly skeptical of AI. Building trust requires deliberate design choices that put user control and transparency first.

Start with low-stakes, high-value features. If you're building AI for a finance app, begin with spending insights, not investment advice. Design for user control at every step. Show AI suggestions as options, never automatic actions. Include clear "Turn off AI features" settings.

When AI generates content, mark it clearly, because users feel betrayed when they discover AI involvement after the fact.
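
One way to make this concrete: the hypothetical Python sketch below models a suggestion so the product can respect an opt-out setting, require explicit acceptance, and always label AI involvement. The class and field names are illustrative assumptions, not a real framework API.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    ai_generated: bool = True   # always labeled, never hidden from the user
    accepted: bool = False      # applied only after the user explicitly accepts it

def visible_suggestions(suggestions, user_settings):
    """Hide AI output entirely when the user has turned AI features off."""
    if not user_settings.get("ai_features_enabled", True):
        return [s for s in suggestions if not s.ai_generated]
    return suggestions

drafts = [Suggestion("Suggested reply: 'Thanks, we'll look into this today.'")]
print(visible_suggestions(drafts, {"ai_features_enabled": False}))  # []
print(visible_suggestions(drafts, {"ai_features_enabled": True}))   # [Suggestion(...)]
```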

Exercise #5

Transparency in AI capabilities

AI features often ship with impressive accuracy claims like "95% success rate!" But these metrics hide crucial details. Users deserve to know not just how often AI succeeds, but when it's likely to fail and why. List your AI features' specific strengths and weaknesses clearly. For example, your AI proofreader may excel at grammar and spelling, but it should also explicitly warn users that it may misunderstand creative writing or technical jargon. It could also show confidence levels for each suggestion — high for comma splices, low for tone recommendations. This honesty helps users calibrate their trust appropriately.

Additionally, build transparency into the interface itself. Instead of burying limitations in documentation, integrate them into the user experience.
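
As one sketch of what that could look like, the Python snippet below maps suggestion categories to confidence bands the interface can show next to each suggestion. The categories and thresholds are made-up illustrations, not product data.

```python
# Illustrative only: the categories and thresholds are assumptions.
CONFIDENCE_BY_CATEGORY = {
    "spelling": 0.97,
    "comma_splice": 0.90,
    "tone": 0.55,
}

def confidence_label(category: str) -> str:
    """Map a suggestion category to a confidence band the UI can display."""
    score = CONFIDENCE_BY_CATEGORY.get(category, 0.5)
    if score >= 0.85:
        return "High confidence"
    if score >= 0.65:
        return "Medium confidence"
    return "Low confidence: please review"

for category in ("comma_splice", "tone"):
    print(f"{category}: {confidence_label(category)}")
# comma_splice: High confidence
# tone: Low confidence: please review
```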

Exercise #6

Avoiding over-reliance on AI

Consider this scenario: after seeing Claude write excellent PRDs, your team stops writing original documents. Every user story, technical spec, and strategy doc starts with AI generation. Six months later, your product vision feels generic, and team members struggle to think strategically without AI prompts.

Use AI to enhance, not replace, product thinking. Start with human ideas and use AI to expand, challenge, or refine them. Write your first draft, then ask AI for alternatives. Brainstorm features yourself, then use AI to spot gaps. This maintains your product instincts while leveraging AI's processing power.

Schedule regular "AI-free" sessions. Run some sprint plannings, design reviews, and strategy sessions without any AI assistance. This keeps core product skills sharp and ensures your team can function if AI tools become unavailable. It also helps identify where AI truly adds value versus where it's become a crutch.

Exercise #7

Inclusive AI feature design

AI models inherit biases from their training data. When Pinterest's search algorithm consistently showed makeup tutorials for light skin tones[4], or when voice assistants struggled with accents[5], these weren't random failures. They revealed systematic exclusion baked into the AI.

To tackle this bias, test proactively across demographic dimensions. Before launching any AI feature, evaluate its performance across age groups, languages, regions, and abilities. Create test sets that specifically include edge cases: non-native speakers for text analysis, diverse photo libraries for image recognition, and varied accents for voice features. Track performance metrics separately for each group, because aggregate success rates hide discrimination.
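
Here is a minimal Python sketch of why per-group tracking matters, using made-up evaluation records; the group names and numbers are assumptions, but the pattern of reporting per-group rates alongside the aggregate is the point.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, did the AI feature succeed?).
results = [
    ("native_speaker", True), ("native_speaker", True), ("native_speaker", False),
    ("non_native_speaker", True), ("non_native_speaker", False), ("non_native_speaker", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [successes, attempts]
for group, success in results:
    totals[group][0] += int(success)
    totals[group][1] += 1

overall = sum(s for s, _ in totals.values()) / sum(n for _, n in totals.values())
print(f"Aggregate success rate: {overall:.0%}")       # 50% looks acceptable...
for group, (successes, attempts) in totals.items():
    print(f"{group}: {successes / attempts:.0%}")     # ...but hides 67% vs. 33%
```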

Build feedback loops specifically for bias detection. Add easy reporting mechanisms like "This doesn't work for me" with optional demographic context. Monitor these reports for patterns. When certain user groups consistently report problems, that's systematic bias requiring immediate attention.

Complete the lesson quiz to progress toward your course certificate