
AI systems occasionally make unexpected choices or generate surprising content, creating unique design challenges that traditional interfaces rarely face. Safety nets serve as essential protective mechanisms that maintain user confidence when working with AI. Well-designed undo functionality gives users peace of mind, allowing exploration without fear of irreversible consequences. When AI produces potentially problematic outputs, such as biased text or misleading information, thoughtful alert systems help users identify issues without causing unnecessary alarm. These warnings work best when paired with override mechanisms that let users correct AI decisions when needed.

Behind the scenes, carefully designed logging and audit trails create accountability while respecting privacy. The most effective safety nets balance protection with empowerment, giving users both security and agency when interacting with sometimes unpredictable AI systems. These patterns build appropriate trust while encouraging exploration of AI capabilities.

Exercise #1

Understanding AI safety nets

Traditional software follows set rules, but AI systems are different. While regular apps do exactly what they're programmed to do, AI can create outputs that designers never specifically planned for. AI systems often work with probabilities instead of certainties, making mistakes that look different from regular software bugs. They might generate content that seems correct but contains subtle errors, or make recommendations based on patterns users can't see. This unpredictability grows as AI gets more freedom to make decisions. Safety nets for AI are protective features that help users feel confident while letting the system show what it can do. These safeguards include:

  • Version tracking to record AI-generated content over time
  • Review spaces where users can check and approve outputs before they go live
  • Warning systems for potentially problematic content
  • Override controls that let users modify or reject AI suggestions when needed

Good safety nets don't just prevent problems; they build trust by showing that the system respects user choices and puts users' goals first rather than focusing solely on efficiency.
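To make these categories concrete, here is a minimal sketch, in TypeScript with entirely hypothetical names, of how a product team might describe the safeguards attached to an AI-powered feature as data:

```typescript
// Hypothetical sketch: the four safeguard categories above modeled as a
// configuration object a team could attach to each AI-powered feature.
// All names are illustrative, not taken from any specific framework.
interface SafetyNetConfig {
  versionTracking: boolean;                      // record AI-generated content over time
  requiresReview: boolean;                       // route outputs through a staging area first
  warningLevel: "none" | "standard" | "strict";  // disclaimers and content warnings
  allowUserOverride: boolean;                    // let users modify or reject suggestions
}

// Example: a customer-facing email drafting assistant keeps every safeguard on.
const emailAssistantSafetyNets: SafetyNetConfig = {
  versionTracking: true,
  requiresReview: true,
  warningLevel: "standard",
  allowUserOverride: true,
};
```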

Exercise #2

Designing version histories for AI-generated content

Version histories take on special importance in AI workflows because of how AI creates and refines outputs. When AI generates content like text responses, code, or images, each version might contain some elements users want to keep and others they want to change. In conversational AI interfaces, the chat history naturally preserves previous responses, allowing users to scroll back to reference earlier outputs. However, some AI tools require users to manually save outputs they want to keep or take screenshots of preferred versions, particularly when generating multiple variations that might be overwritten. Some AI design tools offer "variations" features that generate alternatives while preserving the original, but structured comparison tools remain limited.

Ideally, AI interfaces would provide clearer ways to track what changed between versions and why, capturing the feedback or prompt modifications that led to new outputs. For professional contexts where AI assists with design or content creation, more sophisticated version management would help teams track which parameters were adjusted between versions and what feedback prompted changes. As AI becomes more integrated into workflows, better version tracking will become essential.
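As a rough illustration of what such tracking could capture, the sketch below (TypeScript, with hypothetical field names) records each version alongside the prompt, feedback, and parameter changes that produced it:

```typescript
// Hypothetical sketch: a version record that keeps not just the output but also
// the prompt edits and feedback that led to it, so teams can see what changed
// between versions and why. Field names are illustrative.
interface AIVersionRecord {
  id: string;
  createdAt: Date;
  parentId: string | null;                                // which version this one refines
  prompt: string;                                          // prompt or instruction used
  feedback?: string;                                       // user feedback that prompted the revision
  adjustedParameters?: Record<string, string | number>;    // e.g. tone, length, style
  output: string;                                          // the generated content itself
  status: "draft" | "kept" | "discarded";
}

// Appending a refined version while preserving its lineage; earlier versions
// stay in the history so users can return to them.
function addVersion(history: AIVersionRecord[], record: AIVersionRecord): AIVersionRecord[] {
  return [...history, record];
}
```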

Exercise #3

Creating staging areas for AI outputs

Staging areas provide an intermediate step between AI generation and implementation, giving users a chance to review outputs before they're finalized. In most current AI interfaces, these staging areas are straightforward, featuring simple controls like those seen in writing assistants and content generation tools. Typical staging interfaces show the AI's output in a highlighted or boxed area, clearly separating it from user content. They offer basic action buttons such as "Accept," "Insert," "Discard," or "Try again." Some interfaces include minimal modification options like "Improve it," "Shorten it," or "Make it assertive" that let users refine outputs without starting over. These controls are usually presented as buttons or dropdown options directly adjacent to the generated content. Most current interfaces don't explain AI reasoning or show confidence levels. The staging process is designed to be quick and unobtrusive, requiring minimal user effort while maintaining user control.
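The sketch below (TypeScript, hypothetical names) shows the small state machine such a staging area implies: a generated draft waits for an explicit decision before it touches the user's own content.

```typescript
// Hypothetical sketch of a staging area's states and actions. The action names
// mirror the buttons described above; nothing here comes from a real product's API.
type StagingAction =
  | "accept"
  | "insert"
  | "discard"
  | "tryAgain"
  | { refine: "improve" | "shorten" | "makeAssertive" };

interface StagedOutput {
  draft: string;                                // AI output shown in the highlighted review box
  state: "pending" | "accepted" | "discarded";
}

function applyAction(staged: StagedOutput, action: StagingAction): StagedOutput {
  if (action === "accept" || action === "insert") {
    return { ...staged, state: "accepted" };    // only now does the draft join user content
  }
  if (action === "discard") {
    return { ...staged, state: "discarded" };
  }
  // "tryAgain" and the refinement options keep the draft pending; in a real app
  // they would trigger a new generation request.
  return staged;
}
```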

Exercise #4

Designing effective alert patterns

AI systems today use simple warning messages to set proper user expectations, but their implementation varies by application type. Generative AI tools that create original content (like chatbots, image generators, or code assistants) typically show brief disclaimers with phrases like "AI may make mistakes" or "Please verify important information." These alerts appear consistently rather than changing based on the topic of the conversation. Interestingly, AI tools that perform more bounded tasks, like summarizing text, suggesting email responses, or correcting grammar, rarely display similar warnings, even though they can also make errors. This inconsistency reflects how companies assess risk differently for systems that generate completely new content versus those that modify existing content. Unlike traditional error messages that only show up when something breaks, AI disclaimers in generative tools appear from the start and stay visible. They typically use neutral wording and subtle visual design, often appearing in smaller text, lighter colors, or separated from the main content. These consistent disclaimers serve both as practical warnings and as legal protection.
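As a rough sketch of this pattern, the TypeScript below chooses disclaimer copy and presentation by tool type rather than by conversation topic; the copy and field names are illustrative, not drawn from any specific product:

```typescript
// Hypothetical sketch: persistent, neutrally worded disclaimers selected by tool
// type. Both generative and bounded tools get one here, anticipating the Pro Tip below.
type AIToolKind = "generative" | "bounded";     // e.g. chatbot vs. grammar checker

interface DisclaimerSpec {
  text: string;
  alwaysVisible: boolean;                       // shown from the start, not only on failure
  styleHint: "muted-small-text";                // smaller, lighter, set apart from content
}

function disclaimerFor(kind: AIToolKind): DisclaimerSpec {
  return {
    text:
      kind === "generative"
        ? "AI may make mistakes. Please verify important information."
        : "Suggestions are automated and may contain errors.",
    alwaysVisible: true,
    styleHint: "muted-small-text",
  };
}
```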

Pro Tip: Consider adding appropriate disclaimers even for AI tools that edit or summarize content, not just those that generate original material.

Exercise #5

Implementing override mechanisms

Override mechanisms give users simple ways to adjust AI outputs when they aren't quite right. Most current AI interfaces offer straightforward controls that let users request modifications without starting over. These controls typically appear as buttons, dropdown menus, or visual selection options next to the generated content.

For text content, common override options include basic commands like "Improve it," "Explain," or "Try again." More specific adjustments might include "Shorten it," "Make it more descriptive," or "Sound professional."

For image generation, override mechanisms often include visual suggestions for:

  • Background environments (Forest, Desert, City, Space)
  • Visual style adjustments (Realistic, Cartoon, Painterly, Sketch)
  • Character appearance options (different faces, clothing, poses)
  • Lighting and mood settings (Sunny, Night, Dramatic, Soft)
  • Subject descriptions ("A yoga teacher on a beach," "A business meeting in an office")

These override options save users time by preserving the valuable parts of AI outputs while allowing targeted improvements. Instead of rejecting an entire AI response because one aspect isn't right, users can select specific refinements. The best override designs use clear labels and visual previews that help users quickly understand what each option will do.
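One way to picture this is to declare the override controls as data, so the interface can render them as labeled buttons or visual pickers and apply a targeted change without discarding the rest of the output. The TypeScript sketch below is purely illustrative; the option values echo the examples above.

```typescript
// Hypothetical sketch: override controls described as data and applied as a
// targeted prompt refinement rather than a full regeneration.
interface OverrideGroup {
  label: string;                                // clear, action-oriented group label
  options: string[];                            // rendered as buttons, dropdowns, or previews
}

const imageOverrides: OverrideGroup[] = [
  { label: "Background", options: ["Forest", "Desert", "City", "Space"] },
  { label: "Style", options: ["Realistic", "Cartoon", "Painterly", "Sketch"] },
  { label: "Lighting & mood", options: ["Sunny", "Night", "Dramatic", "Soft"] },
];

// A targeted refinement keeps the original prompt and changes only one aspect,
// so users don't have to reject the whole output over a single detail.
function refinePrompt(basePrompt: string, group: string, choice: string): string {
  return `${basePrompt}, ${group.toLowerCase()}: ${choice.toLowerCase()}`;
}
```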

Pro Tip: Design override controls with clear, action-oriented labels that help users quickly understand what changes they'll get.

Exercise #6

Balancing logging and privacy

AI systems keep records of how people use them, which raises questions about privacy. The EU AI Act, which entered into force in 2024, creates more formal rules for AI, including logging requirements, especially for high-risk AI systems.

According to the EU AI Act, companies that provide AI systems must follow specific requirements, with penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. For high-risk AI systems, providers need to implement proper logging that tracks how the AI operates, though the law balances this against the need to respect user privacy.

Popular AI tools today typically have settings that let users manage some aspects of their data. Most explain that sharing data helps improve the systems, while also mentioning privacy considerations. These settings let users make choices about whether their conversations can be used to train the AI.
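A minimal sketch of how these concerns can be kept separate, in TypeScript with hypothetical field names: the audit record an operator keeps for accountability is distinct from the user-facing controls over whether conversations feed model training.

```typescript
// Hypothetical sketch: user-facing data controls kept separate from a minimal,
// pseudonymized audit log. Nothing here is taken from the EU AI Act's text or
// from any specific product's settings.
interface DataControls {
  useConversationsForTraining: boolean;   // user-facing toggle; off by default in this sketch
  retainHistoryDays: number;              // how long conversation history stays available
}

interface AuditLogEntry {
  timestamp: Date;
  userIdHash: string;                     // pseudonymized identifier rather than raw user data
  action: "generate" | "accept" | "override" | "discard";
  modelVersion: string;                   // which system produced the output, for accountability
  // Note: this minimal record stores no raw prompt or output text.
}

const defaultControls: DataControls = {
  useConversationsForTraining: false,
  retainHistoryDays: 30,
};
```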

As new regulations like the EU AI Act are fully implemented (most provisions apply from August 2026), companies will need to think carefully about what data they collect and how they protect user privacy while still maintaining necessary records for accountability and safety.[1]

Pro Tip: Stay informed about AI regulations in your region, as they increasingly affect what data companies can collect and how they must manage it.
