As seen on Ticker News: When AI Goes Bad https://tickernews.co/where-to-watch/


Dr. Davis McAlister


Leadership • Resilience • Ethical AI • Talent Strategy


When AI Goes Bad: The Human Cost

 AI now touches hiring, safety, and everyday decisions. When it misfires—bias, hallucinations, false flags—the damage is immediate: reputations, careers, and trust collapse overnight. Drawing on military interrogation principles, crisis leadership, and real‑world AI use in talent systems, Dr. Davis McAlister shows leaders how to keep humans in the loop, pressure‑test assumptions, and build resilient orgs that don’t break when tech does. Your team will leave with a practical playbook to reduce risk, communicate clearly, and make better decisions—in real time. 


Key Outcomes

  • Spot high‑risk AI failure points in your workflow before they explode.
  • Deploy “human‑in‑the‑loop” safeguards that actually work.
  • Crisis‑proof your comms: what to say in the first 24 hours.
  • Ethical guardrails: fairness, transparency, and auditability.
  • A resilient decision cadence leaders can run under pressure.


 

Who It’s For: Executives, HR, Talent Acquisition, Healthcare leaders, Ops/Safety, Risk/Legal, Education administrators.


Formats: 45–60 min keynote • 90–120 min workshop • Half‑day leadership lab

Book Dr. McAlister

Do You Need Help With:

 

  • AI implementation strategy (with ethics)
  • Training teams on responsible AI use
  • Fixing broken hiring pipelines due to AI bias
  • Crisis communication when AI fails


Keep Reading!

Book Dr. McAlister

When AI Goes Bad: 5 Scary Consequences Leaders Can’t Ignore

What Can Happen

 AI is no longer “future tech.” It’s in hiring, approvals, safety checks, and the daily comms that shape reputations. When it works, we barely notice. When it fails, the fallout is human: careers derailed, trust shattered, teams confused, legal exposure mounting. The good news? Leaders can prevent most of it by putting humans back in the loop and stress‑testing the decisions machines influence. Here are the five failure patterns I see most, plus the safeguards I teach executives and teams to use immediately.


  • Reputation Whiplash – Bad outputs spread fast; corrections lag.
    Guardrail: verification before publication; named owner for each high‑impact claim.
  • Bias at Scale – Skewed data = skewed outcomes (hiring, lending, discipline).
    Guardrail: bias audits, reject lists, human review for edge cases (see the sketch after this list).
  • Operational Blind Spots – Over‑trusting automation creates brittle systems.
    Guardrail: “trust but verify” checkpoints; manual override drills.
  • Legal & Compliance Exposure – Opaque systems create discovery risks.
    Guardrail: documentable policies; appeal/rectification paths.
  • Communication Meltdowns – No plan for the first 24 hours.
    Guardrail: pre‑approved crisis templates; single source of truth; leader on camera.
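
To make the “human review for edge cases” guardrail concrete, here is a minimal sketch of what a human‑in‑the‑loop gate with an audit trail can look like. It is an illustration built on assumed names and numbers: the decide function, the 0.80 cutoff, and the sample scores are all hypothetical, and a real deployment would set its threshold from its own bias audits and risk tolerance.

```python
# A minimal sketch of one "human in the loop" safeguard.
# Everything here is illustrative: the names (decide, EDGE_CASE_THRESHOLD,
# Decision) and the 0.80 cutoff are hypothetical, not from any real system.
from dataclasses import dataclass
from datetime import datetime, timezone

EDGE_CASE_THRESHOLD = 0.80  # below this confidence, a human must review


@dataclass
class Decision:
    subject: str      # who or what the decision is about
    ai_score: float   # the model's confidence in its own output
    outcome: str      # "approved" or "needs_review"
    owner: str        # named owner; "ai-system" only for clear-cut cases
    timestamp: str    # when the decision was recorded


audit_log: list[Decision] = []  # documentable trail for every decision


def decide(subject: str, ai_score: float, reviewer: str) -> Decision:
    """Route low-confidence (edge-case) outputs to a named human reviewer."""
    if ai_score >= EDGE_CASE_THRESHOLD:
        outcome, owner = "approved", "ai-system"
    else:
        # Edge case: block any automated action until the owner signs off.
        outcome, owner = "needs_review", reviewer
    record = Decision(subject, ai_score, outcome, owner,
                      datetime.now(timezone.utc).isoformat())
    audit_log.append(record)  # auditability: nothing acts without a record
    return record


print(decide("candidate-123", 0.95, "hr-lead@example.com"))
print(decide("candidate-456", 0.42, "hr-lead@example.com"))
```

The shape matters more than the numbers: low‑confidence outputs never act on their own, every decision carries a named owner, and the log exists before anyone asks for it.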

How Can I Help

 

Technology should extend human judgment—not replace it. 


The leaders who win treat AI as a powerful but fallible teammate, then build systems that hold when pressure hits. 


If you’d like a practical walkthrough for your org, I deliver keynotes, workshops, and executive sessions on responsible AI, resilient decision‑making, and crisis communication. 

Book Dr. McAlister

When AI Goes Bad Keynote | Dr. Davis McAlister – AI Risk, Ethics & Leadership

  

 A practical keynote for executives and HR on preventing AI failure, reducing risk, and leading through crisis.
 

#AIKeynoteSpeaker #AIEthicsSpeaker #AIInHiring #AIBias #LeadershipResilience #CrisisCommunication

Book Dr. McAlister


Copyright © 2025 Dr. Davis McAlister - All Rights Reserved.
