By George Villagran | October 8, 2025
Let’s stop pretending every AI deployment is progress. Most aren’t profitable, many aren’t legal, and too few are trusted by employees. Executives who treat AI like a tech project will lose. Those who treat it as a business transformation with governance, ROI discipline, and cultural adoption will own the decade.
We are living in a moment of AI exuberance: boardrooms, investor decks, and consulting reports all point to artificial intelligence as the defining technology wave of this decade. But as with every wave, there is a chasm between promise and delivery. For executive leadership, the task is not simply to “do AI,” but to do it in a disciplined, realistic, and risk-aware way.
This article offers a fresh reminder: yes, invest in AI, but with your eyes open. It sketches the emerging legal risks (especially around training data), explores the real-world ROI challenges, and lays out what leaders should monitor and demand to turn AI investments into material, sustainable value.
A Cautionary Parallel: The Dot-Com Bubble
Before we dig into AI specifics, it’s worth revisiting the dot-com boom (and subsequent bust) of the late 1990s. Companies were raising capital on vision statements, inflated user metrics, and vague growth promises. When many failed to deliver sustainable revenue or margins, the bubble collapsed.
We risk repeating a variation of that story in AI: massive hype, aggressive investment, and grandiose projections about productivity leaps, only to find that many pilots never scale, models degrade, or deployments stall due to culture, data, or legal friction. The dot-com aftermath taught us that hype without discipline is dangerous. In AI, that danger is magnified, because you are investing in complex systems, opaque models, and dependencies on massive datasets.
As an executive, you must insist, right from the start, on guardrails, metrics, accountability, and regular pressure testing.
Key Headwinds: AI Data Litigation & Training Data Risk
One of the most underappreciated risks in today’s AI surge is legal exposure tied to data sourcing. Models are only as good (and as defensible) as the data on which they are trained.
1. Copyright and IP lawsuits
Major AI and content companies are embroiled in litigation over whether copyrighted works were ingested without permission. Nearly every major generative AI player (OpenAI, Google, Meta, Anthropic, etc.) has been named in such suits. In some cases, courts are ordering disclosure of training datasets, demanding transparency about how data was collected, cleaned, and labeled. In one high-profile settlement, authors claimed billions of dollars of harm, based in part on AI models’ outputs echoing protected works.
2. Data scraping, privacy, and contract risk
Beyond IP, companies face lawsuits and regulatory scrutiny for scraping web data, violating terms of service, infringing on privacy or biometric rights, or using personally identifiable data without consent. AI defendants have also fought over whether training data must be disclosed in discovery, raising issues of trade secrets, model confidentiality, and audit logs.
3. Algorithmic harm and liability
As AI makes decisions (in hiring, credit, medical, insurance, supply chain, and operations), litigation is emerging around algorithmic discrimination, bias, unfair practices, and other harms.
In regulated industries (finance, health, insurance), AI-inflicted errors or misclassifications may expose firms to legal, regulatory, or reputational risk.
What this means for leadership: demand documented provenance for every training data source, require legal review of the AI pipeline before deployment, and insist on traceability from model outputs back to the data and decision logic that produced them.
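To make the provenance demand concrete, here is a minimal sketch of the kind of automated check a governance team might run against a training-data manifest. The manifest schema and field names (name, license, consent_basis) are illustrative assumptions for this sketch, not an established standard:

```python
# Hypothetical provenance audit: flag training-data sources that lack
# a documented license or consent basis. The manifest schema below is
# an illustrative assumption, not an industry standard.

APPROVED_LICENSES = {"CC-BY-4.0", "MIT", "commercial-license", "first-party"}

manifest = [
    {"name": "support_tickets_2024", "license": "first-party",
     "consent_basis": "terms-of-service"},
    {"name": "scraped_web_corpus", "license": "unknown",
     "consent_basis": None},
]

def audit_sources(sources):
    """Return the sources a review board should block or escalate."""
    flagged = []
    for src in sources:
        problems = []
        if src.get("license") not in APPROVED_LICENSES:
            problems.append("unapproved or unknown license")
        if not src.get("consent_basis"):
            problems.append("no documented consent basis")
        if problems:
            flagged.append((src["name"], problems))
    return flagged

for name, problems in audit_sources(manifest):
    print(f"ESCALATE {name}: {', '.join(problems)}")
```

Even a crude gate like this forces the conversation the litigation trend demands: if a source cannot name its license and consent basis, it should not be in the training set.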
The ROI Reality: Why Many AI Projects Underperform
When you scratch the surface of AI investment stories, you’ll find that many organizations struggle to turn pilot gains into enterprise outcomes, and C-suite frustration over stalled returns is a recurring theme.
A recent survey suggests that many firms that haven’t yet achieved full AI ROI expect meaningful cost savings only by 2027. Another report argues that while experimentation is rising, meaningful business impact remains elusive in many cases.
Academic literature underscores the tension: AI projects require substantial upfront investment with uncertain returns, and traditional ROI models may not fully capture the tail benefits or learning-curve dynamics.
To bridge that gap, firms must adopt new financial disciplines and governance.
The Six-Step ROI Playbook (adapted)
Many large vendors and consultancies (e.g., SAP) propose structured, multi-step frameworks for staging AI investments and measuring returns at each gate.
Executives should insist on annual reviews against a pre-AI baseline (not just “improvement vs. pilot”) and track a small set of core KPIs (e.g., margin impact, error reduction, time saved) rather than vanity metrics (e.g., number of “AI features launched”).
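To ground that discipline, here is a minimal sketch of a multi-year ROI model of the kind the checklist below calls for, with run costs, maintenance, and drift-driven benefit decay built in. Every figure is a hypothetical placeholder, not a benchmark:

```python
# Minimal multi-year ROI sketch: benefits vs. full cost of ownership,
# measured against a pre-AI baseline. All numbers are hypothetical
# placeholders, not benchmarks.

initial_build = 1_200_000      # one-time: model development, integration
annual_run_cost = 350_000      # infrastructure, licenses, MLOps staffing
annual_maintenance = 150_000   # retraining, drift remediation, audits
annual_benefit = 1_100_000     # margin impact + time saved vs. baseline
benefit_decay = 0.10           # assume untended value erodes ~10% per year

cumulative = -initial_build
for year in range(1, 6):
    benefit = annual_benefit * (1 - benefit_decay) ** (year - 1)
    net = benefit - annual_run_cost - annual_maintenance
    cumulative += net
    print(f"Year {year}: net {net:>10,.0f}  cumulative {cumulative:>12,.0f}")
```

Under these placeholder assumptions the project turns cumulative-positive only in year three, which is precisely why stage gates and kill switches belong in the plan.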
Adoption & Cultural Friction: The Hidden Drag
Even technically strong AI systems will flounder without employee buy-in, process redesign, and culture change. According to McKinsey, most employees are open to using AI, but leadership often fails to create momentum.
Some of the most common adoption barriers include messy or siloed data, legacy systems, unclear economics, fear of obsolescence, low trust in AI outputs, and a lack of domain champions.
Research into how individuals perceive AI limitations shows that trust evolves slowly as users gain hands-on experience, and that embedding those insights into formal governance accelerates adoption.
Key levers for adoption success include visible executive sponsorship, internal domain champions, role-specific training and incentives, and metrics that measure real usage rather than mere deployment.
What Executives Should Insist On: A Checklist
Below is a distilled checklist that executive leadership should use when reviewing or sanctioning AI investments:
| Domain | Key Executive Questions / Requirements |
| --- | --- |
| Strategy & Governance | Is the AI investment aligned with clear, measurable business objectives (growth, cost, risk)? Has an AI governance committee (with legal, compliance, and domain representation) been established? |
| Data & Legal Risk | Are all training data sources documented, licensed, or audited? Is there traceability from model output back to training input and decision logic? Has legal vetted the AI pipeline for IP, privacy, terms-of-use, and liability exposure? |
| Financial Discipline | Are use-case-level ROI projections modeled over 3–5 years, including maintenance, drift, and infrastructure? Are there stage gates or kill switches if ROI thresholds aren’t met? |
| Architecture & Scalability | Are data pipelines, integrations, model retraining, and versioning built in from day one? Is the solution modular so parts can be upgraded or swapped? |
| Adoption & Change Management | Who are the internal champions? What training and incentives exist? What metrics will measure real adoption, not just deployment? |
| Monitoring & Feedback | Are there dashboards tracking performance vs. baseline? Are models held to SLAs, error thresholds, and drift detection? Are audit logs and explainability features enabled? |
| Risk Mitigation | Do you have contingency plans if models misbehave or produce biased outputs? Has risk appetite been defined (which use cases to exclude)? |
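For the monitoring and risk rows in particular, here is a minimal sketch of what “drift detection with a kill switch” might look like in practice. The population stability index (PSI) is one common drift measure; the 0.2 threshold and the synthetic data are illustrative assumptions:

```python
# Illustrative drift check: compare a live feature distribution to the
# training baseline with a population stability index (PSI), and halt
# automated decisions when drift exceeds an agreed threshold.
# The 0.2 threshold is a common rule of thumb, not a universal standard.
import numpy as np

def psi(baseline, live, bins=10):
    """Population stability index between two 1-D samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + 1e-6
    l_pct = np.histogram(live, bins=edges)[0] / len(live) + 1e-6
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
live = rng.normal(0.4, 1.2, 10_000)      # shifted production distribution

score = psi(baseline, live)
if score > 0.2:  # governance-approved kill-switch threshold
    print(f"PSI={score:.3f}: drift threshold breached, route to human review")
else:
    print(f"PSI={score:.3f}: within tolerance")
```

The point is not the specific statistic but the contract: an agreed threshold, monitored continuously, with a pre-approved fallback when it is breached.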
As with any transformative technology, you don’t want to be passive. You want to act as investor, auditor, and conductor, steering steadily through the hype toward sustainable, defensible outcomes.
The Bottom Line
The rise of AI is real, and its utility is manifest in early deployments across industries. But the gap between “proof of concept” and “enterprise-scale material impact” remains wide, and the legal, cultural, and architectural risks are real and growing.
If the dot-com era taught us anything, it’s that technology excess without discipline leads to value destruction. In the AI era, those perils are magnified by model opacity, ongoing maintenance burdens, and questions of legal defensibility.
Executive leadership must demand not just flashy pilots, but measurable outcomes, legal guardrails, adoption metrics, and continuous governance. If you build those foundations, AI investments become more than speculative bets: they become engines of competitive advantage.