When AI Goes Wrong: Who’s Responsible?

Jan 15, 2025 - Written By Hans Snel

Imagine this scenario: your company implements an AI-powered customer service system. It performs brilliantly for months, but then it suddenly makes decisions that cost you valuable customers, damage your reputation, cause a security breach, or leak personal information. Who’s responsible in such a case? The AI provider? Your implementation team? Your data scientists?

The Legal Gaps in AI Governance


Traditional legal frameworks, like the UK Supply of Goods and Services Act 1982, weren’t designed to address the complexities of evolving systems like AI. The new EU AI Act, however, introduces obligations for developers, providers, and users to navigate these challenges. In the UK, similar discussions are underway, with regulatory initiatives like the AI Regulation White Paper aiming to establish principles for AI governance. These frameworks, though, are still evolving.


Adding to this momentum, Prime Minister Keir Starmer has recently unveiled an ambitious plan to position the UK as a global leader in AI (BBC article). His strategy emphasizes not just the transformative potential of AI but also the importance of ethical and operational frameworks to govern its use effectively. With governments recognizing the need to catch up with AI’s rapid evolution, businesses cannot afford to wait for perfect legislation—they must act proactively to ensure compliance and mitigate risks.


At Hightrees, we’ve observed that many organizations adopt cutting-edge AI technologies, like ChatGPT or open-source components, without fully considering the contractual framework. This is especially common in development environments, where the approach is often “build first, think later.” The result? When issues arise, such as data leaks or biased decisions, these businesses find themselves scrambling to assign accountability, resolve ethical concerns, or untangle vendor dependencies.


The Hidden Trap of AI Lock-in


Once AI becomes integral to your business, switching providers can seem impossible. Your system learns from your data, adapts to your processes, and becomes uniquely valuable. Without safeguards, you risk being locked into a dependent relationship with your AI provider.

The EU’s Digital Markets Act (DMA) targets anti-competitive practices, but regulatory protection alone isn’t enough. In the UK, similar concerns are being addressed through the Digital Markets, Competition and Consumers Act 2024. Forward-thinking organizations are negotiating contracts to avoid lock-in and reinforce ethical and operational transparency, including:



  • Clear Ownership Rights: Retain ownership of trained models and algorithms to avoid undue dependency.


  • Data Portability Guarantees: Ensure seamless data migration, aligning with UK GDPR requirements.


  • Realistic Exit Strategies: Include practical terms to maintain operational stability and ethical oversight after transition.



  • Price Protection Mechanisms: Prevent exploitative pricing while ensuring transparency in cost structures.


  • Ethical Transparency in Transition: Demand supplier transparency during system adaptations to mitigate hidden biases or risks.


  • Audit and Challenge Rights: Secure the ability to periodically review supplier practices for adherence to ethical standards.


These protections ensure operational resilience while addressing ethical risks that could damage your reputation or disrupt your operations.


Ethics Aren’t Optional Anymore


ISO/IEC 42001, though not yet universally adopted, underscores the global push for ethical AI practices. In industries like healthcare, aligning with ethical frameworks is now standard. Translating these principles into enforceable contracts requires careful consideration. Contracts should therefore include:


  • Regular Bias Assessments: Periodic evaluations to comply with anti-discrimination laws like the Equality Act 2010.


  • Transparent Decision-Making: Documentation and mechanisms to audit AI decisions.


  • Procedures for Ethical Concerns: Defined roles, responsibilities, and timelines for addressing ethical issues.


  • Rights to Challenge and Review: Access to underlying data and decision logic for critical reviews.


  • Continuous Improvement Obligations: Suppliers must proactively address risks and incorporate state-of-the-art ethical practices.


  • Indemnification for Ethical Breaches: Clauses holding suppliers liable for ethical violations, with defined remedies and compensation.


Organizations that adopt these provisions can stay ahead of ethical risks while demonstrating accountability in their AI implementations.


Staying Ahead of the Game


The legal landscape for AI is evolving rapidly, with regulations like NIS2 and the Supply Chain Act adding complexity. Yet, with the right approach, these challenges can become opportunities. Consider:


  • Do your contracts align with emerging AI regulations?
  • Can you demonstrate compliance with ethical AI principles?
  • Are you protected against vendor lock-in?
  • Do you have mechanisms to address AI liability?


At Hightrees, we bridge the gap between technological innovation and legal compliance. We’ve seen firsthand how organizations can avoid pitfalls by proactively addressing these issues through effective contracting strategies.


As governments like the UK’s push forward ambitious AI plans (BBC article), the time to act is now. Don’t wait for problems to arise or regulations to catch up. Contact Hightrees Consultants today to discuss your AI contracting strategy. Let’s make your AI journey innovative, secure, and legally sound.

