Jan 15, 2025 - Written By Hans Snel
Imagine this scenario: your company implements an AI-powered customer service system. It performs brilliantly for months, but then it suddenly makes decisions that cost you valuable customers, damages your reputation, causes a security breach, or leaks personal information. Who’s responsible in such a case? The AI provider? Your implementation team? Your data scientists?
The Legal Gaps in AI Governance
Traditional legal frameworks, like the UK Supply of Goods and Services Act 1982, weren’t designed to address the complexities of evolving systems like AI. The new EU AI Act, however, introduces obligations for developers, providers, and users to navigate these challenges. In the UK, similar discussions are underway, with regulatory initiatives like the UK’s AI Regulation White Paper aiming to establish principles for AI governance. But these laws are still taking shape.
Adding to this momentum, Prime Minister Keir Starmer has recently unveiled an ambitious plan to position the UK as a global leader in AI (BBC article). His strategy emphasizes not just the transformative potential of AI but also the importance of ethical and operational frameworks to govern its use effectively. With governments recognizing the need to catch up with AI’s rapid evolution, businesses cannot afford to wait for perfect legislation—they must act proactively to ensure compliance and mitigate risks.
At Hightrees, we’ve observed that many organizations adopt cutting-edge AI technologies, like ChatGPT or open-source components, without fully considering the contractual framework. This is especially common in development environments where the approach is often “build first, think later.” The result? When issues arise, such as data leaks or biased decisions, these businesses find themselves scrambling to assign accountability, resolve ethical concerns, or untangle vendor dependencies.
The Hidden Trap of AI Lock-in
Once AI becomes integral to your business, switching providers can seem impossible. Your system learns from your data, adapts to your processes, and becomes uniquely valuable. Without safeguards, you risk being locked into a dependent relationship with your AI provider.
The EU’s Digital Markets Act (DMA) targets anti-competitive practices, but regulatory protection alone isn’t enough. In the UK, similar concerns are being addressed through initiatives like the Digital Markets, Competition and Consumers Bill. Forward-thinking organizations are negotiating contract terms that avoid lock-in and reinforce ethical and operational transparency.
These protections ensure operational resilience while addressing ethical risks that could impact your reputation or operations.
Ethics Aren’t Optional Anymore
ISO 42001, though not yet universally adopted, underscores the global push for ethical AI practices. In industries like healthcare, aligning with ethical frameworks is now standard. Translating these principles into enforceable contracts requires careful drafting, and contracts should include provisions that address them explicitly.
Organizations that adopt these provisions can stay ahead of ethical risks while demonstrating accountability in their AI implementations.
Staying Ahead of the Game
The legal landscape for AI is evolving rapidly, with regulations like NIS2 and the Supply Chain Act adding complexity. Yet, with the right approach, these challenges can become opportunities.
At Hightrees, we bridge the gap between technological innovation and legal compliance. We’ve seen firsthand how organizations can avoid pitfalls by proactively addressing these issues through effective contracting strategies.
As governments such as the UK’s push forward ambitious AI plans, the time to act is now. Don’t wait for problems to arise or regulations to catch up. Contact Hightrees Consultants today to discuss your AI contracting strategy. Let’s make your AI journey innovative, secure, and legally sound.