AI SaaS agreements: how do you regulate ownership, data and liability?

AI is everywhere. Think of software that can derive health parameters from a simple facial measurement in seconds. Or platforms that automatically generate marketing content. Other applications help users to sort their mailboxes intelligently, or use algorithms to optimise complex planning problems (shout-out to the companies that recognise themselves here, yes, I mean you!).
Increasingly, this type of technology is offered as Software as a Service (SaaS). This makes it scalable and accessible, but it also raises important legal questions: who owns the software, the data and the output? What happens if the AI makes a mistake? And can you use customer data to make the model smarter?
In this article, I list the most important legal considerations for AI SaaS agreements, so that you, as an entrepreneur, know exactly what to look out for.
SaaS in a nutshell
A SaaS agreement revolves around the conditions under which customers and their users gain access to the software on offer. The agreement governs permitted use (in the form of a licence), sets out the agreed service levels (such as uptime) and the pricing arrangements (e.g. a monthly fee or annual contracts, with or without automatic renewal), and covers other practical provisions.
So far, nothing new. But AI adds an extra layer of complexity.
Intellectual property: software vs. output
With AI, there are actually two types of intellectual property (IP) at play: on the one hand, the model and the software (i.e. the code, algorithms and architecture), and on the other hand, the output of the AI tool itself (such as texts, models or whatever else is generated with it).
In general, the ownership rules for the first category are clear: ownership remains with the provider unless it is explicitly transferred. In a SaaS context, transfer is exceptional, because the business model is based on retaining the software and offering usage rights to multiple customers.
In terms of output, the customer will usually want to own the results obtained with the AI application so that they can be used freely, while the provider wants to protect their model and prevent customers from claiming rights to the algorithm itself. Contracts therefore often stipulate that the provider retains all rights to the AI model and that the generated output becomes the property of the customer.
However, it becomes more difficult when the software remains necessary to continue using the output. This is where it gets really interesting: in many AI applications, the output is not static but dynamic, and its value is linked to continuous access to the software. Think of a predictive model that provides new insights on a daily basis, or a planning algorithm that constantly recalculates when new data becomes available.
If the customer no longer has access after termination, they lose the ability to continue using or updating that output. In that case, there is “legal ownership” of the output, but no practical usability.
Contractually, one of the following options is usually chosen:
- Perpetual licence on the output: stipulate that the customer may continue to use the results generated during the term of the contract, even after termination.
- Export obligation: provide for the export of the output and relevant data in an open format. This can be a snapshot or periodic export.
- Model handoff: for certain customers, you can agree that they will receive, for a fee, a copy of the trained model to host themselves.
Data and training: who is allowed what?
AI runs on data – and that is where contracts often go wrong.
In principle, customer data remains the property of the customer. This should be explicitly stated in the agreement, including a guarantee that the customer can export their data upon termination.
Providers, however, often want to use customer data to make the AI model smarter. This can be valuable, but it is also a sensitive issue. It must therefore always be clearly agreed: will customer data be used for training or not?
Some providers work with an opt-in (where the customer gives explicit consent), while others (especially the big players) work with a (sometimes well-hidden) opt-out: data is used unless the customer actively opts out.
Privacy legislation also applies. As soon as customer data contains personal data, a data processing agreement (DPA) is required as well. Not every provider offers this as standard, but ideally you should make the appropriate arrangements here too. In a B2B context (e.g. via an API or enterprise subscription), you can arrange this in a separate document. For consumer applications, this is usually done through terms and conditions and privacy settings.
Liability: what if AI makes mistakes?
An AI tool can do amazing things, but it can also completely miss the mark. Just as my own experiments with AI-generated content sometimes yield unexpected results (I'm not yet able to work with image creation software, let's leave it at that), AI systems can cause considerable damage in critical applications. Think of biased outcomes in recruitment processes, hallucinations of legal AI referring to non-existent case law, or inconsistencies in medical diagnosis software.
The EU has therefore implemented a fundamental change of course: software and AI systems are now subject to the same product liability rules as traditional products.
The question we are often asked is: who is liable if a customer suffers damage due to incorrect AI output?
Whereas (software) providers could previously rely primarily on contractual liability limitations and disclaimers, much stricter product liability now applies. This means that providers may be liable for damage caused by defects in their AI systems, even without proven fault.
The new EU Directive 2024/2853 explicitly states that cybersecurity shortcomings, unpredictable behaviour of machine learning algorithms and inadequate risk management can also qualify as product defects. This creates a significantly increased liability risk for providers of AI tools, which can no longer be dealt with entirely on a contractual basis.
For an AI provider, risk management naturally starts with technical development: implementing testing procedures, bias detection and transparent documentation of the training data used (legally obtained, of course 😉). In addition, there is an obligation to inform users about known limitations, risks and proper usage.
Finally, contractual measures remain crucial for risk mitigation. Providers can still include liability limitations for indirect damage, consequential damage and damage above certain amounts, provided these are reasonable and proportionate. Clear service level agreements, explicit descriptions of what counts as use within and outside the intended parameters, and a clear division of responsibilities between provider and customer are essential.
The directive entered into force on 8 December 2024, and EU countries have until 9 December 2026 to transpose it into national law.
Conclusion
All in all, AI SaaS agreements resemble regular SaaS agreements, but they come with additional legal challenges. It is not just about uptime, support and price, but above all about ownership of data and output, and liability. Providers who properly document these issues protect themselves and gain the trust of customers and investors.
For AI providers, it is time to take action. Start with a thorough audit of existing products and identify potential liability risks. Review contractual terms and conditions and put clear documentation processes in place.
It goes without saying that we are happy to help you with this 😉
Leïla Drake