The speed at which AI has evolved in recent years has led to much discussion of its likely impact on businesses, from changes in transaction management and in recruitment and employment practices to managing the intersection between AI and existing legislation such as copyright law, employment law, privacy and GDPR.
Much of the discussion relates to anticipated changes, for example how, in the context of M&A transactions, AI will be used to drive change in virtual data room (VDR) and due diligence (DD) processes. Change in transaction management practices will inevitably lead to change in how risk is apportioned between buyers and sellers in the transaction documents. For example, parties will need to negotiate what is meant by "fairly disclosed" and whether this should include not only human analysis but also AI-driven analysis as part of the VDR and DD process.
As many businesses are now developing AI systems, providing access to AI systems internally for use by employees, or deploying AI systems within their customer-facing business, the review of AI risk and valuation metrics is becoming more prevalent in M&A transactions and an area for more targeted consideration as part of the transaction.
Different AI systems give rise to different systemic risks. For example, the acceleration of generative AI systems continues to highlight governance issues, issues with respect to the accuracy and reliability of training data, and risks relating to the ownership of that training data and breaches of intellectual property or privacy rights. This has been evidenced by ongoing litigation around large language models (LLMs), which in the context of transaction management gives rise to the need for due diligence on these issues and an assessment of the viability of the business model and its valuation. For example, the cost and impact on the business model of potential third party litigation with respect to breach of copyright claims could be substantial.
In addition, the new EU AI Act gives rise to a number of further risks which need to be considered in the context of a transaction where the target is deploying or using AI systems or general purpose AI models within the EU. The EU AI Act, like GDPR, seeks to regulate AI systems accessible within the EU regardless of where the business deploying or using the AI system is based. As such, the EU AI Act will affect a wide range of target businesses and, like GDPR, becomes an issue to be considered in any transaction, in particular as the fines for breach of the EU AI Act can be the higher of EUR 35 million or 7% of worldwide turnover. In certain instances the AI system itself may be prohibited, such as those employing subliminal techniques or the exploitation of vulnerabilities, untargeted web-scraping of images for facial recognition databases, and emotion inference systems in the workplace or in education institutions. Even if the AI system is not prohibited, the EU AI Act introduces requirements to keep records on the data used to train a generative AI system and to ensure this does not breach third party copyright. The DD review of governance, internal risk management, compliance and record keeping, and the impact on valuation for AI systems, has now become an important part of any transaction, particularly as AI becomes more widely used by businesses generally and as M&A in the AI sector is likely to increase.
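To put the scale of this exposure in context, the minimal sketch below shows how the headline cap of the higher of EUR 35 million or 7% of worldwide turnover translates into a monetary figure for businesses of different sizes; the turnover figures used are purely hypothetical assumptions, not figures from any actual case.

```python
# Illustrative only: how the headline EU AI Act fine cap referenced above
# (the higher of EUR 35 million or 7% of worldwide turnover) scales with
# the size of the business. The turnover figures below are hypothetical.

def maximum_fine_eur(worldwide_turnover_eur: float) -> float:
    """Return the higher of EUR 35 million or 7% of worldwide turnover."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

for turnover in (100_000_000, 500_000_000, 2_000_000_000):
    print(f"Turnover EUR {turnover:,} -> maximum fine EUR {maximum_fine_eur(turnover):,.0f}")
```

For a business with hypothetical worldwide turnover of EUR 2 billion, for example, the 7% limb would exceed the EUR 35 million floor and produce a cap of EUR 140 million.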
For certain AI systems that have been developed, it may not be possible to explain exactly how the AI system is designed and works, or to explain the data that has been used to train it. The EU AI Act, however, now requires transparency and record keeping to be maintained on these issues. This is likely to give rise to uncertainty and risk in the context of an M&A transaction where a Target group developing, deploying or using AI systems cannot comply with the provisions of the EU AI Act. Alternatively, it is likely to result in additional costs for such a Target group if it becomes necessary to reverse engineer certain development processes to ensure compliance and record keeping.
As a result of the provisions introduced by the EU AI Act, coupled with compliance obligations under existing legislation such as intellectual property, privacy and employment law, we are now advising clients on M&A transactions on the following issues:
- Target company's technology, services and products – identify AI assets
A technical and legal DD review of the Target group's technology, products and services is required to identify whether these incorporate AI and whether they fall within the prohibited, high risk, general purpose or low risk categories. It is important to make these distinctions, as not all data processing or analytics constitutes AI (eg certain chatbots or searches). It will be important for a Target company to explain what it considers does or does not constitute AI and to provide a roadmap for the future development of the technology, product or service, in order to assess whether it will continue to be, or is likely to become, AI in the future and how this impacts on risk and valuation. The assets comprised within an AI system may include algorithms, training data, a neural network and related topology, websites, apps and interfaces, together with any AI outputs, all of which may be subject to differing intellectual property rights; some, not being eligible for specific patent or copyright protection, may instead be protected as trade secrets provided they are kept confidential. As such, a careful review is necessary to identify the AI assets relevant to the transaction in order to consider the related risks.
- Development team
Where the Target group is developing and deploying AI technology, products or services, an assessment of the development team will be required. This will review not only the core skills of the software developers and data scientists but also their terms of engagement with the Target group, the protections that the Target group has for retaining key employees (such as bonuses or share options as well as career development opportunities) and the protections that the Target group has for the developed technology, products or services (such as confidentiality, intellectual property and non-compete provisions). This can often lead to a requirement to take protective steps prior to signing the transaction. For example, we frequently identify issues where critical intellectual property relating to the underlying business model is not fully within the perimeter of the transaction and remedial steps are required. In the context of AI, the algorithms are likely to be trade secrets rather than protected by other intellectual property rights, and robust obligations will need to be put in place with employees and contractors.
- AI development and testing
Coupled with a review of the development team, it is necessary to consider how the Target group has identified risks in the AI technology, product or service to date. It will be important to review how the quality of the data used, as well as the rights in the underlying data, has been managed. The Target group should provide details on the provenance of the data sets used for training, testing and benchmarking and how these have been obtained. It will also be important to understand whether the Target group has any governance systems in place related to the development of the AI system, and whether any AI risk management frameworks are in place or impact assessments are being undertaken. This will help the buyer to understand the steps that have been taken to mitigate potential risks, such as those relating to responsible use, accuracy, reliability, data security and the identification of discrimination and bias, and to assess the risks of litigation or regulation of the technology.
The rights to the data sets should be traceable and clear, such that no third party rights have been breached, whether in terms of data privacy or third party intellectual property rights in the underlying data. Due diligence may therefore identify risks related to the Target group's data management and compliance. If the AI technology, product or service development includes personal data, this will require more detailed analysis: in certain circumstances sanctions for illegally obtained personal data can include the deletion of entire databases and algorithms, which could pose an existential threat to the Target group and its business model, as well as the risk of significant fines for breach of data protection regulation. Equally, where the AI technology, product or service development includes scraped data in which third parties have intellectual property ownership rights, there can be significant risks of costly litigation, which may also have a material adverse impact on the use of the algorithms and the AI system.
It will also be important to review how the Target group seeks to protect the confidentiality of, and rights in, the AI system, including the know-how, algorithms and software that constitute the system, as well as the data, confidential information and any database contents. Consideration will also need to be given to whether AI is itself used to generate any code or data (and the ownership of any such output), the extent to which the AI system incorporates any open source software, whether the AI system complies with the related terms of use for the open source software used (eg copyleft) and how this may impact on the business model and valuation.
A buyer will also need to review whether the AI technology, product or service presents any cyber security risks, including any software vulnerabilities, security audits and any risk mitigation measures that the Target group has adopted specific to the AI systems, and to ensure that the underlying intellectual property rights in the AI system itself are not vulnerable to attack.
Any industry specific issues will also need to be addressed. For example, where an AI system or tool is used to automate recruitment processes, industry-specific guidelines on best practice, alongside regulatory compliance, will also need to be reviewed.
- Contracts
The DD contract review process will need to include a specific review for Target groups utilising and deploying AI systems and tools, to analyse the risks arising from the AI systems and tools developed by or provided to the Target group, ownership of the system inputs and outputs, and responsibility for any liabilities that may arise. A review is also necessary of any inbound and outbound contracts with suppliers or customers where such AI systems and tools will be utilised, to understand the risks around the rights to use customer data, ownership of that data and ownership of any AI system outputs that use it. The review process will also need to identify any potential restrictions or liabilities for the Target group with respect to the use of AI systems and tools with its suppliers and customers.
The contract review process will need to identify the rights to AI inputs, training data and AI system improvements. As AI systems can be complex in the manner in which they use data inputs to refine the model, the analysis of data rights, intellectual property rights and confidentiality with respect to the training data, the AI system and its improvements can be correspondingly complex.
The review will also need to identify the rights to AI outputs, as these rights can be ambiguous or uncertain, in particular when it comes to intellectual property rights in the outputs. It will be necessary to consider the data used, the rights that existed in that data, and the contractual rights of the developer of the AI system, the Target group as a deployer or user, and any customer or other user of the AI system, alongside the regulatory position with respect to ownership of data outputs.
- Regulatory review
It has become increasingly common for jurisdictions to introduce foreign direct investment (FDI) regulations and for these regulatory regimes to protect various aspects of technology. As such, it is likely that an M&A transaction involving a Target group that develops, deploys or uses an AI system will be subject to an FDI review, which may have an impact on the structure of a transaction as well as its timing and costs.
In addition, concentration of control over data used to train an AI system falls within the scope of anti-trust regulation, as can acquisitions within the technology sector generally that may have an effect on competition in the sector. As such, M&A transactions involving a Target group developing, deploying or using an AI system may face anti-trust scrutiny, which may impact the structure of a transaction as well as its timing and costs.
- Split exchange and completion
If there is to be a gap between signing and completion, a buyer will need to consider whether there are specific AI-related actions that a Target group should not be able to take without consent, in order to preserve value between signing and completion. These may include, for example, the Target group not materially changing any of the training data, the terms of use governing the Target group's AI-powered products or services, or any governance, risk management or compliance policies and procedures. Where the Target group is not itself developing AI systems but could acquire or license-in AI systems developed by others, a buyer may seek assurances that the Target group will not acquire or license-in any AI systems or onboard a new AI system provider without the buyer's consent.
- MAE
A buyer will need to consider the definition of material adverse event as a closing condition in the context of the AI systems being developed, deployed or used by a Target group, and whether there are specific material risks inherent in the transaction which should be provided for as a closing condition. In addition, any exclusions outside the control of a party, such as how the risk of a change of law will be apportioned, will also need to be considered, given that further AI regulation is anticipated in some jurisdictions.
- Representations, Warranties and Indemnities
As outlined above, there are a number of areas which will be the focus of DD in any M&A transaction involving a Target group developing, deploying or using AI systems. In order to manage the risks associated with such businesses, a buyer will now, rather than relying on general representations and warranties, seek specific protections in the form of AI risk representations, warranties and/or indemnities. These will also focus the Target group's responses during DD in order to identify risks that are likely to impact on valuation or which may require action to be taken prior to signing.
Core areas for protection include intellectual property rights (that the AI system and the training data do not breach third party ownership rights), data protection (that the AI system does not breach regulations relating to personal data), litigation risks (that there are no claims or litigation with respect to the AI system or its use) and ownership (that all intellectual property rights in the AI system and its outputs are held by the Target group). Given the extensive existing IP challenges and ongoing litigation with respect to AI systems, an IP non-infringement representation and warranty with respect to an AI system, its improvements and its outputs is often a key and difficult issue. Other typical representations and warranties include compliance with laws, data use restrictions with third parties and the adoption of, and compliance with, internal policies, governance processes and impact assessments. Given the changing AI regulatory landscape, a general compliance with laws representation and warranty could require more detailed analysis and allocation of risk and responsibility.
Where the AI system is fundamental to the business being acquired or poses a fundamental risk, buyers are likely to require specific AI representations and warranties, treated as "fundamental" warranties for unknown risks or as indemnities for known risks, both with longer survival periods and higher caps. A buyer will also require security for redress through consideration holdback or retention provisions, set-off against deferred or contingent payments or, to the extent possible, a variation of the purchase price payable at closing. Given the record keeping provisions introduced by the EU AI Act, which require the data sets used to train AI systems to be tracked and not to infringe third party intellectual property rights, it is increasingly important for Target groups to employ governance and risk management processes, and risks will reach further into the future where such governance and risk management has not been in place. The impact that ascertained and known risks, or contingent and unknown risks, will have on valuation is likely to be an area of discussion during a transaction. For example, obligations under the EU AI Act, the Digital Markets Act, the Data Act, cyber security requirements and GDPR compliance can have an impact on the business after the transaction as well as before it, and this risk may need further consideration, with the associated costs captured in the valuation.
- W&I
W&I insurance may be available to cover liability under representations and warranties on the acquisition of a Target group; however, such policies are subject to certain exclusions and limitations and, as with data privacy and security, AI risks will be subject to ongoing scrutiny by underwriters and to the further development of W&I products with respect to such risks. Such risks may be subject to the payment of an additional premium or excluded from cover, and buyers will therefore need to consider how best to obtain security for these risks, the impact of any additional costs and which party to the transaction bears responsibility for them.
- Valuation methodology
Target groups developing AI systems are currently often young or start-up phase companies which may not have sufficient historical financial data available for the purposes of determining a reliable valuation. These innovative companies with AI-focussed business models can also lack peer companies to be used as a benchmark for determining equity value, and the AI market will continue to develop in terms of regulation as well as the ethical and governance issues which could impact on business models, so the risks around valuation also need to be considered by potential buyers. The transaction can provide for valuation risks and uncertainties around future performance by providing for part of the purchase price to be paid over time and conditional on the achievement of certain economic performance targets. In addition, some buyers may prefer to structure part of the consideration as an issue of shares rather than a cash payment, such that the sellers will obtain shares in the buyer and continue to share risk in the future performance of the acquired business. It is therefore likely that M&A transactions involving AI assets will use consideration structures that include a deferred, milestone-based element dependent on the achievement of certain targets, such as technical development stages, customer acquisition or sales figures within a contractually agreed timeframe. As identified above, there are inherent risks in AI driven business models which are likely to result in known and unknown risks being identified at the end of a DD process, and this is likely to result in buyers requiring M&A transactions to incorporate completion accounts adjustment procedures to account for these risks and liabilities.
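By way of a purely hypothetical illustration of such a structure, the minimal sketch below models a deferred, milestone-based earn-out of the kind described above; the amounts, milestones, weightings and achievement outcomes are assumptions chosen only to show the mechanics, not terms of any actual transaction.

```python
# A hypothetical sketch of a deferred, milestone-based earn-out of the kind
# described above. All amounts, milestones, weightings and outcomes are
# illustrative assumptions.

upfront_payment = 40_000_000   # consideration paid at completion
deferred_pool = 20_000_000     # maximum further consideration available

# Each milestone carries a share of the deferred pool, payable only if it is
# achieved within the contractually agreed timeframe.
milestones = {
    "technical development stage reached": (0.40, True),
    "customer acquisition target met": (0.35, True),
    "sales figures target met": (0.25, False),
}

earn_out = sum(
    deferred_pool * weight
    for weight, achieved in milestones.values()
    if achieved
)

total_consideration = upfront_payment + earn_out
print(f"Earn-out payable: EUR {earn_out:,.0f}")
print(f"Total consideration: EUR {total_consideration:,.0f}")
```

In this illustration, two of the three milestones are met, so EUR 15 million of the EUR 20 million deferred pool becomes payable in addition to the upfront consideration; the same mechanics can be combined with share consideration or completion accounts adjustments as discussed above.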