Post

Public procurement conditions for trustworthy AI and algorithmic systems

A European standard for public procurement of AI and algorithmic systems will help make transparency more practical.

Governments increasingly use AI and algorithmic systems. The City of Amsterdam, for example, uses algorithmic systems in some of its primary city tasks, such as parking control and acting on notifications from citizens.

In the last couple of years, many guidelines and frameworks have been published on algorithmic accountability. A notable example is the Ethics guidelines for trustworthy AI by the High-Level Expert Group on AI advising the European Commission, but there are many others. What all these frameworks have in common is that they name transparency as a key principle of trustworthy AI and algorithmic systems. But what does that mean, in practice? We can talk about the concept of transparency, but how can it actually be operationalized?

That’s why the City of Amsterdam took the initiative to translate the frameworks and guidelines into a practical instrument: contractual clauses for the procurement of algorithmic systems. In this post, you can read all about these procurement conditions and how you can be involved in taking them to the next level: a European standard for public procurement of AI and algorithmic systems.

Terms & Conditions

Starting in late 2019, the City of Amsterdam joined forces with several Dutch and international experts, ranging from legal and procurement specialists to suppliers and developers. The result is a set of standard procurement conditions and an accompanying explanatory guide. The procurement conditions can be used by any organisation that wants to make contractual provisions ensuring it can use the systems it procures in a trustworthy way.

By choosing procurement conditions as a means to operationalize the ethics and accountability frameworks, we kill two birds with one stone. First, the conditions provide clear guidance to suppliers, who, according to the World Economic Forum, “understand the challenges of algorithmic accountability for governments, but look to governments to create clarity and predictability about how to manage risks of AI, starting in the procurement process.” Secondly, and perhaps more importantly, procurement conditions demand clear definitions, both of key concepts like ‘algorithmic system’ and ‘transparency’ and of the conditions themselves.

Transparency in practice

Although the procurement conditions aim to tackle several issues related to the procurement of algorithmic systems, like vendor lock-in, the main novelty is that they provide a separation between information needed for algorithmic accountability on the one hand and company-sensitive information on the other. The conditions distinguish between three main types of transparency that the supplier should provide:

  • Technical transparency provides information about the technical inner workings of the algorithmic system, for instance the underlying source code. For many companies this type of information is proprietary and often considered a trade secret: it is their ‘secret sauce’. Therefore, unless the procurement concerns open source software, technical transparency is only demanded in the case of an audit or when it is needed for explainability (see below).
  • Procedural transparency provides information about the purpose of the algorithmic system, the process followed in its development and application, and the data used in that context; for instance, what measures were taken to mitigate data biases. Procedural transparency gives a government the information it needs to objectively establish the quality and risks of the algorithms used and to perform other controls; to provide explainability (see below); and to inform the general public about algorithmic usage and the many ways in which it affects society. Procedural transparency is mandatory in every procurement.
  • Explainability means that a government should be able to explain to individual citizens how an algorithm arrives at a decision or outcome that affects them. The information provided should give the citizen the opportunity to object to the decision and, if necessary, to pursue legal proceedings. It should in any event include a clear indication of the leading factors (including data inputs) that led the algorithmic system to this particular result, and the changes to the input that would have to be made to arrive at a different conclusion. Providing this information becomes mandatory for any relevant product or service procured by the city under the new rules.

The procurement conditions and their explanatory guide give a detailed account of the situations in which each of these types of transparency applies. 
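To make the explainability requirement a bit more tangible, here is a minimal, hypothetical sketch of the kind of explanation a supplier could be asked to produce: the leading factors behind a decision, and a counterfactual change to the input that would flip it. The scoring model, feature names, weights and threshold below are invented purely for illustration; they are not part of the procurement conditions or of any real system used by the city.

```python
# Hypothetical sketch of an explainability report: a made-up linear scoring
# model stands in for whatever system a supplier delivers. Feature names,
# weights and threshold are invented for illustration only.

WEIGHTS = {"income": -0.4, "debt": 0.9, "late_payments": 1.2}
BIAS = -1.0
THRESHOLD = 0.0  # a score above the threshold means the request is rejected


def score(applicant):
    """Compute the model's score for one applicant (a dict of feature values)."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def leading_factors(applicant):
    """Rank the inputs by how strongly they pushed the score towards rejection."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)


def counterfactual(applicant, feature, step=0.1, max_steps=200):
    """Find a small change to a single input that flips the decision, if any."""
    changed = dict(applicant)
    direction = -1 if WEIGHTS[feature] > 0 else 1  # move against the weight
    for _ in range(max_steps):
        changed[feature] += direction * step
        if score(changed) <= THRESHOLD:
            return {feature: round(changed[feature], 2)}
    return None  # no single-feature change found within the search budget


applicant = {"income": 2.0, "debt": 1.5, "late_payments": 2.0}
print("decision:", "rejected" if score(applicant) > THRESHOLD else "accepted")
print("leading factors:", leading_factors(applicant))
print("to get a different outcome:", counterfactual(applicant, "late_payments"))
```

A real system would of course involve richer models and safeguards, and the conditions do not prescribe any particular technique; the point is only that the information asked for under explainability can be expressed this concretely.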

Image credit: European Commission

Towards a European standard for public procurement of AI and algorithmic systems

The ambition for this project has always been to show that it is possible to operationalize general guidelines for AI ethics, and to encourage others to do so as well. That’s why we hope these conditions will become the inspiration for a European standard for public procurement of AI and algorithmic systems. We have already taken some steps towards that ambition:

From February to June 2020, the European Commission held a public consultation on its white paper on AI. The City of Amsterdam and Nesta, together with the Mozilla Foundation, the AI Now Institute and the City of Helsinki, published a position paper in response to that consultation, asking the EC to facilitate the development of common European standards and requirements for the public procurement of algorithmic systems.

Recently, the European Commission proposed in its Coordinated Plan on AI that Member States and the Commission should support public administrations in procuring trustworthy AI by developing a set of minimal capabilities for algorithms to be used in contract conditions.

Within the Netherlands, the conditions are now being implemented by several municipalities, regional governments and government agencies, which are collecting feedback from suppliers and working towards a version 2.0.

Join us!

On June 25th, DG GROW hosts a webinar titled Public Procurement of AI: building trust for citizens and business. At this webinar, we go in depth into the legal aspects of the conditions as well as the political context that led up to them, with a focus on tips and tricks for putting these conditions into practice within your organization. You can download the procurement conditions and their explanatory guide directly through the link.

Do you want to help improve the conditions or have feedback after use? Please let us know through this form how you want to be involved!