Exploring an NGI Trustmark

Trustmarks are a well-established mechanism that helps consumers make more informed decisions about the goods and services they buy. We all know the Fairtrade stamp on our bananas, trust environmental certifications, and value Better Business Bureau stickers. Where we haven’t seen trustmarks used much yet, or at least not very effectively, is in the space of responsible technology and software.

After a series of highly public scandals that have called into question the trustworthiness of the technology and tools we rely on (from privacy violations and data misuse to large data breaches), there is rising demand among the general public for ethical, responsible alternatives. It is, however, not always easy for consumers to find these alternatives, partly because clear, easy-to-find information is lacking (among a deluge of apps, how do we know which ones handle our data most carefully, for example?), but also because the market for these kinds of tools is still immature to begin with (few have been able to gain real traction).

Trustmarks could help solve these issues. A stamp of quality for products that, for example, follow high security standards, do not track and sell their users’ data, or use ethical production processes could make it easier for consumers to pick out these tools in a crowded marketplace, and simultaneously raise awareness of how many of today’s most popular tools fail to embody these values. A trustmark could also support the creation of an ecosystem and market around ethical tools, which can struggle because being “responsible” often means compromising on user-friendliness, effective marketing and, above all, profitability.

Exploring the Trustmark idea in the digital space

On 25 September 2019, NGI Forward held a short workshop on trustmarks as part of the NGI Forum, the Next Generation Internet’s flagship community event. This document outlines the key messages and takeaways from that workshop.

In this small workshop, we brought together 16 participants to explore trustmarks in more depth and examine their potential value and how they could be employed in practice. Before trustmarks can be put to the test, many open questions still need to be answered; the workshop surfaced several of the key issues that remain unresolved, along with a range of potential solutions.

Many of the participants in the workshop reported already being involved in developing some sort of digital trustmark. A number of trustmark-type initiatives are emerging in areas such as the responsible use of data, the Internet of Things (IoT) and cybersecurity. Examples include the Trustable Technology Mark (https://trustabletech.org/), developed for IoT devices, and Sitra’s work on the concept of a ‘Fair data label’ to inform consumers about services’ compliance with basic principles and standards of data protection and reuse. Many of these initiatives are asking the same kinds of questions the workshop set out to explore: how could a trustmark for internet-related products or services provide value, what factors make a trustmark a success, and which areas should a trustmark cover? Many of these projects have already faced some key challenges, which are explored in more detail below.

How could a trustmark be useful? 

The main benefit of the trustmark model is the opportunity to empower consumers to make informed decisions about the products or services they use, while also helping companies demonstrate that their products or services are ‘trustworthy’. It is clear that consumers increasingly have trust issues around the digital products and services they use, whether those are privacy concerns or potential harms emerging from automated, algorithm-based decision making (such as targeted ads or curated social media news feeds). Trustmarks may also be able to add value beyond this, not just for consumers but also for companies and for the EU’s drive to make the next generation internet (NGI) more ‘human-centric’.

Trustmarks could help create a market for responsibly built, trustworthy products. This could encourage the development of more products and services that compete with existing business models largely based on data exploitation and monetisation, and offer a ‘responsible’ alternative. Trustmarks could also further raise consumers’ awareness of the many issues digital products and services can create. At the same time, a new market for responsible, trustworthy products, services and business models may help embed ‘human-centric’ values into the next generation of innovations. Introducing greater transparency around products, services and business models is one of the central ways trustmarks could help facilitate this change. Trustmarks could also improve trust in the digital economy, a critical step towards realising its benefits and providing better private and public services.

Challenges

Scope

Successful existing trustmarks cover a wide range of things, from adherence to health and safety standards to ethical business practices. They often focus on one area rather than covering every element that might benefit from a signal of ‘trustworthiness’. A narrower focus can help with consumer engagement, as it is easier to convey a single idea than several different metrics outlining many aspects of what a ‘good’ product is. However, too narrow a focus may not cover all the necessary issues, giving consumers a false impression of the overall trustworthiness of a solution. This balancing act around getting the scope and remit of a trustmark right is particularly challenging for digital and internet products, because the issues we have seen emerge around them are so multifaceted: data collection and use, cybersecurity, accessibility, the physical elements of a product, hardware and software, and so on. Could a useful, comprehensive NGI trustmark be created that covers everything from a social media picture app and an IoT sensor to AI algorithms?

To identify some of the important areas an NGI trustmark could cover, workshop participants focused on individual high-level issues, such as sustainability or responsible data use, rather than attempting to construct a comprehensive trustmark, which the group agreed would be neither particularly useful nor feasible to debate in the short time available for the workshop.

Even these narrower areas, however, raised many open questions and concerns that merit further exploration. Participants found that needs, risks and norms differ across sectors and verticals, for example retail and health, which means that standards for “good” would likely differ significantly across solutions and applications.

Metrics and evaluation

For trustmarks to work, we need reliable and easily transferable ways to measure and evaluate how well a product, service or business model meets the relevant requirements. For some areas discussed during the workshop, for example CO2 emissions or energy use as part of sustainability, it would be fairly easy to develop appropriate metrics (particularly as other product trustmarks already do this), but for other, perhaps more subjective, areas such as data handling, bias and discrimination, or ethical practice, developing such metrics is much more difficult and fuzzy.

Assessment may also be hampered by two additional factors:

  1. Software is continuously being updated and changed. How can we make sure that after repeated tweaks, products or services still meet the trustmark’s basic requirements? Is it viable for any governance system to oversee such a vast, rapidly changing landscape? 
  2. ‘Black box’ systems, which in this context generally means complex AI algorithms, limit how open and transparent a solution can be. We may not know what the system is doing or how it produces its outputs. Alternative metrics may be required in these instances (for example, focusing on data handling or data sources), or the trustmark could cover only explainable systems. 

A related question about how the trustmark works is whether it is used to define a set of minimum requirements or to identify ‘best practice’. Minimum standards make it easier for more companies or products to acquire a trustmark, but they also mean that the solutions championed do not necessarily raise the bar for good behaviour; in some cases, minimum standards might even reward bad behaviour by encouraging companies to do only the bare minimum.
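To make this distinction more concrete, the sketch below shows one way a trustmark rubric could encode both a baseline tier and a ‘best practice’ tier. It is purely illustrative: the criteria, thresholds and scores are invented for the example, and any real scheme would need far richer, partly qualitative assessment.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    score: float          # assessed score, normalised to 0..1 (invented here)
    minimum: float        # threshold for the baseline trustmark
    best_practice: float  # threshold for a 'best practice' endorsement

def evaluate(criteria: list[Criterion]) -> str:
    """Return the highest tier a product reaches across all criteria."""
    if all(c.score >= c.best_practice for c in criteria):
        return "best practice"
    if all(c.score >= c.minimum for c in criteria):
        return "minimum standard"
    return "not awarded"

# Hypothetical assessment of a connected device (all figures are made up)
assessment = [
    Criterion("renewable energy share of hosting", 0.80, 0.50, 0.90),
    Criterion("data minimisation",                 0.70, 0.60, 0.90),
    Criterion("independent security audit score",  0.95, 0.70, 0.90),
]

print(evaluate(assessment))  # "minimum standard"
```

In this toy rubric, the product clears every minimum threshold but misses the ‘best practice’ bar on two criteria, which is exactly the gap the workshop discussion highlighted: a baseline mark is easier to attain, but does little to push providers beyond it.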

Governance model

How to govern trustmarks is one of the biggest challenges in making them a success. Building trust in a trustmark requires the involvement of well-respected institutions and, as many participants noted, can be very expensive; how to handle the auditing and review of solutions, in particular, remains an open question.

The digital landscape is vast: if demand from the private sector for the trustmark increases, this could potentially involve hundreds of thousands of companies. Assessments can be carried out in many ways, from self-assessment to auditing by an independent body (in practice, the outcome often lies somewhere between the two). Participants indicated that the focus should lean towards independent assessment rather than self-assessment, to avoid false self-reporting. However, this creates other challenges in terms of resourcing and capacity: any governing body with assessment responsibilities would need to be resourced appropriately to carry out its functions, and given the growth of the digital economy and the ongoing need to re-audit as software is updated, this could be significant. This raises the question of how the trustmark would be paid for. If it is paid for by the companies that apply, it may put additional barriers in the way of smaller companies, startups, or free and open source software.

The trustmark’s governance also needs to be tied to an organisation that is itself trusted, in order to strengthen support for the trustmark and its credibility. Participants felt that the European Commission is in a strong position to play this kind of role, and noted that many initiatives have stalled or failed to come to fruition due to a lack of funding or support from a larger independent institution.

Business models and consumers

A trustmark’s success will be heavily dependent on how effectively it can help disrupt entrenched business models and create a market for alternative, responsible companies. This will be particularly difficult in the data economy, where many companies have vested interests and lobbyists will play an influential role.

Perhaps most important of all, however, is consumer engagement. If consumers are apathetic about an NGI-related trustmark, it will never achieve any of the potential goals set out above. Workshop participants did not consider this a big challenge, however, as many polls and public engagement exercises have already demonstrated the public’s interest in areas such as privacy, data use and sustainability. Trustmarks can work in several ways: by identifying potential impacts on the user or the environment, by educating consumers, or by eliciting a ‘feel good’ response (as in the Fairtrade approach).

Themes

Participants also brought up a variety of other important topics a trustmark could potentially address: 

  • Sustainability: The sustainability of the internet itself, its software and its hardware is becoming a topic of ever greater salience, though public awareness of the large environmental footprint of connected devices and internet use remains limited. One possible way of encouraging technology companies to adopt more sustainable practices would be to design a trustmark around these issues (which could cover everything from CO2 emissions from data centres and energy efficiency to the recyclability of devices). 
  • Privacy and data use: Trustmarks could be awarded to companies whose tools handle their users’ data in a particularly secure way, allow for data portability, make valuable datasets available to third parties in a responsible way, or use particularly transparent models for consent, to name just some of the concrete practices that could be evaluated in this realm. 
  • Cybersecurity: Cybersecurity, too, is often touted as a potential focus for a trustmark, particularly in the Internet of Things space. Has a solution or device successfully undergone a security audit? How transparent is the company about breaches and underlying vulnerabilities? How securely does it store users’ data? Though this is an interesting area, a lack of transparency might make it hard in practice to certify tools. 
  • AI ethics: Using trustmarks to formalise AI ethics principles in specific tools often came up as a possible application. Could we give trustmarks to solutions that offer transparency about the inner workings of their algorithms, or that make serious efforts to reduce bias? Subjectivity and a lack of agreement about what “ethical” means will require intensive efforts to build a coalition around this topic.