The European Commission is launching its strategy for Europe's digital future. “The debate on the surveillance society, the mapping of citizens and the profiling of consumers is an important consideration as we move into this future,” writes lawyer Carolina Brånby.
In many respects, Sweden and Europe are aligned with the times. Yet to stay at the forefront of development demands new visions and new guidelines. We stand on the threshold of a high-tech society, where AI systems are shaping how we build our societies, how we produce and how we consume. To strengthen Europe’s competitiveness and confidence in new technology, the European Commission is launching its strategy for Europe's digital future.
The debate on the surveillance society, the mapping of citizens and the profiling of consumers is an important consideration as we move into this future. Europe seeks to invest in ethics in order to see development that is based on delivering good for humanity and that strengthens confidence in the digital world.
In Europe, and in the rest of the world, there is currently a discussion over the ethical use of AI. A number of organisations and companies have already established their own ethical codes. In Europe, guidelines for creating trustworthy AI have been developed. In Sweden, IT and telecoms companies have, for example, published an industry code to deliver responsible AI that contributes to a humane society, builds trust in the technology and delivers sustainability.
Through laws in areas such as product safety and product liability, we have already managed to regulate what is dangerous and unsuitable. These regulations protect people and the environment. In the digital environment, it is primarily personal data that needs to be protected. In Europe, a comprehensive data protection regulatory framework - with GDPR at its core - has been in place since 2018. Privacy issues are the most controversial aspects of digitalisation. For manufacturing, products and services, the issue of trust is vital. Ensuring privacy, combined with appropriate security levels in IT systems, is fundamental to trust. In addition, people need to be able to understand what this new technology and these new systems can create, and how.
Of the demands placed on ethical AI, transparency is probably the most important for building and maintaining trust. However, technological advances have now reached a level where it is increasingly difficult for the average person to understand the latest developments; at the same time, it is challenging for authorities to monitor and supervise them. Politicians are now looking at ways to regulate both the development and the outcomes of AI systems. Although the technology has been developing for decades, a new level of complexity is now emerging: there are systems capable of formulating and making decisions that can modify processes, environments and the structure of communities. While technology may seem to compete with policy for control over social development, it is an important tool that policymakers can use to solve current challenges and develop a modern society.
Political interventions should therefore not ignore the possibilities of further innovation, and should avoid shackling current technology with outdated legislation and practices. It is important to remain impartial towards the technology itself and instead focus on principle-based rules that are technology-neutral and thus remain relevant for longer. In addition, as technology continues to advance, self-regulation offers considerable advantages over legal intervention.
AI applications encompass so many aspects that AI itself has proved difficult to define. Because of the strong political will to make Europe more digitised and more competitive through the deployment of AI solutions and applications, it is important to divide its use according to different contexts and different users of services. For example, is the application destined for industrial production, for a platform handling corporate customers, administration and finance, or for providing customer service to consumers? Or is it a public activity that provides a service or decision to citizens?
The requirements related to AI use should therefore be clearly differentiated, depending on the context and the user – is it industry, consumers or government? There are considerable differences between applications for streamlining production methods and those for administration, or for customising treatments or training. Creating a horizontal approach that addresses all industries will not strengthen competitiveness; rather, it will create new regulatory burdens, with all that this entails in terms of uncertainty, time and costs.
The European Commission's strategy for creating Europe's digital future proposes different rules depending on the sector and the types of risk associated with AI use. It is a proposal that can benefit from further work. If we are to establish Europe's digital future, the vision must be clear, in order that we do not limit the available digital opportunities.
The Commission's Strategy for Data, and its White Paper, “On Artificial Intelligence - A European approach to excellence and trust”, are the first steps to the goal of a European society driven by digital solutions. These should be solutions that put people first, open up new opportunities for businesses and accelerate the development of reliable technologies that promote an open and democratic society and a strong and sustainable economy.
The White Paper on AI contains a plan for building faith in this technology based on excellence and trust. According to the Commission's proposal, there should be clear rules that apply to high-risk AI systems, without putting too much burden on the less risky. For high-risk applications, such as for the police, transport and healthcare sectors, AI systems should be transparent, traceable and guarantee individual privacy. Strict EU rules for consumer protection, for dealing with unfair business models and for protecting personal data and privacy should continue to apply. All AI applications will be welcome in the European market, as long as they comply with EU rules.
The aim of the data strategy is to ensure that the EU becomes a leader and role model for a future data-driven society. For this reason, it needs an internal market for data. In a data space, information is made available so that it can flow freely within the EU and between sectors for the benefit of companies, researchers and the public sector. Data that does not constitute personal information should be made accessible to all.
The coming digital Europe will be shaped by actions in three areas: