- Digital political campaigning can pose serious risks for citizens and democracies, such as fuelling discussions that are not grounded in fact, polarising public views, and reducing transparency among political and non-political organisations.
- Gabriela Borz and her colleagues at Babes-Bolyai University in Cluj-Napoca, Romania, and the University of Strathclyde in Glasgow, Scotland, recently set out to identify the risks of digital political campaigning and to explore the effectiveness of one EU soft regulation designed to tackle associated disinformation.
- Their findings suggest that the EU’s strengthened Code of Practice on Disinformation is fostering constructive dialogue between institutions and global digital platforms.
In 2024 alone, 100 executive and legislative elections are set to take place worldwide, along with 9 referendums. Before they cast their votes, citizens will undoubtedly be exposed to a variety of content outlining the views of competing politicians and parties.
While political campaigning is far from a new phenomenon, modern technologies have radically transformed how it is carried out, opening up new opportunities for sharing political ideas and agendas. Political parties worldwide are now investing sizeable financial resources into digital political campaigning as it can help them to reach voters faster and more efficiently.
Widespread digital campaigning strategies include microtargeting: the use of personal data shared by internet users to target potential voters with the text messages, AI-generated content, and videos most likely to spark their interest, persuading them to vote for a particular candidate or referendum outcome. On the positive side, these activities can generate greater political engagement and trust in democracies; when abused, however, they pose significant risks for both citizens and democracies.
Dr Gabriela Borz and her colleagues at the University of Strathclyde in Glasgow, Scotland, and Babes-Bolyai University in Cluj-Napoca, Romania, recently carried out a study assessing the effectiveness of EU soft regulation targeting digital campaigning. The study is part of a broader project led by Dr Borz, called DIGIEFFECT, which aims to identify and explore the risks associated with digital political campaigning in the EU.
The risks of digital political campaigning
Past research studies and media articles have outlined various concerns with digital political campaigning, the most pressing of which is disinformation. Targeting citizens with content that explores only one side of a socio-political argument can prevent them from becoming aware of the other side, or lead them to take propaganda and political opinions as fact.
Moreover, a large amount of content shared online is generated by AI models and is not always fact-checked, which can result in people developing convictions that are not based in reality. Digital political campaigning can also cause polarisation – a growth in the ideological distance between people with different political views.
Polarisation typically manifests as a shift of voters towards extreme right- or left-wing political positions, which can hinder social cohesion. Together with disinformation, this can give rise to arbitrary political debates, distorted election outcomes, and a lack of transparency from political parties, platforms, consultancies, and data brokers, or, worse still, foreign interference in elections.
Regulating digital campaigning
In recent years, many European countries and institutions have started introducing or adapting legislation designed to limit the adverse effects of digital political campaigning. Examples include the UK government’s Elections Act 2022, the Code électoral in France, and the Interstate Media Treaty in Germany. Several countries have also adopted specific soft laws, such as the Dutch Code of Conduct on Transparency of Online Political Advertisements (Figure 1).
Since 2016, the European Union (EU) has introduced 24 legal measures focusing on digital political campaigning: 9 hard laws, 13 soft laws, and 2 legislative proposals. These binding and non-binding legal provisions address various issues, including third-party technological threats, data processing, online disinformation, advertising, and AI (Figure 2).
One of the soft laws introduced by the EU is the Code of Practice on Disinformation (CPD), a general guideline for digital platforms and technology companies issued in 2018 and updated in 2022. The guideline is considered a soft law because it is based on the principles of co-regulation and self-regulation: it gives platforms the responsibility to govern themselves and to contribute to the development of regulation, rather than only fining them if they fail to comply with binding rules.
As part of their recent study, Dr Borz and her colleagues assessed the effectiveness of this EU soft regulation of digital campaigning by analysing the annual reports that online platforms sent to the European Commission. They did this using a new framework they developed, which helps to gauge the extent to which digital platforms and tech companies comply with regulations.
A framework to assess company responses to EU soft regulation
The key objective of the study was to explore how popular online platforms, including Facebook (Meta), Google, Microsoft, Mozilla, and Twitter (X), responded to the original CPD and to the strengthened CPD (SCPD) released in 2022. In particular, the researchers tried to determine whether these companies had committed to the rules outlined in these guidelines and whether they had taken concrete actions to comply with them.
While some companies may theoretically agree to comply with soft regulations, they might only do this to benefit their public image, meaning that their commitment might not translate into concrete action. The framework that Dr Borz and her colleagues developed therefore describes companies’ compliance with regulations as a continuum, spanning from symbolic commitment to formal commitment and, ultimately, implementation.
In this context, symbolic commitment occurs when a company agrees to abide by some principles in writing. Formal commitment, on the other hand, entails a written pledge to implement these principles via concrete actions, for instance by introducing new policies or procedures.
Implementation occurs when a company reports taking the promised actions, ultimately resulting in corporate governance changes linked to the regulation the company agreed to comply with. When it comes to mitigating the risks of digital campaigning, indications that a company has taken its promised actions could include reports of new policies to close bot accounts, restrictions on advertising, or new fact-checking procedures.
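To make the continuum concrete, here is a minimal sketch in Python of how a single coded report statement could be mapped onto the three stages. The stage names come from the framework, but the function, its boolean flags, and the example are hypothetical illustrations, not part of the researchers’ published methodology.

```python
from enum import Enum

class ComplianceStage(Enum):
    """The three stages of the compliance continuum."""
    SYMBOLIC = 1      # agrees with the principles in writing only
    FORMAL = 2        # pledges concrete action, eg, a new policy
    IMPLEMENTED = 3   # reports that the promised action was taken

def classify_statement(agrees: bool, pledges_action: bool,
                       reports_action: bool) -> ComplianceStage:
    """Map one coded report statement onto the continuum.

    The boolean flags stand in for the coding decisions a human
    analyst would make while reading a platform's annual report.
    """
    if reports_action:
        return ComplianceStage.IMPLEMENTED
    if pledges_action:
        return ComplianceStage.FORMAL
    if agrees:
        return ComplianceStage.SYMBOLIC
    raise ValueError("statement shows no commitment at any level")

# Example: a report states that new fact-checking procedures are in place.
print(classify_statement(agrees=True, pledges_action=True, reports_action=True))
# -> ComplianceStage.IMPLEMENTED
```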
The response of online platforms to the CPD
Dr Borz and her colleagues analysed the content of the CPD and identified five general themes within it: transparency, rights and freedom of expression, integrity, empowerment, and shared/coordinated responsibility. The team then compared these themes with the expenses and actions reported in the companies’ annual reports, while also considering whether each theme was mentioned generally, as a planned action, or as a change the platform had already implemented.
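As a purely illustrative sketch of how such a coding exercise might be tallied, the snippet below aggregates hypothetical coded statements by theme and by level of response; the five themes come from the study, but the data structure and the example statements are invented.

```python
from collections import Counter

# The five CPD themes identified in the study.
THEMES = [
    "transparency",
    "rights and freedom of expression",
    "integrity",
    "empowerment",
    "shared/coordinated responsibility",
]

# Hypothetical coded statements from one company's annual report:
# (theme, status) pairs, where status records whether the theme was
# mentioned generally, as a planned action, or as an implemented change.
coded_statements = [
    ("transparency", "implemented"),
    ("integrity", "planned"),
    ("integrity", "implemented"),
    ("empowerment", "general"),
]

# Tally how often each theme appears at each level of response.
tally = Counter(coded_statements)
for theme in THEMES:
    for status in ("general", "planned", "implemented"):
        count = tally[(theme, status)]
        if count:
            print(f"{theme:35s} {status:12s} {count}")
```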
The researchers found that Google was the only company to give approximately the same emphasis to all but one of the CPD themes (freedom of expression) in its annual reports. Microsoft also appeared to respond quite well to the regulation, while the annual reports of the other companies included in the study mentioned CPD themes less often.
The team’s analyses revealed that rights and freedom of expression was the least mentioned theme in the annual reports, while integrity of services, the empowerment of users, and coordinated actions with third parties were the themes most frequently addressed with specific actions.
Overall, this recent study suggests that while not all companies took significant actions to comply with the CPD, the soft regulation successfully initiated a dialogue between online platforms and the European Commission regarding digital political campaigning, which could contribute to future risk identification and policymaking.
How did you find that the CPD and SCPD impacted the practices of major online platforms?
Most platforms responded to the (S)CPD with practices that mainly involved either allocating funds to new initiatives and partnerships or setting up new corporate teams related to elections or disinformation. For example, in 2019, Meta launched an escalation channel for the administration of government and political processes and a responsible innovation team for developing risk assessment frameworks. In 2019, Microsoft partnered with and funded the Oxford Internet Institute. Microsoft is also a founder and co-chair of the Coalition for Content Provenance and Authenticity and has created a cross-company team to ensure compliance with the 2022 Code of Practice. In 2023, Google created a user experience team within the Google advertising transparency team and made a financial contribution to the International Fact-Checking Network. The latter proved extremely effective in efforts to debunk disinformation narratives during the 2024 European elections campaign. Other operational practices at the organisational level include the 2019 Twitter (X) report on the set-up of an internal election group, which leads X’s electoral integrity work in the EU.
Based on your results, would you say the EU soft regulation was effective, and why?
Our investigation of platform compliance with EU laws shows that, yes, soft regulation on digital campaigning can be effective for the following reasons:
1. It creates a necessary dialogue between the regulator and global actors such as digital corporations.
2. The actions of platforms are in line with reported commitments, especially for those issues that are of interest to companies (ie, user experience). However, more needs to be done in the area of democratic rights and freedoms, and a dialogue between governments and platforms can raise awareness of their importance for democracy.
3. It can also be effective as a tool for regulators (be they the EU or national governments) to keep pace with rapid technological developments in order to create benchmarks and disseminate examples of good practice. Regulators need the input and expertise of corporations using digital technologies. Governments need to understand the political and social impact of electoral activities involving technology before they regulate them.
4. At times, soft regulation can be the first step towards adopting hard law in areas which constitute a high risk for individuals and the democratic system overall.
How many companies demonstrated their commitment to the regulation through actions, and what were those actions?
All platforms investigated showed some implementation of specific actions after the 2018 and 2022 Codes of Practice on Disinformation. For example, the Google News Lab provided in-person training on a range of digital tools to a large number of journalists across Europe. In response to disinformation around the 2024 European Parliament elections, specific platform actions included removing content from the platform, linking to a debunk by an independent fact-checker (Meta), suspending accounts, informing users that multimedia content was presented out of context, community-driven content moderation (Community Notes on X), and disapproving election adverts where the advertiser had not been verified (Google). These operational and financial responses need to be intensified before any forthcoming elections.
What insight could your study offer for future policymaking and efforts at regulating digital campaigning?
At the international level, there is a need to develop global norms and methodologies related to the positive and negative implications of digital campaigning. At the transnational level, such as the EU, there is a need for a single, more comprehensive and unified, rather than scattered, electoral regulation. A unified framework for digital and hybrid electoral purposes should encompass the multitude of digital actions across all possible actors involved in digital and hybrid campaigning, as well as the assessment and mitigation of the positive and negative effects stemming from those actions. At the national level, there is an urgent need to update national electoral laws to keep pace not only with the negative effects (overspending, campaigning outside the legal electoral period, disinformation, etc) but also with the multitude of actors involved in the digital electoral campaigning process (political parties, platforms, campaigners, data brokers, external foreign actors).
Regulators should also consider that the aims of these actors extend beyond winning elections and can also include profit or external manipulation. Soft laws in the form of codes of practice or guidelines can be a first step towards keeping pace with the technological transformation of electoral politics. An emphasis on risk governance when developing regulation can also be a solution for governments that aim to solve the societal problems created by digital campaigning and achieve policy effectiveness.