Advancing a ProSocial Design Governance Agenda for Digital Platforms

Read the complete paper on SubStack. A PDF version with full citations can also be found on Drive.

Executive Summary 

In the last two decades, governments around the world have grappled with how to govern the increasingly unaccountable technology platforms that citizens interact with every day and that reside outside their jurisdictions. Imagining how to govern this influence and power without infringing upon fundamental rights has been a key challenge for lawmakers. 

Despite evolutions in policymaking mechanisms, most countries continue to lack coherent and comprehensive frameworks for effectively governing current and emerging digital and AI technologies run by companies headquartered outside their borders. This dynamic introduces particular governance challenges for smaller “secondary market” economies, such as Canada, which often lack the clout and collective economic power enjoyed by the EU to push back on US state and company interventions in policymaking processes. Even the EU is not immune from these forces once regulations are passed: it now faces significant pressure to roll back or delay regulations in response to Trump’s “tariff diplomacy.”

Domestic policy in countries such as Canada is influenced directly via mechanisms such as trade negotiations, including the Canada–United States–Mexico Agreement (CUSMA), which limit how Canada can govern digital spaces. These countries also face indirect influence via lobbying by Big Tech companies, which wield asymmetrical resources and access to the US administration in order to prevent, minimize and capture federal policy governing digital and AI technologies globally.

These pressures are critical factors in the context in which Canada and many other countries are attempting to introduce new platform regulations while reckoning with formidable challenges from the US. Addressing these structural imbalances requires a systems thinking approach and strategic coordination with like-minded allies to overcome the barriers to effective digital and AI governance, as well as the threats to Canada’s sovereignty created by over-reliance on US-based technology platforms.

This policy brief explores critical considerations and recommendations to strengthen approaches to digital and AI governance in Canada, grounded in a systems thinking approach that examines the full value chain of issues that impact society. This approach looks further up the value chain: how companies are funded, what business models they choose, the incentives those models create, and whether they contribute to anti-competitive markets. This brief proposes four ideas to advance the building blocks of healthier technology futures, recommending that Canada:

(1) Ensure that public interest and voice are rebalanced in critical decisions on digital and AI policy. The rapidly evolving nature of lobbying activities, including the resources and interventions aimed at influencing digital and AI policy in Canada, is poorly understood. Ensuring public voice is balanced in decision-making processes requires understanding how these activities have evolved and to what effect;

(2) Advance prosocial design governance approaches to digital and AI policy, which focus on the platform design choices, algorithms (such as recommender systems) and business models that enable harm through problematic incentives, and which act upstream of current content-level responses to online harms that prioritize content moderation;

(3) Establish foundational policies, such as transparency obligations and researcher protections, to enable Canadian researchers and regulatory bodies to monitor the societal impacts arising from the largest and most consequential platforms in our digital ecosystems;

(4) Address the concentration of power among unaccountable companies and strengthen market conditions that enable prosocial technologies to emerge and compete against dominant ‘antisocial’ incumbents which currently enjoy structural advantages. This involves confronting the ‘Deception Economy’, which uses algorithmic obfuscation, deceptive practices and ineffective security to enable profitable but harmful outcomes for consumers, such as price fixing, the targeting of youth, and the enabling of deceptive markets including scams and harmful or illegal content.

Before turning to recommendations, this paper first examines the limits and failures of the self-governance mechanisms that have defined regulatory efforts over the last two decades.


Foundational Principles for Advancing Responsible Technology