Foundational Principles for Advancing Responsible Technology

A Google Doc version of the report with citations can be found here.

Renee Black - July 2 2025

Summary 

In the last two decades, governments around the world have grappled with how to govern the increasingly unaccountable technology platforms that citizens interact with every day and that reside outside their jurisdictions. Imagining how to govern this influence and power without infringing upon fundamental rights has been a key challenge for lawmakers. 

Despite evolutions in policymaking mechanisms, most countries continue to lack coherent and comprehensive frameworks for effectively governing current and emerging digital and AI technologies run by companies headquartered outside their borders. This dynamic introduces particular governance challenges for smaller “secondary market” economies, such as Canada, which often lack the clout and collective economic power the EU enjoys to push back on interventions in policymaking processes by the US government and US companies. Even the EU is not immune from these forces once it has passed regulations: it now faces significant pressure to roll back or delay regulations in response to Trump’s “tariff diplomacy.”

Domestic policy in countries such as Canada is influenced directly via mechanisms such as trade negotiations like the Canada-United States-Mexico Agreement (CUSMA), which limits how Canada can govern digital spaces. These countries also face indirect influence via lobbying by Big Tech companies, which wield asymmetrical resources and access to the US administration in order to prevent, minimize and capture federal policy governing digital and AI technologies around the world.

These pressures are critical factors in the context in which Canada and many other countries are attempting to introduce new platform regulations while reckoning with formidable pushback from the US. Addressing these structural imbalances requires a systems-thinking approach and strategic coordination with like-minded allies to overcome the barriers to effective digital and AI governance and the threats to Canada’s sovereignty posed by over-reliance on US-based technology platforms.

In this context, governments need to establish foundational principles that create the enabling conditions for effective oversight, transparency and governance. In particular, this requires establishing foundational policies such as transparency obligations and researcher protections that enable Canadian researchers and regulatory bodies to monitor the societal impacts of the largest and most consequential platforms in our digital ecosystems.

Introduction 

The largest technology companies in the world - often funded by venture capitalists and private equity firms, and enabled by decades of laissez-faire economics - have driven the rapid acceleration of digital and AI platforms in ways that have reshaped markets, concentrated power, and reduced accountability. These companies do provide many benefits to society, but they have also enabled harmful outcomes for people, communities and democracies through design choices, algorithms and business models that incentivize problematic and antisocial behaviours, and through tools that allow malicious actors to misuse platforms to deceive and harm.

In response to these issues, platforms have established two primary mechanisms aimed at safeguarding their services. The first was the establishment of in-house Trust and Safety teams, especially at the largest companies, which primarily deal with content-level issues; this work inevitably runs into free speech challenges.

Over time, the growing costs of trust and safety work, combined with the recognition that many platforms are called to respond to a common set of harms and risks enabled through their services, have led to a second set of mechanisms: specialized, technology-company-funded expert coordination bodies that aim to respond to societal issues that commonly arise on platforms.

Two of the more prominent examples are the Global Internet Forum to Counter Terrorism (GIFCT), which addresses extremism online, and the Tech Coalition, which addresses child sexual abuse material (CSAM) online. These nonprofit organizations work collaboratively via rapid response activations, policy working groups, and shared technical solutions to address in-scope issues arising on platforms. Yet, while both Trust and Safety teams and expert coordination mechanisms have produced useful initiatives over the years, companies limit the scope of what these entities are able to cover, and are unwilling to provide data that might shed light on the systemic factors enabling harm.

A key issue is that “problems” and “solutions” are framed in ways that intentionally limit the potential range of interventions. In this framing, the problems arising on platforms lie primarily with “bad actors” who “misuse platforms” by posting “harmful content.” Some of this content is illegal or contravenes platform policies and is therefore subject to takedown, whereas other content falls into the categories of “awful but lawful” and “controversial but subjective” and often remains online.

Content moderation will always be necessary on some level, but focusing primarily on content-level issues misses the systemic factors further upstream that enable harm. Technical approaches such as GIFCT’s hasher-matcher tooling help companies use privacy-respecting mechanisms to share fingerprints of harmful content found on their platforms so that others can also detect and remove it. Yet while these tools help, they fail to address the factors that incentivize problematic behaviours, including recommender systems that promote toxic content and design choices like infinite scroll that promote extended use. They also fail to examine business models such as advertising, in which an unaccountable and harmful market monopoly has developed and in which harm occurs. Without clear attention to these practices, content-level efforts will always fail to keep up with online risks and harms, which makes these approaches by definition unsustainable.
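To make the hash-sharing idea concrete, the minimal sketch below shows the basic mechanism: a platform fingerprints an upload and checks it against a shared list of hashes contributed by other companies, so that the underlying content itself never needs to be exchanged. This is an illustration only, not GIFCT’s actual implementation; real deployments typically use perceptual hashes (such as PDQ for images) so that near-duplicates also match, whereas the cryptographic hash used here only catches exact copies. The hash list and upload are placeholder values.

```python
import hashlib

# Hypothetical shared hash list, standing in for an industry hash-sharing database.
# Entries are hex digests of previously identified harmful content.
SHARED_HASH_LIST = {
    # SHA-256 digest of the placeholder upload below, so the demo produces a match.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def fingerprint(content: bytes) -> str:
    """Return a fingerprint of uploaded content without storing or sharing the content itself."""
    return hashlib.sha256(content).hexdigest()


def flag_if_known(content: bytes) -> bool:
    """Check an upload against the shared hash list; a match would be routed to review/removal."""
    return fingerprint(content) in SHARED_HASH_LIST


if __name__ == "__main__":
    upload = b"test"  # placeholder for an uploaded file's bytes
    print("matched known harmful content:", flag_if_known(upload))
```

The design choice worth noting is that only fingerprints circulate between companies, which is what makes this a privacy-respecting mechanism relative to sharing the content itself.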

Yet, rather than acknowledging responsibility and moving further up the value chain, the largest companies have routinely undermined the experts hired to work in Trust and Safety roles and limited the scope of what those teams are allowed to address. In several high-profile waves of layoffs, these companies have dismantled significant parts of their Trust and Safety teams, replacing them in some cases with AI automation or with Community Notes. Such mechanisms continue to fail to remove inappropriate content, such as content that is part of foreign interference operations, while also removing legitimate content and users without explanation and with poor mechanisms for redress.

In arguments to governments considering regulation of digital and AI platforms, Big Tech routinely claims that ‘self-regulation’ is sufficient to deal with online harms. These companies invest significant resources into strategies, relationships and activities aimed at undermining global policy efforts, often using their own platforms’ scale and reach to disseminate anti-regulation narratives and block bills from being introduced, even in cases where there is strong bipartisan support. It is in this context that trust and safety professionals, researchers, nonprofits, whistleblowers and policymakers are increasingly politicized and threatened for their work in creating regulation that upholds the public interest.

To be sure, Trust and Safety teams and coordination mechanisms have faced criticism, including claims that companies have coordinated inappropriately with governments on content take-down efforts. Yet content moderation will always be needed, and such tactics appear cynically aimed more at dividing and conquering than at advancing the public good.

Indeed, a key recent evolution in the Trust and Safety space has come not from technology companies themselves, but rather from civil society initiatives such as ROOST.tools. Created and launched by former Trust and Safety workers, this open source toolkit aims to put safety back in the hands of the people and marks an important development. Still, such tools need to be complemented by smarter, upstream approaches that address systemic risks baked into how systems operate and shape behaviour.

Foundational Obligations for Effective Platform Oversight and Governance 

A prerequisite for understanding and creating effective policy to address the societal impacts of platforms is the establishment of foundational policies that promote transparency. Foundational obligations refer to the baseline measures needed for regulators and public interest researchers to assess and make evidence-based recommendations - including for policy - related to the impacts of the largest platforms, including what role they might play in promoting antisocial outcomes such as polarization and addiction.

This concept aligns with the obligations the European Union (EU) Digital Services Act (DSA) places on companies designated as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), such as Facebook and Google, whose reach can have significant consequences for people, society and democracy. It also aligns with the proposed Platform Accountability and Transparency Act (PATA), introduced in the US by Senator Chris Coons with support from scholar Ethan Zuckerman. Three key foundational obligations are needed for identifying and monitoring harms arising on platforms - such as polarization or foreign interference in elections - and for creating evidence-based policies and interventions.

These obligations recognize and address the problem of allowing platforms to ‘grade their own homework’ by ensuring that independent actors, such as designated regulators and approved researchers, are able to conduct research that validates or challenges the findings of annual transparency reports.

Indeed, the proposed obligations can help redress an important dynamic in which public interest actors such as researchers and policymakers push for policies and interventions to address harms, while large companies demand proof of causality (of platform-enabled harms) yet refuse to provide access to data. Transparency and data access are essential to demonstrating causality.

Proposed foundational obligations fall into three categories: 1) Transparency Reporting Obligations; 2) Strengthened Access to Data for Accredited Researchers; and 3) Safe Harbour Protections for Independent Researchers.

  1. Transparency Reporting Obligations - Similar to the DSA and PATA, Canada needs mandatory Transparency Reporting Obligations for large online platforms. Such obligations include mandatory Annual Reports for the largest technology platforms in order to document and monitor their impacts on people, societies and democracies. Reporting obligations would be guided by input from federal regulators - in consultation with researchers - and would cover current and emerging risks to people, societies and democratic integrity, such as: 

    1. Current and emerging harms arising on platforms, along with the factors that enable those harms to persist;

    2. Measures taken by platforms to identify risks, assess enabling factors and implement mitigation mechanisms, including assessments of differential vulnerability for groups such as children and seniors;

    3. Significant platform alterations and experiments that may be unrelated to harm monitoring and mitigation efforts, but that may impact outcomes;

    4. Verifiable Metrics that demonstrate success or failure of enacted measures, with proposed theories and explanations for outcomes observed;

    5. Annual plans that outline how platforms intend to evolve platform design and harm mitigation efforts before the next Annual Report.  

  2. Strengthened Access to Data for Accredited Researchers - Access to data for independent reviewers, such as regulators and accredited researchers, is needed so that reports and results provided by platforms can be independently reviewed and verified or else challenged. This access is critical to moving beyond a context in which technology companies ‘grade their own homework’ without independent oversight, by enabling researchers and regulators to assess the extent to which platforms are acting in good faith in response to known and emerging Trust and Safety issues. Also in alignment with the DSA, it should enable regulators to introduce necessary remedies to strengthen oversight if companies appear not to be acting in good faith. Additionally, access to data makes it possible for researchers to conduct independent research on emerging issues that platforms may not yet have reported but that could have current or emerging relevance to public interest actors and regulators. Moreover, companies must be obligated to provide complete and representative datasets, with significant penalties and fines for misleading regulators through incomplete or unrepresentative data. 

  3. Safe Harbour Protections for Independent Researchers - Protection for independent researchers is also essential. In recent years, a new category of ‘independent’ and ‘non-permissioned’ researchers has emerged, conducting research outside of ‘permissioned’ (i.e. platform-authorized) access mechanisms, which can be restrictive in the data they provide and can perversely incentivize researchers to limit the scope of critical review in order to maintain access. Independent and non-permissioned researchers generally aim to employ consent-based and privacy-respecting practices to independently collect data in order to understand the extent to which platforms are acting in good faith and adhering to legal obligations related to the trust and safety commitments outlined in annual reports. In one example, researchers were able to demonstrate that ad libraries supplied by platforms did not contain a complete list of ads viewed during an election cycle, and that the more representative dataset collected through non-permissioned research contained ads from foreign influence operations (a simplified sketch of this kind of completeness check appears below). Unsurprisingly, however, many independent researchers - and especially those whose results contradict the findings of platforms - find themselves targeted by companies, including through lawsuits, harassment, intimidation and account deactivation. Foundational policies should include explicit protections for independent researchers who use privacy-respecting methods to independently assess adherence to transparency reporting obligations. 
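To illustrate the kind of completeness check described in the third obligation, the hypothetical sketch below compares a platform-supplied ad library against an independently collected, consent-based sample of observed ads and reports which ads are missing from the official library. The function name, field names and records are invented for illustration; real audits work with far larger datasets and must also handle fuzzy matching between observed ads and library entries.

```python
# Minimal sketch of an ad-library completeness check, assuming each ad record
# carries a unique identifier. All names and records here are hypothetical.

def missing_from_library(platform_library: list[dict], independent_sample: list[dict]) -> list[dict]:
    """Return independently observed ads whose IDs do not appear in the platform's ad library."""
    library_ids = {ad["ad_id"] for ad in platform_library}
    return [ad for ad in independent_sample if ad["ad_id"] not in library_ids]


if __name__ == "__main__":
    # Hypothetical records: the platform's published ad library vs. ads observed
    # by consenting research participants during an election period.
    library = [{"ad_id": "A1", "advertiser": "Party X"}]
    observed = [
        {"ad_id": "A1", "advertiser": "Party X"},
        {"ad_id": "A7", "advertiser": "Unregistered Buyer"},  # absent from the library
    ]
    gaps = missing_from_library(library, observed)
    print(f"{len(gaps)} observed ad(s) missing from the ad library:", [ad["ad_id"] for ad in gaps])
```

A non-empty result is the kind of evidence that, under the proposed obligations, could trigger penalties for providing incomplete or unrepresentative data.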

Conclusion

Current models of technology governance are unsustainable and inadequately address the enabling systems that lead to harm. They categorically fail to address vast wealth inequality, impose the externalities of technology on society, and undermine systems and institutions of governance. Advancing healthier, inclusive and human-centred technology futures requires a multifaceted, systems-thinking approach. A more thoughtful and intentional approach to digital and AI policy can focus on establishing foundational policies - including transparency obligations, access to data and researcher protections - in order to effectively monitor digital and AI impacts on society. 

Citation 

Black, Renee. "Foundational Principles for Advancing Prosocial Design Governance," GoodBot, May 5, 2025. https://www.goodbot.ca/tech-policy/foundational-principles 

Note

In October 2024, GoodBot participated in ProSocial Tech Design Governance workshops in Brussels and Florence as part of its work with the Council on Technology and Social Cohesion. This document emerged as GoodBot’s contributions to a draft of a “Blueprint for Prosocial Tech Design Regulation” and was incorporated into the final report.
