A SUBMISSION TO THE ETHI COMMITTEE - AI Challenges, Regulation and the Deception Economy
By Beau Hayward, Kaitlin Bustos and Renee Black
EXECUTIVE SUMMARY
The modern internet has undergone a dramatic shift from an open space for making connections, accessing information and organic discovery to a closed space of “walled gardens” in which deception has become normalized, incentivized and monetized. In these spaces, algorithmic amplification, deceptive design practices and harmful monetization have enabled systematic harms impacting people, societies and markets.
Deceptive and disloyal design practices seek to influence technology users’ decisions in order to extract value from consumers. In effect, these practices are designed to strip users of agency in order to extract money, capture attention and collect data in ways that prioritize profits over well-being, even as evidence points to the role such practices play in enabling a wide range of harms, including emotional distress, financial loss and erosion of trust.
Advancements in Artificial Intelligence (AI) now act as a force multiplier for deceptive practices, enabling platforms to grow more sophisticated by automating the removal of agency from users and ultimately shaping human behavior and digital experiences at scale. Examples of how such practices lead to consequential impacts are many. Engagement-based algorithms exploit psychological vulnerabilities to capture attention and distort our understanding of one another. Sycophancy creates dependencies, sometimes leading to serious mental health consequences, including suicide. Anthropomorphism convinces users that they are engaging in real relationships, compelling them to share their most intimate thoughts with little consideration of the risks, often at the expense of human relationships. Privacy consent forms are designed to be illegible and rely on an established status quo of “consent”, steering users into sharing more data than they otherwise would.
The failure to introduce safeguards that address how and when these practices enable harm exposes citizens to digital ecosystems where attention is prioritized, data extraction is maximized, deception is normalized and safety is optional. These asymmetric practices benefit companies and enable malicious actors, while placing the burden of safety on people, businesses and society. This is what GoodBot calls the Deception Economy.
Digital Harms, Design Practices & Regulation
Moving toward healthier online ecosystems that restore consumers’ agency and our capacity for collective sensemaking requires understanding how these practices shape online behaviors and experiences, and exploring how they can be regulated to reduce harmful online outcomes. It also requires transparency obligations, conflict-of-interest rules and liability mechanisms to hold companies accountable when they fail to appropriately safeguard their platforms, especially when they benefit from known harms.
Figure 1: Deceptive practices and digital harms
The Ask: We call on the Committee to establish a regulatory framework that codifies AI-driven design harms and mandates joint cross-agency enforcement to:
Guarantee personal data ownership, control and portability (Interoperability).
Create a unified regulator for independent oversight of digital and AI platforms.
Enforce transparency through mandatory algorithmic auditing, design governance assessments and disclosure of monetization models.
Adopt transparent and robust content moderation rules inspired by global best practices (e.g. EU Digital Services Act).