When Your AI Assistant Becomes an Advertiser
Hi there,

Welcome back to Untangled. It's written by me, Charley Johnson, and supported by members like you. This week I'm sharing my conversation with Miranda Bogen (Director, AI Governance Lab, Center for Democracy & Technology) about what happens when your AI assistant becomes an advertiser.

As always, please send me feedback on today's post by replying to this email. I read and respond to every note.

Don't forget to sign up for The Untangled Collective. It's my free community for tech & society leaders navigating technological change and changing systems, and the next event is coming up!

Untangled HQ

NEW: I'm teaming up with Aarn Wennekers (complexity expert and author of Super Cool & Hyper Critical) to launch Stewarding Complexity, a private, confidential gathering space for boards, executive teams, and organizational leaders to step outside formal governance structures, speak candidly with peers, and practice making sense of complexity together. If that's you, join us!

Not New, But Important: Every organization I speak with is facing the same two questions: How do we build strategy for uncertainty, and what should we actually do about AI?

My course, Systems Change for Tech & Society Leaders, provides a structured approach to navigating both, helping leaders move beyond linear problem-solving and into systems thinking that engages emergence, power, and the relational foundations of change. Sign up for Cohort 6 today!

Because why not: here's a free diagnostic framework I use in the course to help you assess how your organization understands and uses technology across its strategy, programs, and operations.

Some Links

How Certain Is It?

I've written a lot about why embracing uncertainty matters. Chatbots do the opposite: they collapse uncertainty into confident-sounding responses, packaging blind confidence as a feature. But what if we designed these tools differently? What would it take to preserve uncertainty rather than erase it?
A new paper tackles this challenge, arguing we need to protect the messier, harder-to-quantify forms of uncertainty that professionals navigate through conversation and intuition. Their proposed fix? Create systems where professionals collectively shape how different forms of uncertainty get expressed and worked through.

Blackbox Gets Subpoenaed

Job applicants are suing Eightfold AI, claiming its hiring screening software should follow Fair Credit Reporting Act requirements, which would give candidates the right to see what data is collected and dispute inaccuracies.

Eightfold scores job applicants 1-5 using a database of over a billion professional profiles. Sound familiar? It's essentially what credit agencies do: create dossiers, assign numeric scores, and determine eligibility.

The lawsuit argues: if it works like a credit agency, it should be regulated like one. As David Seligman of Towards Justice put it: "There is no A.I. exemption to our laws. Far too often, the business model of these companies is to roll out these new technologies, to wrap them in fancy new language, and ultimately to just violate people's rights."

Threatening Probabilities

Every time a chatbot threatens or blackmails someone, my inbox fills with "proof" of sentience.

But a new paper shows these behaviors aren't anomalies. They're just extreme versions of normal human interaction: price negotiation, power dynamics, ultimatums. Our surprise comes from assuming chatbots should only reproduce socially sanctioned behavior, not the full spectrum of how humans actually act.

Threats and blackmail don't signal consciousness. They signal that the model is drawing from the complete statistical distribution of human behavior, including the parts we don't like to acknowledge. It's probabilities all the way down, even when they're uncomfortable ones.

When Your AI Assistant Becomes an Advertiser

OpenAI just announced it will start testing ads in ChatGPT's free tier.
The press release was carefully worded, reassuring users that "ads will not change ChatGPT answers" and that "your chats are not shared with advertisers." But as Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, pointed out in a recent conversation, these statements are misleading and miss the entire point. What's coming is a fundamental shift in who these systems serve, and in what that means for people, privacy, and inequality.

To understand why this matters, we need to look at three things: how AI changes advertising signals, what "privacy" really means in this context, and why this could be harder to detect than anything we've seen before.

The Signal Problem

The question is: what happens when your AI assistant becomes an advertiser?

Answering that question, according to Miranda, starts by recognizing that advertising is all about high-fidelity signals of intent: data that accurately predicts what you want to buy or do. When an ad interrupts your experience on Facebook, it's hoping that you'll care, that perhaps something you clicked a while back will still be relevant. That's not a great signal. Search offers a better one: you're typically using Google because you want something.

But ChatGPT is different. You're not just searching for information. You're often thinking out loud, revealing what matters to you, what you're struggling with, what you're planning or hoping for. Each conversational turn reveals deeper context about your intent, creating rich data for advertisers.

Now, OpenAI wants those signals, but if you read the press materials, they're clearly concerned about losing users. For example, they bend over backwards to say that your chats won't be "shared with advertisers." But according to Miranda, this is technically accurate yet completely misleading. The platform doesn't need to send advertisers a list of your conversations.
That's the whole point of advertising infrastructure: OpenAI will target ads on behalf of advertisers, shielding your specific data while making the connection happen anyway.

The press release also promises you can "turn off personalization" and "clear the data used for ads." But there are multiple layers of personalization happening simultaneously (e.g., raw chat logs, explicit memory stored about you), and it's unclear which of them OpenAI is referring to. Plus, even if you turned off all personalization and erased all memory in the system, the amount of information a chatbot has about you within a single context window offers plenty of signal for advertisers.

The Relationship Problem

On Facebook or Google, it's clear you're dealing with an advertiser. Your intent is your own. The experience is transactional. But as Miranda argues, when your AI assistant or AI co-worker starts subtly suggesting new products or services, something fundamentally different is happening.

It's closer to influencer marketing, where paid recommendations come wrapped in the veneer of authentic social connection. But an influencer's audience typically knows they're being paid to sponsor a product. With an AI assistant, the lines blur. It has been helping you draft emails, think through career decisions, process relationship struggles. You've built relational trust with it over months, so when it suggests a therapist, lawyer, or contractor, you might perceive that as trusted advice without knowing, of course, which providers paid to be in the pool the AI draws from. The persuasion is invisible, wrapped in the same helpful tone the AI uses for everything else.

The Visibility Problem

Personalized ads and privacy harms are a big, albeit old, problem. These tools will of course propagate discrimination, exploit people at vulnerable moments, reinforce stereotypes and biases, and shape which opportunities people see (and don't!).
But this evolution of the advertising model brings something new: these harms will be even harder to identify.

Why? Because these systems are being built to connect with each other. AI agents will call other tools, connect with your bank and service providers, and exchange information across an ecosystem of interconnected systems. Money and incentives will flow through this network in ways that are nearly impossible to track.

As Miranda put it:

"Even just tracking where any of this is happening, where exchanges of money and incentives are happening behind the scenes and where that might be shaping people's experiences will just be even more challenging to keep up with over time."

If your inner monologue so far is "this all sounds very bad," well, I get it. But we didn't end the conversation without imagining alternative business models and policy solutions. Listen to the end for those, and hear what Miranda would do to shift power back to users if she were advising our next (fingers crossed!) President four years from today.

Before you go: 3 ways I can help

* Advising: I help clients develop AI strategies that serve their future vision, craft policies that honor their values amid hard tradeoffs, and translate those ideas into lived organizational practice.

* Courses & Trainings: Everything you and your team need to cut through the tech hype and implement strategies that catalyze true systems change.

* 1:1 Leadership Coaching: I can help you facilitate change in yourself, your organization, and the system you work within.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com