Untangled podcast

The Age of Algorithmic Deference.

21.3.2026

Hi there,

Welcome back to Untangled. It’s written by me, ​​Charley Johnson​​, and ​​supported​​ by members like you. ​Help me make it better?​

This week, I’m sharing a conversation I had with Hilke Schellmann — Emmy Award-winning investigative journalist, NYU professor, and author of The Algorithm — about her recent reporting on AI in hospitals. If you read ​my newsletter​ applying the STEWARD framework to AI in health care, you know her work was the spine of that piece. This conversation builds on that and goes a li’l deeper.

On to the show!

🏡 Untangled HQ

This Week

* WEAVER: I opened enrollment for Cohort 7 of ​Systems Change for Tech & Society Leaders​. You can get 40% off through March 27 with the promo code UNTANGLED40.

* Community: Kate and I hosted “Navigating Challenging Personalities at Work.” Join The Facilitators’ Workshop if you don’t want to miss the next event.

* Help me, help you: I launched a ​short survey​ to help me improve Untangled. Complete it and get a free email course. (Most participants are completing it in under 2 minutes.)

Coming Up

* STEWARD: Next week I’m presenting my STEWARD framework to the ​Technology Association of Grantmakers Inclusion By Design Leadership Cohort. ​Be the first to hear when Stewarding AI launches.

* Untangled Collective: Power is everywhere. In the org chart, yes — but also in the intake form nobody questions, the metric everyone optimizes for, and the meeting that always ends the same way. ​Learn how to map it, and what you can actually do about it.​

🧶 Deep Dive

The Age of Algorithmic Deference.

In my conversation with Hilke Schellmann, we opened with the story that anchors her piece: Adam Hart, a nurse at St. Rose Dominican Hospital in Nevada, at the bedside of a patient flagged by a sepsis alert. An algorithm generated an order to administer intravenous fluids. Hart noticed a dialysis catheter and knew fluids would harm her. His charge nurse told him to comply. He refused. A physician overheard, stepped in, and ordered dopamine instead, raising her blood pressure without adding fluid volume. The patient was fine. Nobody in that room had ill intent. In fact, the system worked as designed, and that’s the problem. What stayed with me from this part of the conversation was Hilke’s reflection that Hart’s actions took genuine courage. Because they did! The charge nurse treated the algorithm as legitimate and neutral, and the alert became a verdict. Hart had years of experience and judgment underpinning his conviction. But what about nurses earlier in their careers, less confident in their own judgment?

Then there’s Melissa Beebe and the BioButton at UC Davis — a wearable chest sensor that tracked vital signs continuously and generated alerts Beebe found vague, far too frequent, and hard to act on. Beebe asked why the device was producing the outputs it did. She was a union rep with seventeen years of experience asking a completely reasonable question. But because we live in a culture obsessed with innovation, not with patient outcomes, she was labeled resistant to technology. Hilke and I talked about what she was actually raising and why it wasn’t heard — and about what happens when it isn’t. Tools arrive with press releases and fanfare, get piloted for a year, then quietly get shelved. Nobody shares what went wrong. And, as a result, the next health system starts from scratch.

Mount Sinai offered a different picture. They brought AI development in-house, stopped trusting vendor promises, and found that the real work shifted from algorithm selection to trust, adoption, and workflow fit. Their most successful tool — a wound-care prediction model — came from a bedside nurse who identified the problem, helped build the solution, and trained her own colleagues. The catch: this only works if you have deep pockets and in-house expertise. Smaller and rural hospitals don’t. As Hilke argued, a two-tier system is developing, and the most vulnerable patients are on the wrong side of it.

We went back to Hart’s story to pull on something implicit throughout: the hospital system never trained staff on what these systems actually are and what they aren’t. Which led us into the question of what must remain human. Knowing a patient’s baseline. Reading the room. Catching the slurred speech that doesn’t show in the labs or on the monitor. These tools don’t have access to that data.

Workflow was the final thread. In most of the cases Hilke documented, the AI was simply added to an existing practice rather than prompting a redesign. Nobody asked what should happen when the alert is wrong, who has the authority to override it, or what a legitimate override even looks like. Those questions need to be answered before deployment — not discovered afterward.

We closed with what Hilke would change about how AI is being implemented in work contexts. Her answer: stop treating stakeholder participation as an afterthought. Start treating it as a design requirement.

🖇️ Some Links

The myth of the crowd: People are now betting real money on who gets voted off Survivor — a show that was filmed months ago and exists entirely on a hard drive somewhere. The New York Times ​reports ​this is creating obvious incentives for “insider” information, which is a very polite way of saying: someone who knows a producer is about to become very wealthy. Whether that counts as market manipulation apparently depends on your definition of “market,” “manipulation,” and possibly “reality.” (More on prediction markets)

Growth over kids: ​Meta knew.​ That’s the thing that should make you put down whatever you’re holding. Internal documents — surfaced during New Mexico’s lawsuit — show that Meta’s own people repeatedly flagged that Instagram’s recommendation and contact systems were steering teenagers toward predatory accounts and enabling serious harm. They documented it. They had meetings about it. And then they ran the numbers on what stronger safety defaults would cost in growth and engagement. They chose growth and engagement over the safety of young people — and they always will.

Pro-worker AI: ​A new paper​ sorts technological change into five categories, only one of which — “new task-creating” — is unambiguously good for workers. The other four range from “fine, probably” to “you’re being replaced by a script.” The authors note that pro-worker AI is chronically underinvested, which will surprise no one who has noticed that “we built a tool that makes humans more capable and irreplaceable” does not slap the same way AGI hype does. (More on AI & labor.)

📧 Learn With Me

My ​email courses​ break big, messy topics into small, digestible, actionable steps and practices. Each one comes with practical tools and frameworks I’ve created that you can apply immediately. (Or just complete​ the short survey​ and get one for free!)

💫 Work With Me

Here are 4 ways I can help:

* ​​​​​Facilitation:​​​​ I can help facilitate your team through complex and fraught dynamics, so that they can achieve their purpose.

* ​​​​​Advising:​​​​ I can help you navigate uncertainty, make sense of AI, and facilitate change in your system.

* ​​​​​Organizational Training:​​​​ Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change. (For either Stewarding AI or Systems Change for Tech & Society Leaders)

* ​​​​​1:1 Leadership Coaching:​​​​ I can help you facilitate change — in yourself, your organization, and the system you work within.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com
