Engineering Enablement by DX podcast

AI and productivity: A year-in-review with Microsoft, Google, and GitHub researchers


As AI adoption accelerates across the software industry, engineering leaders are increasingly focused on a harder question: determining whether these tools actually improve developer experience and organizational outcomes.

In this year-end episode of the Engineering Enablement podcast, host Laura Tacho is joined by Brian Houck from Microsoft, Collin Green and Ciera Jaspan from Google, and Eirini Kalliamvakou from GitHub to examine what 2025 research reveals about AI's impact on engineering teams. The panel discusses why measuring AI's effectiveness is inherently complex, why familiar metrics like lines of code continue to resurface despite their limitations, and how multidimensional frameworks such as SPACE and DORA provide a more accurate view of developer productivity.


The conversation also looks ahead to 2026, exploring how AI is beginning to reshape the role of the developer, how junior engineers’ skill sets may evolve, where agentic workflows are emerging, and why some widely shared AI studies were misunderstood. Together, the panel offers a grounded perspective on moving beyond hype toward more thoughtful, evidence-based AI adoption.

Where to find Brian Houck:

• LinkedIn: https://www.linkedin.com/in/brianhouck/ 

• Website: https://www.microsoft.com/en-us/research/people/bhouck/ 


Where to find Collin Green: 

• LinkedIn: https://www.linkedin.com/in/collin-green-97720378 

• Website: https://research.google/people/107023


Where to find Ciera Jaspan: 

• LinkedIn: https://www.linkedin.com/in/ciera 

• Website: https://research.google/people/cierajaspan/


Where to find Eirini Kalliamvakou: 

• LinkedIn: https://www.linkedin.com/in/eirini-kalliamvakou-1016865/

• X: https://x.com/irina_kAl 

• Website: https://www.microsoft.com/en-us/research/people/eikalli


Where to find Laura Tacho: 

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course


In this episode, we cover:

(00:00) Intro

(02:35) Introducing the panel and the focus of the discussion

(04:43) Why measuring AI’s impact is such a hard problem

(05:30) How Microsoft approaches AI impact measurement

(06:40) How Google thinks about measuring AI impact

(07:28) GitHub’s perspective on measurement and insights from the DORA report

(10:35) Why lines of code is a misleading metric

(14:27) The limitations of measuring the percentage of code generated by AI

(18:24) GitHub’s research on how AI is shaping the identity of the developer

(21:39) How AI may change junior engineers’ skill sets

(24:42) Google’s research on using AI and creativity 

(26:24) High-leverage AI use cases that improve developer experience

(32:38) Open research questions for AI and developer productivity in 2026

(35:33) How leading organizations approach change and agentic workflows

(38:02) Why the METR paper resonated and how it was misunderstood


Referenced:

Measuring AI code assistants and agents

Kiro

Claude Code - AI coding agent for terminal & IDE

SPACE framework: a quick primer

DORA | State of AI-assisted Software Development 2025

Martin Fowler - by Gergely Orosz - The Pragmatic Engineer

Seamful AI for Creative Software Engineering: Use in Software Development Workflows | IEEE Xplore

AI Where It Matters: Where, Why, and How Developers Want AI Support in Daily Work - Microsoft Research

Unpacking METR’s findings: Does AI slow developers down?

DX Annual 2026
