
Is your engineering team wasting budget and sacrificing latency by pre-computing data that most users never see? Chalk co-founder Elliot Marx joins Andrew Zigler to explain why the future of AI relies on real-time pipelines rather than traditional storage. They dive into solving compute challenges for major fintechs, the value of incrementalism, and Elliot's thoughts on why strong fundamental problem-solving skills still beat specific language expertise in the age of AI assistants.
Join our AI Productivity roundtable: 2026 Benchmarks Insights
*This episode was recorded live at the Engineering Leadership Conference.
Follow the show:
- Subscribe to our Substack
- Follow us on LinkedIn
- Subscribe to our YouTube Channel
- Leave us a Review
Follow the hosts:
Follow today's guest(s):
OFFERS
- Start Free Trial: Get started with LinearB's AI productivity platform for free.
- Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.
LEARN ABOUT LINEARB
- AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
- AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
- AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
- MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.