
Time-Series Data Quality and Reliability for Manufacturing AI: Bert Baeck - Co-Founder and CEO, Timeseer.AI
Most data-quality initiatives focus on things like freshness or schema. That works for IT data, but not for sensor data. Sensor data is different: it reflects physics. To trust it, you need contextual, physics-aware checks. That means spotting:

→ Impossible jumps
→ Flatlines (long quiet periods)
→ Oscillations
→ Broken causal patterns (e.g., valve opens → flow should increase)

(A minimal sketch of these checks follows at the end of this post.)

It’s no surprise that poor data quality is one of the biggest reasons manufacturers struggle to scale AI initiatives. This isn’t just data science; it’s operations science. Think of data quality as infrastructure: a trust layer between your OT data sources and your AI tools. Making that real requires four building blocks:

1. 𝐒𝐜𝐨𝐫𝐢𝐧𝐠 – physics-aware anomaly rules and baselines
2. 𝐌𝐨𝐧𝐢𝐭𝐨𝐫𝐢𝐧𝐠 – continuous validation at the right cadence (real-time or daily)
3. 𝐂𝐥𝐞𝐚𝐧𝐢𝐧𝐠 & 𝐕𝐚𝐥𝐢𝐝𝐚𝐭𝐢𝐨𝐧 – auto-fix what you can; escalate what you can’t
4. 𝐔𝐧𝐢𝐟𝐨𝐫𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧 & 𝐒𝐋𝐀𝐬 – define “good enough” and enforce it before data is consumed

(A second sketch below shows how scoring plus an SLA gate could look in code.)

Why it matters:
✅ Data teams – less cleansing, faster delivery
✅ AI models – reliable inputs = repeatable results
✅ Ops teams – catch failing sensors before downtime
✅ Business – avoid safety incidents, billing errors, and bad decisions

In the latest episode of the AI in Manufacturing podcast, I sat down with Bert Baeck, Co-Founder of Timeseer.AI, to discuss time-series data quality and reliability strategies for AI in manufacturing applications.
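
To make the physics-aware checks above concrete, here is a minimal, self-contained Python sketch of the four patterns (jumps, flatlines, oscillations, broken causality) on a single sensor tag, using plain pandas. The thresholds, window sizes, and function names are illustrative assumptions chosen for this example; they are not Timeseer.AI’s implementation.

```python
import numpy as np
import pandas as pd


def check_impossible_jumps(series: pd.Series, max_rate_per_s: float) -> pd.Series:
    """Flag samples whose rate of change exceeds what the physical asset allows."""
    dt = series.index.to_series().diff().dt.total_seconds()
    rate = series.diff().abs() / dt
    return rate > max_rate_per_s


def check_flatline(series: pd.Series, window: str, tol: float = 1e-6) -> pd.Series:
    """Flag long quiet periods: a live sensor should show at least some noise."""
    return series.rolling(window).std() < tol


def check_oscillation(series: pd.Series, window: int = 20, max_sign_flips: int = 12) -> pd.Series:
    """Flag rapid back-and-forth movement, e.g. a hunting control loop."""
    flips = np.sign(series.diff()).diff().abs().eq(2).astype(int)
    return flips.rolling(window).sum() > max_sign_flips


def check_causal_pattern(valve_open: pd.Series, flow: pd.Series, min_flow: float = 0.5) -> pd.Series:
    """While the valve reads open, flow should exceed min_flow; flag violations.

    Response-time lag is ignored here to keep the sketch short.
    """
    return valve_open.astype(bool) & (flow < min_flow)


if __name__ == "__main__":
    # Synthetic 10-minute temperature signal with two injected faults.
    idx = pd.date_range("2024-01-01", periods=600, freq="1s")
    temp = pd.Series(np.random.normal(80.0, 0.2, 600), index=idx)
    temp.iloc[200] += 50.0                 # inject an impossible jump
    temp.iloc[300:400] = temp.iloc[299]    # inject a flatline
    bad = (check_impossible_jumps(temp, max_rate_per_s=5.0)
           | check_flatline(temp, window="60s"))
    print(f"{int(bad.sum())} suspect samples out of {len(temp)}")
```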
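
And a similarly hedged sketch of the scoring / cleaning / SLA idea from the four building blocks: compute a per-sensor quality score from the check flags, auto-repair short gaps, and refuse to hand data to downstream AI models when the agreed SLA is not met. The 95% threshold, the forward-fill repair, and the function names are assumptions for illustration only.

```python
import pandas as pd

SLA_MIN_GOOD_FRACTION = 0.95  # "good enough" threshold agreed with data consumers


def quality_score(flags: pd.Series) -> float:
    """Fraction of samples that passed all physics-aware checks."""
    return 1.0 - float(flags.mean())


def clean_or_escalate(series: pd.Series, flags: pd.Series) -> pd.Series:
    """Auto-fix what you can (short gaps); escalate what you can't (long runs)."""
    repaired = series.mask(flags).ffill(limit=3)  # fill at most 3 consecutive bad samples
    if repaired.isna().any():
        raise ValueError("Quality issue too large to auto-fix; escalate to ops")
    return repaired


def enforce_sla(series: pd.Series, flags: pd.Series) -> pd.Series:
    """Gate the data before AI models consume it."""
    score = quality_score(flags)
    if score < SLA_MIN_GOOD_FRACTION:
        raise ValueError(f"Sensor quality {score:.1%} is below the SLA of {SLA_MIN_GOOD_FRACTION:.0%}")
    return clean_or_escalate(series, flags)
```

In practice the threshold, repair limits, and escalation path would come from the SLA agreed between the data producers and the consumers listed under “Why it matters”.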