AWS Bites podcast

153. LLM Inference with Bedrock

6/3/2026
Duration: 43:25

If you’re curious about building with LLMs but want to skip the hype and learn what it takes to ship something reliable in production, this episode is for you.

We share our real-world experience building AI-powered apps and the gotchas you hit after the demo: tokens and cost, quotas and throttling, IAM and access friction, marketplace subscriptions, and structured outputs that do not break your JSON parser.

We focus on Amazon Bedrock as AWS’s managed inference layer: how to get started with the current access model, how to choose models, how pricing works, and what to watch for in production.

We also go deep on structured outputs: constrained decoding, schema design that improves output quality, and how to avoid “grammar compilation timed out” errors.
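As a taste of the structured-outputs topic discussed in the episode, here is a minimal sketch of constraining a model on Bedrock to schema-shaped JSON via the Converse API's tool-use mechanism. The schema fields, tool name, and model ID below are illustrative assumptions, not something from the episode; check model availability and access in your own region.

```python
import json

# JSON Schema the model's output must conform to (assumed example fields).
EPISODE_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "topics": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "topics"],
}


def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build a Bedrock Converse request that forces a tool call,
    so the model's answer arrives as JSON matching the schema."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "toolConfig": {
            "tools": [
                {
                    "toolSpec": {
                        "name": "emit_episode",
                        "description": "Return the extracted episode metadata.",
                        "inputSchema": {"json": EPISODE_SCHEMA},
                    }
                }
            ],
            # Force the model to answer through the tool (constrained output).
            "toolChoice": {"tool": {"name": "emit_episode"}},
        },
    }


request = build_converse_request(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
    "Extract the title and topics from these show notes: ...",
)
print(json.dumps(request["toolConfig"], indent=2))

# With AWS credentials and model access in place, send it with boto3:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
#   payload = response["output"]["message"]["content"][0]["toolUse"]["input"]
```

The network call is left commented out so the sketch runs without credentials; the parsed `toolUse` input is already a Python dict that should validate against the schema, which is the point of constrained decoding over free-text JSON.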


In this episode, we mentioned the following resources:


Do you have any AWS questions you would like us to address?

Leave a comment here or connect with us on X/Twitter, BlueSky or LinkedIn:


- https://twitter.com/eoins | https://bsky.app/profile/eoin.sh | https://www.linkedin.com/in/eoins/

- https://twitter.com/loige | https://bsky.app/profile/loige.co | https://www.linkedin.com/in/lucianomammino/

More episodes of "AWS Bites"