
789: Do More With AI - LLMs With Big Token Counts
Join Scott and CJ as they dive into the fascinating world of AI, exploring topics from LLM token sizes and context windows to understanding input length. They discuss practical use cases and share insights on how web developers can leverage larger token counts to maximize the potential of AI and LLMs.
Show Notes
00:00 Welcome to Syntax!
01:31 Brought to you by Sentry.io.
02:42 What is a token? Quizgecko GPT-4 Token Counter.
04:22 Context window, sometimes called “max tokens”. OpenAI Platform Models. Claude Models.
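As a rough way to see why context window size matters, here is a minimal sketch that estimates token counts with the common ~4-characters-per-token heuristic and checks them against approximate context limits. The model names and limits below are assumptions based on figures current around the time of the episode; use a real tokenizer (like the GPT-4 token counter linked above) for exact numbers.

```js
// Rough sketch: estimate whether a prompt fits a model's context window.
// ~4 characters per token is a common English-text heuristic, not an exact count.
const CONTEXT_WINDOWS = {
  // Approximate published limits around the time of the episode (assumptions).
  "gpt-4-turbo": 128_000,
  "claude-3-opus": 200_000,
  "gemini-1.5-pro": 1_000_000,
};

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function fitsContextWindow(text, model) {
  const limit = CONTEXT_WINDOWS[model];
  if (limit === undefined) throw new Error(`Unknown model: ${model}`);
  return estimateTokens(text) <= limit;
}

// Example: roughly 8 hours of spoken-word transcript (~72,000 words).
const transcript = "word ".repeat(72_000);
console.log(estimateTokens(transcript));                      // ~90,000 tokens
console.log(fitsContextWindow(transcript, "gpt-4-turbo"));    // true, but close to the limit
console.log(fitsContextWindow(transcript, "gemini-1.5-pro")); // true, with plenty of room
```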
10:42 Understanding input length.
11:59 Models + services with big token counts. Gemini Docs.
13:22 Generating OpenAPI documentation for a complex API.
17:29 Generating JSDoc-style typing. Drop-In stolinski GitHub.
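For the JSDoc segment, here is a small illustration of the kind of typing an LLM can generate for untyped JavaScript, which editors and `tsc --checkJs` can then pick up without converting files to TypeScript. The `Episode` typedef and `formatTitle` function are made up for this example and are not taken from the Drop-In repo.

```js
// Hypothetical untyped helper plus the JSDoc typing an LLM might add for it.

/**
 * @typedef {Object} Episode
 * @property {number} number   - Episode number, e.g. 789
 * @property {string} title
 * @property {number} duration - Length in seconds
 */

/**
 * Formats an episode title for display.
 * @param {Episode} episode
 * @param {{ includeNumber?: boolean }} [options]
 * @returns {string}
 */
function formatTitle(episode, options = {}) {
  const prefix = options.includeNumber ? `${episode.number}: ` : "";
  return `${prefix}${episode.title}`;
}

console.log(formatTitle({ number: 789, title: "Do More With AI", duration: 2022 }));
```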
21:07 Generating seed data for a complex database. bytedash w3cj GitHub.
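For the seed-data segment, a minimal sketch of the general approach, assuming a schema.sql file and an OpenAI API key in the environment. The prompt, file names, and model choice are illustrative only and are not the actual bytedash setup; with a very large schema, a bigger-context model like Gemini is the point of the episode.

```js
// Illustrative sketch: paste an entire schema into the prompt and ask a
// large-context model for matching seed data. Runs on Node 18+ (global fetch).
import { readFile, writeFile } from "node:fs/promises";

async function generateSeedData() {
  const schema = await readFile("./schema.sql", "utf8"); // assumed schema file

  const prompt = [
    "Here is a SQL schema:",
    schema,
    "Generate 50 rows of realistic seed data per table as INSERT statements.",
    "Respect every foreign key relationship and return only SQL.",
  ].join("\n\n");

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // any large-context model works here
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const data = await res.json();
  await writeFile("./seed.sql", data.choices[0].message.content);
}

generateSeedData();
```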
24:34 Summarizing 8+ hours of video.
29:35 Some things we’ve yet to try.
31:32 What about cost? Google AI for Developers Cost.
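For the cost question, a back-of-the-envelope sketch: cost scales with input and output tokens billed per million. The per-million-token prices below are placeholder assumptions; check the linked pricing page for current rates.

```js
// Back-of-the-envelope cost estimate for a large-context request.
// Prices are placeholders, not current published rates.
const PRICE_PER_MILLION_INPUT_TOKENS = 3.5;   // assumed USD, input side
const PRICE_PER_MILLION_OUTPUT_TOKENS = 10.5; // assumed USD, output side

function estimateCost(inputTokens, outputTokens) {
  return (
    (inputTokens / 1_000_000) * PRICE_PER_MILLION_INPUT_TOKENS +
    (outputTokens / 1_000_000) * PRICE_PER_MILLION_OUTPUT_TOKENS
  );
}

// e.g. a 500,000-token transcript in, a 2,000-token summary out:
console.log(estimateCost(500_000, 2_000).toFixed(2)); // ≈ $1.77
```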
Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
CJ: X Instagram YouTube TwitchTV
Randy: X Instagram YouTube Threads