TechFirst with John Koetsier podcast

AGI: will it kill us or save us?


Artificial general intelligence (AGI) could be humanity’s greatest invention ... or our biggest risk.


In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.


We cover:

• Is AGI inevitable? How soon will it arrive?

• Will AGI kill us … or save us?

• Why decentralization and blockchain could make AGI safer

• How large language models (LLMs) fit into the path toward AGI

• The risks of an AGI arms race between the U.S. and China

• Why Ben Goertzel created MeTTa, a new AGI programming language


📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.


⏱️ Chapters


00:00 – Intro: Will AGI kill us or save us?

01:02 – Ben Goertzel in Istanbul & the Beneficial AGI Conference

02:47 – Is AGI inevitable?

05:08 – Defining AGI: generalization beyond programming

07:15 – Emotions, agency, and artificial minds

08:47 – The AGI arms race: U.S. vs. China vs. decentralization

13:09 – Risks of narrow or bounded AGI

15:27 – Decentralization and open-source as safeguards

18:21 – Can LLMs become AGI?

20:18 – Using LLMs as reasoning guides

21:55 – Hybrid models: LLMs plus reasoning engines

23:22 – Hallucination: humans vs. machines

25:26 – How LLMs accelerate AI research

26:55 – How close are we to AGI?

28:18 – Why Goertzel built a new AGI language (MeTTa)

29:43 – MeTTa: from AI coding to smart contracts

30:06 – Closing thoughts
