AI-chemy 2: This Time It's Personal (Part 2)
18:09 Dr. Anya Fink from CNA’s Russia Studies Program joins the podcast to discuss the impact of global sanctions on Russia’s technology and AI sector. CNA report: A Technological Divorce: The impact of sanctions and the end of cooperation on Russia’s technology and AI sector.
AI-chemy 2: This Time It's Personal
23:50 Andy and Dave discuss the latest in AI news and research, including an update from DARPA on its Machine Common Sense program, demonstrating robots rapidly adapting to changing terrain, carrying dynamic loads, and understanding how to grasp objects [0:55]. The Israeli military fields new tech from Camero-Tech that allows operators to ‘see through walls,’ using pulse-based ultra-wideband micro-power radar in combination with an AI-based algorithm for tracking live targets [5:01]. In autonomous shipping [8:13], the Suzaka, a cargo ship powered by Orca AI, makes a nearly 500-mile voyage “without human intervention” for 99% of the trip; the Prism Courage sails from the Gulf of Mexico to South Korea “controlled mostly” by HiNAS 2.0, a system by Avikus, a subsidiary of Hyundai; and Promare’s and IBM’s Mayflower Autonomous Ship travels from the UK to Nova Scotia. In large language models [10:09], a Chinese research team unveils a 174-trillion-parameter model, Bagualu (‘alchemist pot’), and claims it runs an AI model as sophisticated as a human brain (not quite, though); Meta releases the largest open-source AI language model, OPT-66B, a 66-billion-parameter model; and Russia’s Yandex opens its 100-billion-parameter YaLM to public access. Researchers from the University of Chicago publish a model that can predict future crimes “one week in advance with about 90% accuracy” (referring to general crime levels, not specific people and exact locations), and also demonstrate the potential effects of bias in police response and enforcement [13:32]. In a similar vein, researchers from Berkeley, MIT, and Oxford publish attempts to forecast future world events using the neural network system Autocast, and show that forecasting performance still comes in far below a human-expert baseline [16:37]. Angelo Cangelosi and Minoru Asada provide the (graduate) book of the week with Cognitive Robotics.
the sentience of the lamdas
41:02 Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense releasing its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure that AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan’s “infamously toxic” Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly drawing condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they warned this would happen. For the Fun Site of the Week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems. Visit CNA.org to explore the links mentioned in this episode.
RAI, consumers’ co-operative
44:23 CNA colleagues Kaia Haney and Heather Roff join Andy and Dave to discuss Responsible AI, beginning with the recent Inclusive National Security seminar on AI and National Security: Gender, Race, and Algorithms. The keynote speaker, Elizabeth Adams, spoke on the challenges that society faces in integrating AI technologies in an inclusive fashion, and she identified ways in which consumers of AI-enabled products can ask questions and engage on the topic of inclusivity and bias. The group also discusses the many challenges that organizations face in operationalizing these ideas, including a revisit of the findings from recent medical research, which found an algorithm was able to identify the race of a subject from X-rays and CT scans, even with identifying features removed. Inclusive National Security Series: AI and National Security: Gender, Race, and Algorithms. Inclusive National Security webpage. Sign up for the InclusiveNatSec mailing list.
Top Gan: Swarmaverick
37:26 Andy and Dave discuss the latest in AI news and research, starting with an announcement that DoD will be updating its Directive 3000.09, “Autonomy in Weapon Systems,” with the new Emerging Capabilities Policy Office leading the way [1:25]. The DoD names Diane Staheli as the new chief for Responsible AI [5:19]. NATO launches an AI strategic initiative, Horizon Scanning, to better understand AI and its potential military implications [6:31]. China unveils an autonomous drone-carrier ship, though Dave questions the use of the terms “unmanned” and “autonomous” [8:59]. Stanford University’s Human-Centered AI Institute builds on its foundation-models initiative by releasing a call to the community for developing norms on the release of foundation models [10:42]. DECIDE-AI continues to develop its reporting guidelines for early-stage clinical evaluation of AI decision-support systems [14:39]. The Army successfully demonstrates four waves of seven drones, launched by a single operator, during EDGE 22 [18:31]. Researchers from Zhejiang University and the Hong Kong University of Science and Technology demonstrate a swarm of physical micro flying robots, fully autonomous and able to navigate and communicate as a swarm, with fully onboard perception, localization, and control [19:58]. Google Research introduces a new text-to-image generator, Imagen, which uses diffusion models to increase the size and photorealism of an image [24:20]. Researchers discover that an AI algorithm can identify race from X-ray and CT images, even when correcting for variations such as body-mass index, but can’t explain why or how [31:21]. And Sonantic uses AI to create the voice lines for Val Kilmer in the new movie Top Gun: Maverick [34:18]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7.
El Gato Altinteligento
42:55 Andy and Dave discuss the latest in AI news and research, starting with the European Parliament adopting the final recommendations of the Special Committee on AI in a Digital Age (AIDA), which finds, among other recommendations, that the EU should not always regulate AI as a technology but should use intervention proportionate to the type of risk [1:31]. Synchron enrolls the first patient in the U.S. clinical trial of its brain-computer interface, Stentrode, which does not require drilling into the skull or open brain surgery; it is, at present, the only company to receive FDA approval to conduct clinical trials of a permanently implanted BCI [4:14]. Meta AI releases its 175-billion-parameter transformer, Open Pre-trained Transformer (OPT), for open use, including the codebase used to train and deploy the model and the logbook of issues and challenges [6:25]. In research, DeepMind introduces Gato, a “single generalist agent” that, with a single set of weights, is able to complete over 600 tasks, including chatting, playing Atari games, captioning images, and stacking blocks with a robotic arm; one DeepMind scientist used the results to claim that “the game is over” and it’s all about scale now, to which others reply that using massive amounts of data as a substitute for intelligence is perhaps “alt intelligence” [8:48]. In the opinion essay of the week, Steven Johnson pens “AI is mastering language, should we trust what it says?” [18:07]. Daedalus’s Spring 2022 issue focuses on AI and Society, with nearly 400 pages and over 25 essays on a variety of AI-related topics [19:06]. And finally, Professor Ido Kanter from Bar-Ilan University joins to discuss his latest neuroscience research, which suggests a new model for how neurons learn, using dendritic branches [20:48]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7. Apply: Sr. Research Specialist (Artificial Intelligence Research), ESDA Division. Further Reading
Leggo my Stego!
28:33 Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly its lack of accounting for the full costs to develop and operate such systems, and recommends that the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning at Lyft and formerly of the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven’s GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, how humans form impressions of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. Researchers from MIT, Cornell, Google, and Microsoft present STEGO (self-supervised transformer with energy-based graph optimization), a new method for completely unsupervised label assignment to images, allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas for research topics [24:24].
Careers: https://us61e2.dayforcehcm.com/CandidatePortal/en-US/CNA/Posting/View/1624 RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm EST on June 7 at https://www.eventbrite.com/e/ai-and-national-security-gender-race-and-algorithms-tickets-332642301077?aff=Podcast
The Amulet of NeRFdor
38:10 Andy and Dave discuss the latest in AI news and research, including a proposal from the Ada Lovelace Institute with 18 recommendations to strengthen the EU AI Act. [0:57] NVIDIA updates its Neural Radiance Fields to Instant NeRF, which can reconstruct a 3D scene from 2D images nearly 1,000 times faster than other implementations. [2:53] Nearly 100 Chinese-affiliated researchers publish a 200-page position paper, a “roadmap,” on large-scale models. [4:13] In research, Google AI introduces PaLM (Pathways Language Model), at 540 billion parameters, which demonstrates capabilities in logical inference and joke explanation. [7:09] OpenAI announces DALL-E 2, the successor to its previous image-from-text generator, which is no longer confused when an item is mislabeled; interestingly, it demonstrates greater resolution and diversity than OpenAI’s similar technology, GLIDE, though human raters did not rate it as highly, and DALL-E 2 still has challenges with ‘binding attributes.’ [11:32] A white paper from Gary Marcus, ‘Deep Learning Is Hitting a Wall: What would it take for AI to make real progress?’, includes an examination of a symbol-manipulation system that beat the best deep learning systems at the ASCII game NetHack. [16:10] Professor Chad Jenkins from the University of Michigan returns to discuss the latest developments, including the upcoming Department of Robotics and a robotics undergraduate degree. [19:10] https://www.cna.org/CAAI/audio-video
Bridge on the River NukkAI
34:40 Andy and Dave discuss the latest in AI news and research, including DoD’s 2023 budget for research, development, test, and evaluation at $130B, around 9.5% higher than the previous year. DARPA announces the “In the Moment” (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is unavailable. The European Parliament’s Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations, though the report is still in draft (including a recommendation that the EU should not regulate AI as a technology, but rather focus on risk). Other EP committees debated the proposal for an “AI Act” on 21 March, with speakers including Tegmark, Russell, and many others. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies. In research, a brain implant allows a fully paralyzed patient to communicate solely by “thought,” using neurofeedback. Researchers from Collaborations Pharmaceuticals and King’s College London discover that they could repurpose their AI drug-discovery system to instead generate 40,000 possible chemical weapons. And NukkAI holds a bridge competition and claims its NooK AI “beats eight world champions,” though others take exception to the methods. And Kevin Pollpeter, from CNA’s China Studies Program, joins to discuss the role (or lack thereof) of Chinese technology in the Ukraine-Russia conflict. https://www.cna.org/news/AI-Podcast
A PIG GR_PH
34:48 Andy and Dave discuss the latest in AI news and research, including an announcement that Ukraine’s defense ministry has begun to use Clearview AI’s facial recognition technology, and that Clearview AI has not offered the technology to Russia [1:10]. In similar news, WIRED provides an overview of a topic mentioned in the previous podcast: using open-source information and facial recognition technology to identify Russian soldiers [2:46]. The Department of Defense announces its classified Joint All-Domain Command and Control (JADC2) implementation plan and also provides an unclassified strategy [3:24]. Stanford University Human-Centered AI (HAI) releases its 2022 AI Index Report, with over 200 pages of information and trends related to AI [5:03]. In research, DeepMind, Oxford, and Athens University present Ithaca, a deep neural network for restoring ancient Greek texts that also provides geographic and chronological attribution; they designed the system to work *with* ancient historians, and the combination achieves a lower error rate (18.3%) than either alone [10:24]. NIST continues refining its taxonomy for identifying and managing bias in AI, covering systemic bias, human bias, and statistical/computational bias [13:51]. Springer-Verlag makes Metalearning, by Pavel Brazdil, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren, available for download; the book provides a comprehensive introduction to metalearning and automated machine learning [15:28]. And finally, CNA’s Dr. Anya Fink joins Andy and Dave for a discussion of the uses of disinformation in the Ukraine-Russia conflict [17:15]. https://www.cna.org/CAAI/audio-video