How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE
Asst. Professor in Philosophy of AI · Macquarie University
I'd like to focus more on the immediate harms that the kinds of AI
technologies we have today might pose. With language models, the kind of
technology that powers ChatGPT and other chatbots, there are harms that
might result from regular use of these systems, and then there are harms
that might result from malicious use. Regular use would be how you and I
might use ChatGPT and other chatbots to do ordinary things. There is a
concern that these systems might reproduce and amplify, for example, racist
or sexist biases, or spread misinformation. These systems are known to, as
researchers put it, “hallucinate” in some cases, making up facts or false
citations. And then there are the harms from malicious use, which might
result from some bad actors using the systems for nefarious purposes. That
would include disinformation on a mass scale. You could imagine a bad actor
using language models to automate the creation of fake news and propaganda
to try to manipulate voters, for example. And this takes us into the medium
term future, because we're not quite there, but another concern would be
language models providing dangerous, potentially illegal information that
is not readily available on the internet for anyone to access. As they get
better over time, there is a concern that in the wrong hands, these systems
might become quite powerful weapons, at least indirectly, and so people
have been trying to mitigate these potential harms.