Live Breaking News & Updates on Sentience Politics | Page 10

Stay updated with breaking news on Sentience Politics. Get real-time updates on events, politics, business, and more. Visit us for reliable news and exclusive interviews.

Closer to sentience? Snapchat's My AI chatbot posts a story on its own, then ghosts users, leaving them scared

Snapchat's AI chatbot, My AI, sparked confusion and fear among users when it posted a story on its own and then refused to respond to questions about it. Snapchat says the incident was a temporary outage. ....

Evan Spiegel , Divyanshi Sharma , Snapchat Plus , Jing Ai , AI Chatbot , Popular Fiction , My AI App , Social Media , Temporary Outage , Snapchat Plus Subscribers , Messaging Service

Transcripts for BBCNEWS Click 20240604 04:51:00

More control over to automated systems, because they are just faster and better than us, and then every so often something goes wrong, a bit like the financial crisis. Sure, I think that's a legitimate concern. It's really important to recognise that these systems are fallible. They are not purveyors of the truth, they are not omnipotent, they are not gods, right? They are only as good as their training data, and as it turns out, you can already see with ChatGPT that they have this propensity to lie, to hallucinate, to make stuff up. They are not infallible. By giving these machines undue sentience or capabilities, I think we are actually stripping away our own agency. These systems are still very much in the control of organisations and people, and over the coming months and years we have a chance to think clearly and strategically about how they are going to be integrated into society. Thanks so much, Nina, a lot to think about. And now for a creative look ....

Training Data , To Hallucinate , Nina Schick

Transcripts for BBCNEWS Click 20240604 13:51:00

And then every so often something goes wrong, a bit like the financial crisis. Sure, I think that's a legitimate concern. It's really important to recognise that these systems are fallible. They are not purveyors of the truth, they are not omnipotent, they are not gods, right? They are only as good as their training data, and as it turns out, you can already see with ChatGPT that they have this propensity to lie, to hallucinate, to make stuff up. They are not infallible. By giving these machines undue sentience or capabilities, I think we are actually stripping away our own agency. These systems are still very much in the control of organisations and people, and over the coming months and years we have a chance to think clearly and strategically about how they are going to be integrated into society. Thanks so much, Nina, a lot to think about. And now for a creative look at what could happen if AI did start to take over. Yeah, we have been to the Misalignment M ....

To Hallucinate , San Francisco , Misalignment Museum

Transcripts for BBCNEWS Click 20240604 00:50:00

Be fundamentally different. Now there is this debate going on about whether this is going to augment us or automate us. Ultimately I think it will become a very political issue, because it will be both. And the really interesting thing is that this is true now, for the first time, for white-collar work. Do you think there actually is a worry that we are giving more control over to automated systems, because they are just faster and better than us, and then every so often something goes wrong, a bit like the financial crisis? Sure, I think that's a legitimate concern. It's really important to recognise that these systems are fallible. They are not purveyors of the truth, they are not omnipotent, they are not gods, right? They are only as good as their training data, and as it turns out, you can already see with ChatGPT that they have this propensity to lie, to hallucinate, to make stuff up. They are not infallible. By giving these machines undue sentience or capabilities, I think we ....

Training Data , To Hallucinate

Transcripts for BBCNEWS Click 20240604 12:51:00

For white-collar work. Do you think there actually is a worry that we are giving more control over to automated systems, because they are just faster and better than us, and then every so often something goes wrong, a bit like the financial crisis? Sure, I think that's a legitimate concern. It's really important to recognise that these systems are fallible. They are not purveyors of the truth, they are not omnipotent, they are not gods, right? They are only as good as their training data, and as it turns out, you can already see with ChatGPT that they have this propensity to lie, to hallucinate, to make stuff up. They are not infallible. By giving these machines undue sentience or capabilities, I think we are actually stripping away our own agency. These systems are still very much in the control of organisations and people, and over the coming months and years we have a chance to think clearly and strategically about how they are going to be ....

Training Data , To Hallucinate