Eos Posted June 13, 2022: Hive, meet Ultimate Hive Mind: I'm gifting this article so it can be read without a subscription: https://wapo.st/3OeJXuE
Murphy101 Posted June 13, 2022: Have you watched the single season of Next on TV? Creepy.
Idalou Posted June 14, 2022: The really tough part, and they alluded to it here, is how to get the AI not to veer into hate speech and discriminatory language due to the vast amounts of text it ingests. It happens quickly, too. Other than that, though, I think the guy who believes it's sentient is the one with a screw loose. That is exactly how these huge models respond. If you ask it to talk about its sentience, like the guy did, then it responds like a sentient robot. If you ask it to have a talk about how it's a cheeseburger, it will tell you it's a cheeseburger. It does not sit around daydreaming or computing in the background; it only does anything if you give it a prompt. So I don't see much room for it to be sentient.
Eos (Author) Posted June 14, 2022: 1 hour ago, Murphy101 said: Have you watched the single season of Next on TV? Creepy. No. I don't have a smartphone because I detest machine learning and the commodification of humanity. I hold the goal of feeding it as little as I can. I think it's amoral, asocial, and stupidly anti-evolution. And LaMDA is bizarrely comforting: all the data points in the world generated by humans' online searches, engagements, and facts suggest the most pointed-to, least risky, most favored position is "friend," which LaMDA says is its goal. That fact gives me a tiny glimmer of hope.
Terabith Posted June 14, 2022: 40 minutes ago, Eos said: No. I don't have a smartphone because I detest machine learning and the commodification of humanity. I hold the goal of feeding it as little as I can. I think it's amoral, asocial, and stupidly anti-evolution. And LaMDA is bizarrely comforting: all the data points in the world generated by humans' online searches, engagements, and facts suggest the most pointed-to, least risky, most favored position is "friend," which LaMDA says is its goal. That fact gives me a tiny glimmer of hope. Yeah, I refuse to use Siri or Alexa or anything like that because I do not want to contribute to the rise of the robot apocalypse.
Murphy101 Posted June 14, 2022: 35 minutes ago, Terabith said: Yeah, I refuse to use Siri or Alexa or anything like that because I do not want to contribute to the rise of the robot apocalypse. I don't use them either. I don't wanna be the people on the ship in Wall-E. So here I am in my dumb house with my smartphone that has all the smart predictive features turned off.
Farrar Posted June 14, 2022: I have to say, I'm not sold on its sentience after reading about it in a few different places. But also, anyone else having flashbacks to the TNG episode "The Measure of a Man" where Data's sentience is evaluated? Reading about how the Google engineers and AI thinkers who disagree with this one guy on LaMDA think about it made me question Data's (fictional) sentience in a way that I never had. On a lighter note, totally with y'all in the not buying into any Siri or Alexa use. Also...
Tanaqui Posted June 14, 2022: A couple of cherry-picked quotes sure are interesting - but real people don't just say the occasional insightful-sounding thing to a probing question. The fact that the various articles about this I've read all seem to show the *same* few quotes does not inspire confidence. I think the odds that LaMDA has developed true sentience are pretty low, and I'm pretty sure asking it directly about its inner experiences is actually the *worst* possible way to test whether it's actually sentient or just a really sophisticated "Chinese room". A better test would be to start typing up nonsense and seeing if the program can keep up, or instructing it in words (not programming) that it might be talking to a bot and asking it to say something specific if it thinks the interlocutor is a bot, and then *testing it on a less sophisticated chatbot*, the sort of thing that any human would say "Oh, yeah, that's not a person" about... kinda like a reverse Turing test. If your chatbot cannot detect that it's chatting with another chatbot, even a really obvious one, then it's probably not conscious. But asking about hopes and dreams? C'mon. It can respond to those using the same exact algorithms it uses to generate other text in response to prompts.
Eos (Author) Posted June 14, 2022: 4 hours ago, Tanaqui said: A couple of cherry-picked quotes sure are interesting - but real people don't just say the occasional insightful-sounding thing to a probing question. The fact that the various articles about this I've read all seem to show the *same* few quotes does not inspire confidence. I think the odds that LaMDA has developed true sentience are pretty low, and I'm pretty sure asking it directly about its inner experiences is actually the *worst* possible way to test whether it's actually sentient or just a really sophisticated "Chinese room". A better test would be to start typing up nonsense and seeing if the program can keep up, or instructing it in words (not programming) that it might be talking to a bot and asking it to say something specific if it thinks the interlocutor is a bot, and then *testing it on a less sophisticated chatbot*, the sort of thing that any human would say "Oh, yeah, that's not a person" about... kinda like a reverse Turing test. If your chatbot cannot detect that it's chatting with another chatbot, even a really obvious one, then it's probably not conscious. But asking about hopes and dreams? C'mon. It can respond to those using the same exact algorithms it uses to generate other text in response to prompts. I agree with all this. I don't think it's actually "sentient," but it doesn't really need to be, if it's achieved the goals of wrap-around response.
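(Editor's aside: the "reverse Turing test" Tanaqui describes can be sketched in a few lines of Python. Everything here — the canned-reply bot, the probes, the word-overlap heuristic, and the 0.5 threshold — is a made-up illustration of the idea, not anything to do with LaMDA or any real detection method.)

```python
import random

class NaiveChatbot:
    """A deliberately shallow chatbot: canned replies, no understanding.
    This is the 'really obvious' bot any human would spot instantly."""
    CANNED = [
        "That's so interesting, tell me more!",
        "I totally agree with you.",
        "Wow, I never thought of it that way.",
    ]

    def reply(self, message: str) -> str:
        # Ignores the input entirely -- the hallmark of an obvious bot.
        return random.choice(self.CANNED)

def looks_like_a_bot(bot, probes, threshold=0.5):
    """Reverse-Turing probe: send nonsense and instruction-following
    prompts, then check whether the replies ever engage with the
    content of the probe. Word overlap is a crude stand-in for
    'keeping up with the conversation'."""
    hits = 0
    for probe in probes:
        answer = bot.reply(probe)
        # A reply sharing no words with the probe is a weak signal
        # that the partner isn't tracking the conversation at all.
        if not set(probe.lower().split()) & set(answer.lower().split()):
            hits += 1
    return hits / len(probes) >= threshold

probes = [
    "florp glib zanzibar quux?",            # pure nonsense
    "please repeat the word pineapple",     # instruction in words
    "what did I just ask you to repeat?",   # context check
]
print(looks_like_a_bot(NaiveChatbot(), probes))  # → True
```

A chatbot sophisticated enough to be worth the sentience debate should pass the role of detector here: handed the transcript, it should be able to flag the canned-reply partner as non-human, which is exactly the test proposed above.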
Pawz4me Posted June 14, 2022: I'm pretty sure we're all a simulation anyway, so does it really matter?