Todd Horowitz (toddhorowitz@fediscience.org), Tuesday, 05-Dec-2023 01:33:23 UTC:
@charliejane This is absolutely correct. It's not just that we don't understand human consciousness well enough to replicate it in silico (though that is true). The bigger point here is about LLMs in particular. They have fooled a lot of people into thinking we have made a big leap down the path toward machine intelligence/consciousness, but that's an illusion. The major accomplishment of LLMs, in my view, is to invalidate the Turing test.
Smörhuvud (he/him/surprise me) (guncelawits@mastodon.social), Tuesday, 05-Dec-2023 10:27:47 UTC:
@toddhorowitz @charliejane @clacke Yes, and that foregrounds that we don’t know what we mean when we say “intelligence.”
Smörhuvud (he/him/surprise me) (guncelawits@mastodon.social), Tuesday, 05-Dec-2023 10:27:49 UTC:
@toddhorowitz @charliejane @clacke The Guncelawits Test is a test of a machine’s ability to earn a dog’s friendship.