Re: Thing: Artificial Intelligence
Posted: Sun Dec 10, 2023 4:11 pm
Anthony Flack wrote: Sun Aug 13, 2023 7:03 pm
They don't have consciousness and may never have consciousness.

What would make this unlikely?
kokorodoko wrote: Sun Dec 10, 2023 4:11 pm
Anthony Flack wrote: Sun Aug 13, 2023 7:03 pm
They don't have consciousness and may never have consciousness.
What would make this unlikely?

Because we don't know how to create consciousness, and we shouldn't assume it will emerge spontaneously inside a neural network that lacks most of the functionality of a mammalian brain. I don't think it's as simple as throwing more nodes at it. Our consciousness is aware of only a small part of our brain function, so having brain function is, I suppose, not sufficient to produce consciousness. There must be more to it. I assume there is some kind of organ of consciousness in there, some process which we completely do not understand right now.
InMySoul77 wrote: Sun Dec 10, 2023 9:03 pm
But an AI will never truly replicate human intelligence, because information is only one form that humans' input/output functions take on, and data/information is the lifeblood of AIs. Humans "think" just as much with their instincts and emotions as they do with their logical and verbal abilities and capacities for memory. When you prompt a human to compose a song or paint a picture, he's going to rely on muscle memory, the tingling in his central nervous system and the level of dopamine in his body just as much as he will utilize a preset schema of notes, colors and lines/curves.

True, but instincts and emotions also rely on information, no? And an emotion is itself treated as information by whatever process follows upon it. Muscle memory is like a cache for an AI (see the sketch after this post).

On one level it's just arranging words. ChatGPT can make poems. Naturally, as a poem it has a quality and a character which is more than just words in a certain order, but that character it gets from its poem-ness, which it gets from being presented in the form of a poem, as this form is familiar to us. If an AI composes something which looks to you like a poem, it's the same kind of "looks like" you get when a human does it. There is no doubt that the degree of creativity a human can bring out of this process is vastly greater than an AI's, and maybe unsurpassably so. But then we're talking about empirical limits, as opposed to some secret thing, a creative genius or some such, which you have rejected. I see nothing in the process of making a poem which inherently makes the activity of an AI different from that of a human.

InMySoul77 wrote: Sun Dec 10, 2023 9:03 pm
Philosophy since Nietzsche has been keen to kill off Cartesian dualism; "AI" is another "ghost in the machine" theory that posits a pure, immaterial, disembodied subject, uninfluenced by subtle bodily forces.

I think we can without problem consider an AI as the sum of its operations, in which case we don't need an extra thing like that. And we can do the same for humans, but then the difference would lie in the operations. For a human, they would be located in a body, with a genetic-evolutionary history making up precisely those parts that are involved in the operations. So unless an AI were simply a human-created human (in which case it would still be slightly different from other humans, in that it would be conscious of itself as being such, although this way of being human could and should then be incorporated into the concept of human), these operations would remain distinct for it. And if we take these operations to be defining for its consciousness, the AI would remain distinct from humans in how it operates: if, say, it remains without a body, whatever in human intelligence depends on having a body would not work the same way for it. But even then, we concede that parts of what we call "intelligence" in humans are processes which transcend humans, and which, viewed strictly in themselves, are identical no matter where they are found.
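A minimal gloss of that cache analogy, as a hypothetical Python sketch: functools.lru_cache is the standard-library memoization decorator, while play_scale is invented purely for illustration. The first performance is worked out the slow way; repetition then replays the stored result, much as a practiced movement no longer passes through deliberate reasoning.

Code: Select all

from functools import lru_cache

@lru_cache(maxsize=None)
def play_scale(key: str) -> tuple[str, ...]:
    # Stand-in for the slow, deliberate working-out of a new skill.
    print(f"working out the {key} scale note by note...")
    return tuple(f"{key}:{note}" for note in ("do", "re", "mi", "fa", "sol", "la", "ti"))

play_scale("C major")  # first time: computed deliberately (prints)
play_scale("C major")  # afterwards: replayed from the cache (no print)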
kokorodoko wrote: Mon Dec 11, 2023 6:36 am
True, but instincts and emotions also rely on information, no? And an emotion is itself treated as information by whatever process follows upon it. Muscle memory is like a cache for an AI.
For sure an AI doesn't feel emotions, even though it might be able to learn what they are. But insofar as emotions are a prompt for decisions in a human, their content is treated as rational; the quality of feeling doesn't enter into explaining the resulting decision, other than as the statement of a preference. An AI could state a preference too, but again it wouldn't "sense" the preference, or register it differently than any other kind of input.

I'd argue that if an AI can't feel emotions, it can't be taught what they are. An emotion can't be encoded in zeroes and ones. You could have an on/off switch for the emotion "sadness", but just flipping it to the on position would be to encode data without really accomplishing anything human-like.
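To make that "on/off switch" point concrete, here is a minimal hypothetical Python sketch (the Agent class and all its names are invented for illustration): flipping the flag changes which branch the program takes, and nothing more.

Code: Select all

class Agent:
    def __init__(self):
        self.sadness = False  # one bit, labelled "sadness"

    def feel_sadness(self):
        self.sadness = True  # "turning the switch on" just writes data

    def respond(self, prompt: str) -> str:
        # Behaviour branches on the bit; the label "sadness"
        # does no work beyond selecting a branch.
        if self.sadness:
            return f"(sighs) {prompt.lower()}"
        return prompt

agent = Agent()
agent.feel_sadness()
print(agent.respond("Hello, world"))  # prints: (sighs) hello, world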
kokorodoko wrote: Mon Dec 11, 2023 6:36 am
For sure an AI doesn't feel emotions, even though it might be able to learn what they are. But insofar as emotions are a prompt for decisions in a human, their content is treated as rational; the quality of feeling doesn't enter into explaining the resulting decision, other than as the statement of a preference. An AI could state a preference too, but again it wouldn't "sense" the preference, or register it differently than any other kind of input.

I disagree with this. One can't know what fear is without experiencing it. One can't know what sadness is without experiencing it.
InMySoul77 wrote:
I think Kubrick was prophetic with his depiction of HAL as a murderous, power-hungry sociopath. Elements of HAL's character can be accomplished by AI designers, because there's nothing specifically human about deciding to kill a space crew out of the need for autonomy/survival. It's just a series of on/off switches. HAL also kicked ass at chess, because chess is not a game that requires all the subtle markers of true human intelligence. Ask HAL to write a new song for Joni Mitchell, though, and he'll be flummoxed. You hit on the reason why: we are animals, the product of billions of years of evolution, and therefore there is so much about our consciousness that we don't understand and that can't be replicated in a program.

This. Basically HAL was a classic total narcissist, ten on a scale of ten, with zero self-awareness except for its own survival.
enframed wrote: Mon Dec 11, 2023 5:10 pm
Basically HAL was a classic total narcissist, ten on a scale of ten, with zero self-awareness except for its own survival.

Yes. Knowledge is not necessarily intelligence. Someone who knows a lot has learned information and can regurgitate it when needed. Someone who is intelligent can, when put in a new situation, come up with creative or intuitive solutions that a machine might never think of.

Though, I would call AI as it is now a simulacrum of human knowledge, not intelligence.
InMySoul77 wrote: Tue Dec 12, 2023 5:37 am
enframed wrote: Mon Dec 11, 2023 5:10 pm
Basically HAL was a classic total narcissist, ten on a scale of ten, with zero self-awareness except for its own survival.
Yes. Knowledge is not necessarily intelligence. Someone who knows a lot has learned information and can regurgitate it when needed. Someone who is intelligent can, when put in a new situation, come up with creative or intuitive solutions that a machine might never think of.
Though, I would call AI as it is now a simulacrum of human knowledge, not intelligence.

All of this had me thinking of Roy in Blade Runner / Do Androids Dream of Electric Sheep?. Roy played, and found enjoyment in play. He played just for the joy of it, even though he knew he'd lose. Of course, it's entirely possible that all of that was in his programming by Tyrell. I mean, it was in his programming somehow: that ability, or the possibility of it.
AI, to me, is very representative of the negative extreme of Western consciousness. Achievement of goals and dominance are key. Narcissism is a by-product of the Western obsession with the primacy of the ego. Eastern philosophies, with their stress on intuitive consciousness and the value of the ineffable, are foreign to the AI mindset.

There have been stories in the media about AIs that turned out to be liars. But why shouldn't a machine lie to you, if it calculates that lying is how it would best attain its goals? Trust is a human value, as is altruism; we see both manifest in animal communities where there is no concept of language. AIs can master language better than any human, yet they fail these simple tests of what it means to be human.