What do you think about Artificial Intelligence?

CRAP: 26 votes (79%)
NOT CRAP: 7 votes (21%)
Total votes: 33

Re: Thing: Artificial Intelligence

#72
I believe the term "artificial intelligence" is an oxymoron.

An AI can perform certain tasks. It can crawl the web to generate an answer to your question. It can compose a song. But an AI will never truly replicate human intelligence because information is only one form that humans' input/output functions take on, and data/information is the lifeblood of AIs.

Humans "think" just as much with their instincts and emotions as they do with their logical and verbal abilities and capacities for memory. When you prompt a human to compose a song or paint a picture, he's going to rely on muscle memory, the tingling in his central nervous system and the level of dopamine in his body just as much as he will utilize a pre set schema of notes, colors and lines/curves. He may be influenced by a dream he had the night before.

Objective data/information is a pool of resources that a thinking thing can utilize to accomplish a task. But intelligence manifests itself in many more contexts than just task accomplishment. Think about a poem. What task is being accomplished by a Shakespeare sonnet? It could just be a collection of statements that marinate your mind in ruminations/mystery. Philosophy since Nietzsche has been keen to kill off Cartesian dualism, and "AI" is another "ghost in the machine" theory that posits a pure, immaterial disembodied subject, uninfluenced by subtle bodily forces.

It's a lie that AI will obviate all the value that ordinary human experience brings to creative work. That's why we'll never see an AI Scorsese, Beethoven or Shakespeare. If an AI did compose some version of an Eroica symphony, I guarantee you that a "better" artwork would be a three-chord folk tune being banged out by a guy on the subway. Less perfect, but more human.

Re: Thing: Artificial Intelligence

#73
kokorodoko wrote: Sun Dec 10, 2023 4:11 pm
Anthony Flack wrote: Sun Aug 13, 2023 7:03 pm: They don't have consciousness and may never have consciousness.
What would make this unlikely?
Because we don't know how to create consciousness, and we shouldn't assume it will emerge spontaneously inside a neural network that lacks most of the functionality of a mammalian brain. I don't think it's as simple as throwing more nodes at it. Our consciousness is only aware of a small part of our brain function, so having brain function is, I suppose, not sufficient to produce consciousness. There must be more to it. I assume there is some kind of organ of consciousness in there, some process which we completely do not understand right now.

I wouldn't say it's UNLIKELY that we'll ever figure it out; maybe we will, maybe we won't.

Re: Thing: Artificial Intelligence

#74
InMySoul77 wrote: Sun Dec 10, 2023 9:03 pm: But an AI will never truly replicate human intelligence because information is only one form that humans' input/output functions take on, and data/information is the lifeblood of AIs.

Humans "think" just as much with their instincts and emotions as they do with their logical and verbal abilities and capacities for memory. When you prompt a human to compose a song or paint a picture, he's going to rely on muscle memory, the tingling in his central nervous system and the level of dopamine in his body just as much as he will utilize a pre set schema of notes, colors and lines/curves.
True, but instincts and emotions also rely on information, no? And an emotion itself is treated as information for whatever process follows upon it. Muscle memory is like a cache for an AI.
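To make the cache comparison concrete, here is a minimal Python sketch (the function and names are invented for illustration): a memoized routine replays a stored result instead of redoing the deliberate computation, roughly the way a practised motion bypasses working things out.

```python
# Toy sketch of "muscle memory as cache" (illustrative names only):
# the first call works the phrase out; later calls replay the stored
# result without redoing the computation.
from functools import lru_cache

@lru_cache(maxsize=None)
def play_phrase(notes: tuple) -> str:
    # Stand-in for a slow, deliberate "working it out" step.
    return " -> ".join(notes)

print(play_phrase(("C", "E", "G")))   # worked out note by note
print(play_phrase(("C", "E", "G")))   # replayed straight from the cache
print(play_phrase.cache_info().hits)  # 1: the second call never recomputed
```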

For sure an AI doesn't feel emotions, even though it might be able to learn what they are. But insofar as emotions are a prompt for decisions in a human, their content is treated as rational; the quality of feeling doesn't enter into explaining the resulting decision any more than as the stating of a preference - which an AI could do too, but again it wouldn't "sense" the preference, or register it differently than any other kind of input. All this makes sense if we consider emotions as a kind of mental registering of reactive bodily activity, which presses itself upon us but doesn't let itself become fully intelligible; it just kind of is. Since an AI doesn't have our kind of body, it cannot be provided with stimulation from such a body.

InMySoul77 wrote: Sun Dec 10, 2023 9:03 pm: Think about a poem.
On one level it's just arranging words. ChatGPT can make poems. Naturally, as a poem it has a quality and a character which is more than just words in a certain order, but that character it gets from its poem-ness, which it gets from being presented in the form of a poem, as this is familiar to us. If an AI composes something which looks to you like a poem, it's the same kind of "looks like" that you get when a human does it. There is no doubt that the degree of creativity which a human can bring out of this process is vastly greater than an AI's, and indeed maybe unsurpassably so. But then we're talking about empirical limits - as opposed to some secret thing, a creative genius or some such, that you have rejected. I see nothing in the process of making a poem which inherently makes the activity of an AI different from that of a human.

On the other hand, if many different AIs started making poems and an ecosystem of AI-crafted poems emerged, there would be a new context for poems which would indeed be distinct from the corresponding human activity. An AI may craft a poem through the same process with the same means, but I think it's fair to say that it doesn't receive a poem the way a human does.

The AI doesn't need to concern itself with what task the poem is supposed to accomplish (or not), and neither really does a human poet. Whatever effects the poem has show themselves later. If the task put to it is the composition of a poem, then it only needs to know the elements that go into that, same as how a human works. Now the process as it relates to the creator itself does look rather different - a human no doubt has a relationship to poem-making different from an AI's. There is, in other words, a different relation to the activity, as well as a different expectation of what the product will accomplish. We could include those in what we consider to make up the essence of a poem, but it's not necessary that we should - and it is not guaranteed to include all human poem-makers. You can very well, as a human, compose a poem while paying strict attention solely to the act of composition, anticipating that an aesthetic effect will follow in a receiver of the poem but treating this as secondary for yourself - after all, it is something that exceeds your own control, unlike the act of composition.

InMySoul77 wrote: Sun Dec 10, 2023 9:03 pm: Philosophy since Nietzsche has been keen to kill off Cartesian dualism, and "AI" is another "ghost in the machine" theory that posits a pure, immaterial disembodied subject, uninfluenced by subtle bodily forces.
I think we can without problem consider an AI as the sum of its operations, in which case we don't need an extra thing like that. And we can do the same for humans, but then the difference would lie in the operations. For a human, they would be located in a body, with a genetic-evolutionary history making up precisely those parts that are involved in the operations. So unless an AI were simply a human-created human, these operations would remain distinct for it. (And even a human-created human would still be slightly different from other humans in that it would be conscious of itself as being such, although this way of being human could, and should, then be incorporated into the concept of human.) And if we take these operations to be defining for its consciousness, the AI would remain distinct from humans in how it operates: if, say, it remains without a body, that in human intelligence which entails having a body would not work the same way for it. But even then, we concede that parts of what we call "intelligence" in humans are processes which transcend humans, and which, viewed strictly in themselves, are identical no matter where they are found.

Re: Thing: Artificial Intelligence

#75
kokorodoko wrote: Mon Dec 11, 2023 6:36 am
True, but instincts and emotions also rely on information, no? And an emotion itself is treated as information for whatever process follows upon it. Muscle memory is like a cache for an AI.

For sure an AI doesn't feel emotions, even though it might be able to learn what they are. But insofar as emotions are a prompt for decisions in a human, their content is treated as rational; the quality of feeling doesn't enter into explaining the resulting decision any more than as the stating of a preference - which an AI could do too, but again it wouldn't "sense" the preference, or register it differently than any other kind of input.
I'd argue that if an AI can't feel emotions, it can't be taught what they are. An emotion can't be encoded in zeroes and ones. You could have an on/off switch for the emotion "sadness," but just turning it to the on position would be to encode data without really accomplishing anything human-like.
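To put the on/off switch point in literal terms, here's a toy Python sketch (every name in it is made up): flipping the bit records data about sadness, and that's all it does.

```python
# Toy sketch of the "on/off switch for sadness" point (made-up names).
# Setting the flag stores a value; it does not produce an experience.
agent_state = {"sadness": False}

def set_emotion(state: dict, emotion: str, on: bool) -> None:
    # Records data about an emotion; nothing here feels anything.
    state[emotion] = on

set_emotion(agent_state, "sadness", True)
print(agent_state)  # {'sadness': True} -- a bit was flipped, nothing was felt
```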

I think AI will be useful in the same way robots are useful in warehouses and car factories. It will be something that blows us away with how quickly it achieves defined tasks based on the retrieval/usage of data and information, but when it comes to all the subtleties of what makes up human intelligence it will fail to live up to the hype.

An AI poem about a love affair would be nothing but a collection of data or reference points about love affairs without having any core significance as a love poem, because AIs can't feel emotions like love. In that way AI will continue to be nothing more than a sophisticated simulacrum of human intelligence.

I think Kubrick was prophetic with his depiction of HAL as a murderous, power-hungry sociopath. Elements of HAL's character can be accomplished by AI designers because there's nothing specifically human about deciding to kill a space crew out of the need for autonomy/survival. It's just a series of on/off switches. HAL also kicked ass at chess because chess is not a game that requires all the subtle markers of true human intelligence. Ask HAL to write a new song for Joni Mitchell, though, and he'll be flummoxed. You hit on the reason why: we are animals, the product of billions of years of evolution, and therefore there is so much about our consciousness that we don't understand and which can't be replicated in a program.
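The chess point can be made concrete. Minimax, the textbook game-tree algorithm, gets strong play out of exhaustive scoring alone; the toy Python sketch below (the game tree is invented for illustration) picks the best move with nothing resembling feeling anywhere in it.

```python
# Minimax over a toy game tree: inner lists are choice points,
# integers are final scores. Good play falls out of brute
# enumeration; no understanding of the game is involved.
def minimax(node, maximizing: bool) -> int:
    if isinstance(node, int):  # leaf: the value of a finished game
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # 3: the best guaranteed outcome
```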

Re: Thing: Artificial Intelligence

#76
It's like expecting us to build an airliner when we've only just discovered metallurgy. The human brain is still the most complex thing in the known universe... we're a long way from being able to build something that can replicate it.

Standard generative AIs like ChatGPT, Midjourney, etc. are completely static - they don't learn at all. They were constructed using a training process which is extremely computationally intensive (we're talking millions of dollars to rent the supercomputer time), and after the number-crunching is complete, what's left is an inert thing that turns inputs into outputs. It can't learn anything more. We're not going to see consciousness arise in that.
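To show what "inert" means here, a minimal PyTorch sketch (the one-layer model is a stand-in, nothing like the real architecture): once training is over, the weights are frozen and the model just maps inputs to outputs.

```python
# Minimal sketch of a deployed (frozen) model: inference only, the
# weights never change. A single linear layer stands in for the
# billion-parameter network.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)            # pretend this was already trained
model.eval()                       # switch to inference mode
for p in model.parameters():
    p.requires_grad_(False)        # no gradients, so nothing can update

with torch.no_grad():
    output = model(torch.randn(1, 4))  # a pure input -> output mapping
# Run this a million times: the weights are bit-for-bit the same after.
```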

Re: Thing: Artificial Intelligence

#77
kokorodoko wrote: Mon Dec 11, 2023 6:36 am
For sure an AI doesn't feel emotions, even though it might be able to learn what they are. But insofar as emotions are a prompt for decisions in a human, their content is treated as rational; the quality of feeling doesn't enter into explaining the resulting decision any more than as the stating of a preference - which an AI could do too, but again it wouldn't "sense" the preference, or register it differently than any other kind of input.
I disagree with this. One can't know what fear is without experiencing it. One cannot know what sadness is without experiencing it.

Do you have any evidence for the opposite, even anecdotal?
InMySoul77 wrote:
I think Kubrick was prophetic with his depiction of HAL as a murderous, power-hungry sociopath. Elements of HAL's character can be accomplished by AI designers because there's nothing specifically human about deciding to kill a space crew out of the need for autonomy/survival. It's just a series of on/off switches. HAL also kicked ass at chess because chess is not a game that requires all the subtle markers of true human intelligence. Ask HAL to write a new song for Joni Mitchell, though, and he'll be flummoxed. You hit on the reason why: we are animals, the product of billions of years of evolution, and therefore there is so much about our consciousness that we don't understand and which can't be replicated in a program.
This. Basically HAL was a classic total narcissist, ten on a scale of ten, with no self-awareness beyond its own survival.

Though, I would call AI as it is now a simulacrum of human knowledge, not intelligence.

Re: Thing: Artificial Intelligence

#78
enframed wrote: Mon Dec 11, 2023 5:10 pm

Basically HAL was a classic total narcissist, ten on a scale of ten, with no self-awareness beyond its own survival.

Though, I would call AI as it is now a simulacrum of human knowledge, not intelligence.
Yes. Knowledge is not necessarily intelligence. Someone who knows a lot is someone who has learned information and can regurgitate it when needed. Someone who is intelligent is someone who, when put in a new situation, can come up with creative or intuitive solutions that a machine might never think of.

AI, to me, is very representative of the negative extreme of Western consciousness. Achievement of goals and dominance are key. Narcissism is a by-product of the Western obsession with the primacy of the ego. Eastern philosophies, with their stress on intuitive consciousness and the value of the ineffable, are foreign to the AI mindset.

There have been stories in the media about AIs that turned out to be liars. But why shouldn't a machine lie to you, if it feels like that is how it would best attain its goals? Trust is a human value, as is altruism. We see both manifest in animal communities where there is no concept of language. AIs can master language better than any human but they fail these simple tests of what it means to be a human.

Re: Thing: Artificial Intelligence

#79
InMySoul77 wrote: Tue Dec 12, 2023 5:37 am: Yes. Knowledge is not necessarily intelligence. [...] AIs can master language better than any human but they fail these simple tests of what it means to be a human.
All of this had me thinking of Roy in Blade Runner / Do Androids Dream of Electric Sheep?. Roy played, and found enjoyment in play. He played just for the joy of play, even though he knew he'd lose. Of course it's entirely possible that all of that was in his programming by Tyrell. I mean, it was in his programming, somehow - that ability, or the possibility thereof.

I feel like another viewing of WarGames is in order.
