eephus wrote: Wed Mar 29, 2023 3:08 pm
At the moment, in most implementations, AI doesn't know what it doesn't know. Which is a pretty big problem.
And what it does know (or what it presents as if it knows) is just fancy auto-complete. Sometimes very fancy, as with some of the image work it can do.
Not that surprised by it so far. I have no doubt it can do plenty now and more in the future. It will chew up a lot of basic work that is mundane for humans and shorten cycles on a lot of fronts. Most of that is probably good. It will cost jobs, but then you just have to become the guy who knows how to use the AI real good.
I am pretty sure it's being used right now for tech support chat by some very large tech companies. I haven't bothered to check whether that's a verifiable, known thing or just me being paranoid.
I'm kind of whatever on it. I don't give a shit because caring one way or another isn't going to stop it from being developed.
If it's a tool, it's going to come in and change things like every tool, proportional to its utility and how much it can be exploited.
We have yet to adapt to the internet, but we will do so over the next generation or so, and we'll adapt to AI as well, if we survive as a species long enough.
It's far more likely our disregard for climate destroys us than it is that AI does the job. I mean, AI might help us fix things like that if we use it right.
I have grown to despise AI as it is presently constituted. Basically across the board.
I'm not totally sure if it's the tools or the implementations of the tools, but that's not all that relevant to me right now.
It's destroying search, particularly on Google, where they have this massive investment in it and every reason to want to exploit it as much as possible.
Google started out saying "hey, you said you want to know about [query here], and here are some links to relevant information about that."
Then they went to "hey, you asked about [query here], and here's The Answer to your question, with some other stuff below it in case that's not enough."
Now, they're at "hey, you asked about [query here], but surely you meant to ask [new query that probably isn't what you wanted to know]."
Google's is pretty horrible--just legendarily bad and wrong about certain things: the whole eating rocks and putting glue in pizza sauce and cleaning your washing machine with a solution of bleach and vinegar. Meta's is actually worse. I haven't used Microsoft's that much.
It's the more subtle stuff that concerns me more. Just blithely passing on incorrect/misleading information as if it must be true because the AI "read it on the internet." At least a human doing that has to tell you "I read it on the internet." With an AI, that is a given. And you know from experience how reliable that kind of info processing is, when there's zero bullshit detector involved.
I know it's supposed to make these massive leaps into usefulness and everyone is assuming that will happen.
The thing we have to remember is that right now people can make money from it, so the incentive to make it better is shrinking.
The incentive, overall, is always going to be to make more and more money with the tools--not improve them unless it's necessary to do so to make more money.
Critically, the tools being bad in particular ways will be financially beneficial to the companies that make them.
Making certain more-or-less compulsory things (like paid advertising if you have an e-comm business) more expensive is good for Google, Meta, Microsoft, Apple et al.
It's degraded human discourse already, and it's not gonna get better soon.
I said this on Bluesky:
Indifference and a coarsening of expression will be the legacy of AI-generated text.
Most people prefer good writing to shitty writing but have not thought about aesthetics enough to care that a text is garbage.
They will just lose interest instead of thinking "this sucks and here's why."
I'm positive this is happening now and will accelerate in coming months.
The Meat World is your friend.