Actually, we know what the human driver's response to the trolley problem is: you instinctively save yourself, which is why the front seat passenger is the most likely to be killed in a car crash. The human driver tends to instinctively plow through the crowded mall rather than into the brick wall.
Would a computer driver take into account who's in your car? Would it even know? Should we program the car to head for the brick wall rather than the crowded mall? Would people submit to being driven around in an autonomous car knowing that it was programmed to do that? Would we end up making cars that drive into brick walls every time a cat runs out in front of them, i.e. false positives doing more harm than good? What if it's glitchy and unpredictable but still performs better than a human driver? Maybe a random choice would be better: the car flips a virtual coin and we all take our chances. Why are we still driving cars?
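Just to make the coin-flip idea concrete, here is a toy Python sketch. Everything in it is invented for illustration (the two outcomes, the harm numbers, the function names); no real autonomous-vehicle planner enumerates moral options like this, it just minimises collision risk continuously.

```python
import random

# Toy illustration only: two invented outcomes for an unavoidable crash,
# each with made-up expected-harm numbers for occupants and bystanders.
OUTCOMES = {
    "brick_wall": {"occupants": 0.9, "bystanders": 0.0},
    "crowded_mall": {"occupants": 0.1, "bystanders": 0.9},
}

def coin_flip_policy(outcomes):
    """The 'we all take our chances' policy: ignore the stakes entirely
    and pick an option uniformly at random."""
    return random.choice(sorted(outcomes))

def utilitarian_policy(outcomes):
    """The 'programmed to hit the wall' policy: minimise total expected
    harm, occupants and bystanders weighted equally."""
    return min(outcomes, key=lambda o: sum(outcomes[o].values()))

print(coin_flip_policy(OUTCOMES))    # brick_wall or crowded_mall, 50/50
print(utilitarian_policy(OUTCOMES))  # brick_wall, given these made-up numbers
```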
Here in my car, I feel safest of all...
Re: Let me teach you about meta-ethics
seby wrote: Sun Apr 23, 2023 8:54 pm
For example, collision-avoidance protocols in the algorithms of autonomous vehicles are responses to the Trolley Problem (have fun looking this up).

Anthony Flack wrote: Mon May 15, 2023 4:24 pm
Are they? I always thought this was a bit of a red herring for autonomous vehicles, which can't tell the difference between a doctor, a mugger or a pregnant woman. They're just going to do their best to stop without hitting any obstacle... as opposed to a human driver who would similarly avoid the trolley problem by panicking, throwing the car into a skid and wiping out everybody.

They are - not as they exist right now, but certainly as they are being developed in anticipation of mainstream rollout. It is a textbook issue in AI ethics as I write this. If you listen carefully, you can hear the journals filling up.
I think the trolley problem is misapplied to vehicles, probably because it's constructed as a problem with a trolley, as opposed to say, a problem of allocating limited medical resources. During the Covid pandemic, the ventilator problem was a far better real-world analogue to the trolley problem than anything involving vehicles.
The ventilator problem is a great analogue. The trolley problem really is just triage.
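To make the analogy concrete, here is a toy Python sketch of the allocation problem. The patients and probabilities are invented, and real triage protocols use clinical scoring systems rather than two made-up numbers; this only shows the shape of the choice.

```python
import heapq

# Toy triage sketch: give n scarce ventilators to the patients who
# gain the most survival benefit from receiving one.
def allocate(patients, n):
    """patients: list of (name, p_survive_with, p_survive_without)."""
    # Rank by the incremental benefit of getting a ventilator.
    ranked = heapq.nlargest(n, patients, key=lambda p: p[1] - p[2])
    return [name for name, _, _ in ranked]

patients = [
    ("A", 0.80, 0.10),  # gains a lot from a ventilator
    ("B", 0.95, 0.90),  # probably survives either way
    ("C", 0.30, 0.05),  # gains a moderate amount
]
print(allocate(patients, 1))  # ['A']: structurally the same trade-off as the trolley
```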
"lol, listen to op 'music' and you'll understand"....
https://sebastiansequoiah-grayson.bandcamp.com/
https://oblier.bandcamp.com/releases
https://youtube.com/user/sebbityseb
Re: Let me teach you about meta-ethics
Anthony Flack wrote: Mon May 15, 2023 5:25 pm
Actually, we know what the human driver's response to the trolley problem is: you instinctively save yourself, which is why the front seat passenger is the most likely to be killed in a car crash. The human driver tends to instinctively plow through the crowded mall rather than into the brick wall.

Would a computer driver take into account who's in your car? Would it even know? Should we program the car to head for the brick wall rather than the crowded mall? Would people submit to being driven around in an autonomous car knowing that it was programmed to do that? Would we end up making cars that drive into brick walls every time a cat runs out in front of them, i.e. false positives doing more harm than good? What if it's glitchy and unpredictable but still performs better than a human driver? Maybe a random choice would be better: the car flips a virtual coin and we all take our chances. Why are we still driving cars?

Here in my car, I feel safest of all...

Heh, spot on.
"lol, listen to op 'music' and you'll understand"....
https://sebastiansequoiah-grayson.bandcamp.com/
https://oblier.bandcamp.com/releases
https://youtube.com/user/sebbityseb