
Safety and feasibility of driverless vehicles


studiot

2 hours ago, studiot said:

We have had several threads and discussions about this topic; this news would seem to indicate we are not yet there.

1) You do not know how many automatically driven cars reached their destination successfully.

2) The media will only talk about failures, not about successful arrivals.

3) The media are unlikely to reveal the per-mile crash rate of automatically driven cars versus that of human-driven cars.

(You are not informed about the number of airplanes that did not crash today either; the media only tell you about the one that did crash.)
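The per-mile comparison in point 3 is straightforward arithmetic once both exposure figures are published; a minimal sketch (the crash counts and mileages below are illustrative placeholders, not real data):

```python
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalise a crash count by exposure so fleets of very
    different sizes can be compared fairly."""
    return crashes / (miles / 1_000_000)

# Illustrative placeholder figures, NOT real statistics:
av_rate    = crashes_per_million_miles(crashes=2,   miles=4_000_000)
human_rate = crashes_per_million_miles(crashes=150, miles=120_000_000)
print(av_rate, human_rate)  # 0.5 vs 1.25 crashes per million miles
```

The point is that neither raw crash count alone says anything; both counts must be divided by miles driven before the fleets are comparable.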

 

My thoughts about the software:

When the driver tells the car where to go, the device should show a map of the known parts of the route (already driven and stored in the database) in green, and the unknown parts (to be driven for the first time) in red. The software should lower the driving speed on roads taken for the first time, and the human must be informed of this in advance. During driving, the software should compare what it gets from the sensors and cameras with what is already in the database. In bad weather, with obstacles on the road, in heavy traffic, etc., what is in the database won't match the sensors, and speed should be lowered in advance. Automatic exchange of databases between cars and a central server would help detect when the software has made a mistake. Exchange of data between cars would prevent crashes between them (a message like "we are #UID at location x, y, z with velocity vx, vy, vz and acceleration ax, ay, az"; from each #UID the computer builds a history and predicts a future path, and the same messages repeated by all the cars on a road warn of heavy traffic in an area in advance).
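The inter-car broadcast described above can be sketched as a small message structure plus a constant-acceleration extrapolation. This is a hypothetical sketch only; the `VehicleState` class and `predict_position` function are illustrative names, not part of any real V2V standard:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    uid: str     # unique vehicle identifier (the #UID in the post)
    t: float     # timestamp of the broadcast, seconds
    pos: tuple   # (x, y, z) position, metres
    vel: tuple   # (vx, vy, vz) velocity, m/s
    acc: tuple   # (ax, ay, az) acceleration, m/s^2

def predict_position(state: VehicleState, dt: float) -> tuple:
    """Extrapolate the broadcast state dt seconds ahead, assuming
    constant acceleration: x + v*dt + 0.5*a*dt^2 per axis."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(state.pos, state.vel, state.acc))

# A car 50 m ahead, doing 20 m/s and braking at 3 m/s^2:
other = VehicleState("car-42", t=0.0, pos=(50.0, 0.0, 0.0),
                     vel=(20.0, 0.0, 0.0), acc=(-3.0, 0.0, 0.0))
print(predict_position(other, 2.0))  # -> (84.0, 0.0, 0.0)
```

Each receiving car would run this prediction for every #UID it hears from and flag any predicted path that intersects its own; many such messages from one stretch of road would indicate congestion ahead.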

Edited by Sensei

No informed person thinks we are there yet. The self-driving mode on the car clearly points out that you are not even supposed to take your hands off the wheel. These poor people who died did not even have anyone in the driver's seat!


9 minutes ago, Bufofrog said:

No informed person thinks we are there yet.

It's an interesting question, though: when can we expect them?

16 minutes ago, Sensei said:

1) You do not know how many automatically driven cars reached their destination successfully.

2) The media will only talk about failures, not about successful arrivals.

3) The media are unlikely to reveal the per-mile crash rate of automatically driven cars versus that of human-driven cars.

(You are not informed about the number of airplanes that did not crash today either; the media only tell you about the one that did crash.)

 

My thoughts about the software:

When the driver tells the car where to go, the device should show a map of the known parts of the route (already driven and stored in the database) in green, and the unknown parts (to be driven for the first time) in red. The software should lower the driving speed on roads taken for the first time, and the human must be informed of this in advance. During driving, the software should compare what it gets from the sensors and cameras with what is already in the database. In bad weather, with obstacles on the road, in heavy traffic, etc., what is in the database won't match the sensors, and speed should be lowered in advance. Automatic exchange of databases between cars and a central server would help detect when the software has made a mistake. Exchange of data between cars would prevent crashes between them (a message like "we are #UID at location x, y, z with velocity vx, vy, vz and acceleration ax, ay, az"; from each #UID the computer builds a history and predicts a future path, and the same messages repeated by all the cars on a road warn of heavy traffic in an area in advance).

It's therefore a question of statistics, not ethics or morals; who do you choose to kill?


23 minutes ago, dimreepr said:

It's an interesting question, though: when can we expect them?

Predicting future technology is a fool's errand. So I think I am up to the task: I predict that in 7-10 years there will be cars that do not require a driver. That does not mean they will never crash; it means they will be significantly safer than having human drivers.


59 minutes ago, dimreepr said:

It's therefore a question of statistics, not ethics or morals; who do you choose to kill?

A computer does not understand the words you used: "ethics", "morals", "kill". It does not think; it just executes a program. If the program has insufficient data, or incorrect data from the sensors (because of a condition unexpected by the software, an unexpected obstacle on the road, unexpected weather, a sensor malfunction, etc.), it will execute the wrong branch of code, which will eventually lead to a crash in which somebody is killed. Nobody "chose a person to kill"; choosing a person to kill requires thinking and intelligence.

Statistics are what matter when there is news in the media about a crash and a discussion about the "safety and feasibility of driverless vehicles", not the absolute number of crashes. For the media, good news is no news; only bad news is news. Even if 99.999999% of cars or airplanes arrive home safely every day, you will hear in the media only about the unlucky 0.000001% that had an accident.

Edited by Sensei

2 minutes ago, Bufofrog said:

Predicting future technology is a fool's errand.

Indeed; look on my future, ye mighty...

1 minute ago, Sensei said:

A computer does not understand the words you used: "ethics", "morals", "kill". It does not think.

Indeed, it looks on it all as a painting...


13 minutes ago, Bufofrog said:

Predicting future technology is a fool's errand. So I think I am up to the task: I predict that in 7-10 years there will be cars that do not require a driver. That does not mean they will never crash; it means they will be significantly safer than having human drivers.

There is a large component that is dependent on the law rather than technology. 

People sue each other for damages. Who is the target of the lawsuit when an autonomous vehicle kills someone? Who is legally going to be at fault? Currently it's the driver (or one of the drivers), AFAIK, unless you can find fault with the vehicle itself.

If the entity at fault is going to be the manufacturer, they are going to have to be satisfied that their liability is limited. This puts them more at risk than they currently are, for the ~6 million accidents per year in the US (and a corresponding number in other countries). It's not just the ~35,000 deaths (again, a US statistic) that put them at risk, though deaths would likely be the larger financial risk per incident.

I suspect this will eventually be put onto the vehicle owner's insurance, but people and insurance companies will have to be comfortable with this.


Another thought:

Let's say your risk of getting into an accident is X. Are people going to be willing to accept a risk of X with an autonomous car? I suspect not, because most people think they are above-average drivers (Dunning-Kruger, or because they're from Lake Wobegon) and are also horrible at assessing risk, so they will want something much smaller than X.

  


Thank you all who responded.

I do not think it is just a question of risk.

A question of acceptable risk maybe , but not absolute risk.

There was a case a few years ago where a badly designed rear wheel bearing caused some rear wheels to fall off above (I think it was) 35 mph.

There were a few resultant deaths, although many rear wheels did not detach.

No one considered even one single death an acceptable risk. It just should not have happened.

The manufacturer called in all the affected cars for urgent modification.

 

Tesla in this case could have designed the car to be inoperable without someone properly in the driver's seat.

I think it is BMW that has a system which renders the car inoperable if all passengers are not wearing their seat belts.


4 hours ago, studiot said:

Thank you all who responded.

I do not think it is just a question of risk.

A question of acceptable risk maybe , but not absolute risk.

There was a case a few years ago where a badly designed rear wheel bearing caused some rear wheels to fall off above (I think it was) 35 mph.

There were a few resultant deaths, although many rear wheels did not detach.

No one considered even one single death an acceptable risk. It just should not have happened.

The manufacturer called in all the affected cars for urgent modification.

 

Tesla in this case could have designed the car to be inoperable without someone properly in the driver's seat.

I think it is BMW that has a system which renders the car inoperable if all passengers are not wearing their seat belts.

Bypass devices exist for those as well.

I do agree Musk rushed things in an unsafe manner. The cars shouldn't even have been suggested to be self-driving until our pleasure-seeking selves could be safely out of the picture.

 

Personal vehicle ownership is likely to go away IMO, so the insurance issue will solve itself.

Edited by Endy0816

On 4/19/2021 at 8:56 PM, Endy0816 said:

Personal vehicle ownership is likely to go away IMO, so the insurance issue will solve itself.

Only in the sense of a lottery ticket...


What a shame that nobody actually read the article.

Quote

Data logs recovered so far show Autopilot was not enabled & this car did not purchase FSD. Moreover, standard Autopilot would require lane lines to turn on, which this street did not have.

 


On 4/20/2021 at 12:32 AM, Bufofrog said:

That does not mean they will never crash; it means they will be significantly safer than having human drivers.

As inevitable as driverless cars are to eventually take over our roads, the highlighted portion above is the important issue to take out of this.


10 hours ago, dimreepr said:

Only in the sense of a lottery ticket...

I think insurance will likely still exist, but it will be handled by the rental/delivery company instead.

If the overall accident rate goes down it won't be as big of a burden.


11 hours ago, Endy0816 said:

I think insurance will likely still exist, but it will be handled by the rental/delivery company instead.

If the overall accident rate goes down it won't be as big of a burden.

As long as insurance exists, both sides of the bet will seek to cash in. But we're off-topic.

There's no reason to suspect the future will provide a safe-enough driverless car; even a safe driverless train is beyond our capabilities ATM, IMHO.


43 minutes ago, dimreepr said:

There's no reason to suspect the future will provide a safe-enough driverless car; even a safe driverless train is beyond our capabilities ATM, IMHO.

Programming won’t be able to respond to a situation unanticipated by the programmers. The question is whether the programming can cover enough of the situations encountered while driving, and you’re right: we aren’t there yet.


1 hour ago, dimreepr said:

There's no reason to suspect the future will provide a safe-enough driverless car

Not to put too fine a point on it, but (in addition to the points above about rail safety) driverless cars (in most situations) ALREADY ARE safer than human drivers, both on net and per capita.

In the US alone, there are on average over 16,000 car crashes every single day. Globally, 1.25 million people are killed in car crashes annually, or 3,300 on average every single day, and nearly 50 million are seriously injured or disabled (source).

The real question here is how many orders of magnitude safer these driverless cars will need to be before our primitive ape minds 1) accept them as a significantly safer alternative to our status quo of driving ourselves (minus the remaining technical challenges like driving in snow and on off-road terrain), and 2) overcome the repulsion felt at the thought that software-controlled vehicles will still experience occasional tragedies, regardless of how advanced we build them.

 

49 minutes ago, swansont said:

Programming won’t be able to respond to a situation unanticipated by the programmers

I'm likely being a bit pedantic here (surely am), but in fact responding to unanticipated situations is also already happening via machine learning / AI. That said, you're of course correct that even the ML/AI relies on inputs up front from programmers to help shape HOW it learns to respond. 

Edited by iNow

8 minutes ago, iNow said:

I'm likely being a bit pedantic here (surely am), but in fact responding to unanticipated situations is also already happening via machine learning / AI. That said, you're of course correct that even the ML/AI relies on inputs up front from programmers to help shape HOW it learns to respond. 

Right, but that’s essentially changing the programming after the situation has come up, whether that upgrade is built-in or an external input. The first time, you have an accident, and perhaps a few more times after, until there is enough information to deal with that set of circumstances. It won’t properly deal with it the first time.


25 minutes ago, iNow said:

The real question here is how many orders of magnitude safer these driverless cars will need to be before our primitive ape minds 1) accept them as a significantly safer alternative to our status quo of driving ourselves (minus the remaining technical challenges like driving in snow and on off-road terrain), and 2) overcome the repulsion felt at the thought that software-controlled vehicles will still experience occasional tragedies, regardless of how advanced we build them.

That's a good point +1.

I'm not sure I have an argument...

Other than, we're not there yet...


2 hours ago, iNow said:

The real question here is how many orders of magnitude safer these driverless cars will need to be before our primitive ape minds 1) accept them as a significantly safer alternative to our status quo of driving ourselves (minus the remaining technical challenges like driving in snow and on off-road terrain), and 2) overcome the repulsion felt at the thought that software-controlled vehicles will still experience occasional tragedies, regardless of how advanced we build them.

It won't help that the way AI fails is very different to the ways humans fail, leading to headlines such as 'how can driverless cars be so stupid', regardless of overall safety records.


1 hour ago, Prometheus said:

It won't help that the way AI fails is very different to the ways humans fail, leading to headlines such as 'how can driverless cars be so stupid', regardless of overall safety records.

Exactly right. It's more sensational, too, thus reinforcing the perception of the risk beyond its reality.

