
Remote car steering legalization


Moreno


5 hours ago, MigL said:

Don't have a problem with self-driving cars.

As long as Boeing doesn't write the MCAS code for them...

Definitely a better option than remote control in most cases. Possibly, though, in some cases outsourcing the driving could work as part of a combined approach.

Obtain human input on how to handle an unusual situation, while otherwise remaining on autopilot. It would be simpler just to have a person in the car take over, but they could lack a license, or the car could be empty.
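
Roughly this kind of loop, in other words. A minimal sketch in Python, where every class, method, and threshold is a hypothetical illustration rather than any real vendor's API:

# Minimal sketch of a human-in-the-loop fallback for an autopilot.
# All classes, methods, and thresholds are hypothetical illustrations.
import random

CONFIDENCE_THRESHOLD = 0.90   # below this, the car asks a human for help

class RemoteCenter:
    def ask_operator(self, scene):
        """Pretend a remote operator inspects the scene and answers."""
        return "creep past the obstacle on the left"   # canned human advice

class Car:
    def perceive(self):
        return {"obstacle": random.random() < 0.2}     # fake sensor snapshot

    def plan(self, scene):
        # An unusual obstacle lowers the planner's self-reported confidence.
        confidence = 0.5 if scene["obstacle"] else 0.99
        return "continue lane keeping", confidence

    def execute(self, maneuver):
        print("executing:", maneuver)

def drive_step(car, remote):
    """One control cycle: stay autonomous unless the situation is unusual."""
    scene = car.perceive()
    maneuver, confidence = car.plan(scene)
    if confidence >= CONFIDENCE_THRESHOLD:
        car.execute(maneuver)                    # normal autonomous operation
    else:
        car.execute("slow to a safe speed")      # hold a safe state while asking
        car.execute(remote.ask_operator(scene))  # human-approved maneuver

for _ in range(5):
    drive_step(Car(), RemoteCenter())

The point of the confidence gate is that the remote human is only consulted for the rare, unusual cases; everything else stays on autopilot.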


  • 2 months later...

I cannot imagine self-driving cars, because of the many unpredictable circumstances on the roads. For example: icy and slippery roads, construction work, a pedestrian or an animal running across on a red light, dangerous behavior by other drivers or intoxicated pedestrians, traffic lights out or working improperly or replaced by a human traffic controller, some criminal situations, etc. For example, what if a self-driving car faces a choice: "to hit a dog at full speed and continue driving, or to slow down abruptly to avoid a collision with the dog on a highway, but create a very dangerous situation for humans that way"? Any reasonable human driver would choose variant No. 1, but who knows about a robot? Very likely it will not even be able to tell the difference between a dog and a human technically, and hence their life priorities. For a robot they would all be just "moving pieces of meat". For now I simply cannot imagine self-driving legalization.

 

Edited by Moreno

36 minutes ago, Moreno said:

For example: icy and slippery roads, construction work, a pedestrian or an animal running across on a red light, dangerous behavior by other drivers or intoxicated pedestrians

technology is already better at this than us.

46 minutes ago, Moreno said:

I cannot imagine self-driving cars

why not?

 

49 minutes ago, Moreno said:

Any reasonable human driver would choose variant No. 1

who said a reasonable human driver would have time to decide?


A quote from 3 years ago, after the first fatal crash: https://www.bbc.co.uk/news/technology-36680043

Quote

 

The Autopilot function was introduced by Tesla in October last year. In a conference call, the firm's enigmatic chief executive Elon Musk urged caution in using the technology.

"The driver cannot abdicate responsibility," he said.

 

A 2018 crash https://www.bbc.co.uk/news/technology-49594260

 

Quote

 

The design of Tesla's Autopilot system and "driver inattention" led to a crash in 2018, according to a US National Transportation Safety Board report.

It said Tesla's semi-autonomous driving system "permitted driver disengagement from the driving task", resulting in the crash, on a California motorway.

Tesla says Autopilot "requires active driver supervision" and drivers should keep their hands on the steering wheel.

On 10/4/2019 at 11:47 PM, Endy0816 said:

Obtain human input on how to handle an unusual situation, while otherwise remaining on autopilot.

Quote

In a statement, Tesla said it appeared the Model S car was unable to recognise "the white side of the tractor trailer against a brightly lit sky" that had driven across the car's path.

 

When Elon Musk says "For maximum safety, the driver must abdicate responsibility," then autopilot cars may be safer than human-driven cars.


A tree falls down on a road and obstructs the way. A robot faces a choice to avoid a collision: turn left and roll over a dog, or turn right and roll over a helpless child. A robot takes choice No. 2, since it makes no distinction in priority between dogs and humans. To it they are just moving objects which radiate infrared. Good luck justifying robot drivers.


12 minutes ago, Moreno said:

A tree falls down on a road and obstructs the way. A robot faces a choice to avoid a collision: turn left and roll over a dog, or turn right and roll over a helpless child. A robot takes choice No. 2, since it makes no distinction in priority between dogs and humans. To it they are just moving objects which radiate infrared. Good luck justifying robot drivers.

Valid argument, though people often fail to do better.

It is just a matter of time until supervised AI does consistently better, statistically on average, than humans... and a matter of further time until it does better still, unimpeded by real-time human supervision.
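
"Consistently better, statistically on average" is itself a checkable claim. A rough sketch of the comparison regulators could run, with crash counts and mileages invented purely for illustration:

# Rough sketch: is the AI crash rate per mile lower than the human rate?
# The counts and mileages below are invented for illustration only.
import math

def rate_ratio_ci(crashes_a, miles_a, crashes_b, miles_b, z=1.96):
    """95% CI for the crash-rate ratio A/B (log-normal approximation)."""
    ratio = (crashes_a / miles_a) / (crashes_b / miles_b)
    se = math.sqrt(1 / crashes_a + 1 / crashes_b)  # SE of the log rate ratio
    return ratio * math.exp(-z * se), ratio * math.exp(z * se)

# Hypothetical fleets: 40 AI crashes over 80M miles vs 90 human crashes
# over 100M miles.
low, high = rate_ratio_ci(40, 80e6, 90, 100e6)
print(f"AI/human crash-rate ratio, 95% CI: {low:.2f} to {high:.2f}")
# If the whole interval sits below 1.0, the AI fleet crashed less per mile.

Until intervals like that sit convincingly below 1.0 across road types and weather, "better on average" stays an open question.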


Not to mention that the AI detects the fallen tree several hundred milliseconds to seconds faster than the human driver and initiates braking and collision avoidance procedures with equal quickness... before the human has processed the stimulus and even begun moving their muscles.


26 minutes ago, iNow said:

Not to mention that the AI detects the fallen tree several hundred milliseconds to seconds faster than the human driver and initiates braking and collision avoidance procedures with equal quickness... before the human has processed the stimulus and even begun moving their muscles.

Again from the Sept 2019 report https://www.bbc.co.uk/news/technology-49594260 ...

Quote

Nobody was hurt in the January 2018 crash.

According to the report, Autopilot was activated at the time of the crash and the driver was not holding the wheel.

The car was following the vehicle in front but the lead vehicle changed lanes to the right to avoid a parked fire engine obstructing the lane.

The Tesla accelerated, detecting the fire engine in its path only about 0.49 seconds before the crash.

The car's collision-warning alarm was activated but the car did not brake and hit the fire engine at about 30mph (48km/h).

The AI detects the gap several hundred milliseconds to seconds faster than the human driver could have (maybe) and initiates acceleration faster than a human idiot could do. At such a time even a human idiot would likely be concentrating sufficiently to notice a fire engine in front of him, panic, and slam on the brakes or swerve without calmly and dispassionately analysing the situation first.

My experience with AI is decades out of date, but calmly and dispassionately processing the sudden detection of an object just where human experience would suggest it might be, deciding it's serious enough to require braking or swerving, detecting its size and (zero) speed etc and deciding what to do in 0.49 seconds is a big challenge.
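
For scale, the back-of-envelope arithmetic on that 0.49 seconds, assuming (my figure, not the report's) braking at roughly 0.8 g:

# Back-of-envelope: could anything have stopped the car 0.49 s from impact?
# The 0.8 g braking figure is an assumption for dry tarmac, not NTSB data.
v = 30 * 0.447            # 30 mph in m/s -> about 13.4 m/s
t_detect = 0.49           # seconds between detection and impact
a = 0.8 * 9.81            # assumed braking deceleration, m/s^2

gap = v * t_detect        # distance available after detection
braking = v**2 / (2 * a)  # distance needed to stop from 30 mph
print(f"distance to impact: {gap:.1f} m, stopping distance: {braking:.1f} m")
# ~6.6 m available vs ~11.5 m needed: by detection time the crash was
# already unavoidable, for the software or for any human.

So the 0.49 second figure is less about processing speed than about detecting the obstruction far too late.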

AI is great for very rapid response to predictable problems but when it comes to intelligent response to the unexpected, I prefer human parallel processing with billions of years of hardware and firmware development.

It's clear that 30mph is far too fast for safe AI cars.

 

For now a partial restoration of the Locomotive Act 1861 might be adequate for P.R. and safety.

Quote

 

Requirement for the vehicle to consume its own smoke (Section 8)

A speed limit of 10 mph on open roads, or 5 mph in inhabited areas. (Section 11)

2 hours ago, Moreno said:

A robot takes choice No. 2, since it makes no distinction in priority between dogs and humans. To it they are just moving objects which radiate infrared.

And yet my iPhone is capable of identifying the difference between my face and my brother's face. And he doesn't look anything like a dog.

My phone even knows if my eyes are open or not.

Do you have any evidence that technology is incapable of discerning between a dog and a child?
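
For what it's worth, stock object detectors already emit separate "person" and "dog" labels. A minimal sketch in Python using a COCO-trained torchvision model; the avoidance weights at the end are my own invented illustration, not anyone's shipping policy:

# Sketch: an off-the-shelf COCO-trained detector tells people from dogs.
# The avoidance weights are an invented illustration of encoding priorities.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

COCO_PERSON, COCO_DOG = 1, 18  # standard COCO category ids
AVOIDANCE_WEIGHT = {COCO_PERSON: 1000.0, COCO_DOG: 10.0}  # hypothetical

image = torch.rand(3, 480, 640)  # stand-in for one camera frame, values in [0, 1]
with torch.no_grad():
    detections = model([image])[0]  # dict with boxes, labels, scores

for label, score in zip(detections["labels"].tolist(),
                        detections["scores"].tolist()):
    if score > 0.5:  # keep confident detections only
        weight = AVOIDANCE_WEIGHT.get(label, 1.0)
        print(f"class {label}: score {score:.2f}, avoidance weight {weight}")

Nothing about "moving objects which radiate infrared" there: the classes come out labelled, and any priority between them is a design decision, not a technical impossibility.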


28 minutes ago, Carrock said:

Again from the Sept 2019 report

My comment wasn’t specific to the Tesla auto drive version that was available 2 years ago. 

30 minutes ago, Carrock said:

My experience with AI is decades out of date

It surely seems so. I applaud your self awareness. 

31 minutes ago, Carrock said:

It's clear that 30mph is far too fast for safe AI cars.

Perhaps a decade ago, but this is far from true today and will be even less true in the years ahead. 


5 minutes ago, iNow said:

My comment wasn’t specific to the Tesla auto drive version that was available 2 years ago.

The accident report was published in Sept 2019. Such reports are not available for most recent crashes, and software failures may be currently unknown or at least unpublished. Before these reports are published, P.R. is usually misleading. I've noticed in particular that published pre-crash video is typically either deliberately degraded (dishonest) or (worse) taken with inferior cameras which (in poor lighting) produce almost invisible images of people etc. that would have been readily seen with human eyesight.

Have you any evidence that (presumably legal) Tesla auto drive is inferior to other auto drives?

19 minutes ago, iNow said:
49 minutes ago, Carrock said:

My experience with AI is decades out of date, but calmly and dispassionately processing the sudden detection of an object just where human experience would suggest it might be, deciding it's serious enough to require braking or swerving, detecting its size and (zero) speed etc and deciding what to do in 0.49 seconds is a big challenge.

It surely seems so. I applaud your self awareness.

I was and am aware that A.I. is still only suitable for certain specialised purposes. Would you expect A.I. with suitable tools to be able to build a house unsupervised? Driving a car is usually, but not always, more error-tolerant.

Are you saying that going from acquiring an unexpected image to doing something intelligent reliably takes significantly less than 0.5 seconds for A.I.?

A reference would be good.

 

1 hour ago, iNow said:
1 hour ago, Carrock said:

It's clear that 30mph is far too fast for safe AI cars.

Perhaps a decade ago, but this is far from true today and will be even less true in the years ahead. 

Did you perhaps mean since Jan 2018?

Quote

The Tesla accelerated, detecting the [parked] fire engine in its path only about 0.49 seconds before the crash.

The car's collision-warning alarm was activated but the car did not brake and hit the fire engine at about 30mph (48km/h).

At 30mph a pedestrian standing behind the fire engine would likely be seriously hurt.

Much better chances at 10mph. Also, 3 times as long for the driver or A.I. to brake/swerve/sound horn and the pedestrian to run.
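
The same back-of-envelope arithmetic as before, with an assumed 0.8 g of braking and a one-second reaction time:

# Stopping distance scales with speed squared; the time to cross a fixed
# gap scales inversely with speed. 0.8 g braking and 1 s reaction assumed.
g = 9.81
for mph in (30, 10):
    v = mph * 0.447                 # speed in m/s
    reaction = v * 1.0              # distance covered while reacting
    braking = v**2 / (2 * 0.8 * g)  # distance to stop once braking starts
    print(f"{mph} mph: {reaction:.1f} m reacting + {braking:.1f} m braking "
          f"= {reaction + braking:.1f} m to stop")
# 30 mph: ~13.4 + ~11.5 = ~24.9 m; 10 mph: ~4.5 + ~1.3 = ~5.7 m.
# And the pedestrian gets roughly three times as long to get clear.

Roughly 25 metres to stop from 30 mph against under 6 metres from 10 mph.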


1 hour ago, Carrock said:

Have you any evidence that (presumably legal) Tesla auto drive is inferior to other auto drives?

No, but I do know the version in place 2 years ago was inferior to the current and future versions. 

1 hour ago, Carrock said:

Are you saying that going from acquiring an unexpected image to doing something intelligent reliably takes significantly less than 0.5 seconds for A.I.?

Yes

https://www.wired.com/story/guide-self-driving-cars/

1 hour ago, Carrock said:

Did you perhaps mean since Jan 2018?

No


14 hours ago, iNow said:
15 hours ago, Carrock said:

Have you any evidence that (presumably legal) Tesla auto drive is inferior to other auto drives?

No, but I do know the version in place 2 years ago was inferior to the current and future versions. 

15 hours ago, Carrock said:

Are you saying that going from acquiring an unexpected image to doing something intelligent reliably takes significantly less than 0.5 seconds for A.I.?

A reference would be good.

Yes

https://www.wired.com/story/guide-self-driving-cars/

I skimmed your year old reference but didn't find anything to support either of those assertions.

 

I did find an interesting quote from it though.

Quote

CEO Elon Musk has defended Autopilot as a life-saving feature, but even he doesn't use it properly, as he made clear during a 60 Minutes interview in December. Moreover, the main statistic he has used to defend the system doesn't hold up, a Tesla "safety report" offers little useful data, and it's not clear, anyway, how to produce more reliable numbers—or smarter systems.

18 hours ago, zapatos said:
 

It sounds like you are saying 30mph is too fast for AI cars based on a report of one accident. Is that correct?

More quotes from iNow's reference:

Quote

At least two Tesla drivers in the US have died using the system (one hit a truck in 2016, another hit a highway barrier this year)......

The problem is that humans are not especially well suited for serving as backups. Blame the vigilance decrement......

how does the robot decide whom to hit?—might have already been solved. One Stanford researcher says the law has preempted the problem, because the companies building these cars will be “less concerned with esoteric questions of right and wrong than with concrete questions of predictive legal liability.”

Most accidents (near misses, shunts, minor collisions etc) are lumped together so good stats aren't available.

A real problem is that drivers are expected to be as alert as if they were driving, since they are regarded (by Musk etc.) as at fault if not.

"how does the robot decide whom to hit?" There's a simple solution if the car has to decide between killing an adult or a child. Children rarely have dependents and so on average incur lower liability.

My view is that until these cars are fairly definitively proved to be safer than human driven vehicles they should have severe limitations placed on their use and then the car (manufacturer) should be liable for accidents caused by driving error.

However having the car rather than the driver responsible for accidents could be a nightmare scenario for manufacturers.

Two equally badly driven cars collide and kill people. If you jail the human driver of one, what do you do about the other car, which is autodrive?


51 minutes ago, Carrock said:

Two equally badly driven cars collide and kill people. If you jail the human driver of one, what do you do about the other car, which is autodrive?

Mandate a software upgrade from the manufacturer to address the bug or gap.

Also, it’s extremely rare that humans are jailed for vehicular manslaughter since (as the name itself suggests) these are accidents. 


1 hour ago, Carrock said:

My view is that until these cars are fairly definitively proved to be safer than human driven vehicles they should have severe limitations placed on their use and then the car (manufacturer) should be liable for accidents caused by driving error.

 

That seems to be a reasonable expectation. And while these cars may not have definitively been proved safer than human driven vehicles, I also don't see where they have definitively been proven less safe than human driven vehicles, which is what it seemed you were implying.

Regulations for AI cars may be quite different than those for human driven cars, just as commercial trucks and taxis have different regulations. The car itself may not have to be 'safer', or even as safe, as long as the overall environment meets safety expectations.


1 hour ago, iNow said:

Mandate a software upgrade from the manufacturer to address the bug or gap.

Well, that's certainly a terrible punishment for unintentionally killing people.

1 hour ago, iNow said:

Also, it’s extremely rare that humans are jailed for vehicular manslaughter since (as the name itself suggests) these are accidents. 

Best not to visit Texas, which has more jail options than Iowa...

Quote

 

Here are some examples of common situations that can lead to vehicular manslaughter charges:

  • Using your phone
  • Driving while under the influence of drugs or alcohol (including prescription drugs)
  • Taking your eyes off the road (for example, reaching into the backseat to grab something)
  • Ignoring safety signs (anything from an illegal U-turn to bad windshield wipers)
  • Driving over or under the speed limit
  • Racing other vehicles
  • Driving with little to no sleep

 

OTOH the maximum penalty in Texas is only 20 years while it's 25 years in Iowa.

1 hour ago, zapatos said:

That seems to be a reasonable expectation. And while these cars may not have definitively been proved safer than human driven vehicles, I also don't see where they have definitively been proven less safe than human driven vehicles, which is what it seemed you were implying.

Regulations for AI cars may be quite different than those for human driven cars, just as commercial trucks and taxis have different regulations. The car itself may not have to be 'safer', or even as safe, as long as the overall environment meets safety expectations.

Generally agree, except I'd be reluctant to replace dangerous manual vehicles with vehicles which may or may not be safer.

Edited by Carrock
Couldn't edit post properly - but then automerged...

1 hour ago, Carrock said:

Generally agree, except I'd be reluctant to replace dangerous manual vehicles with vehicles which may or may not be safer.

What I'm suggesting is that regulations may make an inherently more dangerous vehicle safer to operate. Your example of limiting their speed to 10mph would be a regulation making up for a weakness in an AI vehicle. A real-life example is the regulation of truck drivers to ensure inherently more dangerous vehicles (18-wheelers, for example) are operated more safely, thus making the overall use of tractor-trailers a reasonably safe vehicle system.

