
What is the legal significance of evidence provided by AI?


The traffic authorities have announced that a bypass near me is to get a package of AI cameras to try to improve the safety record of this stretch of road.

Apparently the AI will monitor for such things as not wearing seat belts, talking on the phone whilst driving, eating whilst driving, children and animals incorrectly secured in the cabin, and so on.

Considering all the recent discussions about AI lying to us to satisfy its programming, how should we consider evidence of wrongdoing provided by AI?

In the use case you cite, I expect AI to flag potential infractions, which will then need to be validated by human reviewers before penalties are assigned.

Accuracy rates will never reach 100% IMO, but my intuition is that 95–97%, based on feedback from real-life human reviewers, is easily achievable (and that feedback can also be used to train and further refine the AI).

Getting to 99% accuracy would likely remove the need for independent human review, and any flaws in the process would likely be addressed through appeals. Video evidence will always be there, since that's what initiated the process (and AFAIK the AI won't be generating fake videos here).
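The flag-then-review loop described above could be sketched like this. This is a minimal illustration only; all the names (`Detection`, `ReviewPipeline`, `flag_threshold`) are hypothetical and not taken from any real enforcement system:

```python
# Hypothetical sketch of an AI flag -> human review -> feedback loop.
from dataclasses import dataclass, field


@dataclass
class Detection:
    frame_id: str
    offence: str          # e.g. "no_seatbelt", "phone_in_hand"
    confidence: float     # model score in [0, 1]


@dataclass
class ReviewPipeline:
    flag_threshold: float = 0.8                   # below this, nothing is flagged
    feedback: list = field(default_factory=list)  # (Detection, upheld) pairs

    def triage(self, detections):
        """Pass only high-confidence detections on to human reviewers."""
        return [d for d in detections if d.confidence >= self.flag_threshold]

    def human_review(self, detection, upheld):
        """Record the reviewer's verdict; this becomes the retraining signal."""
        self.feedback.append((detection, upheld))
        return upheld
```

The key design point is that the model never issues a penalty directly: low-confidence detections are dropped at triage, and everything else waits for a human verdict, which is logged so it can later be fed back into training.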


1 hour ago, studiot said:

The traffic authorities have announced that a bypass near me is to get a package of AI cameras to try to improve the safety record of this stretch of road.

Apparently the AI will monitor for such things as not wearing seat belts, talking on the phone whilst driving, eating whilst driving, children and animals incorrectly secured in the cabin, and so on.

Considering all the recent discussions about AI lying to us to satisfy its programming, how should we consider evidence of wrongdoing provided by AI?

I think the problem here is on the input side of the equation, but as @iNow suggests that would be reduced by human intervention on the output side.

39 minutes ago, iNow said:

Video evidence will always be there, since that's what initiated the process (and AFAIK the AI won't be generating fake videos here).

They might just do it for a laugh, to start with... ☹️

1 hour ago, iNow said:

Video evidence will always be there, since that's what initiated the process (and AFAIK the AI won't be generating fake videos

I guess one problem with this is where funding cuts lead to labor reduction resulting in some photos not being reviewed by humans, so a ticket (which could be quite an imposition on someone with long work hours, limited funds, etc) is sent before common sense ("he just popped a mint in his mouth, he's not eating a meal") can be applied.

Also I'm a bit puzzled that talking on the phone would be a ticketable offense - texting I can see, and that's illegal in most states AFAIK, but not sure how much conversing on a cellphone differs from talking with a passenger (especially if one is using speaker or Bluetooth, and so both hands can drive). Same reservations about telling people how to secure their pets. Some cats are calmer in someone's lap rather than a carrier - do we need Big Brother to make all those decisions for us?

Also, given that many people don't actually like to be surveilled, announcing that you are installing AI cams on a stretch of road could cause a sharp rise in motorists taking alternate routes (I would definitely be one of them) and in turn making those routes overcrowded and more dangerous.

51 minutes ago, TheVat said:

Also I'm a bit puzzled that talking on the phone would be a ticketable offense - texting I can see, and that's illegal in most states AFAIK, but not sure how much conversing on a cellphone differs from talking with a passenger (especially if one is using speaker or Bluetooth, and so both hands can drive).

I think they're talking about having the phone in your hand up to your ear, unless the hands-free auto packages haven't made it to the UK. They may be having trouble with smartphone map applications as well. Otherwise you're right, it's not that different from talking to a passenger.

I’m guessing this is machine learning rather than LLM-driven. Pattern recognition of seatbelts and not having two hands on the wheel, etc., like facial recognition.

I wonder how easily it would be fooled by the t-shirts that display a printed shoulder seatbelt, or how often it generates false positives from unexpected situations. I hope it's been sufficiently vetted, rather than using initial deployment as a beta test.

I also assume they send the image of the alleged infraction to you so you can potentially challenge it. (Reminds me of a Columbo episode where he figured out a traffic camera was being spoofed; he realized that there was no shadow under the perp’s nose, so it must have been a picture, thus denying the perp an alibi)

1 hour ago, TheVat said:

Also I'm a bit puzzled that talking on the phone would be a ticketable offense - texting I can see, and that's illegal in most states AFAIK, but not sure how much conversing on a cellphone differs from talking with a passenger (especially if one is using speaker or Bluetooth, and so both hands can drive).

It's been my experience that talking on the phone vs talking to a passenger seems much different. In trying to decide why it felt that way to me I concluded it is because the person on the phone is not sharing the driving experience with me and can distract me by continuing to talk when I need to concentrate on driving. A person in the car with me recognizes when I am dealing with a situation that requires my full attention, and thus quits talking for a moment.

1 hour ago, TheVat said:

I guess one problem with this is where funding cuts lead to labor reduction resulting in some photos not being reviewed by humans, so a ticket (which could be quite an imposition on someone with long work hours, limited funds, etc) is sent before common sense ("he just popped a mint in his mouth, he's not eating a meal") can be applied.

Or worse, just reflexively covering your mouth/face for a yawn, cough, or sneeze.

1 hour ago, TheVat said:

Same reservations about telling people how to secure their pets. Some cats are calmer in someone's lap rather than a carrier - do we need Big Brother to make all those decisions for us?

The USA was behind the UK in requiring the wearing of seatbelts, and subsequently the UK introduced requirements for placing animals either behind an impenetrable screen or in a suitable restraint harness.

I'm not agreeing or disagreeing as the subject is the legal implications of AI use.

Also authorities with some legal powers are increasingly using private sub contractors, who do not share these powers but sometimes act as if they do.

There have been several cases recently in the news where these contractors demanded £10,000 for overstaying parking and, after losing cases in the high court, put out statements claiming they are 'in the right.'

3 hours ago, zapatos said:

It's been my experience that talking on the phone vs talking to a passenger seems much different. In trying to decide why it felt that way to me I concluded it is because the person on the phone is not sharing the driving experience with me and can distract me by continuing to talk when I need to concentrate on driving. A person in the car with me recognizes when I am dealing with a situation that requires my full attention, and thus quits talking for a moment.

I know what you mean - I've had phone conversations where I occasionally have to say "gotta deal with traffic for a minute" (or if I had to respond in the moment to traffic I will just explain I need them to repeat what they said) and they always understand. I always prioritize my attention to the road when conversing, either way, but I don't assume everyone does that. I recall being a passenger with someone who would look at people in the car while talking with them, and it was pretty unnerving.

3 hours ago, studiot said:

Also authorities with some legal powers are increasingly using private sub contractors, who do not share these powers but sometimes act as if they do.

Contracting out has several woes. In prisons for example there may be a push towards profit which results in guards getting rewarded for catching inmates out on petty infractions because that can be used to extend their stays and ensure filled beds. Anything profit-centered in legal enforcement can encourage excessive zeal to ticket people or detain them as much as possible. There's an implicit pressure to judge harshly and be inflexible - this in turn reduces public respect for the law.

5 hours ago, studiot said:

Also authorities with some legal powers are increasingly using private sub contractors, who do not share these powers but sometimes act as if they do.

There have been several cases recently in the news where these contractors demanded £10,000 for overstaying parking and, after losing cases in the high court, put out statements claiming they are 'in the right.'

In the US some contractors who controlled the red-light cameras were found to have adjusted the timing so the amber was shorter than required by statute. Invalidated a lot of tickets. A lot of places got rid of them after all that bad press.

If you're okay with cameras read by humans, I'm not seeing a substantive difference between that and a system whereby the first pass is by AI and then verified by a human. It sounds like a way for small towns to raise revenue during hard times, so it's the concept that isn't to my taste; the methodology is unimportant.

It might be slightly off-topic, and while a narrow use case like road safety might be less questionable, I am wondering about privacy issues regarding the collection of surveillance data in the age of AI. The UK, Germany, and other countries do have some sort of data protection laws, and I believe that generally speaking they are supposed to be for narrow use cases. But given the relative ease of widening the scope with fast-moving tech, I am a bit concerned about oversight.

4 minutes ago, CharonY said:

It might be slightly off-topic, and while a narrow use case like road safety might be less questionable, I am wondering about privacy issues regarding the collection of surveillance data in the age of AI. The UK, Germany, and other countries do have some sort of data protection laws, and I believe that generally speaking they are supposed to be for narrow use cases. But given the relative ease of widening the scope with fast-moving tech, I am a bit concerned about oversight.

I agree the ramifications run very wide. +1

On 8/25/2025 at 3:21 PM, TheVat said:

I guess one problem with this is where funding cuts lead to labor reduction resulting in some photos not being reviewed by humans, so a ticket (which could be quite an imposition on someone with long work hours, limited funds, etc) is sent before common sense ("he just popped a mint in his mouth, he's not eating a meal") can be applied.

Also I'm a bit puzzled that talking on the phone would be a ticketable offense - texting I can see, and that's illegal in most states AFAIK, but not sure how much conversing on a cellphone differs from talking with a passenger (especially if one is using speaker or Bluetooth, and so both hands can drive). Same reservations about telling people how to secure their pets. Some cats are calmer in someone's lap rather than a carrier - do we need Big Brother to make all those decisions for us?

Also, given that many people don't actually like to be surveilled, announcing that you are installing AI cams on a stretch of road could cause a sharp rise in motorists taking alternate routes (I would definitely be one of them) and in turn making those routes overcrowded and more dangerous.

I think the problem is that we already rely on security cameras, both individually and nationally; "we" also don't want to pay for enough police to scan the images we've allowed to be taken.

This is the perfect, almost inevitable, testing ground for AI to be accepted by the public. If it gets enough hits to drown out the noise of the occasional innocent, then the public are happy to accept its protection; when they're the innocent, it's too late to object.

  • 3 months later...

I wonder how it takes into account whether the vehicle is being piloted by AI or a human. Should it be illegal for a person in an automated vehicle to eat lunch, talk on the phone, light a cigar or even sleep?

1 hour ago, npts2020 said:

I wonder how it takes into account whether the vehicle is being piloted by AI or a human. Should it be illegal for a person in an automated vehicle to eat lunch, talk on the phone, light a cigar or even sleep?

Who is legally culpable if an AI-piloted vehicle causes an accident or breaks the law?

56 minutes ago, swansont said:

Who is legally culpable if an AI-piloted vehicle causes an accident or breaks the law?

It should be the car manufacturer, just as if any part failed not due to the driver's incompetence or negligence. If the owner fails to follow mandated protocols in their vehicle's upkeep, then they are liable. It is a sticky one with grey areas, I think, with some not thought of or encountered yet.

24 minutes ago, StringJunky said:

It should be the car manufacturer, just as if any part failed not due to the driver's incompetence or negligence. If the owner fails to follow mandated protocols in their vehicle's upkeep, then they are liable. It is a sticky one with grey areas, I think, with some not thought of or encountered yet.

I agree; I think this is the basis of lawsuits about accidents in “self-driving” mode (and the disclaimers about how it’s not really self-driving)

I wonder when we’ll get to the point when the issue isn’t whether an accident is the fault of the automated system, but whether a human could have reasonably avoided it while the computer did not

1 hour ago, swansont said:

I agree; I think this is the basis of lawsuits about accidents in “self-driving” mode (and the disclaimers about how it’s not really self-driving)

I wonder when we’ll get to the point when the issue isn’t whether an accident is the fault of the automated system, but whether a human could have reasonably avoided it while the computer did not

It's going to be a whole new learning curve.

11 hours ago, StringJunky said:

It's going to be a whole new learning curve.

Indeed. AI is bound to start doing the heavy lifting in building a legal case; it's much cheaper and more convenient for the culpable.

21 hours ago, swansont said:

Who is legally culpable if an AI-piloted vehicle causes an accident or breaks the law?

At present it is the driver in most cases but that is currently being litigated and IMO when automation becomes widespread it will be the non-automated part of the incident which will be at fault. I would think it to be extremely rare for automated vehicles to run into each other or break traffic laws.

1 hour ago, npts2020 said:

At present it is the driver in most cases but that is currently being litigated and IMO when automation becomes widespread it will be the non-automated part of the incident which will be at fault. I would think it to be extremely rare for automated vehicles to run into each other or break traffic laws.

I was under the impression that the car companies are trying to blame drivers because “self-driving” doesn’t actually mean self-driving, owing to fine print and disclaimers. Tesla is being sued for false advertising because they had promised that capability.

On 11/30/2025 at 4:20 PM, swansont said:

I was under the impression that the car companies are trying to blame drivers because “self-driving” doesn’t actually mean self-driving, owing to fine print and disclaimers. Tesla is being sued for false advertising because they had promised that capability.

This is all true but I fully expect to see AI drivers become significantly better than any human ones in the near future, if they aren't already. Eventually, even the courts will have to take that into consideration.

15 minutes ago, npts2020 said:

This is all true but I fully expect to see AI drivers become significantly better than any human ones in the near future, if they aren't already. Eventually, even the courts will have to take that into consideration.

But that won't matter; the liability lies with whoever is at fault. So I think we won't get true autonomous vehicles until the companies accept that liability, or con the customers into accepting it. And to get customers to accept it, the computer being better than the average driver isn't enough, because most people think they are good drivers. The computer has to be better than people think they are.

And that includes the accident avoidance I mentioned earlier - I think most people won't accept getting into an accident even if it's the other driver's fault, owing to injury (money compensation vs chronic pain/permanent disability) and just the hassle of getting a car fixed, even at minimal monetary cost.

ETA - One big issue is/is going to be convincing people they're at fault when a human-driven and a computer-driven vehicle get into an accident. Another is accidents involving pedestrians.
