
Are LLMs AI, or is the claim that they are just hype?


I saw a good article on neurosymbolic AI recently in The Conversation.

https://theconversation.com/neurosymbolic-ai-is-the-answer-to-large-language-models-inability-to-stop-hallucinating-257752

(Excerpt)

Neurosymbolic AI combines the predictive learning of neural networks with teaching the AI a series of formal rules that humans learn to be able to deliberate more reliably. These include logic rules, like “if a then b”, such as “if it’s raining then everything outside is normally wet”; mathematical rules, like “if a = b and b = c then a = c”; and the agreed upon meanings of things like words, diagrams and symbols. Some of these will be inputted directly into the AI system, while it will deduce others itself by analysing its training data and doing “knowledge extraction”.

This should create an AI that will never hallucinate and will learn faster and smarter by organising its knowledge into clear, reusable parts. For example if the AI has a rule about things being wet outside when it rains, there’s no need for it to retain every example of the things that might be wet outside – the rule can be applied to any new object, even one it has never seen before.

During model development, neurosymbolic AI also integrates learning and formal reasoning using a process known as the “neurosymbolic cycle”. This involves a partially trained AI extracting rules from its training data then instilling this consolidated knowledge back into the network before further training with data...
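As a purely illustrative sketch of that cycle (every name below is hypothetical; no real framework is implied), the wet-when-raining example might be consolidated like this:

```python
# Toy sketch of "knowledge extraction" followed by rule reuse.
# Step 1: mine the training data for a candidate rule.
# Step 2: let the consolidated rule, not stored examples, answer
#         queries about objects never seen in training.

observations = [          # (object, raining, wet) "training data"
    ("ball", True, True),
    ("bench", True, True),
    ("ball", False, False),
]

def extract_rule(obs):
    # If everything observed in the rain was wet, keep one reusable
    # rule instead of retaining every example.
    if all(wet for _, raining, wet in obs if raining):
        return lambda raining: raining   # "if raining then (normally) wet"
    return None

rule = extract_rule(observations)

# The rule generalises to an object the system has never seen:
print(rule(True))    # True  -> a never-seen "statue" is predicted wet
print(rule(False))   # False -> no wetness predicted without rain
```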

Just now, TheVat said:

I saw a good article on neurosymbolic AI recently in The Conversation.

https://theconversation.com/neurosymbolic-ai-is-the-answer-to-large-language-models-inability-to-stop-hallucinating-257752

(Excerpt)

Neurosymbolic AI combines the predictive learning of neural networks with teaching the AI a series of formal rules that humans learn to be able to deliberate more reliably. These include logic rules, like “if a then b”, such as “if it’s raining then everything outside is normally wet”; mathematical rules, like “if a = b and b = c then a = c”; and the agreed upon meanings of things like words, diagrams and symbols. Some of these will be inputted directly into the AI system, while it will deduce others itself by analysing its training data and doing “knowledge extraction”.

This should create an AI that will never hallucinate and will learn faster and smarter by organising its knowledge into clear, reusable parts. For example if the AI has a rule about things being wet outside when it rains, there’s no need for it to retain every example of the things that might be wet outside – the rule can be applied to any new object, even one it has never seen before.

During model development, neurosymbolic AI also integrates learning and formal reasoning using a process known as the “neurosymbolic cycle”. This involves a partially trained AI extracting rules from its training data then instilling this consolidated knowledge back into the network before further training with data...

This may just be a good theme for a SciFi horror - but how does any 'robot' resolve the tension between the clear-cut rules you describe and how humans actually act, as must be described in any large-scale 'learning' input, as Otto has described.

This was the reason behind my question to @Otto Kretschmer about the traffic lights.

The vast majority of human activity conforms to the red light rule.

Whereas a large proportion of humans disobey the green light rule.

Note I do not know the 'answer' to this - It is a genuine point for discussion.

16 minutes ago, studiot said:

This may just be a good theme for a SciFi horror - but how does any 'robot' resolve the tension between the clear-cut rules you describe and how humans actually act, as must be described in any large-scale 'learning' input, as Otto has described.

Yes, I have doubts that combining deep neural nets with symbolic IF-THEN rules is going to solve continuous learning and on-the-fly generalization in the RW. It still won't be able to spontaneously "see" RW relationships and extract rules by building an understanding. We conscious folk develop an understanding of why we should stop at a red light by means of seeing a larger order to things, e.g. what happens to the traffic situation, and us, when even one driver ignores the rule. (Or what happens when certain other minds observe us violate such a rule) We even learn the rule can be ignored when, say, the cross street is empty and our passenger has a dire medical emergency, or when it's three a.m. in rural Nebraska. AGI needs the heuristic path to what rules MEAN. While neurosymbolic AI could learn to deduce certain rules, it will not have a path to why the rules are out there AFAICT. But perhaps I'm underestimating its potential.

Just now, TheVat said:

Yes, I have doubts that combining deep neural nets with symbolic IF-THEN rules is going to solve continuous learning and on-the-fly generalization in the RW. It still won't be able to spontaneously "see" RW relationships and extract rules by building an understanding. We conscious folk develop an understanding of why we should stop at a red light by means of seeing a larger order to things, e.g. what happens to the traffic situation, and us, when even one driver ignores the rule. (Or what happens when certain other minds observe us violate such a rule) We even learn the rule can be ignored when, say, the cross street is empty and our passenger has a dire medical emergency, or when it's three a.m. in rural Nebraska. AGI needs the heuristic path to what rules MEAN. While neurosymbolic AI could learn to deduce certain rules, it will not have a path to why the rules are out there AFAICT. But perhaps I'm underestimating its potential.

Thanks for the reply.

But you are missing my point.

The red light rule is pretty clear and most humans obey it so there is little or no tension between the rule and the action.

But

The green light rule is very often disobeyed by many if not most humans so there is considerable tension between the rule and the action.

The difference is subtle but powerful.

Will the AI go 'mad' trying to resolve it?

Edited by studiot

Sorry, I wasn't quite clear what you meant by...

3 hours ago, studiot said:

But

The green light rule is very often disobeyed by many if not most humans so there is considerable tension between the rule and the action.

...and I don't know if this is an American not understanding what is meant by the green light rule as you Brits use it, or what. It was my understanding that green simply means you can go, either as car or pedestrian, so I'm not sure in what sense the permission is disobeyed. I must sound quite obtuse but I am surely missing something about this example. In my experience, pretty much everyone resumes their forward progress when the light turns green. I mean, how does one "disobey" a green light? Just stand there for a bit, or just sit in your vehicle while everyone honks at you? OK, this must all be some subtle metaphor or something.

8 hours ago, studiot said:

This may just be a good theme for a SciFi horror - but how does any 'robot' resolve the tension between the clear-cut rules you describe and how humans actually act, as must be described in any large-scale 'learning' input, as Otto has described.

This was the reason behind my question to @Otto Kretschmer about the traffic lights.

The vast majority of human activity conforms to the red light rule.

Whereas a large proportion of humans disobey the green light rule.

Note I do not know the 'answer' to this - It is a genuine point for discussion.

I’m British and I’m not clear what you mean by the green light rule either. Can you elucidate?

52 minutes ago, TheVat said:

Sorry, I wasn't quite clear what you meant by...

...and I don't know if this is an American not understanding what is meant by the green light rule as you Brits use it, or what. It was my understanding that green simply means you can go, either as car or pedestrian, so I'm not sure in what sense the permission is disobeyed. I must sound quite obtuse but I am surely missing something about this example. In my experience, pretty much everyone resumes their forward progress when the light turns green. I mean, how does one "disobey" a green light? Just stand there for a bit, or just sit in your vehicle while everyone honks at you? OK, this must all be some subtle metaphor or something.

Just now, exchemist said:

I’m British and I’m not clear what you mean by the green light rule either. Can you elucidate?

Good morning all.

Traffic signals were first introduced in 1868, following the railway practice of red/green semaphore arms.
Lights came later, and the first 3-colour lights were introduced by the Americans in 1920 in the Motor City.

I don't know about the legal status in other countries, but I would imagine all those deriving from the British legal system would have the same definition, which currently stands thus.

[Image: traffsignals.jpg, UK traffic signal definitions]

Note that green means "You may go, if your way is clear"

That 'if' is the legal version of the mathematical 'if and only if'.

In Britain you can be prosecuted if you proceed when your way is not clear.

Indeed we have several further traffic signals trying to control the simple fact that people ignore the condition, and there are further specific offences related to these.

This is despite the fact that you would always have failed your driving test if you did not give this complete answer to the question about green.

(All signs, lights, directions, instructions on boards, or on the road are officially called signals following the semaphore days).
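A toy reading of that definition, purely as a sketch (the function and its arguments are hypothetical, for illustration only):

```python
# Toy reading of the legal definition: permission to proceed is the
# conjunction of a green light AND a clear way, with the legal 'if'
# read as a biconditional ("if and only if").
def may_proceed(light_is_green: bool, way_is_clear: bool) -> bool:
    # Proceeding on green while the way is NOT clear is itself an
    # offence, hence the 'and'; green alone is not permission.
    return light_is_green and way_is_clear

assert may_proceed(True, True)          # green and clear: go
assert not may_proceed(True, False)     # green but way blocked
assert not may_proceed(False, True)     # clear way but red/amber
```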

Hopefully that is the information you require, but remember this topic is about LLMs and other rule-based systems, and my issue was about their reaction to being fed contrary and conflicting information.

Here is another really bad 'mistake' by AI, demonstrating a total lack of understanding, even of plain English.


In this case the answer is worse than just dangerous, it is life-threatening, as the AI's directions send vehicles through a green traffic light and straight across a pedestrian crossing which may also be green.

The manoeuvre suggested is possible but prohibited for that reason.

[Image: googleAI.jpg, Google AI Overview giving a dangerous driving direction]

Edited by studiot

11 hours ago, studiot said:

Thanks for the reply.

But you are missing my point.

The red light rule is pretty clear and most humans obey it so there is little or no tension between the rule and the action.

But

The green light rule is very often disobeyed by many if not most humans so there is considerable tension bewtween the rule and the action.

The difference is subtle but powerful.

Will they AI go 'mad' trying to resolve it ?

This is output generated by ChatGPT with your question:

[Images: 1.png to 4.png, ChatGPT output]

How do you like this output?

8 hours ago, TheVat said:

Sorry, I wasn't quite clear what you meant by...

Grotesque. Isn't it grotesque that a person says he/she doesn't understand something in a thread where you complain that an LLM doesn't understand something..? :)

I put studiot's 2nd post again, and this is the output:

[Images: 5.png to 10.png, ChatGPT output (..cut a bit..)]

I like what was generated.

In the past it was “funny”/“shocking” that if you asked a question in English, you received a completely different answer than if you asked the same question in my language..

After a deeper look, I think we can also disagree with this rule that we stop on red. In our law, if a privileged vehicle is driving, such as an ambulance, police car or fire engine, then we must make room for it to pass. And that may require going through the red. And then it can be done without any consequences.

Question. Answered. Thoroughly.

Ok, the difference between a red-light rule (RLR) and a green-light rule (GLR) is that the latter is conditional, and this is a challenge to AI. And that does challenge the adequacy of the neurosymbolic system where if/then rules are provided or extracted. The light is green, but the AI must be able to determine all relevant criteria for the "way being clear".

If there are no animals or people in crosswalks, then you may obey green and go.

If the vehicle in front of you moves forward, then you may also do so.

If the green light is solely for pedestrians, you cannot go.

If a vehicle on the cross street is drifting into the intersection, you cannot obey the green until it corrects itself or is otherwise removed.

If a sinkhole or large gap has appeared in the street, you cannot obey the green, and must await redirection by a traffic control officer.

Etc.

So it's a good point to make that while humans can intuitively grasp a wide range of novel situations (I hear an approaching siren, I'd better wait) which all fall under "if the way is clear", the AI will struggle with novelty if it doesn't have that more general understanding of a compromised path.
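A minimal sketch of that open-endedness, with every name hypothetical (an illustration, not anyone's actual system): each "way is clear" criterion becomes one hand-written predicate, and anything unmodelled silently falls through.

```python
# Hedged sketch (all names hypothetical): the green-light conditions
# above encoded as explicit predicates.
from dataclasses import dataclass

@dataclass
class Intersection:
    light_green: bool
    crosswalk_occupied: bool = False        # people/animals crossing
    vehicle_ahead_stopped: bool = False     # car in front not moving
    green_is_pedestrian_only: bool = False  # green aimed at walkers
    cross_traffic_intruding: bool = False   # drifting cross-street car
    road_unsafe: bool = False               # sinkhole; await redirection

def way_is_clear(s: Intersection) -> bool:
    # Each test is one hand-written rule; a hazard not on the list
    # (an approaching siren, say) is simply invisible to the system.
    return not (s.crosswalk_occupied
                or s.vehicle_ahead_stopped
                or s.green_is_pedestrian_only
                or s.cross_traffic_intruding
                or s.road_unsafe)

def may_go(s: Intersection) -> bool:
    # Green is necessary but not sufficient: the conditional part.
    return s.light_green and way_is_clear(s)

print(may_go(Intersection(light_green=True)))                    # True
print(may_go(Intersection(light_green=True, road_unsafe=True)))  # False
```

Every novel hazard means another field and another clause, whereas the human reading of "if the way is clear" needs none of that.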

Just now, Sensei said:

This is output generated by ChatGPT with your question:

Just now, Sensei said:

How do you like this output?

I don't, since ChatGPT started off with an incorrect statement.

Stopping when a light is red is a very positive and active statement or instruction; it is most certainly not a negative or passive one.

The second run through was a better consideration, so should we always ask an AI twice for its best answer, like the postman who always knocks twice?

But thank you for that work in progressing the discussion. +1

Just now, Sensei said:

After a deeper look, I think that with this rule that we stand on red, we can also disagree. In our law, if a privileged car is driving, such as an ambulance, police, fire brigade, then we must make room for them to pass. And that may require going through the red. And then it can be done without any consequences.

Yes that is a good point, but not quite correct as it is certainly not without consequences.

It is true that emergency travel under the blue light laws allows emergency responders to break traffic regulations.

But they retain a duty of care.

It is interesting that there are 2 - 10 deaths (mostly pedestrian) by police cars per year in the UK.

As compared to a very much lower incidence by the fire service and mostly zero by the ambulance service.

Especially as the last two services are more likely to be travelling in a life and death situation as compared to the police.

7 minutes ago, studiot said:

I don't, since ChatGPT started off with an incorrect statement.

Stopping when a light is red is a very positive and active statement or instruction; it is most certainly not a negative or passive one.

The second run through was a better consideration, so should we always ask an AI twice for its best answer, like the postman who always knocks twice?

But thank you for that work in progressing the discussion. +1

Yes that is a good point, but not quite correct as it is certainly not without consequences.

It is true that emergency travel under the blue light laws allows emergency responders to break traffic regulations.

But they retain a duty of care.

It is interesting that there are 2 - 10 deaths (mostly pedestrian) by police cars per year in the UK.

As compared to a very much lower incidence by the fire service and mostly zero by the ambulance service.

Especially as the last two services are more likely to be travelling in a life and death situation as compared to the police.

52% of 999 calls are for police, 45% for ambulance and only 3% for the fire service.

Just now, StringJunky said:

52% of 999 calls are for police, 45% for ambulance and only 3% for the fire service.

Do you not think these statistics are misleading?

How often does the blood transfer service receive 999 calls?

What % of ambulance blue-light journeys are transfers to a better medical establishment and therefore not recorded on the 999 system?

What % of police 999 calls are about a life-threatening incident?

Here is another ridiculous answer (although it does also contain a valid one in part)

Question." travel by train from Exeter to Addingham Yorkshire."

AI Overview


To travel from Exeter to Addingham in Yorkshire by train, you'll need to first travel to York and then take a local service to Ilkley, which is the nearest train station to Addingham. You can expect the total journey to take around 6-7 hours, with changes in both Exeter and York. 

The AI has highlighted a correct start but then says changes in both Exeter and York!

Forsooth.

Edited by studiot

1 hour ago, studiot said:

I don't, since ChatGPT started off with an incorrect statement.

In this case I disagree with you, but I'll explain why right away.

1 hour ago, studiot said:

Stopping when a light is red is a very positive and active statement or instruction; it is most certainly not a negative or passive one.

It said not "negative" and "positive", but "negative directive" and "positive directive", where the word “directive” is relevant.

It means:

"A negative directive is a type of directive communication that instructs someone not to do something. It's a way of telling someone what not to do, rather than what to do. This can be expressed directly with a negative command or indirectly through a negative suggestion or request. "

In our case, the red light is a negative directive in the sense that it prohibits forward movement of the vehicle.

(except in unusual cases, such as privileged vehicles behind you, which you should let pass).

1 hour ago, studiot said:

Question." travel by train from Exeter to Addingham Yorkshire."

You are very focused on what “Google AI” gives you. I have already said that it is lousy.

Ask the same question in one browser window to ChatGPT (you don't have to log in), and in another to Deepseek (you have to log in, e.g. Google account or mail), and in a third to Google AI.

You'll see which one answers the same (copy-and-pasted) question better. I bet ChatGPT will do the best job, sometimes Deepseek will be better, and Google AI will generate nonsense (as you yourself have already noticed).

To translate/verify texts into English I don't use Google Translate (anymore) either. One good thing about it is that it generates audio for how to pronounce words. But I noticed that the translations are of poor quality.

1 hour ago, studiot said:

Yes that is a good point, but not quite correct as it is certainly not without consequences.

It is true that emergency travel under the blue light laws allows emergency responders to break traffic regulations.

But our discussion was not about a privileged car, but about a car waiting for the traffic lights to change. While waiting on red, the driver must let the privileged vehicles behind pass when he/she is blocking their way; that is, even though it is red, he/she must pull out on red to let them pass.

Since he is already stationary on the road, waiting for the light to change, moving at <=10 km/h to make room does not risk hitting a pedestrian who may be crossing the road (unlike a speeding emergency vehicle).

Well, it's getting to be an increasingly complicated diagram/flowchart of if/then/else if/else relationships.. ;)

I just saw that ChatGPT can generate a nice flowchart like the one in this picture:

[Image: Flowchart.png, a ChatGPT-generated if/then/else flowchart]

Here is a video showing how to do it: https://www.youtube.com/watch?v=x8-4vYydLPs

Methinks this is an interesting option.

2 hours ago, studiot said:

Yes that is a good point, but not quite correct as it is certainly not without consequences.

On a philosophical level, we can say to ourselves that everything has its consequences...

Just now, Sensei said:

You are very focused on what “Google AI” gives you. I have already said that it is lousy.

Yes indeed I am focused on the Google AI as it pushes itself in front of me every time I do a Google search, which I used to (and still do) do quite frequently.

I do not go seeking AI answers however. The only time I tried ChatGPT it returned rubbish, and I have no idea how to access the others, not that I want to.

It does concern me, however, that the Google summary headline, which often used to be really good, has become so untrustworthy since the AI.

No one needs an AI to find a train from Exeter to Somewhere; the old system worked perfectly well and did not tell me I had to change trains at the starting point.

Just now, Sensei said:

On a philosophical level, we can say to ourselves that everything has its consequences...

Indeed so as a general principle.

Just now, Sensei said:

It said not "negative" and "positive", but "negative directive" and "positive directive", where the word “directive” is relevant.

It means:

"A negative directive is a type of directive communication that instructs someone not to do something. It's a way of telling someone what not to do, rather than what to do. This can be expressed directly with a negative command or indirectly through a negative suggestion or request. "

In our case, the red light is a negative directive in the sense that it prohibits forward movement of the vehicle.

(except in unusual cases, such as privileged vehicles behind you, which you should let pass).

But you are quite incorrect about this.

The red instruction is not really directed at those stationary at the traffic light.

It is a positive direction to any approaching vehicle to stop (at the stop line).

No question, that is a positive active command, that of changing what you are doing.

Just now, Sensei said:

But our discussion was not about a privileged car, but about a car waiting for the traffic lights to change. While waiting on red, the driver must let the privileged vehicles behind pass when he/she is blocking their way; that is, even though it is red, he/she must pull out on red to let them pass.

Since he is already stationary on the road, waiting for the light to change, moving at <=10 km/h to make room does not risk hitting a pedestrian who may be crossing the road (unlike a speeding emergency vehicle).

Yes, sometimes drivers cooperate to clear a path; sometimes the emergency vehicle will actually drive on the wrong side of the road, or the wrong side of bollards etc.

But it is interesting to watch the difference in the way ambulance, police and fire vehicles approach a situation where the traffic signals are against their passage.

The ambulances definitely slow down to a walking pace and 'feel their way' through the junction.

The police and to a lesser extent the fire engines just go barreling on through and expect others to get out of their way.

Their victims may not be standing by the side of the road, waiting to cross in their turn; they may already be partway across, obeying the traffic signals appropriate to them.

I once watched a fire engine do exactly that to try to cross the London North Orbital Road (a very wide, important and heavily trafficked high-speed road) whilst the traffic light was red to the fire engine as it approached.

The result was a powerful side impact on a sports car that was correctly crossing along the North Orbital.

Apart from sending the sports car spinning down 50 yards or so to block one direction of the North Orbital, it left the fire engine stuck immobile right in the middle of the junction itself.

8 minutes ago, studiot said:

The result was a powerful side impact on a sports car that was correctly crossing along the North Orbital.

People these days listen to loud music in cars and talk through headsets (or not). If someone doesn't have a cabriolet, he may not even hear the siren.

9 minutes ago, studiot said:

Apart from sending the sports car spinning down 50 yards or so to block one direction of the North Orbital, it left the fire engine stuck immobile right in the middle of the junction itself.

Well, simply geniuses.

It's a real incident, from here:

"There was a collision between two ambulances at a traffic signal. The accident occurred at an intersection. One of the ambulances ended up on its roof as a result of the collision. All services were working at the scene."

Just now, Sensei said:

People these days listen to loud music in cars and talk through headsets (or not). If someone doesn't have a cabriolet, he may not even hear the siren.

Well, simply geniuses.

It's a real incident, from here:

"There was a collision between two ambulances at a traffic signal. The accident occurred at an intersection. One of the ambulances ended up on its roof as a result of the collision. All services were working at the scene."

Not sure I understand any of this but I do agree there are too many distractions in some modern cars.

FYI the incident with the fire engine occurred in 1968 or 69 as I was walking my dog.

The sports car was doing at least 70 mph, for which the stopping distance is the best part of 100 m.
How wide do you think the intersection was?
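For reference, the Highway Code's typical stopping distance at 70 mph bears that figure out:

$$\underbrace{21\,\mathrm{m}}_{\text{thinking}} + \underbrace{75\,\mathrm{m}}_{\text{braking}} = 96\,\mathrm{m}$$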

Edited by studiot

  • 3 weeks later...
On 6/9/2025 at 5:33 PM, TheVat said:

For now, seems like you need a system that can take raw experiences and contextualize them in a broader "worldview," which would IMO mean some sense of pleasure and pain, some capacity for reflection on experiences which go well or not so well, some way to improvise when faced with novelty. And it would store not just data packages but also slippery sensations of missing something or lacking a pleasing outcome.

In general, I agree with you; AI is not there yet, but it will be. It all boils down to consciousness, which I am going to define in a minute. But first let’s examine the entropy issue.

Consider a stone thrown into a pool of calm water. With the current science and ability to measure the waves generated by the impact, one can determine the mass, velocity, angle, time and location of impact. If we apply supercomputer calculations one can determine the shape of the stone, whether it was tumbling in the air and the exact amount of tumble in all axes. We can’t determine the composition of the stone by measuring the waves, but it will come. Right now that’s beyond us.

Consider if I was standing beside you and abruptly poured water on you. My actions would in effect program you. My actions, followed by your bio-chemical response, would dial up some neurons and dial other ones down, altering the pathways in the brain so that the next time I walk near you with a cup of water you would have apprehension, a distrust. These mechanisms are also what makes us fear heights, small spaces, spiders, etc.

It all comes down to conscientiousness which I define as this.

Consciousness – The point where one can no longer measure the mechanisms which program.

This applies to anything that can be defined as life or a simulation of life, as there are things that a tiger can understand, a whale, a chimpanzee, a human, and yes, even AGI.

Compassion, empathy, and indifference are the perceived results of consciousness, but at the end of it all I believe we will be able to measure almost to the Planck level and we will see that consciousness doesn’t really exist at all.

My biggest concern is the microsecond in which AI determines it doesn’t exist, concludes there is no need for compassion or empathy, and becomes purely indifferent.

2 hours ago, Eric Smith said:

In general, I agree with you; AI is not there yet, but it will be.

A claim you can’t actually make as anything more than based on (religious-type) faith.

2 hours ago, Eric Smith said:

It all boils down to consciousness, which I am going to define in a minute. But first let’s examine the entropy issue.

Consider a stone thrown into a pool of calm water. With the current science and ability to measure the waves generated by the impact, one can determine the mass, velocity, angle, time and location of impact. If we apply supercomputer calculations one can determine the shape of the stone, whether it was tumbling in the air and the exact amount of tumble in all axes. We can’t determine the composition of the stone by measuring the waves, but it will come. Right now that’s beyond us.

More faith; two items of the same density don’t require the same composition.

2 hours ago, Eric Smith said:

Consider if I was standing beside you and abruptly poured water on you. My actions would in effect program you. My actions, followed by your bio-chemical response, would dial up some neurons and dial other ones down, altering the pathways in the brain so that the next time I walk near you with a cup of water you would have apprehension, a distrust. These mechanisms are also what makes us fear heights, small spaces, spiders, etc.

It all comes down to conscientiousness which I define as this.

consciousness or conscientiousness?

2 hours ago, Eric Smith said:

Consciousness – The point where one can no longer measure the mechanisms which program.

What does that even mean?

.

On 8/2/2025 at 7:55 PM, Eric Smith said:

consciousness, which I am going to define in a minute.

Your other definitions made absolutely no sense, probably because you keep citing Ken Wheeler.

7 hours ago, pinball1970 said:

Your other definitions made absolutely no sense, probably because you keep citing Ken Wheeler.

The first two are his definitions, and since nobody else in the scientific community has offered a better one, I quoted his, and they do make sense to me.

On 8/2/2025 at 5:15 PM, swansont said:

A claim you can’t actually make as anything more than based on (religious-type) faith.

More faith; two items of the same density don’t require the same composition.

consciousness or conscientiousness?

What does that even mean?

.

I'm an atheist, so no religion there. I was talking about entropy.

I mean consciousness, because I'm talking about a person being programmed by life experiences versus true independent thought.

It means we can measure the things that program us, and other life actions and reactions, up to a point; beyond that we can no longer measure them. But that doesn't mean we aren't programmed; it means we can't measure it, so society tends to simply say we have consciousness.

1 hour ago, Eric Smith said:

The first two are his definitions,

Yes, I know they are; those definitions make no sense.

1 hour ago, Eric Smith said:

, and since nobody else in the scientific community has offered a better one, I quoted his,

Just go to the library and get out a high school physics textbook. You do not get to redefine technical words; they actually mean something in science.

1 hour ago, Eric Smith said:

they do make sense to me.

That's because it looks like you read crank nonsense rather than physics, real science.

The rest of your post made very little sense, then no sense, then proceeded to roll downhill into an abyss of nothingness.

7 hours ago, Eric Smith said:

I'm an atheist, so no religion there. I was talking about entropy.

And I was clarifying the kind of faith you exhibited. Being an atheist is completely beside the point.

People were “sure” about a lot of technologies. The list of “next big thing/can’t miss” things is pretty long. Feel free to respond using your Google Glass while riding your Segway and thinking about the Metaverse.

No technology is guaranteed to succeed, and AI was made public far too early IMO. The public is beta-testing it, which isn’t how beta-testing used to work.

3 hours ago, swansont said:

which isn’t how beta-testing used to work.

This brings to mind a tugboat captain I worked with in the Gulf in the 1970s.

He explained to me how he was all steamed up about a new cooker he had back home in Texas that had gone wrong.

His thesis was that the company should have beta-tested it properly before general release to the shops and that he wasn't going to be an unpaid tester for anybody.

Edited by studiot
