

Inspired by a comment from StringJunky, and to avoid derailing a thread further, I started this topic to explore how much of the challenge posed by AI is real and how much of it is just an outdated viewpoint.

SJ mentioned that the introduction of calculators has likely led to some reduction in arithmetic abilities. The rise of Wikipedia and search engines has, to some degree, reduced the ability of folks to look for obscure sources, especially in libraries. But while these tools have reduced certain skills and abilities, they have enabled other, often more efficient, approaches. That is, the much-feared loss of overall competency did not manifest; rather, it resulted in a shift of abilities.

But from an academic/educational viewpoint, AI feels different, and I have struggled a bit to figure out whether my view is just old-fashioned or whether I am looking at a valid problem.

I will preface this by noting that the COVID-19 pandemic has accelerated an already ongoing decline in academic abilities among young folks almost worldwide. My personal opinion is that the decline in literacy is a key element, as this skill affects almost all higher academic abilities. Now, in other threads we have explored uses of AI, and how it could, for example, be a personalized tutor. At this point most leading LLMs tend to be decent at undergrad-level topics.

However, it does not seem to be used that way. Rather, most students use it to bypass the learning process entirely (similar to the example of AI writing emails to be read by AI). Once they arrive at Uni, many struggle with basic comprehension, and as a consequence it is very difficult to teach advanced concepts that build on simpler ones. They struggle to see the connections between those elements, and if they memorize advanced concepts, they cannot extrapolate ideas from them. Perhaps unsurprisingly, they often also struggle to explain what their (likely AI-generated) reports actually mean.

We are thus again at a point where tools seem to make certain skills obsolete. The issue I am having is that the skills in this case are not specialized, but fundamental to how we think. Folks have trouble reading, and there is little evidence that other modes of media consumption (e.g., videos) can fill that gap (most studies point to the opposite).

Folks struggle to connect ideas and synthesize information, and increasingly offload this to software. But without that ability, I do not know how higher learning is possible. In short, if we offload all the mental tasks that, IMO, ultimately make us human, what else do we have left? Where is the space in which the human mind can still prosper? Again, I do think there are scenarios where AI is being used to better one's mind. But since the incentive is to offload, most folks will. And if the system is geared towards that, what would be a possible best-case scenario? I could imagine regulations and other means to rein in AI for certain uses, though our track record in regulating tech has been abysmal at best.

In short, I would like to discuss whether my perspective is just too skewed and missing elements, and if so, which ones?

11 hours ago, CharonY said:

And if the system is geared towards that, what would be a possible best-case scenario?

I think the best case scenario would be that it strengthens one’s ability to ask the right questions.

I for example don’t work in academia, but I have a strong interest in physics. I often play around with ideas in my head which require some mathematical investigation. Unfortunately I don’t have access to advanced CAS such as Maple, and in GR calculating stuff with pen and paper is generally cumbersome and error-prone, especially when it’s not something you do every day for a living. I can do it, because I’ve taught myself how to, but I often make silly mistakes. So nowadays I offload the cumbersome stuff to AI, and just focus on the overarching ideas (caveat - AI does get maths wrong, so one needs to check!!!). That requires me to consider carefully what questions to ask, and how the answers fit into an overall context.
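To give a concrete sense of the kind of cumbersome detail I mean, here is a minimal sketch (illustrative only, not my actual workflow) using the free CAS SymPy, one alternative to Maple or an LLM, to grind out a Christoffel symbol of the Schwarzschild metric. The coordinates, the metric, and the little helper function are just example choices, and as with AI output, the result should still be checked against a textbook.

# Minimal sketch (illustrative only): let a CAS do the tedious tensor algebra.
# Schwarzschild metric in (t, r, theta, phi); geometric units G = c = 1.
import sympy as sp

t, r, theta, phi = sp.symbols('t r theta phi')
M = sp.symbols('M', positive=True)
coords = [t, r, theta, phi]

f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(theta)**2)   # metric g_{mu nu}
g_inv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[a, d] * (
            sp.diff(g[d, c], coords[b])
            + sp.diff(g[d, b], coords[c])
            - sp.diff(g[b, c], coords[d])
        ) for d in range(4)
    ))

print(christoffel(1, 0, 0))   # expect M*(r - 2*M)/r**3, i.e. Gamma^r_{tt}

Swapping in a different metric is a one-line change; the tedious part is automated, but deciding which components to compute and what they mean is still on me.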

So I think AI might (!!!) ultimately help to focus better on the bigger picture, by automating the cumbersome details, just as calculators helped us focus on concepts rather than manual arithmetic. But again, one has to think about the answers one gets, because they are often flawed, meaningless, or straight out wrong.

But of course, it really depends much on how people use it in practice. There are no straightforward answers.

One thing is for sure though - AI is here to stay.

Edited by Markus Hanke

To me, AI is a set of tools; within it we have Large Language Models, Machine Learning, and other tools. I agree it is about asking the right questions, then asking again so that we get closer to the expected answer.

Usually we use tools to save time. An electric saw has not made handsaws obsolete, for example; we just use the right tool for the job we are doing.

As with anything, we need to know what output to expect based on the input, so that we know whether it is correct.

I don't see myself as a Luddite, I just have not really had a huge use case for LLMs to help with writing. It is built into Overleaf, so it is somewhat useful for spelling/grammar checking (or it enhances that feature), as well as for writing LaTeX documents.

As in a previous thread, using AI to read an e-mail, write a summary, and then write a reply seems either lazy or evidence that we are subjected to too much information, in which case we need to change how we do things across the board rather than just trying to make managing it easier.

Paul

17 hours ago, CharonY said:

Inspired by a comment from StringJunky, and to avoid derailing a thread further, I started this topic to explore how much of the challenge posed by AI is real and how much of it is just an outdated viewpoint.

It seems to me your experience in academia is very valuable. Like other posters, I feel sure AI has a big contribution to make, but more selectively than its proponents (who have a strong commercial interest) currently claim. The perceived Luddism is really just a natural reaction against something that is being pushed upon us too far and too fast and without any controls.

Society is already reeling under the impact of social media, which has only recently arrived and is already causing serious damage to the mental health of our children and even to democracy itself. We have not had time to adapt to it and control it properly, though some countries are getting better laws in place as we speak. Now LLMs come along, on top of this and using the self-same channels that we are slowly trying to get under control. These plainly have the potential to make things far worse for society, especially in terms of the mental health and intellectual development of individuals and of broader social cohesion. The most problematic aspect, as I see it, is that both social media and LLMs are designed to induce dependency.

The fight for personal autonomy and against dependency is as old as civilisation itself. You see it in historical concern about alcohol, nicotine and drug abuse, about overindulgence in eating, or sex, and of course about economic and political systems. So pushback against uncontrolled use of LLMs strikes me as a healthy thing. We need time to adjust to its promises and threats.

Edited by exchemist

33 minutes ago, exchemist said:

The perceived Luddism is really just a natural reaction against something that is being pushed upon us too far and too fast and without any controls.

The way I see it, when machines become as generally capable as humans, capitalism will fail. This failure will either lead to a new golden age or an apocalypse. And without any specific plan to deal with such a future, I think an apocalypse would be inevitable.

We're following the same pattern as with all technology: most of us lose the abilities that our grandparents took for granted, and we rely on the specialists to do it for us, like making cloth or servicing the car.

It'll be the same now, but because the technology is intelligence, most of us will rely on the specialists and lose what our grandparents took for granted.

I doubt we'll lose our philosophers/scientists, but I imagine the number of students will decline substantially as AI sophistication increases.

Edited by dimreepr

10 minutes ago, KJW said:

The way I see it, when machines become as generally capable as humans, capitalism will fail. This failure will either lead to a new golden age or an apocalypse. And without any specific plan to deal with such a future, I think an apocalypse would be inevitable.

Is this related to Yanis Varoufakis's concept of "technofeudalism"?

Just imagine what those students could achieve with unlimited extendable shoulders to stand on.

Edited by dimreepr

1 minute ago, exchemist said:

Is this related to Yanis Varoufakis's concept of "technofeudalism"?

I'm not familiar with that. The basic problem will be who owns the machines and what will become of most of humanity when those in power no longer need us.

33 minutes ago, KJW said:

I'm not familiar with that. The basic problem will be who owns the machines and what will become of most of humanity when those in power no longer need us.

But our history is littered with dystopic regimes followed by revolution and a variably brief time of peace, before the previous regime births a polar-opposite dystopic vision.

I think AI has the potential to extend the brief time of peace, much like splitting the atom did, but much more efficiently.

When they realise they don't need the rest of us, they might realise the futility of being superior in terms of things and stuff; they will find a different metric with which to be better, probably/hopefully who's the greatest at my favorite hobby.

If they kill us all, who's gonna serve them drinks?

Robots wouldn't work; they can't suck the jealousy out of them, and they starve.

Potentially, AI is the artificial god that Nietzsche was looking for... 😉

40 minutes ago, dimreepr said:

When they realise they don't need the rest of us, they might realise the futility of being superior in terms of things and stuff; they will find a different metric with which to be better

Where have you been the past year?

Just now, KJW said:

Where have you been the past year?

On the slippery slope to a dystopic future, the odds are it won't kill me. 😉

  • Author
11 hours ago, Markus Hanke said:

So I think AI might (!!!) ultimately help to focus better on the bigger picture, by automating the cumbersome details, just as calculators helped us focus on concepts rather than manual arithmetic. But again, one has to think about the answers one gets, because they are often flawed, meaningless, or straight out wrong.

I agree on that; the issue, though, is that it is a universal tool, and while it can help you get deeper into things, it can also be used to bypass the elements that take effort. Often, this is a good thing, but the issue arises if the bypassed skill is actually a fundamental one. Most students write reports from articles without even reading the summaries. Others are more active and have, for example, created an audio summary that they listen to while doing other things.

While the latter is at least somewhat creative, it is also ineffective, and in person neither group is really able to hold a normal discussion on the topic they were supposed to write and think about. Personally, I think it boils down to a lack of basic reading and comprehension skills, and while I think there is a way AI can assist with that, I think on balance folks will choose the easy way. It is like having a tool that at the same time invites you to work out or to relax and enjoy yourself. Given the chance, most folks will do the latter, and I wonder how we can convince folks to choose the former.

11 hours ago, Markus Hanke said:

One thing is for sure though - AI is here to stay.

That is for sure, but as with social media, I still think that it shouldn't be up to the companies to decide how it is implemented. Because another thing is for sure: they do not have the betterment of mankind in mind.

6 hours ago, exchemist said:

The most problematic aspect, as I see it, is that both social media and LLMs are designed to induce dependency.

The fight for personal autonomy and against dependency is as old as civilisation itself. You see it in historical concern about alcohol, nicotine and drug abuse, about overindulgence in eating, or sex, and of course about economic and political systems. So pushback against uncontrolled use of LLMs strikes me as a healthy thing. We need time to adjust to its promises and threats.

That is a very interesting point, and it makes sense from a corporate viewpoint. I am reminded of studies among children and young adults in which the majority indicated that social media has been a detriment to their mental health and that they see it generally as a bad thing. At the same time, the overwhelming majority won't, or are unable to, stop. When I read those studies, I was reminded of typical addict behaviour.

5 hours ago, dimreepr said:

I doubt we'll lose our philosophers/scientists, but I imagine the number of students will decline substantially as AI sophistication increases.

I am not sure why you think we won't lose scientists or philosophers. There is actually a pathway to that. In academia (where most of the research happens), focus on vocational aspects and certificates rather than on the quality of education. That opens the way to reduce faculty and replace them with sessionals supported by AI. Then, rather than funding research broadly, focus narrowly on priority areas and train AI to address them. Over time, areas where AIs are still weak or yield poor results (there are molecular biology areas with huge gaps where, from all I have read, AI is massively underperforming) will be considered non-priority. That shift will save millions if not billions, which is what most folks will focus on. The role of scientists will be limited to what is probably already starting to be the case with senior devs (i.e. agential supervision rather than leading research groups).

5 hours ago, dimreepr said:

Just imagine what those students could achieve with unlimited extendable shoulders to stand on.

There will be other AIs on top of those. Students will lack the boost to climb to the first set of shoulders.

4 hours ago, dimreepr said:

I think AI has the potential to extend the brief time of peace, much like splitting the atom did, but much more efficiently.

Considering how the US government is wielding AI, do you really think that there will be short-term benefits beyond boosting the AI-economy?

On 2/19/2026 at 11:52 AM, CharonY said:

However, it does not seem to be used that way. Rather, most students use it to bypass the learning process entirely (similar to the example of AI writing emails to be read by AI). Once they arrive at Uni, many struggle with basic comprehension, and as a consequence it is very difficult to teach advanced concepts that build on simpler ones.

I wonder how long this has been building, with AI just being the latest tool. As came up in a discussion last year, CliffsNotes/Monarch Notes summaries of books have been around for at least 50-60 years, but you could ask questions that got into enough detail/nuance that you could tell who had read the book.

I can see how AI can be abused for essays/papers, but I’m not sure how AI comes into play for exams

I saw some articles talking about bringing back blue-book exams (i.e. handwritten work) which suggests that academia had gotten away from that, and I was surprised. What have they been doing to test students?

1 hour ago, CharonY said:

That is a very interesting point, and it makes sense from a corporate viewpoint. I am reminded of studies among children and young adults in which the majority indicated that social media has been a detriment to their mental health and that they see it generally as a bad thing. At the same time, the overwhelming majority won't, or are unable to, stop. When I read those studies, I was reminded of typical addict behaviour.

I am currently reading "Careless People", by Sarah Wynn-Williams, a New Zealander who used to work at Meta (or Facebook as it was) and left because of disillusionment about its true goals. I have only read a quarter so far but it is already clear that, for all the high-minded talk at the time, the only real goal was more hours on-line, by more eyeballs, and collecting as much monetisable personal data as possible, with little consideration of the consequences. LLMs are fairly clearly a means to turbocharge that effort.

As yet, there is almost no body of law or regulation to moderate and balance this drive of the tech corporations with what is in the public interest. The EU is working on it - which is one reason why Vance and the tech bros hate the EU more than China or Russia and desperately yearn for it to fail (Exhibit A being the new US National Security Strategy).

Edited by exchemist

7 hours ago, exchemist said:

It seems to me your experience in academia is very valuable. Like other posters, I feel sure AI has a big contribution to make, but more selectively than its proponents (who have a strong commercial interest) currently claim. The perceived Luddism is really just a natural reaction against something that is being pushed upon us too far and too fast and without any controls

That’s a big objection and it’s not just AI - a lot of tech has been pushed out as a product that should still be in beta test. Self-driving is another; people have admitted they need the data from an unready product out in the world in order to improve it. I’ve bought computer games that were/are doing rather significant “gameplay adjustment” updates for years after the initial release.

I think there wouldn’t be this kind of pushback if AI actually did its job reliably.

Another issue is the breadth of the rollout, the desire to put it seemingly everywhere, without regard to how appropriate it is. I’ve seen that happen elsewhere. When the US Navy rolled out a standardized computer system ~20 years ago (to make the IT department’s job easier), we had similar resistance, because it seemed like it was a decision made by someone who only needed the Microsoft Office products on a Windows machine to do their job, and they decided that’s all that everyone else needed, too. It’s a lack of awareness of how others use the technology, which is a form of managerial incompetence.

  • Author
2 hours ago, swansont said:

I can see how AI can be abused for essays/papers, but I’m not sure how AI comes into play for exams

It is mostly an issue for online exams (there is a big push towards online learning, due to (a) the ability to reach more folks, but also (b) to save money).

2 hours ago, swansont said:

I saw some articles talking about bringing back blue-book exams (i.e. handwritten work) which suggests that academia had gotten away from that, and I was surprised. What have they been doing to test students?

Administration has discouraged in-class work and wants profs to provide a more interactive experience. As such, lengthier writing assignments (for credit) have become take-home assignments. Short-answer tests are still done in class most of the time.

2 hours ago, swansont said:

That’s a big objection and it’s not just AI - a lot of tech has been pushed out as a product that should still be in beta test. Self-driving is another; people have admitted they need the data from an unready product out in the world in order to improve it. I’ve bought computer games that were/are doing rather significant “gameplay adjustment” updates for years after the initial release.

But it is not only the usability side; whole purposes change. Facebook originally marketed itself as a privacy-driven sharing platform when Myspace was still relevant. Once they had a monopoly, their whole purpose became extracting and selling client information. I have no doubt that this will also influence how AI is implemented.

2 hours ago, exchemist said:

As yet, there is almost no body of law or regulation to moderate and balance this drive of the tech corporations with what is in the public interest. The EU is working on it - which is one reason why Vance and the tech bros hate the EU more than China or Russia and desperately yearn for it to fail (Exhibit A being the new US National Security Strategy).

What is also somewhat troubling to me is that folks cannot even predict the strategic risk or benefit of AI, or even AGI. Maybe I am reading the wrong articles, but there is enough handwaving there to power the whole endeavour via wind energy. I am also curious about how effective the regulations are going to be. Parts of the EU do have stronger privacy laws, but legislation often fails to keep up with tech.

5 hours ago, swansont said:

because it seemed like it was a decision made by someone who only needed the Microsoft Office products on a Windows machine to do their job, and they decided that’s all that everyone else needed, too. It’s a lack of awareness of how others use the technology, which is a form of managerial incompetence

Didn't most other large organisations make the same decision/mistake?

For around 10 years, I wrote about what had happened 100 years earlier on that day. Even with the internet, it would often take 3-4 hours to find interesting and verifiable news of the day, so it is hard to imagine how long it would have taken to go to the library and look all of it up. Having said that, I think the discussion ought to be more about the purpose and goals of deploying AI everywhere. If it is simply to make as much money as possible, I am against it. If the purpose is to make life for everyone easier/safer/better, then we need to plan how it is to be implemented and move forward. Unfortunately, I see far more of the former than the latter.

The goal of a corporate capitalist system is shareholder profits. That's it. It's up to government to counter this with ethical and moral guardrails, so any society where government is primarily answering to corporate donors and lobbying will suffer erosion of human values. AI is a textbook case of this. The main driver of economic growth in the US is now digital tech, software and AI, so You Know Who will do anything he can to favor those sectors and take personal credit for rosy GDP figures, etc.

Students won't return to that deep, long-attention-span learning that is so vital to knowledge and competence unless forced to do so by what will likely be seen as Draconian measures. Maybe some crisis will bring that kind of change, e.g. vast numbers of employees who can't think critically, solve problems, or innovate, confronted by some disaster where those are sorely needed. Companies will maybe enter choppy waters where they discover AI is as likely to bring paperclipmageddon as to do any nuanced problem solving.

MIT Technology Review

Our Fear of Artificial Intelligence

A true AI might ruin the world—but that assumes it’s possible at all.

I'm not a Luddite; I was building 8-bit computers in 1979, programming Z80 assembly, faithfully reading BYTE magazine (and Steve Ciarcia's hardware column) along with Microprocessor Report in the university library, and waiting for the micro-computer revolution, which came 15 years later.

I do think that AI has its uses, as others have outlined.
I don't think it's the panacea that a large number of people think can substitute for independent thought.

The 'group-think' of LLMs is no substitute; think for yourself, before someone, or something ( AI ), does your thinking for you.

Edited by MigL
