
ChatGPT and science teaching


CharonY


So, during the pandemic I switched to take-home exams. I gave my class 8 days to complete the exam; the only rule was "Don't copy from each other or plagiarize, I will use Turnitin to make sure." All other sources were fair game. I altered the in-person exam with minor changes to make it a little less easy to simply Google the answers, then sent it.

The same proportion of students got As, the same proportion failed. The average went from a high C to a low B. I was flabbergasted - to me that exam should have been a fucking gift, but it turns out when you write questions that necessitate synthesizing two ideas, Google doesn't help so much. 

As an aside, I had a student once use a paraphrasing app and get caught by me when it changed the word "parsimony" to "niggardly", which made for an interesting "see me in my office" moment. 

After shifting to a government science position this year, I no longer teach, but another key thing I can't see ChatGPT getting right is citing the right sources. You can usually tell if an essay is well researched by the citations a student chooses - there's usually a "greatest hits" of seminal citations in a field that a switched-on student will pick up on, whereas an essay slammed together at the last minute will cite a random grab bag of obscure papers.


1 hour ago, Arete said:

another key thing I can't see ChatGPT getting right is citing the right sources

I saw a twitter thread that showed a case where the AI just fabricated the sources, presumably using the most common terms and most-cited authors, because that’s how the algorithm works.
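As a rough illustration of how one might catch that in practice - purely a sketch of my own, assuming the public Crossref REST API and the requests library, with a made-up cited title - something like this could flag references that don't resolve to any real record:

```python
# Hedged sketch only: check whether a model-supplied reference resolves to a
# real bibliographic record via the public Crossref REST API.
# The example title below is invented purely for demonstration.
import requests


def closest_crossref_match(cited_title: str) -> dict | None:
    """Return Crossref's best bibliographic match for a cited title, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None


match = closest_crossref_match("Parsimony and synthesis in take-home assessment, 2021")
if match is None:
    print("No plausible match - the citation may be fabricated.")
else:
    print(match.get("DOI"), match.get("title"))
```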


More attention should be paid to Galactica, which is specifically trained on scientific literature. Even though it is a smaller model trained on a smaller corpus, that data (i.e. scientific literature) is much higher quality, which results in much improved outputs for scientific ends. They also incorporated a 'working memory token' to help the model work through intermediate steps to its outputs - i.e. showing your working. I would love to use this for literature reviews; there's just way too much in most domains for any human to get through.
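For anyone who wants to poke at it, a minimal sketch of prompting one of the released checkpoints - assuming the facebook/galactica-1.3b weights on Hugging Face and the transformers library - looks something like this:

```python
# Minimal sketch: prompting a released Galactica checkpoint via Hugging Face
# transformers. Galactica is OPT-based, hence OPTForCausalLM.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")

# The <work> token is the paper's "working memory" prompt: it asks the model
# to emit its intermediate reasoning before giving the final answer.
prompt = "Question: What is the Michaelis-Menten equation used for?\n\n<work>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```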


ChatGPT may really only be suitable for dull tasks like writing a State of the Union speech in the style of Shakespeare or Seinfeld.

https://apnews.com/article/if-chatgpt-wrote-state-of-the-union-8b4dc4774acd0f4ba4ad2fb1ea768d47

Situations where fabrication or outright nonsense isn't a problem. And for the SOTU address it might be more niggardly, er, parsimonious with word count.


  • 1 month later...
On 1/31/2023 at 12:27 AM, Genady said:

There are people who want to learn. These people don't have any reason to cheat. They do their homework to learn and to practice what they learn. They take exams to see if they got it. They look for mentors. Educators work with these people.

I agree.

While the internet does make it easier to cite references - on arXiv, for example, you can click a link and get the correct BibTeX entry for an article and just copy and paste it - you still have to put the effort in: reading the articles, finding the information you want to refer to, and then writing the text around any citations.
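As a rough illustration - just a sketch using arXiv's public Atom API at export.arxiv.org, not any official BibTeX export - you can even pull the metadata programmatically and assemble the entry yourself:

```python
# Sketch: build a rough BibTeX entry from arXiv's public Atom API metadata.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"


def arxiv_bibtex(arxiv_id: str) -> str:
    """Fetch metadata for one arXiv ID and format it as a BibTeX @misc entry."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        feed = ET.fromstring(resp.read())
    entry = feed.find(f"{ATOM}entry")
    title = " ".join(entry.find(f"{ATOM}title").text.split())
    authors = " and ".join(
        a.find(f"{ATOM}name").text for a in entry.findall(f"{ATOM}author")
    )
    year = entry.find(f"{ATOM}published").text[:4]
    return (
        f"@misc{{arxiv{arxiv_id.replace('.', '')},\n"
        f"  title  = {{{title}}},\n"
        f"  author = {{{authors}}},\n"
        f"  year   = {{{year}}},\n"
        f"  eprint = {{{arxiv_id}}},\n"
        f"}}"
    )


print(arxiv_bibtex("1706.03762"))  # e.g. "Attention Is All You Need"
```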

If people want to cheat, then I am sure it will catch up with them at some point in the future.

I did read the other day that the Microsoft and Google AI bots were citing each other, and actually citing misinformation in doing so. So while AI can help us write better, I don't think it can, or will be able to, do all the work for us.

The students who put in the effort will probably always stand out over those that don't.

 


On 2/4/2023 at 12:23 AM, swansont said:

I saw a twitter thread that showed a case where the AI just fabricated the sources, presumably using the most common terms and most-cited authors, because that’s how the algorithm works.

There is now a pending lawsuit in Australia from a man whom ChatGPT wrongly described as having a criminal conviction for fraud.

And this article in today's Guardian makes chilling reading: https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

Just when you thought the churning of false and mad stories around the internet could not get any worse, it gets a further boost from stories actually fabricated by this bloody Artificial Stupidity robot. The sheer irresponsibility of these people is just amazing.

(And of course we have a live example of how it can't be trusted on science, on this very forum.)  


  • 2 months later...
On 4/6/2023 at 11:58 PM, exchemist said:

There is now a pending lawsuit in Australia from a man whom ChatGPT wrongly described as having a criminal conviction for fraud.

And this article in today's Guardian makes chilling reading: https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

Just when you thought the churning of false and mad stories around the internet could not get any worse, it gets a further boost from stories actually fabricated by this bloody Artificial Stupidity robot. The sheer irresponsibility of these people is just amazing.

(And of course we have a live example of how it can't be trusted on science, on this very forum.)  

Well, most AIs are still works in progress. So it's only a matter of time before they become smarter.


3 minutes ago, JohnathanSpike said:

Well, most AIs are still works in progress. So it's only a matter of time before they become smarter.

Yes, they are so stupid now that it is not difficult to make them smarter.


So, with GPT and science learning on the rise, educators need to step up their assessment game, you know what I'm sayin'? It's all about finding that right balance between traditional methods and the new stuff. They can mix it up with hands-on experiments, critical thinking assignments, and yeah, even throw in some GPT-generated tasks to see how students are rockin' it. By keeping up with the times and being creative, educators can truly assess academic abilities in this evolving educational landscape.


4 minutes ago, xkalibur08 said:

So, with GPT and science learning on the rise, educators need to step up their assessment game, you know what I'm sayin'? It's all about finding that right balance between traditional methods and the new stuff. They can mix it up with hands-on experiments, critical thinking assignments, and yeah, even throw in some GPT-generated tasks to see how students are rockin' it. By keeping up with the times and being creative, educators can truly assess academic abilities in this evolving educational landscape.

Any ideas on how to do this?

