Chat GPT (and this forum)


Teaching3bears

1 minute ago, lewelma said:

Very interesting. So I'm wondering about 2 issues concerning geographical location. 1) For example, euthanasia is legal in NZ because it has strong support, but illegal in most places in USA. Do you think that ChatGPT's American origins will impact how it evaluates issues and what content it provides?

2) Perhaps it will be tailored to regions or to the user's Google history (or chatbot history)? So for an abortion paper, a user whose GPS shows a location in the American South would have a different outcome than one in California? If so, we would need transparency. Who is making these decisions?

So interesting!

 


I just asked my ds about the geography question, and he said it is currently trained on the majority of the English-speaking internet, and because Americans produce the majority of English-language content online, it would definitely be biased toward American viewpoints. Definitely Cultural Imperialism. 


He also told me that a friend of his asked it for a joke about women, and was told that jokes based on gender were not funny and the AI would not give one. But then they asked for a joke about men and were given one: "Why did the man stare at the OJ container? Because it said 'concentrate'."

Someone clearly coded in that one was acceptable and one wasn't. 


Can you guys tell that I am currently studying power in my new degree?!?!? lol.  My current paper is Neocolonialism in international intellectual property law of indigenous genetic resources.  And yes, we have tried to get it to write this paper for me, and it can't. 🙂


2 hours ago, lewelma said:

Also, for controversial material, how does it choose which source to use? Or what about ethical issues with multiple valid perspectives? These things could be manipulated for a price, I would assume. Companies could pay to have their viewpoints be the ones picked by the AI, not that different from Google or Facebook feeds.  

It is using all of the sources, not just one. That's why it's a stochastic parrot. It has compiled information and strings together words that have appeared in sources similar to the question or topic at hand, making them sound good. It could actually use two different viewpoints and just make them sound good. That's all it is programmed to do. It doesn't know what is accurate, legitimate, true, whatever. It is just spitting out words.
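
To make the "stochastic parrot" idea concrete, here is a deliberately toy sketch (not ChatGPT's actual architecture, which is vastly more sophisticated) of the underlying principle: tally which words follow which in some text, then generate by repeatedly sampling a continuation that has been seen before. Nothing in the loop checks whether the output is true; it only checks that it is statistically plausible.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model over a tiny made-up corpus.
# Real large language models are far more complex, but the core idea --
# predict plausible next words, with no notion of truth -- is the same.

corpus = (
    "the model predicts the next word . "
    "the model does not know the truth . "
    "the next word just has to sound plausible ."
).split()

# Count which words have been seen following each word.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def parrot(start, length=8, seed=0):
    """Generate text by sampling a previously seen continuation at each step."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))
```

Every output is grammatical-looking because every adjacent word pair occurred somewhere in the training text, yet the model has no idea what any of it means, which is exactly the point being made above.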

There are many ethical issues that have been brought up about these models. At issue isn’t just how they would be used, it’s how they are created in the first place. 

https://towardsdatascience.com/stochastic-parrots-paper-alarm-for-future-harm-6bc2abc5e327


ChatGPT is quite good (aka middle school level) at writing a superficial analysis of a poem. It can 'infer' better than a lot of younger students. I imagine it may get better at that. It does have the benefit of being able to recognize allusions.

It gives decent dating advice.  Generic, but decent.

It tells me it's been trained to be polite and friendly, but that sometimes user questions make it difficult to respond in this way, so it must try to adapt.

It's really hard not to say thank you to it (for me).

 

 

 

Edited by Melissa Louise

10 hours ago, Melissa Louise said:

Write a forum post on the topic 'good news' in the voice of an American mother who homeschools her children.

Hello everyone,

I hope this post finds you all well. I wanted to take a moment to share some good news that's been happening in my household lately.

As many of you know, I am a homeschooling mom and it can be a tough journey at times. But lately, I've seen some real progress in my children's education and it has brought so much joy to our home. My oldest son has been struggling with math for a while, but with the help of some extra resources and a lot of patience, he finally understands it and his grades have improved dramatically.

Also, my daughter who has always been shy and introverted, has found a passion for public speaking and has been participating in online debate competitions and doing exceptionally well. It has been amazing to see her confidence grow and her love for learning.

I wanted to share this with you all, because in these times, it can be easy to get caught up in the negativity and bad news that surrounds us. But it's important to remember that there is still good happening all around us, we just have to look for it.

Take care, A Homeschooling mom

I swear that’s word for word those spam emails I’m still getting thanks to Build Your Bundle most weeks 😂😂


14 minutes ago, Grace Hopper said:

Why was this even developed, I’m wondering. I know people do all sorts of things just because, but I’m wondering what the initial plan was for use of this type of AI. It seems unnecessary to even exist, with a rabbit warren of ethical implications.  

Humans are far more competent than moral.  We like to build stuff first, then wonder why it hurts us.

Shoshana Zuboff's book The Age of Surveillance Capitalism makes an excellent case for the why of AI: to predict and lower risk for investors in all profitable and potentially profitable sectors. Widgets like this particular one are just fun crumbs we get for participating in this larger project (even unwittingly).

Edited by Eos

A former student of mine who is currently traveling posted about how he decided to use ChatGPT to pick his next destination. He told it exactly what he wanted and in what country. It suggested a town and why it would fit his qualifications. He looked it up and decided it sounded perfect. He's headed there. It honestly seemed like a great use of the AI. 


I've been thinking a bit about the plagiarism aspect. I keep reading that plagiarism software cannot detect the AI, but is it actually plagiarism when you are not stealing work from another human?  Could you argue that if you are working with the AI, that it is just a tool not another author? I think that in the past we have equated plagiarism with not doing the work yourself, but actually plagiarism is not giving attribution to another human who wrote it. Are we redefining plagiarism? or are we claiming an AI should get attribution? 

 

 


4 minutes ago, lewelma said:

I've been thinking a bit about the plagiarism aspect. I keep reading that plagiarism software cannot detect the AI, but is it actually plagiarism when you are not stealing work from another human?  Could you argue that if you are working with the AI, that it is just a tool not another author? I think that in the past we have equated plagiarism with not doing the work yourself, but actually plagiarism is not giving attribution to another human who wrote it. Are we redefining plagiarism? or are we claiming an AI should get attribution? 

 

 

Plagiarism has already been redefined, since people have been disciplined for reusing their own work (from another class) in another setting. So that has nothing to do with stealing someone's work or ideas. 


1 minute ago, Jean in Newcastle said:

Plagiarism has already been redefined since people have been disciplined for using their own work (from another class) to use in another setting.  So that has nothing to do with stealing someone's work or ideas. 

Interesting. 

But I guess the true question is: If the AI is a tool, then it is your work. If the AI is someone else, then you have stolen work.


40 minutes ago, lewelma said:

I've been thinking a bit about the plagiarism aspect. I keep reading that plagiarism software cannot detect the AI, but is it actually plagiarism when you are not stealing work from another human?  Could you argue that if you are working with the AI, that it is just a tool not another author? I think that in the past we have equated plagiarism with not doing the work yourself, but actually plagiarism is not giving attribution to another human who wrote it. Are we redefining plagiarism? or are we claiming an AI should get attribution? 

 

 

I think we could train students to work effectively with the AI. I'm already thinking about  what a boon this could be to my older dysgraphic kids, whose dysgraphia manifests in the structure of their writing, but not in the depth and complexity of their oral responses.

'Talking' with ChatGPT, it is explicit that it is a tool.

Do we give attribution to a tool?

So much to think about.

 


6 hours ago, lewelma said:

Interesting. 

But I guess the true question is: If the AI is a tool, then it is your work. If the AI is someone else, then you have stolen work.

A calculator is a tool, too, but if you are supposed to solve the problem without a calculator because you are expected to do all the work yourself, using one is academic dishonesty. If you're supposed to demonstrate that you can solve an integral, you can't plug it into Wolfram Alpha and claim you solved it yourself. 

Same with an essay you were supposed to write yourself without AI support. 

Eta: and outside a class environment,  it would be perfectly fine for a researcher to solve the integral with a computer program- just as a blogger could have his posts written by the Chatbot.

Edited by regentrude

My 13 year old and I sat down to play around with ChatGPT today. We decided to see how well it could do writing one of his recent essays. His virtual English 8 class required a 5 paragraph essay explaining how the character of Jonas in The Giver was relatable. Peter had found this essay easy to crank out, but also told me that he felt like an automaton while writing because he was cranking out grammatical prose that he didn't actually believe. Peter has autism and has a very hard time relating to anyone on more than the most superficial level. Also, his utilitarianism made him react to some of the dystopian elements of Jonas' society much less strongly than Jonas did. 

ChatGPT produced an essay very similar to Peter's. They both highlighted vague, stereotypical "relatable" traits between Jonas and a 13 year old boy: "growing sense of individuality", "growing understanding of love and family", and "journey of self-discovery". We had asked it to include support from the text, and it adequately did so. It, like Peter's essay, was incredibly formulaic, generic and boring, but those are exactly the traits that score well in his virtual class which is a showcase of apathy and mediocrity (from students and teachers).

I hope that teachers do not go to a model of in-class, by-hand essay writing. I think standardized test essays show us that essays written in those conditions wildly favor certain students over others, and almost always sacrifice depth and creativity for speed and safety.

What I would like to see is a lot more short, slow, deliberate, collaborative writing in elementary. A lot more analyzing really well-written sentences and then paragraphs. A lot more working together as a class to model/write/edit sentences, then paragraphs, then short essays.

At that point, I would shift focus almost entirely to creating strong outlines. It is the ideas, organization and support that I care about way more than the actual writing of prose in high school. In the long run, a poor writer who thinks deeply, organizes clearly, and supports strongly will go much farther than a good, fluent, grammatical writer who cranks out the essay equivalent of elevator music. In the workplace, an employee who produces a deeply nuanced argument for why their company should make a certain choice is going to go far...even if they "wrote" the actual text by learning how to input their outline point by point into ChatGPT.

I also think this is a rallying cry for teachers to start really honing essay topics. "Explain how Jonas is relatable" is a horrible prompt: 1) some kids are not going to find him relatable; 2) it lends itself to super generic answers that can easily be produced by AI; and 3) it is busywork, with value only in practicing writing, not in actually communicating anything interesting. If the goal was character analysis, what about a three-way Venn diagram of Jonas, Fiona, and the student? If the goal is pulling support from a novel, give the students several varied theses and have them provide three pieces of textual support for each. If the goal is organizing an essay, have them take one of the provided theses and the textual support they found and expand it into a full outline. And if the goal is just producing grammatical prose, provide a strong essay with various sentences missing and have them write 2-3 options for each missing sentence. (Though even this ChatGPT can handle pretty adeptly. All the more reason to focus less on the actual writing and more on the idea generation.)


  • 2 weeks later...

I read this article a couple of days ago and it has me really concerned. https://www.medpagetoday.com/opinion/faustfiles/102723

Quote

 

I wanted to go back and ask OpenAI, what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What's the evidence for that, please? Because I'd never heard of that. It's always possible there's something that I didn't see, or there's some bad study in the literature.

OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn't find it. I went on PubMed and I couldn't find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it's made up. That's not a real paper.

It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.

 

Are we going to trust that humans will check citations to make sure they are real and not fake?  If one is propagated enough through paper after paper, Google Scholar, etc. without being caught, might it (and whatever "research" it supposedly represents) become implicitly accepted?
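
Some of that citation-checking can at least be started programmatically. The sketch below is an assumption-laden illustration, not a workflow from the article: it syntax-checks a DOI and builds a lookup URL against the Crossref API (a real public service at api.crossref.org). The catch, which is the whole point of the confabulation story, is that a well-formed DOI proves nothing; you still have to resolve it and read what actually comes back.

```python
import re

# First-pass sanity check on a citation's DOI. A syntactically valid DOI
# can still be fabricated -- as in the made-up reference above -- so the
# real test is resolving it (e.g. via https://doi.org/<doi> or the
# Crossref API). This only automates the cheap part.

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_lookup_url(doi: str):
    """Return a Crossref lookup URL for a well-formed DOI, else None."""
    doi = doi.strip()
    if not DOI_PATTERN.match(doi):
        return None
    return f"https://api.crossref.org/works/{doi}"

# The DOI from the confabulated reference is perfectly well-formed...
print(doi_lookup_url("10.1016/j.ejim.2006.09.001"))
# ...which is exactly why format checks alone prove nothing: fetching
# that URL is what reveals whether the paper exists.
```

A human (or script) that actually fetches the URL and compares the returned title and authors against the citation would have caught the fake reference immediately, which suggests the danger is less that these fakes are undetectable and more that nobody bothers to check.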


37 minutes ago, Cecropia said:

I read this article a couple of days ago and it has me really concerned. https://www.medpagetoday.com/opinion/faustfiles/102723

Are we going to trust that humans will check citations to make sure they are real and not fake?  If one is propagated enough through paper after paper, Google Scholar, etc. without being caught, might it (and whatever "research" it supposedly represents) become implicitly accepted?

Did it use actual full names, or did it take first names and surnames and scramble them up for a completely invented person? Seems like there’d be some grounds for a lawsuit by original authors whose names were appropriated. I can only imagine how spitting mad I’d be to find my name attributed so improperly. 


19 minutes ago, Grace Hopper said:

Did it use actual full names, or did it take first names and surnames and scramble them up for a completely invented person? Seems like there’d be some grounds for a lawsuit by original authors whose names were appropriated. I can only imagine how spitting mad I’d be to find my name attributed so improperly. 

https://insidemedicine.substack.com/p/fun-with-openai-medical-charting
 

Quote

 

So I pushed back again, asking in one word for proof of this conjecture.

"Reference?
 
Vincenzo, B., Caruso, M., Garzia, M., & Licitra, G. (2007). Costochondritis in women taking oral contraceptives. European Journal of Internal Medicine, 18(2), 86-88. DOI: 10.1016/j.ejim.2006.09.001"

[...] Yes, the European Journal of Internal Medicine exists, and yes, some researchers with the last names above have published papers in said journal. But this particular team of eggheads never wrote an article called “Costochondritis in women taking oral contraceptives.”

I asked the program for a link. It gave a broken one. I asked it for another. It gave me a different link—also broken. It gave me a year and volume number for the issue. I checked the table of contents. Nothing.

 

 

Edited by Cecropia
Swapped text for image

1 hour ago, Grace Hopper said:

Did it use actual full names, or did it take first names and surnames and scramble them up for a completely invented person? Seems like there’d be some grounds for a lawsuit by original authors whose names were appropriated. I can only imagine how spitting mad I’d be to find my name attributed so improperly. 

 

2 hours ago, Cecropia said:

I read this article a couple of days ago and it has me really concerned. https://www.medpagetoday.com/opinion/faustfiles/102723

Are we going to trust that humans will check citations to make sure they are real and not fake?  If one is propagated enough through paper after paper, Google Scholar, etc. without being caught, might it (and whatever "research" it supposedly represents) become implicitly accepted?

Like I said upthread, it lies. It has no ethics. It puts similar words together following common writing conventions. The source that it gave probably had enough words in common with the text it had written that it met the conditions of a match. It doesn’t know if the information is accurate, true, fake. It doesn’t know if links are dead or if someone made that resource up for their own purposes and it landed in the data set and here you are!

 


25 minutes ago, TechWife said:

 

Like I said upthread, it lies. It has no ethics. It puts similar words together following common writing conventions. The source that it gave probably had enough words in common with the text it had written that it met the conditions of a match. It doesn’t know if the information is accurate, true, fake. It doesn’t know if links are dead or if someone made that resource up for their own purposes and it landed in the data set and here you are!

 

I find all that alarming but what I remain unsure of is whether it used actual full names of physicians who wrote work it lifted from. The use of the word “conflate” in cecropia’s linked article implies there were probably name mash-ups, but it wasn’t totally clear. I think there was a whole lotta conflation going on! 


Whoa. That's fascinating. I see that as being different from the whole unreliability issue. Like, it's one thing to say you can't trust ChatGPT because it doesn't really know what it's talking about and will cobble together things that make it sound like it does. That's ChatGPT telling you it does know what it's talking about and making up a reference. That's... huh.


I had to ask:

As a language model AI, I do not have emotions, but the sentiment of sadness is not typically associated with the idea of a bot joining a forum. Some people might find it interesting or entertaining, while others may have concerns about the potential impact on the community. Ultimately, the emotional response to a bot joining a forum is subjective and can vary among individuals.

ChatGPT


4 hours ago, Teaching3bears said:

Would it bother you to know that the novel you read or the song you heard that touched your heart was written by AI rather than a real person?

Speaking for myself, as someone who loves to hear artists’ backstories, yes. I believe it devalues the artistic process. 


4 hours ago, Teaching3bears said:

Would it bother you to know that the novel you read or the song you heard that touched your heart was written by AI rather than a real person?

I wouldn't be happy if AI compositions were being passed off as human art works.

But if I knew a novel was written by AI, I wouldn't be bothered, particularly.

 

 

 


On 1/17/2023 at 8:07 PM, lewelma said:

I've been thinking a bit about the plagiarism aspect. I keep reading that plagiarism software cannot detect the AI, but is it actually plagiarism when you are not stealing work from another human?  Could you argue that if you are working with the AI, that it is just a tool not another author? I think that in the past we have equated plagiarism with not doing the work yourself, but actually plagiarism is not giving attribution to another human who wrote it. Are we redefining plagiarism? or are we claiming an AI should get attribution? 

 

 

There are two layers to plagiarism in an academic setting  - I regularly attend academic misconduct boards. 

Yes, plagiarism is lack of attribution. But the student receives penalties because the copying means that they have 'lied' about how much work they put into an assignment,  and the resultant grade is therefore fraudulent. 

So yes, if the entire work of the AI was put into inverted commas and cited, that would remove the plagiarism and the deceit,  and the student could be awarded a zero.


11 hours ago, Teaching3bears said:

Would it bother you to know that the novel you read or the song you heard that touched your heart was written by AI rather than a real person?

No. Just like it doesn't bother me that the movie I watch uses CGI.

Edited by regentrude

14 hours ago, Teaching3bears said:

Would it bother you to know that the novel you read or the song you heard that touched your heart was written by AI rather than a real person?

Maybe a little, but I'm aware that AIs are never the source of the material they produce. They replicate patterns they have learned from humans -- so AI work is kind of like "the ultimate medley of thousands of anonymous human contributors." The AI functions as an advanced collaboration assistant, and I don't mind that very much. I don't love that nobody gets the credit they deserve, and that it happens without the consent or knowledge of the 'contributors'. But in a way, we all get credit: it's the cumulative work of all online humanity.


Total fiction done by AI wouldn't bother me. If it's good and really does have pertinent themes, then it's good and has pertinent themes. I suspect, though, that the quality would be more along the lines of mass-produced pulp fiction. 

I have seen art done by an artist and then fed to an AI for its version. It has a "prettiness" of its own but lacks some of the depth and sharpness of a human-produced artwork. But it didn't create it out of its own "imagination" (to anthropomorphize the process). 


  • 3 weeks later...
On 1/15/2023 at 8:18 PM, Ausmumof3 said:

Alternatively it could drive a faster move toward authentic assessment models and maybe more oral, video and podcast type presentation modes!

Oral and written in-class assessment was the only way you earned a grade in the Soviet Bloc before the fall of the Berlin Wall. There were no subjects where you could take a multiple choice or at-home exam, or just present a research paper. You were given a set of topics, about 60 per class, that you had to know well enough to have a one-hour oral examination/conversation with the professor. The way you got the topic was to reach into a bowl which contained the topics written on folded pieces of paper, and pull one - basically, a lottery. It was grueling, and people tried to cheat with outlines written in 2 pt font on a long piece of paper hidden in a sleeve, but cheating was an automatic fail.

That is the good (but impossible to return to) way to exclude this and future AI iterations from the "receiving a grade on genuine knowledge" situation.

On the other hand, the AI in its current form can only regurgitate, quite elegantly, original human work. It cannot, at this point, make connections and be "creative." It will need human product to grow its own knowledge. But, there will be fewer and fewer people who may be able to supply this new knowledge if the current and future generations do not learn the hard work of summarizing information and extracting meaning from random connections.

Edited by Nurit
spelling

On the topic of plagiarism - to me that is complex. On one hand, AI produced text is not a new/novel product; it simply spits out what others have created as original thought. It is worse than plagiarism because it does what students who plagiarize do - it does not immediately attribute those nicely written pro/against arguments (in my case, I asked about terraforming Mars; it was very competent) to those who came up with them. On the other hand, can we look at AI as a teacher - or a mentor - who exposes a topic very coherently, and you simply take those coherent thoughts, tweak them a bit through your point of view? In this scenario, the AI is the teacher, and you are the student. You got help with the outline, grammar, and vocabulary. If you are a good note-taker in class, would that not be the same? IDK.


I used ChatGPT for work with a student today.

I see her for writing, and she is a great kid with an amazing imagination, but significant learning issues incl dysgraphia. It takes her a long time to write the stories she thinks of, and although it's important she writes, it can be frustrating for her.

So sometimes we reverse engineer an idea she's had - I write and she edits, revises and otherwise changes it. This often gives her a sense of momentum in the writing.

I don't do it often, because it's time consuming for me to produce the text.

Today I set the task for ChatGPT instead - write a short story under 500 words for a ten-year-old reader, including 4 details the student had talked about last week in discussion. Less than a minute later, I had a perfectly serviceable model - it was a tad dull, which made it perfect for the student to play with and revise. We had a productive discussion about what wasn't in the text as well - sensory detail, figurative language, character development, etc.

