How do folks feel about Chat GPT texts on the forum?

seeSimon

New member
Joined
30 Jan 2024
Messages
17
Visit site
I am way too old to start using it, so I'm not even looking at it.

I remember my old boss insisting his secretary printed off his emails...when they first began, 1992-ish in our case. He then annotated these in the appropriate colour of ink, and deposited them in his out tray for "onward transmission".

...that's me and AI in 2024.

PS: I ALWAYS tick the "I'm not a computer" box when tested 🤣
 

lustyd

Well-known member
Joined
27 Jul 2010
Messages
12,462
Visit site
I remember my old boss insisting his secretary printed off his emails...when they first began, 1992-ish in our case. He then annotated these in the appropriate colour of ink, and deposited them in his out tray for "onward transmission".
The senior partner at the law firm I worked at still did that in 2002!
 

Frank Holden

Well-known member
Joined
23 Nov 2009
Messages
1,125
Location
Cruising in the Golfo Corcovado
Visit site
I am now fully convinced that ChatGPT is rollocks.
I asked a simple question about something I knew more than a little about.
'What happened to the TSS Wahine during the Korean War?'


ChatGPT said:
The troopship Wahine had a notable incident during the Korean War when it played a critical role in evacuating United Nations forces. The Wahine was originally a New Zealand inter-island ferry that was requisitioned and refitted for wartime service as a troop transport.
During the Korean War, it participated in transporting troops and equipment to and from the conflict zone. The most significant episode involving the Wahine occurred during the chaotic retreat from North Korea in December 1950, after Chinese forces intervened in the war. The Wahine was part of a flotilla that helped evacuate troops, civilians, and equipment from the port of Hungnam in what became known as the Hungnam Evacuation....

I responded

'Wasn't it wrecked on Masela Island on the way to Korea?'

'ChatGPT said:
You are correct! The Wahine met a tragic fate during the Korean War, but not as a result of combat. The ship ran aground on Masela Island in the Arafura Sea in 1951 while en route to Korea carrying troops.
The Wahine was being used as a troopship at the time under a British Ministry of Transport charter. During the voyage, it struck a reef off Masela Island, part of the Maluku Islands in present-day Indonesia. Efforts to salvage the vessel were unsuccessful, and it was ultimately abandoned. Thankfully, there was no significant loss of life, as most of those on board were safely evacuated.
This incident highlights the dangers faced not only in combat zones but also during routine logistical operations in challenging and unfamiliar waters. The wrecking of the Wahine is an often-overlooked episode in the maritime history of the Korean War.'

Note - there was no loss of life - zero, zilch. It was abandoned on the day it grounded, and no attempts at salvage were made.

'What ship evacuated them and where did they go?'

'After the Wahine was wrecked on Masela Island in 1951, the British Royal Navy frigate HMS Mounts Bay arrived to rescue the stranded soldiers and crew. The evacuation was conducted swiftly and efficiently, ensuring the safety of all those aboard the grounded troopship.

Following their rescue, the evacuees were taken to Darwin, Australia, which was the closest port capable of accommodating them.'

'Didn't the 'Stanvac Karachi' rescue them?'


'You are correct! The Stanvac Karachi, an oil tanker owned by the Standard Vacuum Oil Company, played a crucial role in rescuing the troops and crew from the wrecked Wahine on Masela Island. This vessel responded to the distress call and helped transfer the stranded individuals to safety.

After the rescue operation, those aboard the Stanvac Karachi were transported to Darwin, Australia, as it was the nearest major port. '

HMS Mounts Bay had SFA to do with it; Stanvac Karachi was the only ship involved in the rescue. It is my considered opinion that ChatGPT refuses to admit when it knows FA and just makes stuff up.
 

Baggywrinkle

Well-known member
Joined
6 Mar 2010
Messages
10,086
Location
Ammersee, Bavaria / Adriatic & Free to roam Europe
Visit site
I am now fully convinced that ChatGPT is rollocks. ... It is my considered opinion that ChatGPT refuses to admit when it knows FA and just makes stuff up.
I think you are being a bit hard on Chat GPT .... human beings regularly make sh!t up when they don't know what they are talking about - and then communicate the bull$hit with confidence.

Perhaps AI is more evolved than we think? :unsure:
 

lustyd

Well-known member
Joined
27 Jul 2010
Messages
12,462
Visit site
I am now fully convinced that ChatGPT is rollocks
You’re using it wrong. Look up RAG (retrieval-augmented generation), which is used to prevent “hallucinations”. The LLM just returns statistically likely words; it doesn’t “know” anything. With RAG we look up facts from trusted sources and insist the language model uses only those in the reply. The language model just puts the facts into readable words.
This is exactly what I’ve been talking about.
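To make the RAG pattern concrete, here's a toy sketch in Python. Everything in it is made up for illustration: the `llm()` call stands in for a real model API, the fact store is two hand-typed entries, and the "retrieval" is a naive keyword match where real systems use vector search.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Facts come from a trusted store; the prompt insists the model use only those.

TRUSTED_FACTS = {
    "wahine": [
        "TSS Wahine ran aground on Masela Island in the Arafura Sea in 1951.",
        "The tanker Stanvac Karachi rescued those on board; no lives were lost.",
    ],
}

def retrieve(question: str) -> list[str]:
    """Return facts whose key appears in the question (naive keyword match)."""
    hits = []
    for key, facts in TRUSTED_FACTS.items():
        if key in question.lower():
            hits.extend(facts)
    return hits

def build_prompt(question: str) -> str:
    """Wrap retrieved facts in an instruction to answer only from them."""
    facts = retrieve(question)
    if not facts:
        return f"Say you don't know. Question: {question}"
    context = "\n".join(f"- {f}" for f in facts)
    return (
        "Answer using ONLY the facts below. If they do not cover the "
        f"question, say so.\n\nFacts:\n{context}\n\nQuestion: {question}"
    )
```

With no matching facts the prompt tells the model to admit ignorance, which is exactly the behaviour the Wahine exchange above was missing.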
 

fredrussell

Well-known member
Joined
24 Mar 2015
Messages
3,516
Visit site
... The LLM just returns statistically likely words, it doesn’t “know” anything…
…which brings us back nicely to the OP. It seems to me there are two discussions going on here. Firstly, a discussion on the validity and accuracy of LLM generated texts, and secondly, what I was hoping to discuss: whether these texts are a welcome and/or useful addition to the forum. My own view is that this forum (and others?) doesn’t currently benefit much from a tool that provides a selection of thoughts on a subject, when that is pretty much what the thread participants are engaged in providing.
 

lustyd

Well-known member
Joined
27 Jul 2010
Messages
12,462
Visit site
Yes, I thought your point was a good one. Generated text is a waste of time in a forum, aside from potentially being useful for summarising a thread for new participants. If a person doesn’t have something useful to add or ask, then they should just avoid posting; there’s no value in pasting generic crap into a thread, whether from AI, the Internet, or a human imagination.
 

ylop

Well-known member
Joined
10 Oct 2016
Messages
2,505
Visit site
I suppose on very long threads there could be some value in providing a summary, so a new participant knows whether their contribution is already somewhere in the ten pages of drivel. Even better would be an AI tool that, as I type my reply, says “are you sure you are not just repeating Bob in post 18?” … but forums rely on traffic, and traffic comes for content, so tools that discourage pointless posts might not be attractive to owners!
 

lustyd

Well-known member
Joined
27 Jul 2010
Messages
12,462
Visit site
I’m not really sure what’s in it for the owners any more. Anyone without an ad blocker would leave after 5 minutes and never return so I can only assume YBW see value in copying some posts into the magazines, assuming they still do that. Additional traffic from people with ad blockers doesn’t generate revenue.
 

Metalicmike

Active member
Joined
8 Apr 2023
Messages
314
Location
Tenerife
Visit site
Chat bots are good at finding information, but they cannot differentiate between what is correct and what isn't. The answer you get is influenced by the way you ask the question, and calculations can be wildly inaccurate. If there is a lot of bad information out there, then that is the information that will be repeated. I have spent hours on ChatGPT questioning its results and forcing it to correct itself; whether the AI actually learned anything, I doubt. What I learned is that the information is not always reliable, and the more complex the issue, the less reliable it was. Even AI makes mistakes.
 

Sandy

Well-known member
Joined
31 Aug 2011
Messages
21,877
Location
On the Celtic Fringe
duckduckgo.com
Chat bots are good at finding information, but they cannot differentiate between what is correct and what isn't. ...
I beg to differ; chat bots can only replay text that they have been given. Think of them as an FAQ page, where the bot looks at key words in the question you ask, and if it cannot match the question to an answer it has, it points you in the direction of some intelligent people.
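The FAQ-style bot described above can be sketched in a few lines of Python. This is how old rule-based chatbots worked (not how modern LLMs work, as later posts point out); the keywords and replies are invented for illustration.

```python
# Toy keyword-matching FAQ bot: match known keywords, else hand off to a human.

FAQ = {
    ("anchor", "chain"): "See our anchoring guide.",
    ("sail", "reef"): "See the sail-handling FAQ.",
}
FALLBACK = "Let me put you through to a human."

def answer(question: str) -> str:
    """Return the first FAQ reply whose keywords appear in the question."""
    q = question.lower()
    for keywords, reply in FAQ.items():
        if any(k in q for k in keywords):
            return reply
    return FALLBACK
```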
 

KevinV

Well-known member
Joined
12 Oct 2021
Messages
3,084
Visit site
I beg to differ, chat bots can only replay text that they have been give. Think of them as a FAQ page were it looks at key words in the questions you ask and if it does not match the question with the answers it has it points you in the direction of some intelligent people.
That was certainly the case two years ago. I'm reliably informed that (my contact's) bots are vastly more advanced now at recognising natural-language questions and finding the correct answers; they were/are trained using the human conversations that you get when the bot interaction fails. As the bot learns, the latter become fewer and fewer.

My contact tells me that their "first contact satisfaction" is up from 30% using human first contact, to 70% using the AI bot. Happier customers and fewer staff - you can understand why business loves it.
 

requiem

Active member
Joined
20 Mar 2019
Messages
244
Visit site
My contact tells me that their "first contact satisfaction" is up from 30% using human first contact, to 70% using the AI bot. Happier customers and fewer staff - you can understand why business loves it.
I recall hearing that in hospitals, patient satisfaction scores often correlate with poorer outcomes. When I see a stat like this I have to wonder how much of that satisfaction is due to the bot happily validating incorrect product use.

Pitfalls aside, I agree that a model fed your product documentation and support chats should work far better than a model asked general trivia questions.
 

lustyd

Well-known member
Joined
27 Jul 2010
Messages
12,462
Visit site
I have spent hours on ChatGPT questioning its results and forcing it to correct itself; whether the AI actually learned anything, I doubt.
ChatGPT doesn’t learn; the prompt isn’t even stored. Usually we resubmit the last few prompts for context, but if we don’t, then it doesn’t even know what it just said to you.
Think of them as a FAQ page were it looks at key words in the questions you ask and if it does not match the question with the answers it has it points you in the direction of some intelligent people.
They don’t work like this by default; this is application-specific and harder to do well. If you want to involve humans, it’s very hard indeed.
As the bot learns the latter become fewer and fewer.
They don’t learn as they go. GPT is retrained every six months or so, and not from anything in the chats with the model.
My contact tells me that their "first contact satisfaction" is up from 30% using human first contact, to 70% using the AI bot
This is RAG in action but in a private solution. Your contact is adding to a database to make the answers better. This is far from built in, and they will have put considerable time and effort into their solution to make this happen. No different to maintaining an FAQ in the old days.
Large Language Module, it's an AI system trained to take natural language input and give natural language output.
Large Language Model. It’s actually a statistical model of language with multidimensional vectors between words and phrases and simply returns words. If you prompt “to be or not to be” it will respond “that is the question”. We then do some clever tricks to make it more useful, but it doesn’t understand language, it generates words which mimic language well.
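The "statistically likely words" point can be shown with a deliberately tiny toy: a bigram model that counts which word follows which in its training text, then continues a prompt with the most frequent successor. Real LLMs use vast neural networks rather than word counts, but the principle (predict the likely next token, no understanding involved) is the same.

```python
# Toy bigram "language model": count word successors, continue a prompt
# with the most frequent next word. Illustration only.
from collections import Counter, defaultdict

def train(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def continue_prompt(model: dict, prompt: str, n: int = 3) -> str:
    """Append up to n most-likely next words to the prompt."""
    words = prompt.lower().split()
    for _ in range(n):
        followers = model.get(words[-1])
        if not followers:
            break  # never seen this word at the end of a bigram
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

model = train("to be or not to be that is the question")
```

Given the prompt "that is the", the only word the model has ever seen after "the" is "question", so that is what it emits; it has no idea what a question is.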
 

fredrussell

Well-known member
Joined
24 Mar 2015
Messages
3,516
Visit site
I’m sure of one thing: in a short while we’ll all find it funny that we used to type text in the search box on google when we wanted to find info on something. Instead we’ll just be having normal sounding conversations with our tech devices and be given info that’s increasingly precise and right on the money. I realise you can pretty much do that now, but it’ll become very much the norm. No bad thing - typing is tedious!
 

lustyd

Well-known member
Joined
27 Jul 2010
Messages
12,462
Visit site
Voice interfaces have been available for a long time. The reason we don’t use them is that most people don’t like them, just like AR glasses and 3D TVs.
Typing is precise and reliable and I’m certain will remain the primary way to search for the foreseeable future for most folk.
 