
Important Takeaways:
- Artificial intelligence (AI) chatbots like ChatGPT have been designed to replicate human speech as closely as possible to improve the user experience.
- But as AI grows more sophisticated, it is becoming harder to tell these computer models apart from real people.
- Now, scientists at University of California San Diego (UCSD) reveal that two of the leading chatbots have reached a major milestone.
- Both GPT, which powers OpenAI’s ChatGPT, and LLaMa, which is behind Meta AI on WhatsApp and Facebook, have passed the famous Turing test.
- Devised by British WWII codebreaker Alan Turing in 1950, the Turing test or ‘imitation game’ is a standard measure of intelligence in a machine.
- An AI passes the test when a human cannot correctly tell the difference between a response from another human and a response from the AI.
- ‘The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,’ say the UCSD scientists.
- ‘If interrogators are not able to reliably distinguish between a human and a machine, then the machine is said to have passed.’
- Last year, another study by the team found that two predecessor models from OpenAI – GPT-3.5 and GPT-4 – fooled participants in 50 per cent and 54 per cent of cases respectively (also when told to adopt a human persona).
- As GPT-4.5 has now scored 73 per cent, this new result suggests that OpenAI’s models are getting better and better at impersonating humans.