Loebner Prize facts for kids
The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The format of the competition was that of a standard Turing test. In each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer. Based upon the responses, the judge would attempt to determine which was which.
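To make the pairing format concrete, the sketch below simulates one such round in Python. It is purely illustrative and not the contest's actual software or protocol; the question list and the stand-in human, bot and judge functions are all hypothetical.

```python
import random

def judging_round(questions, human_fn, bot_fn, judge_fn):
    """One Loebner-style round: a judge questions two hidden entities
    (one human confederate, one program) over text, then guesses which
    terminal is the human. Illustrative only, not the contest software."""
    # Hide the identities behind anonymous terminals "A" and "B".
    terminals = {"A": ("human", human_fn), "B": ("bot", bot_fn)}
    if random.random() < 0.5:
        terminals["A"], terminals["B"] = terminals["B"], terminals["A"]

    # The judge puts the same questions to both terminals.
    transcript = {
        name: [(q, responder(q)) for q in questions]
        for name, (_, responder) in terminals.items()
    }

    # The judge reads both transcripts and names the terminal believed human.
    guess = judge_fn(transcript)  # expected to return "A" or "B"
    return terminals[guess][0] == "human"

# Hypothetical stand-ins for the human confederate, a chatbot and a judge.
correct = judging_round(
    questions=["What is a hammer for?", "Which is faster, a train or a plane?"],
    human_fn=lambda q: "Hmm, let me think about that for a moment.",
    bot_fn=lambda q: "I do not understand the question.",
    judge_fn=lambda transcript: "A",
)
print("Judge identified the human correctly:", correct)
```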
The contest was launched in 1990 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies, Massachusetts, United States. Beginning in 2014 it was organised by the AISB at Bletchley Park. It has also been associated with Flinders University, Dartmouth College, the Science Museum in London, the University of Reading, and Ulster University's Magee Campus in Derry, UK City of Culture. In 2004 and 2005, it was held in Loebner's apartment in New York City. Within the field of artificial intelligence, the Loebner Prize was somewhat controversial; its most prominent critic, Marvin Minsky, called it a publicity stunt that does not help the field along.
For the final 2019 competition, the format changed. There was no panel of judges. Instead, the chatbots were judged by the public and there were to be no human competitors. The prize has been reported as defunct as of 2020.
Prizes
Originally, $2,000 was awarded for the most human-seeming program in the competition. The prize was $3,000 in 2005 and $2,250 in 2006. In 2008, $3,000 was awarded.
In addition, there were two one-time-only prizes that were never awarded. $25,000 was offered for the first program that judges could not distinguish from a real human and that could convince judges that the human was the computer program. $100,000 was the reward for the first program that judges could not distinguish from a real human in a Turing test that included deciphering and understanding text, visual and auditory input. The competition was planned to end after that prize had been won.
Competition rules and restrictions
The rules varied over the years. Early competitions featured restricted-conversation Turing tests, but since 1995 the discussion has been unrestricted.
For the three 2007 entries, from Robert Medeksza, Noah Duncan and Rollo Carpenter, the sponsor used some basic "screening questions" to evaluate the state of the technology. These included simple questions about the time and the current round of the contest; general knowledge ("What is a hammer for?"); comparisons ("Which is faster, a train or a plane?"); and questions demonstrating memory of preceding parts of the same conversation. "All nouns, adjectives and verbs will come from a dictionary suitable for children or adolescents under the age of 12." Entries did not need to respond "intelligently" to the questions to be accepted.
In 2008, for the first time, the sponsor allowed a preliminary phase, opening the competition to previously disallowed web-based entries judged by a variety of invited interrogators. The available rules do not state how interrogators were selected or instructed. Interrogators (who judge the systems) had limited time: 5 minutes per entity in the 2003 competition, 20+ minutes per pair in the 2004–2007 competitions, and 5 minutes of simultaneous conversation with a human and the program in 2008–2009, increased to 25 minutes of simultaneous conversation from 2010.
Criticisms
The prize has long been scorned by experts in the field, for a variety of reasons.
It is regarded by many as a publicity stunt. Marvin Minsky scathingly offered a "prize" to anyone who could stop the competition. Loebner responded by jokingly observing that Minsky's offering a prize to stop the competition effectively made him a co-sponsor.
The rules of the competition have encouraged poorly qualified judges to make rapid judgements. Interactions between judges and competitors were originally very brief, effectively amounting to about 2.5 minutes of questioning, which permitted only a few questions. Questioning was initially restricted to a single topic of the contestant's choice, such as "whimsical conversation", a domain suiting standard chatbot tricks.
Competition entrants did not aim at understanding or intelligence but resorted to basic ELIZA-style tricks, and successful entrants found that deception and pretense were rewarded.
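As an illustration of what "ELIZA-style tricks" means here, the following sketch shows the general technique: match the input against a few keyword patterns and reflect the user's own words back, giving an illusion of understanding. It is a generic, hypothetical example, not the code of any actual entrant.

```python
import re

# A tiny ELIZA-style responder: keyword patterns plus pronoun reflection.
# This is only a sketch of the general trick, not any contest entrant's code.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)\?$"), "What do you think?"),
]

def reflect(fragment):
    """Swap first- and second-person words so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more."  # fallback keeps the conversation going

print(respond("I am worried about my exam"))
# -> "How long have you been worried about your exam?"
```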
Contests
2006
In 2006, the contest was organised by Tim Child (CEO of Televirtual) and Huma Shah. On August 30, the four finalists were announced:
- Rollo Carpenter
- Richard Churchill and Marie-Claire Jenkins
- Noah Duncan
- Robert Medeksza
The contest was held on 17 September in the VR theatre, Torrington Place campus of University College London. The judges included Kevin Warwick (professor of cybernetics, University of Reading), John Barnden (professor of artificial intelligence and specialist in metaphor research, University of Birmingham), Victoria Butler-Cole (barrister) and Graham Duncan-Rowe (journalist). The latter's account of the event can be found in an article in Technology Review. The winner was 'Joan', based on Jabberwacky, both created by Rollo Carpenter.
2007
The 2007 competition was held on October 21 in New York City. The judges were: computer science professor Russ Abbott, philosophy professor Hartry Field, psychology assistant professor Clayton Curtis and English lecturer Scott Hutchins.
No bot passed the Turing test, but the judges ranked the three contestants as follows:
- 1st: Robert Medeksza, creator of Ultra Hal
- 2nd: Noah Duncan, a private entry, creator of Cletus
- 3rd: Rollo Carpenter from Icogno, creator of Jabberwacky
The winner received $2,250 and the annual medal. The runners-up received $250 each.
2008
The 2008 competition was organised by professor Kevin Warwick, coordinated by Huma Shah and held on October 12 at the University of Reading, UK. After testing by over one hundred judges during the preliminary phase, in June and July 2008, six finalists were selected from thirteen original entrant artificial conversational entities (ACEs). Five of those invited competed in the finals:
- Brother Jerome, Peter Cole and Benji Adams
- Elbot, Fred Roberts / Artificial Solutions
- Eugene Goostman, Vladimir Veselov, Eugene Demchenko and Sergey Ulasen
- Jabberwacky, Rollo Carpenter
- Ultra Hal, Robert Medeksza
In the finals, each judge was given five minutes to conduct simultaneous, split-screen conversations with two hidden entities. Elbot of Artificial Solutions won the 2008 Loebner Prize bronze award for the most human-like artificial conversational entity, fooling three of the twelve judges who interrogated it (in the human-parallel comparisons) into believing it was human. At 25%, this came close to the 30% traditionally cited as the threshold for considering that a program has actually passed the Turing test. Eugene Goostman and Ultra Hal each deceived one judge into believing it was the human.
Will Pavia, a journalist for The Times and a judge in the Loebner finals, has written about his experience; he was deceived by Elbot and Eugene. Kevin Warwick and Huma Shah have reported on the parallel-paired Turing tests.
2009
The 2009 Loebner Prize Competition was held September 6, 2009, at the Brighton Centre, Brighton UK in conjunction with the Interspeech 2009 conference. The prize amount for 2009 was $3,000.
Entrants were David Levy, Rollo Carpenter, and Mohan Embar, who finished in that order.
The writer Brian Christian participated in the 2009 Loebner Prize Competition as a human confederate, and described his experiences at the competition in his book The Most Human Human.
2010
The 2010 Loebner Prize Competition was held on October 23 at California State University, Los Angeles. The 2010 competition was the 20th running of the contest. The winner was Bruce Wilcox with Suzette.
2011
The 2011 Loebner Prize Competition was held on October 19 at the University of Exeter, Devon, United Kingdom. The prize amount for 2011 was $4,000.
The four finalists and their chatterbots were Bruce Wilcox (Rosette), Adeena Mignogna (Zoe), Mohan Embar (Chip Vivant) and Ron Lee (Tutor), who finished in that order.
That year a panel of junior judges was added, namely Georgia-Mae Lindfield, William Dunne, Sam Keat and Kirill Jerdev. The results of the junior contest were markedly different from the main contest, with the chatterbots Tutor and Zoe tying for first place and Chip Vivant and Rosette coming in third and fourth place, respectively.
2012
The 2012 Loebner Prize Competition was held on 15 May at Bletchley Park in Bletchley, Buckinghamshire, England, in honor of the Alan Turing centenary celebrations. The prize amount for 2012 was $5,000. The local arrangements organizer was David Levy, who won the Loebner Prize in 1997 and 2009.
The four finalists and their chatterbots were Mohan Embar (Chip Vivant), Bruce Wilcox (Angela), Daniel Burke (Adam) and M. Allan (Linguo), who finished in that order.
That year, a team from the University of Exeter's computer science department (Ed Keedwell, Max Dupenois and Kent McClymont) conducted the first-ever live webcast of the conversations.
2013
The 2013 Loebner Prize Competition was held on September 14 at Ulster University's Magee College in Derry, Northern Ireland, UK, the only time the contest took place on the island of Ireland.
The four finalists and their chatbots were Steve Worswick (Mitsuku), Dr. Ron C. Lee (Tutor), Bruce Wilcox (Rose) and Brian Rigsby (Izar), who finished in that order.
The judges were Professor Roger Schank (Socratic Arts), Professor Noel Sharkey (Sheffield University), Professor Minhua (Eunice) Ma (Huddersfield University, then University of Glasgow) and Professor Mike McTear (Ulster University).
For the 2013 Junior Loebner Prize Competition, the chatbots Mitsuku and Tutor tied for first place, with Rose and Izar in third and fourth place respectively.
2014
The 2014 Loebner Prize Competition was held at Bletchley Park, England, on Saturday 15 November 2014. The event was filmed live by Sky News. The guest judge was television presenter and broadcaster James May.
After two hours of judging, 'Rose' by Bruce Wilcox was declared the winner. Wilcox received a cheque for $4,000 and a bronze medal. The ranks were as follows:
- Rose: rank 1 ($4,000 and bronze medal)
- Izar: rank 2.25 ($1,500)
- Uberbot: rank 3.25 ($1,000)
- Mitsuku: rank 3.5 ($500)
The judges were Dr Ian Hocking (writer and senior lecturer in psychology, Christ Church College, Canterbury), Dr Ghita Kouadri-Mostefaoui (lecturer in computer science and technology, University of Bedfordshire), James May (television presenter and broadcaster) and Dr Paul Sant (Dean of UCMK, University of Bedfordshire).
2015
The 2015 Loebner Prize Competition was again won by 'Rose' by Bruce Wilcox.
The judges were Jacob Aaron (physical sciences reporter for New Scientist), Rory Cellan-Jones (technology correspondent for the BBC), Brett Marty (film director and photographer) and Ariadne Tampion (writer).
2016
The 2016 Loebner Prize was held at Bletchley Park on 17 September 2016. After 2 hours of judging the final results were announced. The ranks were as follows:
- 1st place: Mitsuku
- 2nd place: Tutor
- 3rd place: Rose
2017
The 2017 Loebner Prize was held at Bletchley Park on 16 September 2017. This was the first contest to use a new message-by-message protocol, rather than the traditional character-by-character transmission. The ranks, announced by a Nao robot, were as follows:
- 1st place: Mitsuku
- 2nd place: Midge
- 3rd place: Uberbot
- 4th place: Rose
2018
The 2018 Loebner Prize was held at Bletchley Park on 8 September 2018. This was the last time the contest was held in its traditional Turing test format, and the last time it was held at Bletchley Park. The ranks were as follows:
- 1st place: Mitsuku
- 2nd place: Tutor
- 3rd place: Colombina
- 4th place: Uberbot
2019
The 2019 Loebner Prize was held at the University of Swansea from 12 to 15 September, as part of a larger exhibition on creativity in computers. The format changed from a traditional Turing test, with selected judges and human confederates, to a four-day testing session in which members of the general public, including schoolchildren, could interact with the bots, knowing in advance that the bots were not humans. Seventeen bots took part instead of the usual four finalists. Steve Worswick won for a record fifth time with Mitsuku, earning him an entry in the Guinness Book of Records.
A selected panel of judges also examined the bots and voted for the ones they liked best. The ranks were as follows:
Most humanlike chatbot:
- 1st place: Mitsuku - 24 points
- 2nd place: Uberbot - 6 points
- 3rd place: Anna - 5 points
Best overall chatbot:
- 1st place: Mitsuku - 19 points
- 2nd place: Uberbot - 5 points
- 3rd place: Arckon - 4 points
Winners
Official list of winners.
Year | Winner | Program |
---|---|---|
1991 | Joseph Weintraub | "Whimsical Conversation" (PC Therapist) |
1992 | Joseph Weintraub | PC Therapist |
1993 | Joseph Weintraub | PC Therapist |
1994 | Thomas Whalen | TIPS |
1995 | Joseph Weintraub | PC Therapist |
1996 | Jason Hutchens | HeX |
1997 | David Levy | Converse |
1998 | Robby Garner | Albert One |
1999 | Robby Garner | Albert One |
2000 | Richard Wallace | Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) |
2001 | Richard Wallace | Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) |
2002 | Kevin Copple | Ella |
2003 | Juergen Pirner | Jabberwock |
2004 | Richard Wallace | Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) |
2005 | Rollo Carpenter | George (Jabberwacky) |
2006 | Rollo Carpenter | Joan (Jabberwacky) |
2007 | Robert Medeksza | Ultra Hal |
2008 | Fred Roberts | Elbot |
2009 | David Levy | Do-Much-More |
2010 | Bruce Wilcox | Suzette |
2011 | Bruce Wilcox | Rosette |
2012 | Mohan Embar | Chip Vivant |
2013 | Steve Worswick | Mitsuku |
2014 | Bruce Wilcox | Rose |
2015 | Bruce Wilcox | Rose |
2016 | Steve Worswick | Mitsuku |
2017 | Steve Worswick | Mitsuku |
2018 | Steve Worswick | Mitsuku |
2019 | Steve Worswick | Mitsuku |
See also
- List of computer science awards
- Artificial intelligence
- Glossary of artificial intelligence
- Robot
- Artificial general intelligence
- Confederate effect
- Computer game bot Turing Test