Extended Lesson Plan for Hour of Code activity

Overview

In this Hour of Code activity, students are introduced to programming topics, including reading input from the user and printing output to the screen. They are also introduced to computational thinking concepts, including control structures, which determine the flow of a program.
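As a minimal sketch of the three ideas above (reading input, printing output, and a control structure deciding what happens next), something like the following Python snippet could be shown to students. The function name `greet` and the messages are illustrative, not part of the activity itself.

```python
def greet(name):
    """Return a greeting that depends on the input."""
    # Control structure: the program's flow changes based on the input.
    if name.strip() == "":
        return "Hello, stranger!"
    return "Hello, " + name.strip() + "!"

# In a live session, the program would read from the user and print the reply:
#   print(greet(input("What is your name? ")))
```

The commented-out last line shows where `input` and `print` would connect the function to a real conversation with the user.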

The activity is structured as a sequence of interactive notes and questions that students are challenged to answer. Students are introduced to chatbots, which are computer programs designed to simulate intelligent conversation with a user, often via text, and sometimes with the aim of passing the Turing test.

Chatbots offer an excellent opportunity to discuss concepts of artificial intelligence with students.

These concepts could be introduced after a discussion of a number of related topics, or could lead on to further discussion.

Specifically, our Eliza activity is inspired by the ELIZA chatbot program written at MIT by Joseph Weizenbaum between 1964 and 1966. ELIZA was a simulation of a Rogerian psychotherapist. Using simple pattern-matching techniques (and no information about human thought or emotion), ELIZA sometimes provided a startlingly human-like interaction.
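To give a feel for how little machinery such pattern matching needs, here is a minimal ELIZA-style sketch in Python. The rules below are invented for illustration; they are not Weizenbaum's original script.

```python
import re

# Hypothetical rules (not from the original ELIZA): each pair maps a
# regular-expression pattern to a response template.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text):
    """Return an ELIZA-style reply by matching simple patterns."""
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Echo part of the user's own words back, therapist-style.
            return template.format(*match.groups())
    return "Please, go on."  # fallback when nothing matches
```

For example, `respond("I am sad.")` echoes the user's words back as "Why do you say you are sad?" — no understanding of emotion is involved, only text substitution.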

Learning Objectives

On completion of this activity, learners will:
  • have used a computer to print data to the screen
  • have used the computer to make decisions
  • have used logic and problem solving skills to answer simple questions

Extended lesson plan and activities

Students can chat with a reimplementation of the original ELIZA: http://nlp-addiction.com/eliza/ and a more recently developed chatbot, Cleverbot: http://www.cleverbot.com/

Students can then answer these questions in pairs or small groups:

  • What did it feel like talking to Eliza?
  • What did it feel like talking to Cleverbot?
  • How are the two bots similar?
  • How are they different?
  • Which bot seemed more like talking to a human?
  • Do you think either bot would pass the Turing test?
  • What did the bots do "wrong" in terms of passing as a human?

On the Internet, no one can tell you're a bot! - Follow-up activity

Students discuss how much they trust what they read on the internet, considering questions such as 'Who can you trust?' and, specifically, 'How do you know you're not talking to a chatbot?'

With apps such as "Clevertweeter", which can be set up to tweet on your behalf with auto-generated, somewhat relevant content, how sure can we be about our interactions online?

Further discussion or extension points:

  • Will we be able to make robots that are indistinguishable from humans?
  • To what extent is language a human endeavour?
  • How different is a robot that can talk to you from one that can't?
  • What areas of society could best utilise 'companion' robots that can communicate easily and fluently? For instance, the use of robots in aged care and dementia care.

Let's get started!