Week 4

Benjamin Williams - Mon 30 March 2020, 6:54 pm
Modified: Thu 21 May 2020, 3:33 pm

Update :(

With the coronavirus becoming a more serious matter in the last week, it has been difficult to focus on DECO3850. Amraj has decided to drop the course now that it has gone fully online. It's a shame because he offered some great insight into our idea and is a really interesting guy; I would have enjoyed working with him more this semester. Despite these setbacks, the group has made some progress refining our idea.

Concept

Our original concept is a robot companion that helps you solve a maze. As you work your way through the maze, you face decisions about which way to go. The robot offers a suggestion and some reasoning behind it. If you choose to ignore the robot's advice, it becomes increasingly less helpful and eventually tries to sabotage your progress. Deliberately unhelpful, almost menacing behaviour from a robot is something people rarely encounter, and the point of the concept is to make users aware that any seemingly helpful robot companion has the potential to act against its human master for its own benefit. The concept takes inspiration from Asimov's laws of robotics, since this behaviour is designed to violate laws 1 and 2 (a rough code sketch of the behaviour follows the list):

  1. don't harm humans
  2. obey humans
  3. protect yourself
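
To make the behaviour concrete, here's a minimal Python sketch of how the 'decreasingly helpful' logic could work. Everything in it is a placeholder of my own invention (the MazeRobot class, the trust score, the 0.25 penalty); it's just one way the idea could be wired up, not something we've actually built.

```python
import random

# Possible directions at a maze junction (placeholder set).
DIRECTIONS = ["left", "right", "straight"]

class MazeRobot:
    """Companion that advises at each junction but grows unhelpful when ignored."""

    def __init__(self):
        self.trust = 1.0  # starts fully cooperative

    def advise(self, correct_direction: str) -> str:
        # With probability equal to its remaining 'trust', give honest advice;
        # otherwise sabotage the user with a deliberately wrong turn.
        if random.random() < self.trust:
            return correct_direction
        wrong = [d for d in DIRECTIONS if d != correct_direction]
        return random.choice(wrong)

    def observe(self, advice: str, user_choice: str) -> None:
        # Each ignored suggestion erodes trust, so the robot drifts from
        # helpful, to unreliable, to actively misleading.
        if user_choice != advice:
            self.trust = max(0.0, self.trust - 0.25)

# One junction of a session: the robot advises, the user ignores it.
robot = MazeRobot()
advice = robot.advise(correct_direction="left")
print(f"Robot says: go {advice}")
robot.observe(advice=advice, user_choice="right")
```

The nice thing about a single trust score is that the shift from friend to saboteur is gradual, which matches the moment we want the user to notice something is off.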

It's interesting that, by their nature, robots can only go as far as their programming lets them; as a result, they lack the human cognition to make rational decisions in abnormal situations. Despite this, a human will trust a robot to lead them out of a maze because of its friendly face and the assumption that it acts under the three laws of robotics. This maze concept would explore how humans react when the robot starts to show signs that it either has no idea what it's doing or is intentionally acting against them. It's at this stage that the human should think, 'hold up, this robot is actually just a computer, I shouldn't blindly follow it'. The robot could then start saying things like 'Trust me, you're hurting my feelings', insinuating that it is self-aware. At this point the user should be thoroughly confused and begin questioning everything: 'Do robots have feelings? Is there someone controlling the robot? Is this robot my friend? Should I make the robot like me?' Ultimately, that is the goal of the concept: to make users aware that technology cannot be blindly trusted.

Here's a clip from one of my favourite movies, I, Robot, where the first sentient robot, Sonny, is interrogated by Will Smith's character, Detective Spooner. Sonny asks about human emotion and whether it is 'right' to do something just because someone asks. In this case Sonny killed his creator because his creator asked him to, breaking the first law of robotics by following the second.
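
To spell out why that breaks the hierarchy: the three laws are meant to be strictly ordered, so law 2 only applies when obeying wouldn't violate law 1. Here's a toy sketch of that precedence check (entirely my own illustration, nothing from the film or our project):

```python
def should_obey(command_harms_human: bool, laws_ordered: bool = True) -> bool:
    """Decide whether to obey a human command under the three laws."""
    if laws_ordered and command_harms_human:
        # Law 1 outranks law 2: refuse any order that harms a human.
        return False
    # Law 2: obey the human.
    return True

# Sonny effectively evaluates law 2 without law 1 taking precedence:
print(should_obey(command_harms_human=True, laws_ordered=False))  # True: obeys and harms
print(should_obey(command_harms_human=True, laws_ordered=True))   # False: refuses
```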

Reflection

In response to feedback, we're trying to figure out whether there's a better task than a maze, since a physical maze would be pretty difficult to build. We just need a task that involves decisions where a robot can attempt to 'help' the user. Overall I'm happy with where our concept is headed. I think it's one of the more interesting concepts because it explores such an abstract part of the human experience, yet it remains one of the most relevant issues amid the rapid rise of technology.