This week our team delved deeper into our problem space by building on our original concept.
Stemming from our theme, Sassy Tech, the maze idea explored how a user would cooperate with a dodgy robot guide to complete a maze. When the user reaches a decision point (left or right), the robot offers its advice, and the user can choose to follow or ignore it. The more the user ignores the robot, the moodier and more devious it becomes. Conversely, cooperating makes the robot friendlier and more inclined to offer good advice. Ultimately, the aim of this concept was for the user to experience cooperating with an untrustworthy robot. We boiled it down to the problem space of 'how humans put trust in robots'.
Steven the tutor helped us collect our thoughts and think more critically about how we could effectively develop a concept. Once we decided on a problem space, he advised that we identify some core concepts and a mantra. In identifying these points, we considered how each element relates back to the problem space.
- Clear objective
- Decision points with multiple answers
- Guide in power
- No clear path
- Some idea of progress
- Sensation of being lost
- Trust robot or not
- Penalties for not trusting
- Benefit for ignoring robot
With these foundations, we then spent our time putting forward ideas and developing a concept.
This idea is a driving assistant that gives you directions on your mission to reach a destination. Similar to the maze, the robot recommends a path. However, if you follow the robot's directions, you encounter obstacles or are led along a dodgy route. Eventually, the user should begin to mistrust the robot and ignore its advice.
This concept involves a cooking instructor that offers methods and ingredients for baking a cake. The robot instructor offers some dodgy ingredients and cooking methods, ultimately leading you to second-guess the instructions. Since the user has a vague idea of the ingredients that belong in a cake, they are inclined to trust their own instincts.
The origami teacher offers folding directions for your task of creating an origami figure. Throughout the task, the robot teacher will offer some incorrect folds. The user should try to spot these based on their idea of what the finished origami should look like.
All three concepts involve making the user suspect that the guide is supplying false information and cannot be entirely trusted, again reinforcing the problem space: people shouldn't place their trust in a robot by default.
When Bat Skwad struck a dead end, we decided it would be helpful if we each went off to collect some individual research and then came back with new inspiration.
- Overtrust of Robots in Emergency Evacuation Scenarios
I found an interesting article about a volunteer study testing whether people would follow a robot leading them to safety during an emergency.
Trust in evacuation robots
The scenario was set up by placing a group of people in a mock building that begins to fill with smoke. At that point, an "Emergency Guide Robot" comes to the rescue. By following the guide, participants found themselves being offered LED-lit directions to an unmarked, previously unknown exit rather than the door they'd first come in through. The robot would attempt to take followers to a clearly blocked-off area, admit that it was malfunctioning, or otherwise behave unreliably. The bottom line of this study was that humans should establish an "appropriate level of trust" with robots.
- "People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault ... In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency"
After this week, it's clear that our concept is progressing; I'm just not sure where we're headed. After essentially ditching the Maze Bot idea, we were feeling a bit directionless. We knew we were exploring how humans put trust in robots; we just weren't sure of a good application for it. Then Steven came to the rescue, helping us organise our ideas into a list of core concepts and develop a mantra. With these foundations, we were able to focus our ideas and not stray too far from the original goals. Consequently, we amassed a bunch of new ideas that all share the core concepts. Over the next couple of weeks we'll work on refining these ideas and choosing the best one. Another consideration I want to raise is that we will eventually have to build this thing, so while we're still in the early stages of concept development, I'm trying to stay realistic about what is actually achievable.