Week 7
Benjamin Williams - Fri 24 April 2020, 3:43 pm
Modified: Thu 21 May 2020, 4:35 pm
Prototype Development
On Wednesday Batt Skwad had another great discussion about how we would go about building our individual prototypes.
- Tim is working on hacking into a TV by sending remote signals. He's doing this by working with infrared LEDs (there's a rough sketch of what this might look like after this list).
- Anshuman is focusing on how the user can interact directly with the robot to send it away and essentially shut it up. He's playing around with various sensors.
- I am dealing with how the robot will communicate with the user. Initially I thought I could do this by hacking into a TV (with an IR LED) and putting subtitles on the screen, but I ruled this out since remote signals can't add subtitles. We joked that the robot could quickly switch between channels and stitch together a message from the various channel audios - this was inspired by Transformers, where Bumblebee does this with a radio. So we talked about the potential of hacking a radio or speaker to do something similar, but the problem was that a radio doesn't have a screen, so it'd be awkward for the robot to have to go off and find a radio to speak through. We discussed attaching a speaker to the Arduino, but after doing some research this is apparently very difficult. Running out of ideas, we consulted Clay, who suggested using a DFPlayer Mini. The DFPlayer takes an SD card filled with separate audio tracks, and the programming side involves triggering the right track at the right moment (a rough sketch of this is just below). If the DFPlayer proves to have problems, the fallback option is to simply hide a phone in the robot to play messages.
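To get my head around what that programming might look like, here's a minimal sketch using the DFRobotDFPlayerMini Arduino library. The pin choices, track number and timing are just placeholders, and it assumes the SD card holds files named 0001.mp3, 0002.mp3 and so on in the root folder:

```cpp
#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>

SoftwareSerial dfSerial(10, 11);   // RX, TX -> DFPlayer TX, RX (placeholder pins)
DFRobotDFPlayerMini dfPlayer;

void setup() {
  dfSerial.begin(9600);            // the DFPlayer talks at 9600 baud
  Serial.begin(9600);
  if (!dfPlayer.begin(dfSerial)) { // fails if the wiring or SD card is wrong
    Serial.println("DFPlayer not found - check wiring/SD card");
    while (true) {}
  }
  dfPlayer.volume(20);             // volume range is 0-30
}

void loop() {
  dfPlayer.play(1);                // plays 0001.mp3 from the SD card root
  delay(10000);                    // wait before repeating (placeholder timing)
}
```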
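Circling back to Tim's bullet, this is roughly what the IR side might look like, assuming the older IRremote 2.x API where the send pin is fixed. The code below uses the commonly published LG power code as an example; the real code for any given TV would need to be captured from its remote first:

```cpp
#include <IRremote.h>

IRsend irsend;  // on an Uno with IRremote 2.x the send pin is fixed at pin 3

void setup() {}

void loop() {
  // NEC power toggle code commonly published for LG TVs; other brands/models
  // need their own code, captured from the real remote with an IR receiver
  irsend.sendNEC(0x20DF10EF, 32);
  delay(5000);
}
```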
Combining each part
Another topic of discussion was how we would combine our separate components. Since we can't meet in person, it's going to be impossible to physically combine each component. The problem is: how would the robot know when to say "haha I turned off the tv" if the TV-hacking IR remote is 10km away in Tim's bedroom? Clay cleared up this confusion by suggesting that we simulate the component interactions with buttons that act as triggers for when to do what (sketched below).
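Something like this is what I imagine Clay means: a pushbutton stands in for Tim's IR component firing, and the response (here just a serial message, eventually a DFPlayer track) is the part we each actually build. The pin and message are placeholders:

```cpp
const int triggerPin = 2;  // pushbutton standing in for the real trigger

void setup() {
  pinMode(triggerPin, INPUT_PULLUP);  // button wired between pin 2 and GND
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(triggerPin) == LOW) {  // pressed
    // in the real build this is where the robot's reaction fires,
    // e.g. dfPlayer.play(2) for the "haha I turned off the tv" track
    Serial.println("Simulated trigger: TV was just turned off");
    delay(500);  // crude debounce so one press = one trigger
  }
}
```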
Reflection
I'm happy with the progress we've made this week. It took a good couple of hours of discussion to work out what approach we wanted to take in separating and building the prototype. The result was that we went with option 1: separating the prototype into components, where I deal with communication, Anshuman with physical interaction and Tim with screen hacking. Not only does this option let us build the prototype efficiently, it also lets us each focus on and refine our own aspect of the concept. I'm especially satisfied with this outcome because communication lets me further explore the human-computer emotional interaction aspect that I've been interested in since the start. Moreover, I have some experience with audio technology and sound design, so I'm interested to see how I can apply these skills.