Documentation & Reflection

Week 14 - Online Exhibit

Anshuman Mander - Sat 13 June 2020, 8:11 pm


The long awaited journey has finally come to an end. With the exhibition this week, we have reached the end of these weird yet amusing online classes. Before the farewell, let me take you through the week.

Online Exhibit (Whaaaaaaaat?) -

Alright, everything that could have gone wrong went wrong. While preparing for the online exhibit, the team and I arrived three hours before the exhibit began, so we decided to test whether everything worked. We plugged the prototype in, uploaded the code, and everything broke. We fixed it eventually, but there were three problems we had to solve -

  1. Numero Uno, Arduino IDE failure: Unfortunately and funnily, the Arduino update released on the day of our exhibit broke the IDE. We were unable to even open it. Fortunately, a simple fix from the internet got us back on track. Several other students there faced the same problem.
  2. Numero Dos, burning Arduino: After uploading the code, we took a 10 minute break, and when we came back, smoke was coming out of the Arduino. The problem was voltage related: too many parts were connected to one 5V pin. We had IR sensors, 3 NeoPixel strips, a DFPlayer Mini and a speaker connected to one power source. Following this, we took the individual parts out and hosted each on a different Arduino. This did mean we had to simulate the individual aspects working together.
  3. Numero Tres, sensor failure: The potentiometer stopped working. I had hot glued all the parts, but it still broke - I was getting the same reading even when turning the potentiometer. So, I borrowed a spare one from the lab and thankfully the prototype was fixed.

Overall, once the exhibit started, everything went well. We presented and got good feedback, along with interesting questions and suggestions for future development. Though we weren't able to give viewers the full experience due to the exhibit being online, seeing the responses, I would say we got the concept across successfully.

We built a website for the exhibit, which can be found here -

I know there are tons of improvements that can be made, but if you would like to review the website and suggest improvements, it would be appreciated.

Farewell -

I know the course isn't over and there's still one assessment left, but it still feels like it's over (I will probably post another update on the last assessment). Anyway, Arigato & Sayonara.


Week 13 - Last week :-(

Anshuman Mander - Sun 7 June 2020, 5:13 pm

Sat Sri Akaal,

Combining code, part 2 -

This week the team got together again to try to combine each individual aspect. Continuing from last week, where we combined talking and interactions, most of this week was spent fixing errors. For some unknown reason, the DFPlayer Mini showed errors constantly. One time in ten it worked; the rest of the time it reported a wiring error. We tried everything - rewiring the breadboard, searching for solutions on the internet - but nothing worked. The only rescue was ignoring the error and running the code anyway. I don't know why, but if the error was ignored, the DFPlayer Mini still worked. With the little time we had left, we tried combining Tim's part, where the robot turns off the TV. The component worked individually, but combined with the rest it didn't. Most probably, the problem was how much power the circuit was drawing: all the NeoPixel LEDs (40 of them), the IR sensor and receiver, and the DFPlayer Mini were connected to one power source.
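For the curious, the "ignore the error and carry on" workaround can be sketched roughly like this (pin numbers and file layout are my assumptions, not our exact wiring):

```cpp
#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>

SoftwareSerial dfSerial(10, 11);  // RX, TX wired to the DFPlayer Mini
DFRobotDFPlayerMini player;

void setup() {
    Serial.begin(9600);
    dfSerial.begin(9600);
    // begin() kept reporting a failure even though playback still worked,
    // so instead of halting, we just log it and continue.
    if (!player.begin(dfSerial)) {
        Serial.println(F("DFPlayer init reported an error - continuing anyway."));
    }
    player.volume(20);  // volume range is 0..30
    player.play(1);     // play track 0001.mp3 from the SD card
}

void loop() {}
```

This is a sketch of the pattern rather than our final code; the real fix would be sorting out the wiring or power, but under exhibit time pressure, logging and continuing kept the sound working.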


Considering the time we had left, the team decided not to include the TV turn-off functionality, even though it was important to the concept. I would like to think that with a bit more time we could have figured it out, but because Ben had surgery and couldn't meet, it was harder to manage combining the aspects together.

Website Development -

I started working on developing the portfolio at the start of the week. I laid out each section in a Word document and thought I would finish developing by the end of the week, but due to other course commitments and how much time was spent on combining code, the website isn't fully complete. I have completed around 70% of it, but it's still far from done. There is a lesson here that I have learned hundreds of times but still forget: "get started early".

To do -

Complete the portfolio before the end of the day. Meet with the team on Monday and have a last try at combining Tim's part. If that's not possible, we will just check that the existing functionality works properly. Also, as a self reminder, getStartedOnTeamDocumentOrYouWillRegretIt.

Week 12 - Work towards final build

Anshuman Mander - Sat 30 May 2020, 10:45 pm


This week was aaaaaaaaaaaaaahhh weirdly weird. A lot happened and nothing happened at the same time. The week went like.....

Start of week, coding struggles

At the start of the week, I tried to work with multiple sensors and failed miserably. Since each sensor read different values, averaging them and displaying the result on the NeoPixel strip did not work. The main reason was the sensitivity of the different sensors: the potentiometer read values from 0 to 1023 easily, but the photocell got stuck between 200 and 700. The pressure sensor worked well, but the piezo sensor struggled. Realising this, instead of averaging values, I decided to use only one sensor at a time. This was done with while loops, which let me display properly. Additionally, the while loops let me produce different patterns on the NeoPixels instead of sticking to just one. The sacrifice is that only one sensor is used at a time, which is fine since interacting with multiple sensors wasn't part of the plan anyway.
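One alternative to dropping the averaging idea would be rescaling each sensor's usable band onto a common range first. A minimal sketch of that, assuming the photocell's usable band really is roughly 200-700 (the helper name is mine, not from our sketch):

```cpp
#include <algorithm>

// Clamp a raw reading to the usable band of a given sensor, then rescale it
// to 0..1023 so every sensor drives the NeoPixel pattern the same way.
// Example: the photocell that "got stuck" between 200 and 700 would use
// normaliseReading(raw, 200, 700).
int normaliseReading(int raw, int lo, int hi) {
    raw = std::max(lo, std::min(raw, hi));         // constrain to the usable band
    return (long)(raw - lo) * 1023 / (hi - lo);    // rescale to the full 0..1023
}
```

With every sensor normalised like this, the averaged value would at least be comparable across sensors, though the one-sensor-at-a-time approach is simpler and was enough for the concept.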

Mid Week - Combining the code

During the regular studio and workshop time, the team decided to meet and combine their individual parts to form the robot. It was quite a struggle to get everything working, with the wiring being a mess and constantly unplugging. So, after learning the basic robot-speech code from teammate Ben, I used the workshop and hot glued every piece of wire I could. This ensured nothing broke midway. We were also able to combine bits of code and produce a very simple interaction: when the volume is low, the robot comments sassily, and when the volume is high, it "yayyys".

Weekend - To Do

Towards the end of the weekend, I plan to get every sensor working properly. They do work now but are a bit janky. Moreover, I plan to add code that keeps track of the robot's anger level; based on the anger level, the robot can say different things.

Week 11 - Final build progress

Anshuman Mander - Sat 23 May 2020, 7:01 pm
Modified: Sun 24 May 2020, 10:29 pm

Nǐn hǎo,

This week I started making version 2 of the prototype, which adds a few quirks to the robot's face. The new changes are the features I touched on last week: an eyebrow and a new smile implementation, used alongside a potentiometer to control the robot's expressions. It's just a simple build that tests out the new NeoPixel strips I acquired last week.


As can be seen, a potentiometer acts as the robot's volume control. Turning it in either direction controls the robot's mood, which is reflected through the smile and eyebrow.

In addition to simply reimplementing the previous functionality, I have also made the LEDs turn on one by one. The purpose is to support the transition from happy smile to frown as the user lowers the volume.

In Arduino, very simple code is used: it reads the potentiometer and, depending on its range, lights up the LEDs. Something to implement in future would be a variable that holds the robot's anger and increases once a transition has been made from smile to frown.
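The range-to-expression mapping plus the planned anger counter could look something like this (a minimal sketch; the names, thresholds and three-way split are my assumptions, not the actual code):

```cpp
// Map the raw 0..1023 potentiometer reading onto an expression, and tick an
// anger counter whenever the face crosses from any other state into a frown.
enum Expression { FROWN, NEUTRAL, SMILE };

struct RobotFace {
    Expression current = SMILE;
    int anger = 0;  // future feature: grows on every transition into FROWN

    Expression update(int pot) {  // pot is the raw potentiometer reading
        Expression next = pot < 341 ? FROWN : (pot < 682 ? NEUTRAL : SMILE);
        if (current != FROWN && next == FROWN) anger++;  // crossed into frown
        current = next;
        return next;
    }
};
```

The anger value could then pick which audio clip the robot plays, so repeated mistreatment gets progressively sassier responses.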

To do -

The team is going to meet on Tuesday, and we are going to try to put the prototype together. Before that, I will try to include the other sensors as well. Additionally, I will try to combine the code for each part so that everything works together.

Week 10 - Appraisals

Anshuman Mander - Sat 16 May 2020, 11:17 pm


Not much has happened since last week in terms of concept development and prototype build. During the studio and workshop, the team spent the time on team appraisals, going through the prototypes together and discussing them. From the appraisals we were given, there were a few common critiques that applied to almost every prototype -

  1. Intended Experiences - This was common to almost every prototype. Many people hadn't thought about what experience their prototype delivers to stakeholders and what emotional responses are triggered when people interact with it. Intended experience is one of the major contributors to the progress of a concept.
  2. Interaction Paradigm - I think students had a hard time understanding what this is, so many documents didn't clearly indicate what interaction paradigm exists within their concept. This is also my weak point: I understand part of it but need further clarification.
  3. Problem Space - Some students mentioned interactions and activities that address their problem space, but how the activity achieves this wasn't made clear. A clear link between the problem and how the solution solves it improves understanding of the concept.

Overall, I think students worked hard on their videos, and all components of the prototypes were well defined in them. The purpose of pointing out the areas above is for me to pay more attention to them. I had considered these areas when planning the interactions but hadn't thought them through well enough, something I aim to fix for the final deliverable. Also, from other students' appraisals of my video, there were a few key findings -

  • Negative and positive interactions should be more balanced.
  • In addition to the smile, more facial expressions, like red eyes when angry, would give the robot more character.
  • There needs to be more anthropomorphic feedback.

These are some points I aim to look at and work on in the upcoming weeks.

Week 9 - Prototype Build

Anshuman Mander - Sat 9 May 2020, 2:31 pm


This week I completed the physical build for the prototype. I had hashed out all the details about the build and its design the previous week and only needed to implement them. So, to build the robot's face, I used a cardboard box and flipped it inside out, attached ears to it, and started putting sensors in. I had designed the robot to look scary yet human: the scary part induces hesitation in people about treating it harshly, while the human part allows people to sympathise with it. The build was easy, but the most challenging part was cable management. Tons and tons of blue tack and super glue have been used to hold everything together.

In the prototype, there are four sensors which correspond to four interactions with the robot -

  • The eye - contains a photocell and detects if the robot's view is blocked.
  • Mouth - has a photocell that is used to control the robot's volume.
  • Ears - thin film pressure sensors used to detect squeezing of the ears.
  • Inside the robot - a piezo (vibration) sensor used to detect any objects thrown at the robot.

Aside from this, an LED mouth displays the effects of the interactions above. Only four LEDs are used, since using more was a mess and broke several times. Also, the interaction that makes the robot happy wasn't included in the prototype. That decision was made because, without my teammates' work, the interaction doesn't make sense and doesn't affect the intended experience in any way.

Below is an example of how block view interaction works -

To Do -

Apart from completing the video and documentation, the prototype still needs to be tweaked for sensor sensitivity. The Arduino code also needs further work to include the LED dimming feature. The code so far works for individual sensors but breaks when they are combined. This also needs to be fixed before the video is filmed.
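The planned dimming boils down to squeezing a 0-1023 sensor reading into the 0-255 range a NeoPixel colour channel expects, the same maths as Arduino's map(value, 0, 1023, 0, 255). A hedged sketch (the function name is mine):

```cpp
// Convert a raw 0..1023 analog reading into an LED brightness value (0..255),
// clamping out-of-range inputs so a noisy sensor can't overflow the channel.
int toBrightness(int reading) {
    if (reading < 0) reading = 0;
    if (reading > 1023) reading = 1023;
    return (long)reading * 255 / 1023;
}
```

The result can then be fed into something like strip.setPixelColor() as the red or green channel, so the smile fades in and out with how much the robot is being annoyed.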

Week 8 - New Changes and Physical Prototype

Anshuman Mander - Sun 3 May 2020, 3:30 pm


This week I made a small survey to test the interactions in the concept, and based on the results I made some changes. The new changes come from the analysis of that test (covered in the previous blog).

The first change was the inclusion of a smile on the robot. The smile reflects the robot's "mood" and is affected by the user's interactions: the robot frowns when it gets angry and smiles when it gets happy. Hence, the smile shows how the user's actions are affecting the robot and is a visual indicator of its sassiness.

Also, it was suggested to have interactions that make the robot happy. To make the robot happier, I was thinking of something like patting it on the head. This interaction makes the robot happy and stops it from sassing the user. Since patting is a continuous action, it is rather hard for the user to keep up for long periods. This fits in perfectly and gives the user a choice: either do the hard interaction of patting and make the robot happy, or do the other interactions and make the robot angry.

Physical Build -

To start with the physical build and form, I used cardboard lying around the house and folded it to make the robot's face. I only built the face, since all the interactions reside within it and there was no need for a body. To build the aforementioned smile, green and red LEDs were used: when the robot becomes angry, the red LEDs light up; otherwise the green LEDs light up. There is also a dimming functionality in the LEDs, which depends on how much you are annoying the robot.


In the picture above, the LEDs show the smile, and the potentiometer between the smiles is used as the volume slider interaction. The second pic, of the photocell, is the robot's eye and works as the "block view" interaction. This eye was inspired by the Daleks from the TV show Doctor Who. The Dalek look gives the robot an intimidating appearance, which suits our robot well.

A Dalek

The third interaction, squeezing the ears, is achieved through a pressure sensor in the ears (in pic 1). The last interaction, which uses a piezo sensor to detect vibration, is fitted inside the cardboard box.

Moving Ahead -

Now, with a somewhat assembled physical build, the programming part is left. Some code has already been written, which just needs to be completed. Also, I would like to test and tune the sensitivity of the sensors before I start filming the video. As for concept development, I'm satisfied with the current state and, before moving forward, will get feedback on the build in next week's studio/workshop.

Week 8 - Testing and Updates

Anshuman Mander - Thu 30 April 2020, 12:08 pm


As mentioned in the last blog, I conducted a survey to gather further insights into my individual direction. The questions in the survey revolve around exploring what the tester thinks about the current concept, how it may differ from their mental models, and, given the chance, what they would change about it. The survey can be found here.

Responses and changes -

Through the testing, it was revealed that the interactions themselves are sassy and make sense to testers, but further improvements can be made. The testers suggested some reactions for when the robot is stopped; these could be verbal or some other minor retaliation. When asked for ways to stop the user from stopping the robot further, testers suggested the robot play by the user's rules and retaliate later. This lines up exactly with our idea of the robot gaining control over the user's devices in order to retaliate more. When asked for features to include, the response was the inclusion of interactions that make the robot happier. Based on this summary, a few changes were made to the concept and interactions -

  • Addition of reactions when the robot is stopped (these reactions fall under my teammates' focus; for example, the robot screaming would be handled by Ben).
  • The robot plays nice and doesn't sass verbally when it is stopped. Instead, it stops or controls the TV without expressing it. Not expressing itself helps the robot avoid being stopped again.
  • One more interaction will be included that makes the robot happier. This gives the user a chance to apologise to the robot.

A reflection on ideal finished product -

In the previous entry, I said the ideal finished product is similar to what I'm developing. Looking back, there are a few things I would have included. I left out the part where the robot has reactions. If I had more time, I would have liked to see reactions from the robot, such as yelling if its ear is pulled or going around in circles if its view is blocked. As of now, there is not much I can do to make them work.

Week 7 - Individual Recap and Update

Anshuman Mander - Fri 24 April 2020, 4:23 pm
Modified: Fri 24 April 2020, 4:41 pm


Individual Recap:

Our team's concept is a sassy robot that stops the user from watching screens (like the TV) too much, using sass as its weapon. In our team, every member works on a different aspect of the same concept. My individual direction focusses on "stopping the robot's sass temporarily". In particular, my focus is on interactions that aim to stop the robot from performing sassy activities for a short period of time. To the user it may look like short relief from the robot's sassiness, but there is a hidden purpose.

The hidden purpose is to highlight people's choice to ignore a good decision for temporary pleasure (watching TV). So, the idea is that whenever the user ignores the robot's sass and sticks to watching the screen, the robot develops and makes it harder to ignore its advice the next time. Repeatedly stopping the robot makes it more powerful, and eventually the user won't be able to stop it until screen usage stops.

With this purpose in mind, I developed some interactions that mimic scolding. Scolding was selected because it allows the user to self-reflect. All the interactions are also meant to disorient the robot so that its personality comes through.


The short-term effects are temporary relief methods, while the long-term effects are gained through repetitive use of the interactions. The long-term effects help users realise: the longer you ignore, the harder it gets.

Ideal Finished Product:

My ideal finished product would be the robot with all the interactions above. The robot would zoom around the house and, whenever it detects someone using a screen for too long, start sassing. To make the sass go away, the user would either have to turn off the TV or use the interactions above. The ideal product is similar to what I'm developing, but due to social distancing, the form of the prototype is not fully developed, and we can't use the Roomba cleaning robot that was supposed to be the robot's chariot. Except for that, the prototype is close to the ideal product.

Current Progress & Ahead:

Currently I'm looking at understanding and testing the individual sensors used in the interactions -

  • Cloth over face - Photocell resistor
  • Shaking - Piezo element
  • Ear pull - Pressure sensor
  • Volume Slider - Potentiometer

I am not worried about how to use the sensors; what I'm unsure about is how I will develop the form of the prototype. This is something I'm going to work on over the weekend.

Also, I'm preparing a test to help me understand more about how users view the interactions I have chosen. This will be in the form of a survey indicating the roles of the interactions.

Week 6 - Pandemic Testing and Break

Anshuman Mander - Sun 12 April 2020, 7:46 pm


During the contact session, we got to experience "Pandemic Testing". COVID-19 is rampant and people are stuck inside their homes. So what does this mean for design? Unfortunately, the ordinary testing methods have become void. How do we test while under quarantine and restrictions? This is what we explored this week in our contact class. In the contact exercise, we observed a person on a train without actually being on a train (whatttttt). For the task, I opted to search for videos on YouTube, while some chose to look in magazines. It was fascinating to learn how we can test from home; though not the same, it did fulfil our purpose of gaining insights. Below are the insights I collected -


In the second part of the exercise, we were asked to research other methods that can be used in quarantine, and below is what I found. Personally, I think online discussion forums are an excellent source for gathering info, which I could even use for testing my individual direction in the project.


Break -

My plan for the break (apart from watching movies all day) is to complete the Arduino tutorial. Currently I'm on Activity 3, and since I have a good understanding of Arduino, I don't think it will be difficult. Over the break, I also intend to complete the interactions for my individual direction. Other than that, I wish everyone a Happy Easter.

Adios Amigos

Week 6 - Individual Direction & Future

Anshuman Mander - Sat 11 April 2020, 7:22 pm
Modified: Sat 11 April 2020, 7:22 pm


Continuing from the previous week, our team has decided to work on the concept of the Self-Aware Bot (any suggestions for a better name??), with some changes. We decided to narrow the concept down to focus on screen usage only, instead of any tech. We chose screen usage since it's a habit everyone struggles with and it's common in the everyday environment - home. Hence, the concept now looks at ways to reduce screen usage at home through an element of sass.

On the individual level, the team decided to opt for option 1 and develop the concept collaboratively, with each individual looking at a different aspect. My teammates Ben & Tim chose to look at interactions where the robot annoys the user, while I decided to pursue the opposite: ways users can stop the robot from annoying them. There is only one way to stop the robot's sass, and that is to stop it from talking. So, continuing from the previous blog, these were the interactions used to stop the robot from talking -


Though these interactions fulfilled our purposes, we quickly realised they weren't friendly and presented a dystopian view of the world, which we don't want our stakeholders to see/experience. Hence, I decided to move away from these gruesome interactions and made them more subtle. The new interactions fulfil the same purposes but in a non-horrific way -


Future -

During the next week or so, I will research more and include cheeky ways to stop the robot from sassing. My goal is to finalise the interactions over the Easter break and get ready to start testing (I'm a bit confused about testing due to pandemic restrictions).

Week 5 - Team Concept Development

Anshuman Mander - Sat 4 April 2020, 1:28 pm

Minasan, kon'nichiwa

Between week 4 & 5

After the critiques the previous week, some problems with our concept were highlighted. To recap, the main issues were the missing everyday usage and the interaction (the levers) not being playful. Our team, the BAT Skwad, decided to ideate and research more individually on concept refinement. One idea I thought of was the Doggo Assistant.


Although at first the doggo assistant might not be sassy, as you progress and start ignoring it, ignoring its reminders and mistreating it (being lazy and, instead of tapping the dog by hand, tapping it with your feet), the dog becomes sassy as well. The dog can fight back by pissing on you or headbutting you, and in an extremely agitated state it can even bite you.

Week 5

In week 5, the team had a meeting where we each discussed our ideas to find one we were collectively ready to work on. Although we had plenty of ideas, none fully satisfied us. The doggo assistant's interactions are mundane and not playful enough. So we took this problem to our tutors, and they suggested writing down the core elements of our first idea. In reflection, this is something we should have done right after the critiques. This process makes sure the team agrees on what's important. So, we extracted four core elements from our pitch idea -

  • Decision Points - We want users to decide their own fate. If users are nice to tech, tech is nice to them, and vice versa.
  • Tech's Control - The tech should be in control even though humans think they are.
  • Reason for not trusting - The user should have a reason for not trusting the tech.
  • Penalty for not trusting - The tech should penalise the user in a sassy way for not trusting it.

Following the points above, I came up with another idea, the Self-Aware Robot (sorry for the terrible name) -

The self-aware robot is a robot you found lying on the side of the road. You took it home, powered it up, and it started talking!! The robot says it's from the future and is aware that it's a machine made to serve humans, so, reasonably, it hates people misusing tech. Hence, whenever the user misuses tech, such as watching too much TV or phone, the robot becomes sassy and starts spouting at the user to stop. Now, the user, just wanting to watch TV, can stop the robot in the following ways -


Every time you try to shut the robot up, you are torturing it. Being tortured again and again, the robot gains control over your appliances and starts wreaking havoc, such as making you watch a documentary about being kind or turning off the TV. Although some parts of the idea are missing, like what else the robot can do and how much power it has over electrical appliances, it's an introductory concept which the team can refine if it's chosen as our main concept moving forward.

In essence, the Self-Aware Bot is a habit changer which tries to lead you to use less tech and also teaches you kindness. This concept covers all the core elements we extracted above. The concept is yet to be finalised by the team, but moving forward, in our next meeting the team will again extract core elements from each idea to iterate over.

Min'na arigatō.

Week 4: Presentation, Feedback and Future

Anshuman Mander - Sat 28 March 2020, 2:58 pm
Modified: Sun 29 March 2020, 12:37 pm

Online Classes -

This week, we had our first online class and, surprisingly, it was pretty much the same as a normal class. The teams presented, and critique was given through Slack. My team's presentation went well (thanks to Amraj for presenting), and we gained a few insights that have helped us expand the concept. The presentation slides can be viewed here.

Overall, every presentation was awesome, and some had really concrete concepts. Interestingly, the presentations with the least data and the most idea exploration were the most eye-catching. Sketches helped them get the idea across, and this is something I personally can learn from. I will try to use more sketches (even rough ones) to get the message across. Moving along, let's look at some critiques we got.

Critiques & Moving Ahead-

Before going into the presentations, we had some questions regarding the future direction of the concept. The questions aimed at acquiring info about everyday usage of the concept, what other mediums it could be utilised in, and whether the message is of interest to people.

Reading the critiques, it became explicit that viewers were confused about our idea. Some viewers asked how the concept is supposed to increase trust in machines. There may have been some mistake on our part while presenting, because our aim with the concept was distrust of machines.

Apart from that, when fellow students commented on everyday interaction, the general advice was to move away from the maze format and use decision making in everyday things.

  • One idea I can think of involves making the concept an assistant that helps with daily tasks. While performing daily tasks, users are given choices that compromise either the assistant's health or the user's time. The more time the user saves and the lower the assistant's health goes, the more broken/sassy it becomes. In this way, the concept encompasses both the goal and everyday interaction. The team will discuss this further as well.

Another thing the critics disliked was the interaction. Simple levers didn't satisfy their need for a playful interaction.

  • Going ahead, the team is looking at other forms of interaction, but currently we have no leads. We will try our best once we decide on the form of the concept (robot or assistant or such).

Moving ahead, we are going to focus on finalising the form and interaction of the concept. Our goal remains the same, since people did like our aim of distrust in machines.

Week 4: Concept Development

Anshuman Mander - Thu 26 March 2020, 2:03 pm
Modified: Thu 26 March 2020, 7:37 pm

This week has certainly been an interesting one. At the end of week 3, all students got an announcement about the pause in teaching for a week. During this break, our team, consisting of Ben, me, Tim and Amraj (BATA SKWAD), got more time to look into different inspirations for refining our concept. Our ideation process was a bit on the messy side, but it's better to fail fast, fail often. Here's our process -

Part 1: Idea dump

Towards the end of the third week, we had a lot of ideas regarding sassy tech, too many even. Some of them were:

  • Dad Bot - Stands arms crossed, disappointed at everything you do.
  • Evil Sister - Ignores you and gets mad when you ignore her.
  • Sassy shopping center - Eyes look at you in judgement when you go towards unhealthy food section.

There were a dozen more mixed ideas like these, most of them derived from the Wednesday session in Week 3. All of the new ideas were random and didn't have much to offer. We had difficulty choosing and finalising one idea. What this experience taught us was that, without a goal, ideas are useless. A day spent ideating with no goal has no outcome. We could not decide on an idea because we had no criteria, and criteria are important in making decisions. To generate criteria, we had to set a goal, and that's what we did.

Therefore, after another discussion, we decided to progress towards ideas that actually teach you something, be it a life lesson or increased awareness about an issue. From here, we headed individually into the second part of ideation, inspirations.

Part 2: Inspirations

The group decided to do further background research before sitting down for an ideation session, this time with a goal in mind: teaching through sassiness. While researching, my mind wandered towards tech with personality. I came across Westworld (a TV show), where human-like robots are made as a game for rich people. The rich people torture them, and the robots eventually gain consciousness and retaliate against humans. Similar to Terminator, but its main idea was that you get treated as you treat. In our next meeting, everyone laid out what they found interesting, and a basis for the concept was formed - Asimov's Three Laws of Robotics. We aimed to explore what would happen if robots didn't obey these laws.

Part 3: The Concept

Research into dystopian worlds where robots retaliate against humans, be it HAL 9000 (in 2001: A Space Odyssey), I, Robot or Westworld, gave us our direction: robots that defy Asimov's rules. Hence, our concept goal became making people aware of the treatment of robots and the possibility of their retaliation.

The concept we came up with was a maze game. The maze is completed by a person with the support of a robot that suggests paths through the maze. Whenever you reach a decision point, the robot suggests which door to open. If you listen to the robot, nothing happens, but if you ignore its suggestion, the robot's health goes down.


The more damage you do to the robot, the more sassy it becomes, eventually leading you down the wrong path. What we want people to reflect on through our concept is their trust in robots. The somewhat blind trust that people may have in machines, even in online sites or software that share their data, is too much. We just want people to pay more attention to these things.

Week 3, Wednesday - Team Formation

Anshuman Mander - Fri 13 March 2020, 9:26 pm

Wednesday Session, Team Formation -

After allotment into teams of four and an initial ice breaker, we started discussing our top-of-the-head ideas. I think our team really meshed together (blessing of the spirit animals), and all our ideas were based on two things: a bit of troll to accompany the sassiness, and something used in daily life. Below are rough sketches of every idea we thought of -

[Images: rough sketches of our ideas]

Out of these, my favourite was -

  • Grammar Teacher - People are given a chance to write a sentence, and if they get the grammar wrong, every person who wrote before them gets their sentence rewritten to match the mistake. I like this because the sassiness isn't aimed directly at the wrongdoer; instead it is felt through the eyes of the people who wrote before them, who get punished for the mistake.

The week as a whole was a success. I got plenty of insights into the ideation process. The week went smoothly and I am eagerly awaiting the next one.

Week 3, Tuesday - World Cafe

Anshuman Mander - Fri 13 March 2020, 2:23 pm

Hola Amigos

Tuesday Session, World Cafe -

During the Tuesday session, the students undertook the World Cafe activity, where students discussed aspects of the theme given to them and then rotated to another table. This ideation process was interesting, insightful and a good warm-up for getting students interested in the various themes. Throughout the three rounds, I was intrigued by the differences in people's perspectives on the same concept. An example would be Mort for Creative Learning: I personally thought this idea didn't belong in the category because it was basically input and output (an inspiration to research Morse code, rather than a product that actually teaches Morse code), which I wouldn't put under Creative Learning. On the other side, my tablemates argued that inspiration does come under learning.

At the end of the session, we got to pick the three themes we wanted to work on. My top choice was Sassy Tech. This was because humanising a technology and giving it a character looks like an intriguing way to look into people's minds and understand their characteristics. My second and third choices were Bothersome Tech and Emotional Totems respectively. The reasoning for these is similar to Sassy Tech - just fiddling with human psychology.

Week 2 - Concept Critiques and Answers

Anshuman Mander - Sat 7 March 2020, 3:59 pm


Here is an update to the Idea Inspiration post, answering a few questions fellow students had. After going through the critique sheets, I saw some concerns and questions regarding the concept surface. Below are the main questions from the critiques -

  • How can the mat know what you do?

The mat detects a user's body movements through multiple sensors. An example is a pressure sensor, which detects the distribution of body weight; from that distribution the mat can work out which activity you are trying to do.

  • How does the mat respond differently to different users?

Before starting the exercises, you calibrate the mat to your body shape by simply lying on it. This tells the mat about your shape, any deformities or structural problems, and your current health status. The AI in the mat keeps these factors in mind when helping you exercise.

  • Why didn't you use light and sound to support user?

When users exercise, the robots help them by physically touching their body and supporting their posture. Using another sense when touch is already present seems redundant. If any other sense could help users, it would definitely be included.

Also, other critiques included suggestions for the mat - a massage feature for recovery, goal tracking, and the use of foam as the robot material. These all look like great additions to the Smart Fitness Mat and I 100% support them. ;-]
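To make the pressure-sensor answer above a bit more concrete, here's a hypothetical Python sketch of how a grid of pressure readings could be mapped to an activity. The thresholds and activity names are entirely made up for illustration - the real mat's AI would be far more sophisticated:

```python
def guess_activity(pressure_grid):
    """Guess an activity from a 2D grid of pressure readings on the mat.

    Heuristic: the fraction of the mat under load hints at the pose -
    lying spreads weight widely, standing concentrates it on two spots.
    """
    cells = [v for row in pressure_grid for v in row]
    active = [v for v in cells if v > 0]
    coverage = len(active) / len(cells)   # fraction of mat under load
    if coverage > 0.6:
        return "lying down"               # weight spread over most of the mat
    if coverage > 0.2:
        return "plank / push-up"          # weight on hand and foot regions
    return "standing"                     # weight concentrated on a few cells
```

A full-body reading (most cells loaded) reads as lying down, while only two loaded corners reads as standing.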

Week 2 - Contact and Soldering.

Anshuman Mander - Thu 5 March 2020, 11:09 am

Ahoy Lads,

This week, all students had to present their ideas in the studio session. Moreover, I attended the Soldering 101 session. My experiences with both activities are summarised below -

Contacts -

The contacts consisted of a myriad of 2-minute presentations. Fair effort was put in by everyone during the task, and many clear themes could be seen throughout the presentations. The most common theme I noticed was expressing emotions through various stimuli (mainly music and colours). Another apparent theme was the manipulation of behaviour via discomfort or the imbalance of the ordinary. At the end of the Wednesday contact, the students came up with several themes, which are shown below:

[Images: theme boards from the contact]

The themes I would like to take part in would be -

  1. Negative Reinforcement - Because who doesn't like to mess with other people? The prime reason for my interest is the fine line between harm, crime and negative reinforcement.
  2. Emotion as Input - Mainly because of the novel inputs. Not just emotion, but weird inputs in general that achieve reasonable goals seem unusual enough to pursue.
  3. Promoting Social Interaction - This category, in my view, overlaps with negative reinforcement (since social awkwardness can be a negative reinforcement). I just want to push people into awkward scenarios to test what qualifies as promoting social interaction.

Soldering -

This week, I attended the Soldering 101 session, wherein I got to create a basic circuit. Not gonna lie, at first I started soldering on the plastic side, not knowing the difference. Thankfully, my neighbour noticed just as I started, before it got bad enough to require redoing the soldering. I was then able to produce the circuit shown below. The only part remaining was soldering the battery, but apart from that the circuit works, so a self pat on the back ;)


Week 2 - Smart Fitness Mat (Project Ideas)

Anshuman Mander - Mon 2 March 2020, 7:53 pm
Modified: Mon 2 March 2020, 10:53 pm


My idea is a Smart Fitness Mat

Inspiration - The technology in the mat is based on the microbots from the movie Big Hero 6. Microbots are micro-sized robots that can combine to form any object the user desires. You can read further here

The initial inspiration for the mat was this tech, where people were trying to mimic touch via a suit.

The idea - At first, the concept was a mat that could coach exercise from far away by combining microbots and sensing technology. This seemed like a good idea, but then I wondered what I imagine the future to be. After researching more about the future being autonomous, the initial design came to a close.

Now, the idea is to use microbot-like robots embedded in the mat to assist the person exercising. Multiple microbots would be present within the fitness mat, and they would autonomously detect the user's needs and come to their aid. These autonomous robots would detect the user's movement, analyse the task they are trying to perform, and then automatically assist via one of three basic functionalities -

  1. To make fitness equipment - As can be seen in the poster, the robots can build equipment according to the user's needs. The mat detects the user's gestures and presents the desired equipment automatically.
  2. To prevent injuries - Inexperienced people often get injured while performing an exercise. The robots watch your body movements and make sure you are not overstretching or harming yourself in any manner.
  3. To correct posture - Many people exercise without paying proper attention to their posture, which can render the exercise useless. The microbots guide you into completing your exercises effectively.

Additionally, the mat is customised to an individual's body and can therefore also help people with physical deformities or disabilities exercise.

To conclude, the mat is like a training coach, but in the form of a mat :-)
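If I were to sketch how the mat decides between its three functionalities, it might look something like the Python below. Everything here is hypothetical - the state keys, priorities and response strings are invented purely to illustrate the detect-analyse-assist loop described above:

```python
def assist(user_state):
    """Pick one of the mat's three basic functionalities from a dict
    describing what the sensors currently detect about the user."""
    if user_state.get("gesture"):            # user gestured for equipment
        return f"form equipment: {user_state['gesture']}"
    if user_state.get("overstretching"):     # movement beyond a safe range
        return "prevent injury: ease the stretch"
    if user_state.get("posture_error"):      # posture deviates from target
        return "correct posture: guide into position"
    return "observe"                         # nothing to do yet
```

Safety would plausibly sit above posture correction in priority, which is why injury prevention is checked first here.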


Week 1

Anshuman Mander - Thu 27 February 2020, 8:47 pm
Modified: Sat 29 February 2020, 11:08 am

Who am I? Who is anyone? What is world? Why am I living? Is there any reason for anything?

Hi, I am Anshuman. I am a 3rd-year BIT student (UX major). I have various interests, which mainly include lazing around. On a more serious note, when it comes to doing work I like to be completely satisfied with the result; if I am not, I put in work until I am happy with it. Also, if you didn't get it from the intro of this blog, I am facetious. Anyway, I can do both programming and design (I'm slightly better at design).


Before attending the Innovate Induction Session, I had no expectations, as I barely understood the course. But after the session, I got really excited, started researching more, and now I am genuinely interested in what I can learn. Physical creation, coding and designing - all these things get me thrilled about what's to come. My only hope is to be even more immersed in the course and its possibilities at the end than I am right now.

To conclude, "You son of a *, I'm in"