Documentation & Reflection

[Week 14] - Final Week and Exhibition

Sigurd Soerensen - Mon 15 June 2020, 1:59 pm
Modified: Mon 15 June 2020, 2:01 pm

The final week has come and gone, and we all made it through to the end. In the last days leading up to the exhibition, we revisited our team reflection one last time before handing it in, and tested whether our prototypes still worked after merging our codebases. Finally, with all the code merged, sending messages to each other worked! Although my own prototype was still broken, we were able to show off our combined effort at the trade show.

As for the trade show, at first we did not get many visitors, so we had an extended chat with Ben before Matt arrived later. Marie gave most of the presentations, as her ball was one of the more reliable ones in terms of sending information to the others, whereas Tuva's worked half of the time and Thomas' broke midway through. Thomas was able to fix his again for the rest of the exhibition, though. Later on, we had a flock of visitors who wanted to check out our concept, which was nice, and we received a lot of positive feedback. It was also nice to see how some of the other projects ended up, and people's creativity in finding ways to prototype their concepts. All in all, it was a great ending to a long and exhausting semester, so I'm glad we can put it behind us and keep focusing on the thesis.

I didn't really receive any feedback on my portfolio, except from within my team. Having seen some of the other portfolios, I decided that my time would be better spent working on the thesis rather than fine-tuning a few more things on the website, as it is already in good shape.

Thank you to all for a good semester!

week 14 exhibition final week

[Week 13] - Portfolio and Prototype

Sigurd Soerensen - Mon 8 June 2020, 5:52 pm

At the end of last week, I mentioned some issues I had with my prototype. Sadly, both the accelerometer and the bend sensor are broken, rendering my prototype unusable. I have spent a lot of time on this subject this semester, but most of it went into solving technical issues, which in the end stole countless hours from building a prototype we could use to gather data. Given that I've dedicated most of my time to PhysComp, I don't have any more time to spend rebuilding my prototype if I ever wish to finish my master thesis. Having your prototype break down is a sad way to end the semester. Luckily, I have a couple of pictures, a short video and my test results to show for it. The test results are, in the end, why we built the prototype in the first place.

Tuesday

After Tuesday's stand-up, our team met to discuss how we should work on the team reflection. We settled on a divide-and-conquer approach, splitting the parts amongst ourselves. Some of us started drafting the team reflection that very same day, before we iterated on the text later in the week. We then proceeded to check whether our combined codebase worked after everyone had updated their code. This didn't turn out as expected: new issues that hadn't been there before meant we could no longer receive data from each other. Moreover, the overall experience and functionality were rather janky and unreliable, as the system jumped back and forth between the different states seemingly at will. Thomas, Tuva and I spent many hours that day trying to figure out what was wrong, but were unable to fix it. The root cause is probably a lack of communication and of planning the system out ahead of time, which likely would not have been a problem if the semester had run as normal and we could have met up to discuss things in person. We did discuss a plan B in case we did not get it running again: Thomas and I have our previous combined codebase, where sending and receiving worked on both ends, and Tuva and Marie have their individual working code. However, if possible, we would like to present one working codebase and a functional prototype. For the rest of the week, given that Thomas and I had previously spent the most time on the combined codebase, and that we were falling a bit behind on the thesis, Tuva volunteered to spend some time fixing the issue.

Portfolio

I did spend some time finishing up my portfolio this week. I've been pushing ahead to finish it early, as I need to spend as much time as possible in the coming weeks on my thesis. Most of what I did this week was writing more content and making sure the website was fully responsive and accessible. Working on the portfolio took significantly longer than expected, and it all feels a bit repetitive given that we are re-writing the very same things we have already covered in the journals, the proposal and the prototype delivery. Building the site also took more time than expected, as I'm not used to working without a framework. I could really see the difference all the heavy lifting a framework does makes, and the time it saves, but then again, the content is what took the most time.

week13 portfolio prototype

[Week 12] - Building the Second Prototype

Sigurd Soerensen - Mon 1 June 2020, 12:27 pm
Modified: Mon 1 June 2020, 6:39 pm

I spent most of my time last week working on the next prototype and my annotated portfolio.

Studio & Workshop

In the studio, we had our regular stand-up, with this week's focus on having a one-line pitch for our concept, showing what we have been working on, our priorities for finishing the project for the exhibition, and questions regarding the portfolio. Although we each pitch the concept with slight variations and have our own thoughts on the ideal product, given that we are still exploring different aspects, our current one-liner is "E-mories, a distraction-free physical platform to remotely share personal emotions with close friends and family". As for my progress, I showed the state of the ball, which at that point was the new ball with a bend sensor attached to the inside with silicone. As for priorities in finishing the project, we had already got all devices communicating over the server, so we mostly just had to keep making sure our individual prototypes work and conduct user testing for the last prototype.

[image]

Prototype

As for the prototype, I found a nice transparent ball at Kmart which I could use. The ball had a nice pattern to it, which I believed could reflect the colours in a neat way, and it also contained glitter water. At first, I didn't think much of the glitter water, as I mostly wanted the ball itself. However, looking back at the additional features suggested in our proposal, one of them was to add water to the E-mories device. Given that I am building this prototype to test materials and how to make the device more of a personal artefact, I decided to test whether adding glitter water could make for a unique look and feel, and whether it made the device feel more personal.

[images]

I drained the water from the ball and started to place the various Arduino components and sensors inside, making sure they were all fully covered in silicone so no water could touch the electronics. I covered all the electronics I could in clear plastic wrap and black tape before applying the silicone, for extra protection. I let the silicone dry before carefully adding some water to check whether it still leaked. Three times I had to add more silicone to stop the ball from leaking, which was strange, as by the end I had covered the entire bottom half of the ball with silicone.

[images]

Once I had finally made sure it did not leak, I tested whether everything still worked, which it did, although the accelerometer has seemed quite unreliable ever since I first tested it, as it appears to mix up angles. The working parts can be seen in the images and video below. At this stage, everything from recording to picking a colour, sending data and getting an incoming-message notification worked as intended. However, when I picked the prototype up the next day, the bend sensor values were all over the place, which made nothing work. I inspected the ball for water leakage, but there was none. I knew from when I received the bend sensor that its connection was somewhat loose, which I had taped earlier to avoid these issues and later covered in silicone to hold it in place. Despite this, I still seem to have issues with the bend sensor. Having tried to fix it for a couple of hours, I decided to drain the ball of water in case that had any effect, so I'm going to let it dry off before trying again. If that fails, I might have to simulate the squeeze interaction, as at this point I would have to pull everything apart to access and fix the bend sensor, which basically means purchasing a new ball and starting over from scratch.

[images]
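For reference, the kind of drift I'm fighting can in principle be worked around in software rather than by recalibrating manually. The sketch below is purely illustrative (a hypothetical JavaScript helper, not code from our prototype, and the margin and drift-rate numbers are made up): it tracks the sensor's resting value with a slow exponential moving average and flags a squeeze whenever a reading deviates far from that baseline, pausing the baseline update while a squeeze is active.

```javascript
// Hypothetical drift-tolerant squeeze detector (illustration only, not our
// prototype code). The resting value is tracked with a slow exponential
// moving average; a squeeze is any reading far from that baseline.
class SqueezeDetector {
  constructor(initialReading, margin = 120, alpha = 0.02) {
    this.baseline = initialReading; // assumed resting reading of the bend sensor
    this.margin = margin;           // deviation that counts as a squeeze (made up)
    this.alpha = alpha;             // how fast the baseline follows slow drift
    this.squeezed = false;
  }

  update(reading) {
    this.squeezed = Math.abs(reading - this.baseline) > this.margin;
    if (!this.squeezed) {
      // Only let the baseline drift while the ball is at rest, so a held
      // squeeze doesn't get absorbed into the baseline.
      this.baseline += this.alpha * (reading - this.baseline);
    }
    return this.squeezed;
  }
}
```

A detector initialised at a resting reading of 512 would then ignore slow drift towards 550 but still flag a sudden jump to 700 as a squeeze; the constants would need tuning against the real sensor.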

Web Portfolio

As for the rest of the week, not counting the time spent on my thesis and getting ready for the prototype demonstration there, I worked on the portfolio. My current progress can be found here: portfolio

Most of my portfolio time has gone into rewriting and condensing what we have already written in the proposal, the journal and the first prototype delivery. Re-writing this content feels rather repetitive and, as a result, my motivation has taken a hard hit. I'm still struggling with motivation in both courses, as there is a lot of repetitive work and every day feels the same, not having been able to have a social life for the entire semester. Still, I believe I'm on track for the exhibit and portfolio in PhysComp, while I still need to catch up on the thesis, as PhysComp requires most of my time each week.

week12 building prototype portfolio

[Week 11] - Working on the next Prototype

Sigurd Soerensen - Tue 26 May 2020, 6:53 pm

Feedback

On Monday we had a meeting to discuss the feedback we had received and our path going forward. Both as a team and for my individual project, we received some useful data; however, some of the points raised had already been answered in the video and document, so they provided little value. Most of what we received was helpful, though, and correlates with the data gathered from user testing and interviews.

As for my own project, the feedback and user testing data suggested that I should look into the material and into how the device could become a more personal artefact. Other than this, most of the feedback only requires minor fixes in the codebase, such as not requiring the user to keep holding the squeeze until the audio finishes playing, and smoothing out the quick flash at the end of the notification cycle for a more pleasant experience.

During the meeting we decided to focus on merging all our code into one codebase so we could better showcase our concept at the trade show, and we set up another meeting for Friday to start the merge. We chose to merge before continuing to work on other features of our individual projects, as more code would mean more refactoring. Given that all of us had to focus on our theses for the coming days, this did not cause any issues.

Midweek

On Tuesday and Thursday we had our regular stand-ups. I did like that we each had to say one positive thing, given that a lot of stress, with COVID on top, quickly makes for a negative pattern. All week up to Friday, except for Monday's meeting and classes, I had to work on the conference paper for my master thesis, which was due on Thursday, as I had mostly been focusing on PhysComp.

Friday's Meeting

On Friday our group met at uni to start merging our code. Whereas Thomas and I had an easy time merging ours, Tuva and Marie had to start from scratch using a new library for their MPU6050s. Since we finished first, we put a couple of functions in place so that Marie and Tuva could easily merge their code with ours without having to read through and understand it all.

Weekend

During the weekend, inspired by Thomas' solution of casting a ball from silicone, I chose to try the same, only exploring a different shape. I went to Indooroopilly to purchase some clear silicone and then headed back home to make a mould for my shape. I decided on a cube, as it is easier to mould than most other shapes, and Thomas and I would then be able to test two different variations to see which one felt better. My thought was also that using different shapes could make the artefact more personal, as people could pick their own shape, or pairs of E-mories devices could share a shape to distinguish them from others. However, after two attempts, one with only a small amount of corn starch to retain some translucency and another with a lot of corn starch, the silicone still would not cure, so I ended up scrapping the idea of making my own cube. My plan B would have to wait until Monday, as I had previously seen some clear balls lying around at Kmart in Toowong that I could work with.

[images]

week11 prototype codemerge

[Week 10] - Prototype Delivery & Feedback

Sigurd Soerensen - Mon 18 May 2020, 3:15 pm

Before Submission

By Tuesday, we had all delivered our prototype documents and uploaded the videos. I did most of my video and document last week but made some final changes on Monday. Given that our team chose to build separate parts of the same concept, as explained in previous posts, we also found it most useful to create a team-based video and include it in our individual videos. Looking at the finished video, I believe it turned out quite well and that we did a good job on the team-based video. Everything revolving around the video and document took much longer than I expected, so for the last couple of weeks I have focused more on PhysComp than on my thesis.

Tuesday and out

From Tuesday onwards, we focused solely on writing our feedback to the other teams, responding to the few questions we got and starting to look over the feedback we received. After meeting up for the studio on Tuesday, we all gathered and started to write out feedback. First, we tried having one person share their screen so we could watch the videos together and play and pause along the way to comment. However, we quickly found this ineffective and started watching the videos on our own, then meeting up between videos to read the documents, discuss them and come up with rough bullet points of feedback. For the first group, we wrote summaries of our bullet points before going on to the next person's video. For the second team, we just wrote down bullet points from the video and documents, discussed them to come up with more, and then moved on. Before starting on the third group, some members of the team wanted a break and some wanted to go on. After discussing it for a bit, we came up with an asynchronous solution: two of us, Thomas and I, continued to the last group straight away, while the other two would come back and do their review of the last group later. Thomas and I would also summarise the comments for the second group, as everyone had already written their comments for them, and Tuva and Marie could summarise the third group when they reviewed it later that day. In my opinion, this solution worked much better and was more effective. We decided to go through all the summaries after Thursday's workshop, before commenting on Miro.

For the rest of the week, I had to concentrate on my master thesis, given that I had focused on PhysComp for a long time and needed to pick up the slack. Besides the feedback, this week was quite uneventful.

week10 prototypedelivery feedback

[Week 9] - Prototype Delivery

Sigurd Soerensen - Mon 11 May 2020, 2:40 pm
Modified: Mon 11 May 2020, 2:45 pm

This week I focused on creating the individual and team videos, in addition to writing the prototype document for delivery.

Studio

In Tuesday's session, we had our regular stand-up. As with most weeks, there's not much to say about it: we talked about where we were with our prototypes, what we had recently achieved, last-minute tasks and concerns. I had finalised my prototype the previous week and started to conduct user research. I've noticed that it's rather difficult to get testers these days, as Thomas and I combined have only been able to recruit a few. My guess is that people have a lot going on at the moment. As for week 9, it was mostly going to be spent working on the submission itself, and I didn't really have any concerns about the delivery.

After Studio

After the studio, we had a short team meeting to discuss who was going to work on what parts for the team-based sections of the assessment. Thomas and I were to create a script for the team-based video and for the interaction plan whereas Tuva was going to record a voice-over and draw the interaction plan. We decided that Marie could put the video together once all elements were in place.

Workshop

The workshop was rather uneventful. Few people showed up, and it took a long time before we got started. Once we did, there didn't seem to be a plan for the content other than the tutors being available if we needed help. Given that I did not need any, I kept working on my deliverables.

After Workshop

On Thursday we had a short meeting to discuss the interaction paradigm of our concept and then went on to work more on the deliverables.

Rest of the Week

By Friday, Thomas and I had filmed some footage for our team-based video and handed it off to Marie to put into the video. For the rest of the week, I mostly worked on my individual prototype document and video. This week was pretty straightforward. The most confusing and difficult part was analysing and understanding what we were actually meant to have in the video and supporting document, as the descriptions were a bit vague and abstract, resulting in a lot of back and forth to get it right. As for my own video, I had to redo a couple of clips after finding out I had filmed them in the wrong orientation, and later found I had too few clips to fill out the video, so I decided to shoot some filler clips.

Lastly, I took Sunday off just to have a breather as I've been struggling with a severe lack of motivation lately.

week9 prototype delivery

[Week 8] - Finalise Prototype

Sigurd Soerensen - Mon 4 May 2020, 12:02 pm

Decisions

Both Thomas and I decided at the beginning of this week to scrap our plans to have audio working on the Arduino prototype. Both of us experienced a lot of issues with the SD card readers, which turned out to be faulty and cost us around one and a half weeks of progress. Because I couldn't get the speaker and card reader to work, I instead started to look at how I could play sounds on my computer based on a command from the Arduino. I looked into the navigator.mediaDevices API in JavaScript, which I got working when tested in an isolated environment. However, new issues arose when I tried to merge the code into my existing client file. Given that navigator.mediaDevices only exists in the browser, while my client file needs to run locally on the machine to access the USB port, I had a difficult time finding a good solution. The most promising one I found was Puppeteer, a headless browser based on Chromium, which gives access to built-in browser functionality while running the file from the local environment. However, I still had issues figuring out how to use Puppeteer for this specific task. Having already spent a lot of time on the problem, I did not want to waste any more, so I opted to simulate all audio for my prototype using a phone.

Studio & Workshop

As for this week, we had a standard stand-up at the beginning of our studio class, and the rest of the time for both the studio and the workshop was spent continuing to work on the prototype. In the studio stand-up, we were to answer a couple of questions, the first being the one big question we have about the prototype deliverable. I didn't really have any questions: my main one had been how to get audio up and running, but since I scrapped that plan, I knew where I was going. The main things I wanted to have working were the actual interaction of squeezing the sphere to listen to the recording, and a notification function to display incoming messages.

Building

As for my prototype progress, I started by swapping my RGB LED for a NeoPixel LED strip, as our team decided we wanted our prototypes to look similar given that they are all smaller parts of a complete experience. I refactored my RGB LED code and had a working notification pulse running in a short amount of time. With the lights working, I focused on getting the interaction up and running by using a bend sensor to sense squeezing. The only difficulty I found with the bend sensor is that its readings seem to drift from time to time, even when I haven't touched the prototype, so I have to recalibrate the sensitivity every now and then. Once both the lights and the squeeze interaction were up and running, I implemented some basic haptic feedback using a vibration motor inside the sphere. At this point in time, the prototype has three different states, as shown below.

STATE 1: No Notification

[image]

STATE 2: Notification - Pulsating Light

[images]

STATE 3: Squeeze to Listen

[images]

STATE 1: No Notification

[image]
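The cycle above can be sketched as a tiny state machine. This is just a model of the behaviour (state and event names are my own, not from our codebase): idle until a message arrives, pulsating until the ball is squeezed, and back to idle once playback finishes.

```javascript
// Illustrative model of the prototype's three states and transitions.
const STATES = {
  IDLE: 'no-notification',
  NOTIFY: 'notification',
  LISTEN: 'listening',
};

function nextState(state, event) {
  if (state === STATES.IDLE && event === 'message-received') return STATES.NOTIFY;
  if (state === STATES.NOTIFY && event === 'squeeze') return STATES.LISTEN;
  if (state === STATES.LISTEN && event === 'playback-done') return STATES.IDLE;
  return state; // any other event leaves the state unchanged
}
```

Modelling it this way also makes it clear why unexpected transitions (like the jumping between states we saw after the merge) point at spurious events rather than at the states themselves.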

One milestone Thomas and I both achieved this week was linking up our two prototypes. After his prototype finishes simulating a recording, a simulated colour (which in the future would come from Tuva's prototype) is sent over the server Marie set up, received by my computer, and forwarded to my Arduino, which then starts pulsating in the colour that was sent. This helps give prototype testers a sense of context and demonstrates the core functionality of the project.
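The forwarding step on my side amounts to turning a colour message from the server into a single line the Arduino can parse over serial. Below is a minimal sketch of that translation; the actual message format on Marie's server may differ, and the "C,r,g,b" line format is an assumption for illustration.

```javascript
// Serialise a colour object from the server into one comma-separated line
// for the Arduino, clamping each channel into the 0-255 range.
function colourToSerialLine({ r, g, b }) {
  const clamp = (v) => Math.max(0, Math.min(255, Math.round(v)));
  return `C,${clamp(r)},${clamp(g)},${clamp(b)}\n`;
}
```

With the npm serialport package, the resulting line could then be written out with something like `port.write(colourToSerialLine(message))`.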

Preparing Prototype Test & Recruiting Prototype Testers

Given that Thomas and I have similar functionality, just that my prototype is the receiving end and his is the sender, we chose to create a joint prototype test. We are planning to conduct two to three group tests with two participants in each group, depending on how many groups we get access to. We sat down together and created a two-sided test with an interview, where we each get to test our own aspects in addition to testing the combination of sending and receiving a message. Luckily, we both live with two other people, so we are able to conduct one group's prototype test face-to-face. For the other groups, however, we have had to reach out to our friends and do remote interviews. To do this, we have started to film our prototypes and write down questions, which will be sent to our testers next week.

General Thoughts

I have felt a huge change in motivation lately, with my productivity sinking drastically over the last couple of weeks. I worked for several weeks to get my thesis prototype up and running and faced a lot of technical issues that I had to brute-force my way through to get a working prototype. Now that a similar thing has happened with the Physical Computing prototype, I have lost a great deal of motivation. Working with Arduinos in both subjects, a technology we barely understand, while not knowing a whole lot about how electricity works, feels daunting. Online education for these types of subjects is far from optimal, as I don't feel I get the help I need, even though the teaching staff try their best. It also seems we spend most of our time learning to use a tool instead of learning about interaction design. Hopefully, my motivation will return soon, as we are closing in on the end of the semester.

week8 prototype

[Midsem & Week 7 - Post 1] - Prototyping

Sigurd Soerensen - Sun 26 April 2020, 9:56 am
Modified: Sat 30 May 2020, 2:01 pm

Midsemester

I didn't get to do a whole lot over midsem, as I had to focus on my master thesis. However, during the first weekend, Thomas and I went into the city to find some suitable spheres to work with. The best option we found was plastic Christmas decorations, as they were both transparent, which would let us use lights inside, and somewhat squishy, allowing for a squeeze interaction. These balls weren't ideal, but definitely enough to get our first prototype up and running, given that we aren't focusing on form and feel for the first prototype. Towards the end of midsem, we received our parts in the mail, by then delayed by about six days. Thomas and I, given that we live in the same household and are working on two sides of the same interaction, started to test the parts (SD card reader, microphone, speaker and lights). I made some progress on a pulsating-lights solution for a notification feature by sending RGB values in through the serial port, simulating the data transferred from another Arduino.

[images]
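The pulsating effect itself is simple to model: brightness follows a sine "breathing" curve over a fixed period, and each frame scales the base colour by that brightness before it goes out over serial. The sketch below is a simulation of the idea in JavaScript, not the code that actually ran on the Arduino; the two-second period is an arbitrary choice.

```javascript
// Brightness is 0 at the start/end of each period and 1 at the midpoint.
function pulseBrightness(ms, periodMs = 2000) {
  return 0.5 * (1 - Math.cos((2 * Math.PI * (ms % periodMs)) / periodMs));
}

// Scale a base colour by the current brightness to get one LED frame.
function pulseFrame(baseColour, ms, periodMs = 2000) {
  const k = pulseBrightness(ms, periodMs);
  return {
    r: Math.round(baseColour.r * k),
    g: Math.round(baseColour.g * k),
    b: Math.round(baseColour.b * k),
  };
}
```

Sampling this every few tens of milliseconds and writing each frame to the LEDs gives the smooth in-and-out pulse without any blocking delays.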

Week 7

From the weekend leading up to week seven until Thursday's workshop, except for the regular stand-up at Tuesday's studio, I spent all my time trying to work out an issue with the SD card reader. Both Thomas and I were working with SD card readers, but neither of us was able to make them work. We received some help from Steven and Ben after the workshop, which led us to figure out that the SD card readers were faulty, so we had to give up on making them work. Steven and Ben suggested that we could look into streaming over serial, make use of a Raspberry Pi instead, or use a computer to simulate the interaction of recording and playing back voice messages. Having looked into the complexity of streaming audio over serial, and not having time to learn a new tool, we chose to pursue the option of using JavaScript to record and play audio from the computer based on input from the Arduino. I tried to look up how to do this, but soon ended up stuck again, unable to get it to work either. By this point, since I hadn't had a day off in weeks, I chose to take a couple of days to rest.

Concept

My team chose to work on the same concept and divide it into smaller prototypes that together make up the whole. The overall concept (E-mories) consists of a basket full of spheres that doubles as an emotional sharing platform and a decoration in the house. Each sphere is capable of recording and playing audio so people can send each other messages. When users wish to send a friend a message, they simply pick up a sphere and squeeze it to start recording. When done recording, they can shake the sphere to choose a colour corresponding to the mood of the message. When they are happy with the combination of message and colour, they throw the sphere up in the air to send the message. On the receiving end, in another household, there is an identical basket of spheres. When a message is received, a sphere starts to glow in the colour the sender chose, notifying the receiver of an incoming message. The receiver can then pick up the sphere to listen to the message. The concept tries to facilitate and encourage positive emotional sharing for people who are unable to meet face-to-face, and to give them a sense of being closer to each other through a physical platform.

We divided this concept into four parts, where mine revolves around the receiving end of the concept, where I'm looking into how to show a notification of incoming messages and play the audio recording.

In an ideal world, this concept would include more than two senses, not only audio and lights. I would like it to include deep, meaningful memories that people wish to share with one another, consisting of smell, video, images, sound, touch and more. My personal vision is for it to contain complex combinations of memories built from these factors, and for users to be able to build upon old memories with new ones and reflect on their memories together. However, given that we have yet to prototype-test our current solution, it is difficult to say whether this is the correct path to pursue.

midsemester week7 prototyping

[Week 6 - Post 1] - Miro

Sigurd Soerensen - Sun 26 April 2020, 9:47 am

For Tuesday's studio class, we had a stand-up as per usual, talking about how far along our team has come in the process and what we are planning to do next. I find these stand-ups wasteful, given that we spend a lot of time on something that does not move our project forward. I would prefer it if each group had their own breakout room and a tutor came by to check on progress, as this would be far more valuable.

After the stand-up, we moved on to an online exercise using the tool Miro. Our task was to find a way to conduct user research in COVID times and to get valuable first-hand data without the need to meet face-to-face. For this task, I looked up a YouTube video of passengers on a train and observed their behaviour. Although I can see why the teaching team wanted to open us students' eyes to the possibilities that still exist, I didn't find it overly helpful, as the exercise wasn't relatable to conducting a prototype test, which is the next assessment piece in the course. I get the most value out of meeting and discussing with the team rather than performing stand-ups and these types of tasks, as they take away time we could have spent on our project.

The next task on Miro was really helpful to us. Even though we already had a good understanding of our concept, it is always nice to map out the complexity and fully comprehend the quirks and commonalities of the project you are working on. Our team sat down together, went through our concept, and tried to map out as much as possible to further define it, as can be seen in the pictures below.

[images]

From this exercise, some new questions emerged that we had to take into consideration when working on our prototypes and conducting user research, such as: "Does it need to be a ball?"; "Are flashing lights the best metaphor for incoming messages?"; "Is sound the best input and output?"; "Can it be anything other than a bowl?"; "Is squeeze the best interaction?"; "Is the living room the best place for it?" and more. These questions are important to consider moving forward and to include in our upcoming user research. Moreover, from this exercise we figured out that our target audience was rather vague and needed more work.

On Thursday we had an Arduino tutorial. Having already worked with Arduino, I found the content too basic and decided instead to focus on refining the assignment prior to delivery. Moreover, our team sat down to order the parts we needed so that we could get them in time for our next deadline. We decided to purchase the parts with express delivery to get a head start on the prototype.

week6 miro

[Week 5 - Post 2] - User Interviews and Concept Proposal

Sigurd Soerensen - Mon 6 April 2020, 6:47 pm
Modified: Tue 21 April 2020, 9:35 am

Workshop - Analyse User Interviews

Between Tuesday and Thursday, we each did one user interview to gather some initial data to move our project along. In the Thursday workshop, we summarised and analysed our interview findings. What we found was that people aren't very comfortable sharing emotions, and when they do, they prefer to do it face-to-face. Moreover, our interviewees told us that they feel it helps to talk about their feelings. We also gathered some similar data on how our users perceive emotions and their thoughts around emotional sharing. I feel we gathered good data that will help us decide on the one concept to pursue, as well as an initial understanding of our target audience. Doing these interviews wasn't ideal given the current restrictions set in place due to COVID-19, but I feel we managed it nicely, as we all live with people who fit our target audience. One thing to consider, though, is that the people we live with won't provide a complete picture, as the demographics are fairly limited. It's also going to be interesting to see how we manage around these same restrictions when we have to test our prototype, but I'm confident we will be able to run some quality testing sessions when that time comes.

Based on this feedback, we started to discuss ways to change our focus and came up with some ideas: moving to a self-awareness focus, making it controversial, making it easier for people to talk, or focusing on the positive emotions. After a short discussion, we all agreed we were most excited about exploring sharing positive emotions with others. This choice was made because our data showed that people don't really talk about their positive emotions and are not very comfortable sharing their negative ones.

Around this time we got some guidance from Steven, as we had some difficulty focusing our concept and getting down to the nitty-gritty details. Steven gave us some great advice which helped us immensely: take a step back and look at what questions we are trying to answer. By now we had decided we wanted to focus on lifting the spirit in the home. Steven told us to focus on what we would like to achieve, as in, find a goal. He gave us some examples from his own experience that helped us understand what he meant by finding our goal.

After the meeting with Steven, we took a step back and revisited our goal. We started off with "How can we lift the spirit in the home?" and went through a couple of iterations where we asked ourselves questions and tried to answer them with a new version of the goal. One example is how we asked ourselves "How can we help people share positive emotions between homes?". Based on our data showing that people were most comfortable sharing with close friends and family, we figured we should make the emotional sharing more personal, or humanise it. This led to a new goal: "Help personalise emotional sharing and lift the spirits of others when remote". We went through a couple more iterations before we landed on something we were all satisfied with and felt covered what we wanted to achieve. Our goal at this point was "Encourage positive emotional sharing with close ones remotely". We had a short brainstorming session on new concepts that fell under this goal before deciding to take a break and meet again after a couple of hours. We all agreed to write down some new questions based on our goal for when we got back, and to create a concept with a rough sketch, a target audience and an intended experience.

Later that day, we all met back online and went over each of our questions and sketches. My questions were as follows:

  • Can positive emotions be shared?
  • Can positive emotions be shared in the form of memories, music, sounds, color, lights, haptic feedback, and or heat?
  • Are positive emotions equally valuable when shared digitally?
  • Can we encourage emotional sharing?
  • Will positive emotions overshadow people’s need for feeling down?
  • What do positive emotions look like for different people when shared?
  • When do people want to share positive emotions?

These questions, in addition to the ones the other team members came up with, are meant to be used throughout our development. Moreover, I presented a concept for a wearable that would encourage sharing memories with the people you originally experienced them with. The device would allow for recording information in various formats, with the intended experience of encouraging people to reflect and reminisce with close ones on the good memories they shared. Without going into too much detail, the high-level concept is a wearable that, when you visit places holding memories from before, can 'rewind' so you can see, hear and feel a collection of those memories on the device. Friends who were there will be notified that you are reliving that memory and can join in on it. After all ideas and questions had been presented, we went over them and gave each other feedback. Finally, we picked the pieces we liked from the concepts and put them together into one final concept to present in our proposal. We divided the concept into two parts: one we would present as our intended MVP, and the other a future version of where we would like to be in an ideal situation.

Before the meeting ended, we decided on some research topics we all were to explore further and divided the concept proposal amongst ourselves.

Over the Weekend

I got to write about and detail our goal and research question, which was nice as I got to look back at how we got to where we are now and why we chose our path. Moreover, I found it useful to dissect our research question, define every bit of it, and trace how it all connected back to our domain of Emotional Intelligence. Like everyone else, I found a peer-reviewed article to be used in our proposal. I found an article on how shared experiences are amplified regardless of communication, which I found quite interesting. The article had some limitations for our purpose, as it didn't explore whether the same effect would occur if people shared an experience remotely. Hopefully, through our prototypes, we will be able to provide an answer to this question. On Monday, I wrote the section on how our concept is relevant to the brief. Having already written about our research question, I was quite confident about what our concept was and how it relates to the brief, so I had no real difficulty doing so. Lastly, before our meeting on Monday, Thomas and I went through the document and made grammar and content suggestions to improve what was already written.

Potential Methods of Inquiry

As for my own concept, where I'm looking into the interface's response to incoming messages through coloured lights and how to play the recorded message, I'm focusing on post-prototype methods rather than pre-prototype methods. The reason I chose not to focus on pre-prototype methods is that the team's initial research covered enough for me to move forward with my prototype. With that said, I will, to the extent it is possible and ethical, do a combination of prototype tests with interviews and observations. Where this approach is not feasible, I will instead focus on an online format to gather similar feedback. Moreover, given our restricted situation, I have considered conducting a heuristic walkthrough of the prototype using the other team members as experts, but this is yet to be decided in the upcoming weeks.

week5 userinterviews proposal

[Week 5 - Post 1] - Idea refinement

Sigurd Soerensen - Mon 6 April 2020, 5:43 pm

Tuesday - Contact

On Tuesday we started off with a 'stand-up' where each person in every team was to explain what the team, and they themselves, had done so far in the course. Although it was interesting to listen to what the other teams are doing, there was a lot of overlap and repetition between team members. So far in the process, most of what we have done has been as a team, and therefore there is little extra to tell when asking a second team member. I believe we would hear more unique aspects moving forward as we move into the individual parts of the course. However, I'm also concerned about the amount of time this takes up and whether it's worth it, as I didn't feel like I learned anything from this exercise.

After the stand-up, we jumped into team chats to work on our concept. In this meeting, we talked about starting to write the report and worked out how we could split the concept between ourselves. We decided to split the prototypes based on outputs and inputs. Given that Thomas and I live in the same house, we decided that we could take one input and one output so that we are able to test that everything works as it is supposed to locally before testing remotely. Moreover, Tuva was to prototype automatic input and Marie was to prototype an output. However, at this point, it was still unclear what the individual prototypes should be.

At this point, Ben came in to give us some feedback on our concept. He brought up some good points on how manual input could be an extra barrier to entry. We also discussed how we should consider the physical aspect of the concept, such as what the balls are doing and how they could be interacted with. Ben also gave us some technical suggestions to look into, such as ESP32s or ESP8266s for the connectivity and Galvanic Skin Response (GSR) as a sensor in case we wanted to measure emotions as users were holding the ball in their hands. Finally, we also asked for some tips and tricks for user interviews.

After our feedback from Ben, we went on to create an interview protocol to get some early information on how people understand emotions, how they display them as well as if and when they are comfortable sharing emotions. We decided to write some open-ended questions to begin the interview with, continue with a task to allow users to show us how they think and then end with some more directed questions.

week5 idearefinement

[Week 4 - Post 2] - Pivot and idea refinement

Sigurd Soerensen - Tue 31 March 2020, 3:29 pm
Modified: Tue 31 March 2020, 3:31 pm

In our team meeting, Friday of week four, we decided to pick up where we left off earlier in the week, by focusing on the rest of the feedback we received after our presentation. Moreover, we aimed to refine our concept idea further and come up with solutions to how we could split our project among our four team members.

Feedback

We summarised and condensed the feedback we had received to make it more actionable and easier to take into consideration when moving on with our project. There were a lot of interesting ideas to consider, such as letting users themselves input a mixture of emotions of their own making, combined from, for example, clay or colours. Handing over control of the input parameters would allow for a more exciting approach that could potentially be more reflective of how complex emotions are. Moreover, it might feel better for users to have complete control over combining and creating a representation of their feelings at a given time. Another suggestion was to move to a household setting to ensure everyday use. Our original concept did lack a specific everyday use-case, which is why we especially took this suggestion to heart; it formed the basis of our pivot. A third suggestion was to add more complexity, as in increasing the number of emotions. This leads me to believe that either people weren't listening or we were unable to get our message across, as our intention was always to reach for such depth, but our prototype would have to have certain limitations, which is why we talked about various ways of measuring emotional intensity and which emotion users would pick. Another suggestion was to gather data on the connection between colour and emotions, which we mentioned in our pitch as something we aimed to figure out in our initial user research. Others suggested making the installation a safe space where users would feel safe sharing their emotions, which is an important suggestion and one we will take into consideration moving forward. Some people suggested more collaborative interaction between people, which we have also taken to heart in our pivot and which I will talk about later in this post. Another suggestion was to explore shape and form together and how this could elevate a sense of certain emotions.

Shape and form is something we are looking into but haven't landed on as of yet; we would very much like to explore how senses can be combined to emphasise emotions. The last type of suggestion was to add other types of emotional input, such as heart rate, heat, and so on. Automatic inputs are something we are taking into consideration moving forward with our concept.

Pivot and Refinement

The rest of our meeting revolved around how we could pivot our project to be more of an everyday concept. We started talking about what type of experience we would like people to leave with and came up with four possible paths to pursue: individual and collective awareness, emotional release, sparking discussion, and improved communication skills. We opted to figure out which of these to focus on through user research. With that said, we did agree that the purpose of our concept should revolve around collective awareness plus an outward display of feelings. This comes as a response to our feedback, where people suggested more collective interactions, and because building on how we as humans share emotions daily would be a nice foundation for making the concept more of an everyday activity. As for the target audience, we were all intrigued by exploring our concept in a multi-person household, focused on student accommodation, given that isolation in COVID-19 times will likely have a significant impact on people living together. Towards the end of our meeting, we had gone through various types of inputs and outputs and how they could be applied to a household, and landed on a vague concept that we wanted to pursue the next week. This concept was a cylinder with balls inside it, where each ball correlates to a person in that household and shows that individual's emotions throughout the day. This came from some requirements we set for ourselves: the concept should be easy to display and easy to access. We played around with input and output methods for the concept, but in the end we did not land on one specific input or output. This will have to come during our next meeting.

week4 teammeeting pivot idearefinement

[Week 4 - Post 1] - Presentations

Sigurd Soerensen - Wed 25 March 2020, 1:32 pm
Modified: Tue 31 March 2020, 3:32 pm

Emotional Birds

Looking back at our presentation, I believe we managed to cover a lot of information and give the audience a good sense of our concept. Our team started to go through the feedback after Tuesday's session and will continue with the rest in a separate meeting on Friday. From what we have seen so far, the concept fell short on the everyday aspect of the brief, which we will discuss in further detail at Friday's meeting. We have already looked into how the concept could be moved into a household, so it might very well be that we end up pivoting from our original concept this week. There was also some feedback about a lack of output interactivity. This is something we have to look further into, as we believe too many features might take away from the core purpose of the concept and become a distraction; more features aren't always better. Moreover, I believe a concept could just as well focus on one place of interaction instead of splitting the focus between two. With that said, we're going to discuss this in further detail at Friday's meeting and might end up with a different concept by the end of the week.

General Thoughts

I found that most presentations were, for the most part, within the requirements of the brief, with a few exceptions. Most concepts had only minor issues, which I think is natural at this stage of development as ideas are still rough and unrefined. However, there was a lot of potential and creativity in the various concepts pitched. My only concern is the feasibility of many of the presented ideas. It is nice to see the ambitions and dreams of a finished concept, but I do believe several teams will soon find it more difficult to create than they initially assumed, especially those who seek to measure emotions in one way or another. It is definitely going to be interesting to see how the various teams prototype their concepts and to see the concepts evolve as we move forward in the course.


My feedback

Fire safety using audiometrics

Consider how it can be made accessible to the elderly given that the concept uses AR technology. Some people might find it difficult to use and navigate through a phone screen.

Could probably be easier if used with AR glasses instead of a phone as it will react to body movement instead of having to point a device around.

In terms of immersion, I would argue that VR is a better option as AR lacks the same sense of depth. If that is important to the concept that is.

EMS – enhanced mundane spaces

You could consider AR to open up for multi-room play/inclusion.

Consider using different sound effects for each household tool and how they all could be different games that support one core gameplay.

Team Zoomista

Could you use a combination of a touch interface and body as a controller to make it even more interactive?

The Nagging Cushion

Will the user actually have to carry the pillow? This seems like an additional hurdle to overcome; an 'away from pillow' timer and general trust in users could work instead.

Team Garfunkel

How will users be able to distinguish which item makes which sound? Given that music is a combination of several tones, it would be nice to distinguish between various tones on sight: a connection between the sound and the physical item.

Team Twisted

Could this work with only one user? E.g., after having stopped on the yellow colour, you would have to move over the other colours to get to the apply pad. You should consider how to prevent people from mistakenly activating another colour when walking past.

As a colour-vision impaired user, I would definitely enjoy having something that separates hue, saturation and lightness for me to help me understand what colour it 'actually' is given that many colours can look the same. E.g., use voice, text or icons to further distinguish colour combinations.

Also, as a colour-vision impaired user, I find HSL to be the most helpful way of learning and distinguishing colours.

Lome

Consider using the already developed voice assistant on various phones to be the sound input of your device given that some of these already use AI to understand language and tone. This could also provide users with a sense of control as they have to speak to the phone to activate when the device is actually listening in on conversations.

Ninja Run

Consider how you could use the physical space other than just horizontal block alignment to visualize what happens. E.g., could blocks be stacked on top of each other to visualize the number of times a loop runs?

Consider how you could teach other things too, such as HTML, CSS not only scripting languages or programming languages.

Helping Hand

Could it also help people get into social situations, not only shoo people away?

Bata Skwad

What is the connection between the problem space which talks of device sentience and the concept pitch presenting a maze robot?

How does a robot giving suggestions and ramming into your leg fix the problem of devices not working for you and people not trusting tech?

It sounds like the machine is focused on a really broad set of functionalities; can it be scoped down?

Mobody - Handy Aero 2020

It seems rather similar to leap motion. Could you make use of this existing technology and improve upon it to use other body parts as controllers in addition to hands?

Fitlody

It would be interesting to see if it could work both ways, as in the music adjusting to your actions and you having to exercise according to how the music and floor lights change.

Give me a beat saber version of fitlody.

Team Hedgehog

It could be interesting to have a visualization of your movements after the game finishes.

Could be interesting to use sound to navigate around in a dark room / maze.

Negative Nancies - Energy Saving

Maybe giving Emily human or animal traits could help the users care about the messages it gives.

Half-ice no Sugar - ITSY

What will the various interactions do? E.g., what will an ear twist vs an arm tug mean, and how will this correlate to a specific learning outcome?

Output could be glowing patch, heat, buzzing.

Team Triangle

Is there a timeline where you can build the music sequences using these vials? It’s a nice and creative way of capturing, mixing and playing with music. How would sounds mix? Are there any controls to adjust the volume of each sound, when they enter and exit and so on?

Team 7

Could it be something other than a game, or could gamification be added on top of some mundane task to make it more of an everyday thing?

It would be interesting to explore how to teach people of danger signs, such as teaching users to sense dangers using electro haptic feedback.

Team CDI

Could the elevator take you to the wrong floor if you perform an incorrect dance move? As in, not stopping at your floor, but moving one floor closer or further away for each successful or unsuccessful dance move.

Think about how dancing in the elevator may impact people who are uncomfortable in the elevator to begin with, that wouldn’t enjoy people jumping up or down.

Concept: instead of an elevator, use a horizontal escalator such as the ones at the airport. It could be a joint effort to make it move faster and would look like a neat line dance to everyone else watching.

Team Zookeeper

It would be more interesting if questions were difficult to choose between, e.g. would you take a train or a bus, and what ramifications the various choices would have.

Could questions be based on activities in your home so that people become more aware of their choices?

Team Hi-distinction

Could you move away from the screen to make it less similar to consoles with motion sensing? Could boxing output some interesting artwork such as being interpreted as different hues and saturations to colour in an image and hitting certain areas to colour in that area?

week4 presentation

[Week 3 - Post 3] - Team Formation

Sigurd Soerensen - Tue 17 March 2020, 11:04 am
Modified: Tue 31 March 2020, 3:32 pm

Video Call - Bash

I enjoyed the video call from Britain as Bash had some excellent advice. Although I was already familiar with the information that he gave, having spent countless hours looking at employability before, I still enjoyed hearing about his story.

Team Formation

I wasn't too surprised by the team I received, given that I tried to team up with people I know can do good work by putting my notes close to those I'd like to work with rather than next to the most exciting theme. This decision is a result of previous semesters where a couple of us have had to pull the weight of other group members. Given that I'm doing my thesis, I can't afford yet another semester where I have to spend enormous amounts of energy and time making up for the shortcomings in other people's work, as my thesis progress took a significant hit last semester. The downside is that this team will have similar input to mine given our shared cultural background, whereas the upside is that I know they can all do great work. I believe our group will be fully capable of producing a captivating concept and have a lot of fun while doing so. I hope that, in this course, we can focus on ideas that are more physical than digital, as I'm interested in learning more than just web and app design.

The first thing our team did was to sit down and write up a team agreement, with one team member joining us on Discord. After the class, we decided to have a meeting to start ideation. There, we discussed the theme, emotional intelligence, and produced as many ideas as we could. We chose to focus on two aspects, input and output: whether they should be internal or external, physical or digital, and how they could be combined. After coming up with some initial ideas, we planned a meeting for Friday to choose one concept to move forward with. Prior to the meeting, we were all to come up with two additional ideas.

My Two Ideas

Thomas and I sat down to come up with a couple of ideas on a train ride back from a friend's house. The first concept I ideated is a wristband that senses when you fidget with it, as one often does when bored, making the device glow brighter to show your boredom and prompting human interaction. The second idea was the 'shrieking box', which you throw as hard as you can into the air. The further and faster it travels, the louder it shrieks, emphasising the emotions of the pitcher.

Friday's Meeting

After having presented and discussed our two additional ideas, we looked for combinations between them and the ones we came up with in the previous ideation session. We chose to pursue the high-level concept of an 'Angry Birds of emotions'. This concept combines the emoji balls (different coloured balls with emoji faces to throw at people) from the first ideation session with a physical Angry Birds-like game. We explored various input and output methods for the concept, among others including sounds in the emoji balls from the shrieking-box idea, filling up different emoji balloons to visualise emotional output from the first ideation session, and many more. What we realised is that we have a lot of great ideas for input and output and should conduct user interviews to narrow them down to the best options.

After coming up with a concept to present, we started to put together the presentation and divided tasks amongst ourselves. Thomas and I were to create a storyboard, Marie was to make illustrations, I was to make a visual theme for the presentation, and Tuva put together most of the presentation. Moreover, we planned an additional meeting on Monday to finalise our presentation and rehearse ahead of the pitch on Tuesday; however, due to uni closing down for a week, we postponed the meeting in case the format of the presentation would change.

week3 teamformation meeting ideation

[Week 3 - Post 2] - Actuated Pixels - Updated Version

Sigurd Soerensen - Tue 10 March 2020, 8:33 pm
Modified: Tue 31 March 2020, 3:33 pm

See the concept poster here: actuated pixels

'Actuated Pixels' (AP) is inspired by the interactive, seemingly tangible holograms from Marvel's Iron Man and conceptualized based on the InForce project developed by the Tangible Media Lab.

An apparent issue with our modern-day everyday touch devices that many a sci-fi universe has solved is the ability to reach out to grasp and feel digital elements with our hands. Take the smartphone, for example. These devices can be drastically limiting to vision-impaired users as many of them are dependent on software such as text-to-speech, or similar, to use the phone.

AP creates physical elevations behind its digital counterpart, providing a sense of depth and various touch sensations based on the type of element that the user interacts with through hydraulics and electro-tactile feedback. Use cases could be as simple as braille text for the visually impaired to browse the internet, but even more interestingly, its application for navigational purposes.

Ed is vision impaired and reliant on his white cane to navigate. Ed was recently introduced to the new AP Phone and is excited to give it a go. Ed finds Google Maps by reading the braille text on his screen and immediately feels a map forming under his fingertips. He can sense the buildings on his left, the park and river on his right, and the pavement he stands on. Ed notices an elevated wave moving towards him on the screen, realizing it's a person walking that the camera has translated into actuated feedback.

Using his phone to feel what others can see has provided Ed with valuable new opportunities. Now Ed can find bus departures by reading braille text and navigate there using elevated real-time maps, making his everyday activities much more accessible.

week3 actuatedpixels actuated update

[Week 3 - Post 1] - World Cafe

Sigurd Soerensen - Tue 10 March 2020, 6:28 pm
Modified: Tue 31 March 2020, 3:33 pm

On Tuesday of week three, we went through an ideation method called World Cafe. This method included three rounds, the first being context, the second being different target audiences, and the third being refinement. Throughout this exercise, we had a chance to look at and re-evaluate the previously presented ideas, which at this point were introduced through the final themes, to spark new ideas. All themes were spread throughout the tables, one theme per table. After a certain amount of time, all but one person at that table moved to a different table to work on another theme while one person, the host, sat back to explain the previous discussion to the new people sitting down at the table. One person could not be a host on the same table more than once.

I believe there was some confusion throughout the tables about what the task was, given that over half the tables I visited had just listed keywords from the posters and summarised what they were about, never moving on to generate new ideas. Therefore, at some tables it wasn't easy to do the task in the later rounds, as we needed to catch up first. At these tables, I tried to facilitate a discussion and come up with quick ideas so that we could catch up again. At most tables this went fine; however, I found that there were always a couple of people who did not participate, even though I tried to include them in the discussion.

I forgot to take pictures throughout this session from all but one table, so the following ideas and reflection are based on memory alone.

Ability-centric Interaction (hosted)

We started off by trying to define what the theme was about, writing down some points on what the theme is and isn't based on the posters. After that, we tried to see if we could combine some of the existing ideas from the posters to generate new applications or twists on the original ideas. Given that I hosted this table, I'm not 100% sure which ideas came from the first round and which from the second.

One of the ideas that came from the first round was a bodysuit that used actuators similar to those presented in two of the posters, Actuated Pixels and Mind Speaker. This suit would give vision-impaired people a layer of actuators that would both sense objects before the users would hit them and also soften the blow of hitting objects. It's an interesting spin on the original ideas and could be realised using a camera to sense objects and some soft material to soften the blow if the user hit an object. This would later be built on by twisting the suit to be used in a game where people would be blindfolded and use the suit in a dark maze to try to find their way out.

Another idea was to have an entire floor covered with actuators so that they could carry you or objects around the room. A later design built on this by imagining the actuators lifting people upstairs, similar to how elevators work, turning the everyday living room into a moldable 3D environment.

From the ideas generated, I would say I was most intrigued by the capabilities of a moldable 3D home which would be possible to simulate in a small scale model.

Body as Controller (hosted)

When I first sat down at this table, we had to catch up as they had fallen way behind, having just written down notes about the posters without continuing to generate new ideas. I took the lead to bring the table up to speed and bring some ideation of new concepts to the table.

We started with animals, as this was one of the subjects the last team had as a talking point. From this, we came up with a concept where you could play with your dog in a ball pit. Both you and your dog would wear different sensors to differentiate between you and the dog. When 'swimming' around in the ball pit, all the balls the dog hit would turn one colour, and the ones you hit would turn another. The person could use their own and the dog's body movements to create art in a fun and playful way, whereas for the dog it would be fun to have the power to turn on lights by touching balls and to jump around in the ball pit. We discussed that this could be beneficial in an animal shelter, letting people play with animals before choosing to adopt them, or just letting people come in and play with the animals.

Another concept was inspired by playing fetch with your dog. Currently, the dog sits and waits until you throw something for it to fetch, but what if the roles could be reversed? We came up with the concept of a ball that you pitch to the dog, after which the dog assumes control over the ball, steering it with its movements.

Afterwards, when new participants arrived, we looked further into the idea behind the Running Hand concept. We generated concepts where the game could be used to train fine motor skills, whether for people with fine motor problems, for children learning these skills for the first time, or for professions that need to keep their hands exercised, such as surgeons. Moreover, we looked into how the concept could be used in a kitchen. This led to an idea best explained as Fruit Ninja for chefs: the design would translate hand gestures to control how a machine cuts the food behind a pane of glass just in front of the chef. Given the recent coronavirus outbreak, someone mentioned that this could also be beneficial in that chefs would not have to touch the food while preparing it, which could be a more hygienic way of handling food in the future.


Musical things

This discussion ended up being my all-time favourite, and our table had a blast ideating on this theme. We based our concept on what had been discussed by the people seated before us, but I believe we took the idea to new heights. We were presented with a music-making concept where blocks would be placed on a grid on the floor to make music. Each row of the grid determined which instrument was played, and the previous group had discussed how to produce high or low pitches and long or short tones, seemingly without landing on anything. We figured long versus short tones could be set by the number of cubes placed along the x-axis, whereas pitch would be determined by the number of cubes stacked on top of each other. We discussed several methods of adjusting volume: one was pressure plates around the grid for everyone to stand on, another the proximity of people's hands to the cubes. In the end, we settled on using hue and saturation to control what I think was volume and something similar to how the pedals on a piano work.

We also discussed whether the cubes could instead be rubber ducks or small aliens, with different aliens meaning different volumes. Moreover, we looked at how to make the concept more sci-fi: the blocks you build the music with could represent skyscrapers, the grid the planet, and a laser would sweep over the cubes as they were played. We also imagined showcasing the history of that planet as an alien invasion destroying the buildings while the song plays, or shooting the cubes into the air as the song progressed, painting a streak of colour along each trajectory and turning it into an art show.
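The mapping we discussed can be sketched in code. This is purely a hypothetical illustration of the rules above (a run of adjacent cubes along the x-axis sets note duration, stack height sets pitch, colour saturation sets volume); the function name, data format, and units are all assumptions, not anything we actually built.

```python
def cubes_to_notes(grid):
    """Turn cube placements into notes.

    grid: dict mapping (row, col) -> (stack_height, saturation 0..1),
    where each row of the grid corresponds to one instrument.
    """
    notes = []
    for row in sorted({r for r, _ in grid}):
        cols = sorted(c for rr, c in grid if rr == row)
        run_start = None
        prev = None
        # append None as a sentinel so the final run is closed
        for c in cols + [None]:
            if run_start is None:
                run_start = c
            elif c is None or c != prev + 1:
                # a run of adjacent cubes has ended; emit one note,
                # using the first cube in the run for pitch and volume
                height, sat = grid[(row, run_start)]
                notes.append({
                    "instrument": row,
                    "duration": prev - run_start + 1,  # run length in beats
                    "pitch": height,                   # stacked cubes
                    "volume": sat,                     # colour saturation
                })
                run_start = c
            prev = c
    return notes
```

A three-cube run next to a single stacked cube would then produce one long low note followed by one short high note on the same instrument.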

Beautify the Self

This was also one of the tables that had fallen way behind in terms of idea generation.

We started ideating from a jacket that could sense emotion and looked into how we could build on this. Some of the concepts that emerged were a jacket that would straighten you up to improve your posture when it was poor or your mood was down; a fabric that changes colour or displays text based on your mood and the mood of nearby people wearing the same jacket; and a jacket that would thank people when they were friendly or gave you compliments. By this stage, we had reached the last round of the world cafe, so we had to look at how the concept could be made feasible. We figured that reading emotions could be done through bend sensors for posture, heart rate, body temperature, and AI analysis of language, amongst other methods.

Other

I also took part in the guided movement and digital sensations made physical tables. However, as I forgot to take photos of the butcher's papers, what our discussions at these tables revolved around eludes me.

week3 worldcafe

[Week 2 - Post 3] - Theme Generation

Sigurd Soerensen - Tue 10 March 2020, 2:49 pm
Modified: Tue 31 March 2020, 3:34 pm

After the presentations, both on Tuesday and Wednesday, most students attended one of the inductions, whereas the rest stayed behind to sort the presented concepts into themes. Given that I had previously completed all my inductions, I attended both the Tuesday and Wednesday theme sorting sessions.

During these sessions, we used post-it notes to write down possible themes and what ideas could fit under that theme. I expected most of the categories we generated throughout this session, as some are repeated in other courses as well, such as Sustainability, Social, Health, Learning etc. See images of my post-it contributions for Tuesday here.

While some people kept writing post-it notes and others just stood around talking, a group of us started to put it all down on butcher's papers. I took on the responsibility of writing down the themes that emerged as some of the others brought in the notes. We put the post-its down side by side and discussed which ones to combine into broader themes to place on the paper. At the end of Tuesday's session, we ended up with the themes: Emotion, Health, Learning, Change, Stress, Game, Music, Accessibility, Fitness, Physical, Negative Reinforcement, Smart, and Sustainability.


As for Wednesday, we started off doing the same as with Tuesday’s session, writing down themes that the new presentations could fit into. However, this time around, I tried to form more specific categories than the previous day, as we had to split up the broader themes to form more specific ones. After having done so, we started to collect the themes on butcher’s paper as we did the day before. This time around, I focused more on creating new themes rather than sorting through them on the butcher’s paper as I did the day before.

Following this, we started breaking apart broader themes to create even more specific categories. Here too, I took the job of writing on one of the boards. A couple of us talked through the various themes from the butcher's papers and how we could break them apart or make them more specific. The themes we ended up with were: Negative Reinforcement, Sustainable Visualisation, Negative Reinforcement for Behaviour Change, Supportive Fitness, Technology for Security, Bothersome Tech, Shadow as Input, Promoting Social Interaction, Social Awkwardness, Emotion as Input, Technology for Conative Skills, Centering Emotion, and Life Beautification; I believe the lecturer came up with that last one.


week2 themes themegeneration

[Week1 - Post 3] - Card Ideation

Sigurd Soerensen - Mon 9 March 2020, 7:23 pm
Modified: Tue 31 March 2020, 3:35 pm

The card ideation method was a new and exciting approach to me. Our table came up with some fun ideas, although most of them were toilet-related due to the cards we drew.

However, I feel our creativity was limited by having to stick with the first cards until they had been exhausted. This became especially apparent when we were asked to do the same task individually, where most of us drew one or two words that made no sense in context, or cards whose words made little sense at all. Even though I am fond of ideation methods, I've noticed that in most subjects thus far, we have not used any of the ideas after coming up with them. I believe this is for the simple reason that they are either not captivating enough as concepts or not feasible, which makes the ideation process feel somewhat wasteful. I'm hoping that will not be the case for this subject.

In the end, we did decide to present the idea of a mechanical plant that would die as you are wasteful with water in your home; a simple but intriguing concept. The concept came about when we ideated on the sentence "Design for enlighten in a bathroom using ridged with quality of detached". The idea is enlightening in how it visually represents your water consumption through a familiar, easy-to-grasp metaphor: a dying plant, which equals bad. Although the plant isn't required to be in a bathroom, this is where the idea originated. The mechanical aspect of the plant came from the 'ridged' requirement, and the plant itself can be picked up and moved around, hence fulfilling the detached requirement. We did continue ideating on the concept with a new card reading 'semi-trailer' instead of 'bathroom', as we wanted to explore something other than bathroom ideas, but eventually, after voting, we ended up presenting the original idea to the class.

All images can be viewed here if not shown below.


week1 cardideation

[Week1 - Post 2] - Seven Grand Challenges

Sigurd Soerensen - Mon 9 March 2020, 6:46 pm
Modified: Tue 31 March 2020, 3:35 pm

The first week started with running through the seven grand challenges as presented in the International Journal of Human-Computer Interaction. I was tasked with reading the seventh challenge, on social organisation and democracy, of which you can find my highlighted sections here.

As for the in-class discussion afterwards, I took on the role of facilitating the conversation and trying to involve everyone at the table, while another team member took notes as we discussed. Facilitating the discussion proved difficult, as most people at the table sat silently by, not interested in giving their point of view on the matter. Hence, we ended up with three to four people keeping the conversation going. After discussing for a while, we chose one team member to present our discussion. The gist of what we discussed was the need for a broader scope when creating future technologies, one that encompasses 'glocal' (global + local) thinking and considers how a product can affect people other than its users. Moreover, we looked at how technology can manage resources and its own impact; how technology should be more inclusive, with active co-design alongside users; and how technology could give everybody an equal voice, with cryptocurrencies as an example.

Although I understand that for the discussion it was helpful to group us with those who had read the same chapter, I did not feel like I learned anything from listening to the other groups' presentations. I also had a hard time hearing what was presented; without having read the other chapters, I would not even know what they were about from the presentations alone. Moreover, I don't understand why we had this exercise in the first place, as we didn't use the challenges when generating ideas the next day.

week1 sevengrandchallenges

[Week 2 - Post 2] - Presentation Reflection

Sigurd Soerensen - Mon 9 March 2020, 6:02 pm
Modified: Tue 31 March 2020, 3:34 pm

Looking back to the presentations, I would say that the general level of concepts pitched was superb. It was interesting to see all the different ideas people had come up with being presented, and I have to say that most of them were delivered understandably and comprehensively. Moreover, only a few, in my eyes, did not meet the criteria of being a novel, playful everyday interaction.

As for my presentation, I believe I was able to convey the concept through the pitch, poster and blog entry. With that said, I would have liked to simplify the idea to communicate the concept more easily, given its complexity at the time I presented it. I found it a bit confusing what the poster was supposed to include, as the brief didn't specify it and there were some contradictions as to whether it should be a simple sketch or a fully-fledged poster, so I aimed somewhere in the middle. I wouldn't change my idea much based on what I saw, but it would be nice to specify a use case to focus on, as touch-enabled screens are a vast field.

Although many concepts were interesting, I didn't get particularly excited by any one project, as many were quite vague. If I were to pick a couple of ideas I found most appealing, I would choose either Artificial Conscience, the smart wardrobe concept pitched on Tuesday, or the Chroma concept, which teaches colour theory.

week2 reflection critique

[Week 2 - Post 1] - Actuated Pixels

Sigurd Soerensen - Mon 2 March 2020, 11:11 am
Modified: Tue 31 March 2020, 3:34 pm

See the concept poster here: Actuated Pixels.pdf

Actuated Pixels (AP) takes the sci-fi concept of Iron Man's floating holograms and glass-based touch screens and combines it with recent prototypes in haptic interaction, such as the inForce concept developed by the Tangible Media Lab.


A noticeable limitation of modern touch technology, one that sci-fi depictions often don't share, is that users cannot reach out and physically touch the information in front of them. Looking at our everyday technology, it is mostly limited to flat 2D screens that offer little to no sense of physical realism when touching the pixels on the screen. This issue is even more pronounced for vision-impaired or blind users, who require additional assistive software, such as speech interfaces, to interact with the same technology most of us see as integral to everyday life.

AP makes use of micro-actuators that push on a flexible screen to create an elevated area wherever a digital element is located, such as text, braille (tactile text for visually impaired users) or a button, allowing the user to see and touch the element simultaneously. In this way, developers can elevate their designs to define not only how users see the front end but also what the information should feel like to the touch. With that said, AP's technology is not limited to screens, as it could easily be repurposed for other areas, such as sensing electrical wiring in walls, or handheld devices that let blind people navigate streets more easily.
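One way to picture the elevation idea is as a rasterisation step: UI elements are converted into a 2D height map that drives the actuator grid. The sketch below is a hypothetical illustration only; the function name, element format, grid size, and elevation levels are assumptions, not part of the actual concept design.

```python
def build_height_map(elements, width, height):
    """Rasterise UI elements into an actuator height map.

    elements: list of dicts with x, y, w, h (in actuator cells) and an
    'elevation' level (e.g. 1 for text, 2 for a button).
    Returns a height x width grid of elevation levels.
    """
    grid = [[0] * width for _ in range(height)]
    for el in elements:
        for row in range(el["y"], min(el["y"] + el["h"], height)):
            for col in range(el["x"], min(el["x"] + el["w"], width)):
                # where elements overlap, keep the tallest one
                grid[row][col] = max(grid[row][col], el["elevation"])
    return grid
```

Each frame, the resulting grid would be sent to the actuator array, so a button's bounding box physically rises out of the screen while empty regions stay flat.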

The playfulness of AP comes from the added dimension of sensing information with your fingertips and from digitally generated physical representations that take the shape of interface metaphors familiar to the user. AP allows for a novel way of interacting with information that may well become the new normal for a wide range of users.

actuatedpixels actuated touch tangible neumorphism week2

[Week 1 - Post 1] - Introduction

Sigurd Soerensen - Thu 27 February 2020, 11:45 am
Modified: Tue 31 March 2020, 3:35 pm

Hi there 👋

I'm Sigurd, a second-year master's student in the Interaction Design program at UQ 🎓. Prior to studying at UQ, I did a bachelor's degree in International Marketing and completed a diploma in Information Technology back in Norway. Over the last 10 years or so, I have developed a natural curiosity for technology, and I'm therefore intrigued by the contents of this course, given that we get more time to work on a project and a more hands-on approach.

My expectations for this course, amongst others, are to attain deeper knowledge in the field of mechatronics, learn to think about user experience in terms of physical products, and build something awesome! 💪

💡In terms of my inspirations, there are too many to list; However, some relevant ones are: Maximilian Schwarzmuller from Academind, Chris Do from Blind Design, CJ from Coding Garden, Mark Rober former NASA engineer, Jinha Lee and Hiroshi Ishii from the Tangible Media Group at MIT, etc.

In this course I'm looking forward to taking a more hands-on approach, building something 🛠, and spending more time on aspects of the design process that often end up being overlooked due to the time constraints of a two-unit course. I'm eager to focus on the little details and not just rinse and repeat what we've done in every course prior to this.

With that said, I'm slightly concerned that a prolonged ideation phase will give us less time to work on the prototypes and that other assessment items will steal time and focus from the major project which often is the case. Moreover, I'm concerned that I will end up in a group where several members don't pull their own weight, which happens too often.

As we have mostly focused on digital prototypes up to this point, I'm aiming to reach a level of competence in mechatronics that makes creating a physical prototype as easy as a digital one in future projects. 🤖 Moreover, I hope time will allow us to make something I can be proud to present, as we usually run out of time before the prototype reaches a presentable state.

week1 introduction