With the excitement of the big and final day, the exhibit finally arrived. Everyone was eager to show off their semester's work through their prototypes. With this semester's exhibit being online, it went a bit differently to how I imagined at the beginning of the semester (no drinks or food ☹). While it was great to have the exhibit at all given COVID-19, it was underwhelming. With no one appearing in our Discord chat for the first 30 minutes (or so), we were a bit lost as to what to do. Even presenting was a bit awkward, as some people came into the chat and did not say anything. Additionally, it was very hard to switch cameras and show off some of the prototypes while staying professional. At some stages I found myself a bit overwhelmed, which led to me leaving out useful information when explaining the concept to the viewers. Still, it was a very fun and new experience, and I am unsure how else the exhibition could have been planned. Below is a little preview of the end of the exhibition, where we said our final goodbyes for the semester.
With the last week of classes, it was exciting to see the end coming closer but also very stressful with all the upcoming work. We started the studio with everyone explaining what has been nice about the easing of restrictions. Since I have still been working and seeing my colleagues, the only difference has been my uni work, so it felt much the same for me.
I finished the prototype, both interactions. This included the automated interactions as well as the audio information that plays automatically. I also made the website a bit prettier so that users would enjoy the display. You can view my prototype here in my portfolio:
For the portfolio I completed everything. The video was finished; I had my friends over so I could film them, and afterwards I went back into Sony Vegas and edited it all together. For this video I found myself going for more of a voice-over than in my previous videos. The video can be seen below.
With the website, I had trouble incorporating the prototypes as they were web-based, but in the end I got it working. I also filled out all the information on the website. For the content, I found myself stuck asking 'is this too much?'. As it was a website, I didn't want to overload the user with too much information, but I wanted to make sure the reader knew my thought process. The link to my portfolio can be seen here:
Week 12 was a very busy time for me. Like every other student, I had lots to do to finalise our project. This included finishing off any iterations from feedback and ensuring the prototype was ready for the exhibit. With the exhibit creeping closer, I want to make sure I stay on top of the workload and chip away at it little by little.
This week was the beginning of fixing my multi-user interaction so that it was usable; up to this point I had mocked the testing and demonstration. At first, I tried to salvage my already-existing code so that it would work. Below is an example of one of the several issues that I had: in the image below, a bug I had coded caused the words being said to be registered as unidentified. Additionally, they would not disappear unless you refreshed the cookies.
With all hope looking lost, I decided to bite the bullet and start from scratch. With a further dive into web-based speech recognition, I managed to find a very helpful tutorial, which in turn led me to another tutorial the same user had posted. This was the holy grail, as it was exactly the help I needed. The tutorial can be seen here: https://mdn.github.io/web-speech-api/phrase-matcher/. With this by my side, I was able to create a working speech recogniser that compares what is said against an array of words titled 'the top 50 easiest words for charades'. With a few more tweaks I got it working with my already-made websites for the multi-user interaction (the introduction/starting/informative pages).
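A rough sketch of what the final version does (with a short, made-up word list standing in for the real 'top 50 easiest words for charades'): the recogniser listens continuously and each transcript is checked against the current charade word.

```javascript
// Hypothetical shortened word list -- the real prototype used the
// full "top 50 easiest words for charades" array.
const charadeWords = ['dance', 'sleep', 'swim', 'jump', 'laugh'];

// Pure matching step: normalise the transcript and check whether the
// target charade word appears in it.
function matchesWord(transcript, word) {
  return transcript
    .trim()
    .toLowerCase()
    .split(/\s+/)
    .includes(word.toLowerCase());
}

// Browser-only hookup, guarded so the matching logic above can also
// run outside a browser. The 'en-AU' language tag is an assumption.
if (typeof window !== 'undefined') {
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new SpeechRecognition();
  recognition.continuous = true;
  recognition.lang = 'en-AU';

  const target =
    charadeWords[Math.floor(Math.random() * charadeWords.length)];

  recognition.onresult = (event) => {
    // Only look at the most recent result.
    const transcript =
      event.results[event.results.length - 1][0].transcript;
    if (matchesWord(transcript, target)) {
      recognition.stop();
      console.log('Correct! The word was ' + target);
    }
  };

  recognition.start();
}
```

The key difference from my first attempt is that the comparison happens inside `onresult`, so nothing needs to be clicked or refreshed.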
With this completed, all that was left on the digital side was to create audio clips and make all the interactions automated.
For the physical build, I had to fix the Arduino button press, as it was opening several pages per press. With it being an Arduino Uno, it was difficult to find exactly what I wanted for my prototype, but I found something similar enough to work, although I needed to edit it to fit.
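On the web side, the fix boils down to debouncing: ignoring repeated button events that arrive within a short window, so one physical press only opens one page. A minimal sketch, where the `openPage` handler and the 300 ms window are assumptions rather than my actual values:

```javascript
// Generic debounce: only let fn run if at least waitMs has passed
// since the last accepted call.
function debounce(fn, waitMs) {
  let lastCall = 0;
  return (...args) => {
    const now = Date.now();
    if (now - lastCall >= waitMs) {
      lastCall = now;
      fn(...args);
    }
  };
}

// Hypothetical handler for the Arduino button event.
let pagesOpened = 0;
const openPage = debounce(() => { pagesOpened += 1; }, 300);

// A noisy button press fires several events in quick succession,
// but only the first one within the window gets through.
openPage();
openPage();
openPage();
console.log(pagesOpened); // 1
```

The same idea can also be done on the Arduino itself by checking `millis()` between reads, but wrapping the web handler kept my sketch unchanged.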
Portfolio-wise, I have started the initial design and have a guideline I want to follow. The background is all set and I am writing away. I am waiting on myself to finish the interactions, and on my friends to be free, so I can start filming for the portfolio.
This week started with the Wednesday studio session's recap, in which everyone shared the upsides of isolation as well as their current progress. Following this, I investigated my prototype feedback and where/how I can improve my concept (without changing the concept itself). This includes making the system easier and quicker to use, such as making it more hands-free. Furthermore, I need to do more formal research into my user audience and the interaction.
I have also looked into creating my portfolio, which will be built up over time with more and more added as I go. My team has also had a chat about the final report and what it entails for us.
Over the next few weeks, I am starting to adjust the prototype so that it is ready for the final exhibit. This includes adding in a working multi-user experience and a more hands-free experience.
This journal covers both week 9 and week 10 because of how hectic week 9 was (the video and prototype document being due). The first recap is for week 9.
With week 9 being the final week to put everything together for the prototype, it was filled with a fair bit of work. This included getting the code working, completing the build of the prototype (which was just setting up the elevator frame with all the wiring), and scripting, filming and editing the video. The first thing planned was finishing the code for the single-user interaction, the posing. After following several websites that led into a deep, dark rabbit hole of different JS tutorials, I was finally able to get the pose check working using PoseNet. With that complete, I then decided to make the website a bit prettier and added some CSS (please be kind about my quick designing skills). Below is a snippet of the final interface, with me posing and it marking me correct.
After I finished that, I believe that brought me to Tuesday. I then looked at my long-forgotten code for the multi-user interaction; I had actually started this a few weeks ago, then stopped because I ran into a big error. I edited this code a bit so I could do some smoke and mirrors for the video demonstration. The website uses a tutorial I found online, which I edited to accommodate what I wanted (though sadly it didn't fully work). The website is below.
The issue was that after words are said, you have to manually click save and then manually click check answer. I wanted this to be automated, but I ran into that wall and changed my focus to the single-user interaction.
After I completed that, I filmed the build and functionality parts of the video, as I could do those by myself. This included filming the build and recording my screen to show the coding and website aspects. It took a fair bit of time, as there was a lot to cover. I then called two friends over (only two because of COVID-19) and got them to act out what I wanted for the live video interface. Not having left the house for a couple of weeks made them excited to see people; with that excitement, though, they were incredibly distracted while filming. Even in the final video, you can witness them laughing as they could not take it seriously enough. Below is a snippet of one of the bloopers from the filming (and them laughing).
After filming the interaction, I spliced together all the footage in Sony Vegas 15, as this is the video-editing software I'm used to (I may or may not have learned to edit through YouTube videos of me and my friends playing video games when I was younger...). Below is a snippet of the mess that Sony Vegas looks like.
Through editing, I realised that I struggled to place different clips together as there was nothing linking them, so I decided to make a character that introduces and links the concepts together.
After filming this by myself, I jumped back into Sony Vegas and edited it all together, which completed the video. After watching it a few times and fixing some audio issues, I rendered the video, which can be seen below.
That completes the intense week of week 9.
Compared to week 9, week 10 was much more relaxed. It just involved my team and myself reviewing other teams' videos and documentation and commenting on them. We did this by watching the videos as a group and creating dot points in a collaborative document (so we would not reiterate points already made), then reading their document, then moving on to the next video. If we wanted, we would re-watch a video to find more points to add. After we went through all the videos, we took a big break, as watching all of them can be very draining (I don't know how tutors do it). After the break, we came back together, divided the videos between us, and turned the dot points into paragraphs.
Sadly, I slept through the Thursday practical, as my sleep schedule has never been worse due to COVID-19. So, I aim to work on my prototype to make up for that.
In the following weeks, I aim to get the multi-user interaction working so that the user can speak words that are automatically stored in an array and compared against the actual charades words, without needing to interact with the screen.
This week I have been working on two major (programming-wise) parts of the prototype. The first is the single user's experience in the elevator (which is a pose-mimic game, for those who do not remember). I first investigated TensorFlow and went down a big rabbit hole that led me to PoseNet. PoseNet is a library that maps out a human body as a set of 17 keypoints. The image below is an example of this.
As you can see, PoseNet works on both still images and a webcam feed. What I've been trying to do with PoseNet is call up specific images from a local folder and compare them to the live webcam feed to check that the poses are similar. I have found that using cosine similarity you are able to do this (article here). Originally, I intended for the system to use an image classifier to scan the webcam and a saved picture and compare the two, but I ran into a few issues with loading the images and comparing them to the webcam. (This is all done using JS and HTML.)
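A simplified sketch of that cosine-similarity check (the article's full version also normalises the vectors and weights keypoints by their confidence scores; here I just flatten PoseNet's keypoint positions into a plain vector):

```javascript
// Cosine similarity between two equal-length numeric vectors:
// 1 for identical directions, lower the further apart they are.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Flatten PoseNet keypoints ({position: {x, y}, ...}) into
// [x0, y0, x1, y1, ...] so the two poses can be compared.
function poseToVector(keypoints) {
  return keypoints.flatMap((kp) => [kp.position.x, kp.position.y]);
}

// Hypothetical example: a stored reference pose vs. a live webcam
// pose. The 0.95 "close enough" threshold is an assumption.
const referencePose = [100, 200, 150, 250];
const livePose = [102, 198, 149, 252];
const score = cosineSimilarity(referencePose, livePose);
console.log(score > 0.95);
```

In the real prototype both vectors would come from `poseToVector(pose.keypoints)` on the saved image and the webcam frame respectively.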
Additionally, I have been working on the audio-response side of the multi-user experience for the elevator (the charades game). Using a speech-recognition API, I am detecting whether the words being said are correct. At the moment it can hear the words and convert them to text, and I am still working on checking whether those words are correct. As this isn't the major functionality I am focusing on, I haven't worked on it as much as the posing, although I do want it completed. The image below shows the layout of the audio-to-text page I've made.
Lastly, I have only done a little of the report that is due soon (whoops), and I should start focusing on that and the video, as the video will take a while to make.
I am focusing on elevators and their surrounding space, to improve a dull area; specifically, on how I can make residential elevators an exciting and new space. Team CDI came up with two separate interactions, depending on whether the elevator has one or several people in it. To add a physical factor for the single person, the elevator shows the user an image of a human silhouette posing, and the user must strike the same pose for the elevator to move. When there are several people in the elevator, the elevator gives one person a charade word; that person acts out the topic and the other people must guess what they are acting out. After someone correctly guesses the charade topic, the elevator verifies it and starts moving to the selected floors.
Specifically, I will be focusing on how users interact with the elevator and the emotions involved in using it (whether they get frustrated, happy, or excited). More specifically, I will be looking into how residents access and use elevators on a day-to-day basis. As residents of a high-rise use an elevator every day, my focus is on finding out how I can make this mundane space a more exhilarating and unique elevator journey (focusing on their emotions). I want to find out whether these users are willing to move out of their comfort zone and participate in something that could be potentially embarrassing. It will be interesting to see what tasks users would complete in an elevator to get to their apartment quicker.
My individual responsibility for this project is to re-create and simulate an elevator experience, as well as test it on potential users. This includes building an 'elevator' as well as programming and connecting the devices needed to test the new elevator's functionality. This incorporates using a camera to detect the poses (which also requires a machine-learning algorithm trained on specific poses) and a microphone to detect the words for the charades game, as well as an Arduino kit with LEDs and buttons to recreate the buttons and lights you would see in an elevator.
Ideal Finished Product
My ideal finished product includes an elevator frame, alongside a working camera and microphone that detect whether what the users are doing is correct. This means using the camera to check that the poses are correct, and checking the audio responses against the generated charade word. Below are some sketches of the ideal finished product.
Building the frame is the first step; just a basic rectangular prism would work.
Adding a sheet or something that acts as a wall is the second step.
The last step is setting up the mic, camera and Arduino, which all require programming at some point.
The following is the current set-up.
This is the makeshift work bench.
The wood pieces for the elevator frame.
The elevator frame just needs more supporting wood for the corners.
This week, team CDI started working on the proposal report. In doing so, we refined the concept into what we wanted. Initially, we re-designed the concept to be more mentally focused: the idea Alistair came up with was a voice-recognition system that gets people to think on the spot while travelling between floors in the elevator. From this, we investigated how we could prototype the concept and researched the mundane space of elevators.
After the studio session, we spoke to Clay about prototyping and how we would be able to prototype and test with users, seeing as we are now limited due to coronavirus. He said we would need to think outside the box to replicate the scenario of someone using an elevator; he suggested we could use a cardboard box or a cupboard and set up buttons and lights to give a sense of being in an elevator. Below is a snippet of our after-workshop meeting in Zoom.
After our workshop session and a team meeting (and a chat with the tutors), we realised that we had got rid of all the likeable, physical interaction that was part of our pitch proposal. So, we went back to brainstorming an idea that incorporated both the original pitch idea and Alistair's idea. We wanted the concept to still let users interact with each other, while also keeping the embarrassment of moving around in the elevator. Finally, we came up with a cross between voice recognition and a charades-like game for making the elevator move. If there is a single person in the elevator, it shows the user an image of a human silhouette posing, and the user must strike the same pose for the elevator to move; this uses a Kinect-like camera to see that the person mimics the silhouette. When there are multiple people in the elevator, the elevator gives one person a charade topic (something really easy), that person acts it out, and the other people have to guess what they are acting out for the elevator to move. If they guess correctly, the elevator then travels to all the floors that have been selected.
Moving forward, the team is refining the report as well as researching the concept further and the different possibilities for prototyping.
This week, team CDI created the pitch and finalised the idea, which we presented to the class (via Zoom and YouTube). Additionally, we received feedback, as well as giving feedback to the other teams that presented. The purpose of this was to collect information from our peers to help us iterate on our idea so that we create something both realistic and fun.
This week was also the commencement of the 'Virtual Classes', where everything moved onto Zoom meeting sessions. Of course, this has made everyone quite nervous about the layout and alterations to the course. Although, given the predicament (COVID-19) we are in, Lorna has managed to work her way around it (which is very impressive for a subject that was very physically demanding). Of course, with these changes there is a bit of disappointment, but there is nothing I can do about that except try to enjoy working from home. On the first day of Zoom we had a few tech issues, but by the second day we managed to avoid anything major. One major concern is that, without being there in person, it is hard for people to confidently speak out on Zoom; whether they are nervous or simply have nothing to say, I foresee this being an issue for the subject. I myself will try to give more input in the following weeks.
Individual Preparation Work
The work I solely conducted for the presentation pitch included writing the pitch (which was a collaborative effort from team CDI) and making the video for our presentation. In making the video, I didn't want a boring presentation that was just a whole bunch of words on the screen, but something a bit entertaining. I tried to include actual faces from our team, and different images to accompany the speaking. Not only that, I tried to make the video start with a big bang, with a quick 'old-style MW2 compilation YouTube'-like introduction to introduce us and our concept. The video can be seen below:
Feedback from peers
Following the presentation, team CDI had a Zoom call in which we went through the feedback we had received from our peers. Our team was happy with the feedback; it gave us a better perspective on our idea and how we can make it into something better. Summarised, the feedback boils down to a few questions:
How can you dance with other people in the elevator?
How can you make it more fun?
What is the purpose? Calorie burning or just for fun?
Does it exclude people (those who can't move)?
Where is the ideal place (not hospitals)?
Could it be placed outside the elevator?
Could the dancing affect the environment inside the elevator?
Preview of our Zoom call
Lastly, team CDI has started work on the report and is taking its time to ensure everything is covered.
Day 2 of week 3 started off with a meeting with Bash Isai in London (I felt sorry for him because it was 11 pm there). He went through what to expect after we all graduate and how we will be competing for jobs. He explained that we need to understand what we are worth so that we can find ourselves a job where we are treated right, and he definitely suggested looking at a recruitment agency. In terms of money, he said that each job would only increase your salary by a small percentage each year, so he recommended that after two years you should find yourself a new place to work that will pay you more. A fun exercise we completed was writing 50 words about ourselves and then cutting them down until only three were left. This showed us what really defines us as a person and asked us to look deeper into ourselves. The following is what I had written, and the three words that I selected.
My name is Dimitri Filippakis and I am a cloud architect. I am unique because I'm an understanding and realistic person who can be optimistic in times of hope and understands that not everything can be solved right away. I can work hard to achieve goals and tasks, and sometimes allow myself to lose track of time because of this attribute.
Following the guest speaker, we met with our team and got to know each other with some warm-up questions. After we warmed up, we worked on a team contract for the semester, covering topics such as meetings, financial management and emergencies. Following the team contract, we looked into what we needed for the assignment, how we could develop ideas, and the proper process for doing so.
Tuesday's contact was organised as a world café. After last week's organisation of ideas into themes, this week we had the pleasure of exploring those themes further. It was a very enjoyable exercise (the free food made it a bit more relaxing and less linear); it gave the contact a deeper understanding of each theme and allowed us to develop new ideas that hadn't yet been created. The first round was broken into two mini-rounds: in the first, we thought of the potential problem space and alternative ideas to what was already made. After the first mini-round, it was difficult to come to a new theme, as you had to pick up all the thoughts of the last group. The second round looked at potential users and how they would use the potential ideas, as well as the ideas made through the presentation.
Since there were a lot of tables, I was only able to make it to a few, but during the break I tried to go through and read each table's butcher's paper. The ones I sat at included:
Change through discomfort
Enhanced Mundane Spaces
Change through positive reinforcement
Digital Sensation made physical
At the end of the contact we were tasked with choosing the top three out of all the themes, one of which was:
Enhanced Mundane Spaces
Overall it was a great day to make everyone think of new ideas and think deeper into the themes.
Below are some photos of the work done in the class.
Tuesday was the soldering induction, in which we learnt how to solder. This was also a refresher for me on how to use a breadboard, as it has been a while since I've used one. After completing the breadboard circuit, we transferred the same circuit onto the copper board. With this board we were able to make the same connections and get the light to turn on and off with the switch.
Below are some images from the session as well as a little video of the working light.
In class we listened to everyone's ideas through their pitches and presentations. Alongside this, we critiqued some of the presenters: whether their concept was relevant to the task, whether the presenter communicated their idea clearly, and any other comments we wanted to leave them.
Charlie is a digital eating assistant that helps promote healthy eating and the wellbeing of children who are picky eaters. Charlie lives in a physical sphere connected to a placemat that can be placed under plates, bowls or cups. The intended use is that after something has been lifted off the plate, the placemat detects this and tells Charlie; Charlie then performs an encouraging emote or displays something similar to virtual cheering. This is intended to inspire children to eat all their food (especially greens). It is mainly focused on children who refuse to eat their greens or certain foods, while also instilling a no-food-wastage attitude at a young age.
Charlie will have different animations that play on a display; these animations will vary with how much food is eaten, what kind of food is eaten, and whether no food has been eaten after a certain time. Using these animations, the children are hopefully encouraged to eat everything on their plate.
Another idea for Charlie was to use a dinner tray that corresponds with the dinner mat so that Charlie knows what food the child is eating. Each section of the tray could be set to greens, salad, meats, etc. With these selected areas, Charlie could play specific animations depending on what was on the tray.
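As a purely hypothetical sketch of how Charlie might pick between animations (the categories, thresholds and animation names below are all assumptions; nothing has been decided, let alone built):

```javascript
// Hypothetical animation picker for Charlie. fractionEaten is the
// placemat's estimate of how much of the plate has been cleared
// (0 to 1), and secondsSinceLastBite tracks inactivity.
function chooseAnimation(fractionEaten, secondsSinceLastBite) {
  if (secondsSinceLastBite > 120) return 'gentle-nudge';   // stopped eating
  if (fractionEaten >= 1) return 'big-celebration';        // plate finished
  if (fractionEaten >= 0.5) return 'cheer';                // good progress
  return 'encourage';                                      // just starting
}

console.log(chooseAnimation(1, 10));    // plate finished
console.log(chooseAnimation(0.2, 200)); // child has stopped eating
```

The tray idea would extend this with a food-category argument, so greens could trigger extra-enthusiastic animations.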
What is cool about this idea is that Charlie not only helps promote healthy eating but can also help distract the child while the parents focus on getting ready for the day. Charlie is then used for both the child's and the parents' needs. This also helps parents who dislike placing their child in front of the TV or an iPad at such an early age.
Hey, I'm Dimitri and I'm in my final semester of uni (4th year), studying a Bachelor of IT majoring in User Experience and Information Systems. I have a wide variety of skills within the industry, many of which come in handy when working on large group projects. I'm most excited to get hands-on in this subject and enjoy the semester. Although I study IT, I have also completed some trade work over the years, doing odd jobs as a builder and a plumbing apprentice.
For PhysComp I am hoping to improve my design skills and my creative thinking so that I'm able to think more outside the box. Additionally, I want to expand my knowledge of electrical hardware.