The project has reached its final stage, where every team member has to come together and combine all the work. We spent three days together building the physical installation and evaluating it.
All the devices in our project would occupy a large space, and there are many of them (three IR receivers, radar ranging, voice recognition, and so on). We therefore decided to move some devices off the breadboard and extend them with longer wires. We first labelled the wires with Post-it notes to prevent confusion, then used a soldering iron to connect the series circuits. We stuck the LED and IR receiver onto a printed zombie.
To make sure all the devices could work together, we ran our own evaluations. For example, we tested the interval of the zombie's attack sound, tried arranging the IR receivers at different distances and angles, and tested the safe zone for the radar ranging. Finally, we settled on the final arrangement.
To add responsive sound effects and a music management system to the project, I first had to learn how to control music playback in Unity. I followed a tutorial about controlling music with buttons in Unity, since it shares the same working principle as my project. Once I understood how it works, I could replace those buttons with the signals from my project.
After successfully controlling music this way, it became easy for me to use the signal from the Arduino to control the music.
Then I decided to play a "bang" sound whenever the player shoots at the target, which makes it sound like you are actually firing.
In the game, the zombie's attack sound should play from time to time. To make this happen, I thought I could play the sound in a loop with a fixed delay. For example, the zombie's attack could play automatically every 5 seconds.
In the intended game, the user has to listen to where the sound comes from and shoot in that direction. To judge whether the player makes the correct action and give them audio feedback, I chose to combine Arduino with Unity 3D. This way, Unity can know which IR receiver received the signal and give the corresponding feedback. For example, when the zombie's attack sound comes from the left, a zombie-getting-hurt sound effect can be played if the player shoots towards the left, but a human-getting-hurt sound effect can be played if the player shoots in any other direction. Now, my first task was to figure out how to let Unity 3D read signals from the Arduino.
At first, I watched some YouTube videos introducing how to connect an Arduino to Unity with plugins such as Ardity and Uduino. However, they all failed to work. Eventually I learned that the two can be connected directly, without any plugins.
Following this tutorial, I could rotate a cube in Unity by turning a knob (a B10K potentiometer) on the Arduino.
On the Arduino side, I just need to print the signal that is needed.
In Unity, I have to use the System.IO.Ports namespace first, then declare the same serial port as the Arduino, check that the serial port is open, and finally read the signal in the right form. Basically, these are the steps to connect Unity and Arduino.
Then I moved to my project. I let the three IR receivers print 1, 2 and 3 when they receive the signal.
In Unity, I tried printing the signal to check whether the connection was successful.
Actually, I tried hundreds of times before I made it work. The problem was the way the serial data is read, and its type; these have to be changed to match the project you are working on. For example, I should use ReadLine to read the serial data in the YouTube tutorial I followed at first, but ReadByte in my project. Thanks to Ben, I finally figured out the errors in my code.
Week 14 & Exhibition
Sicheng Yang - Fri 12 June 2020, 6:28 pm Modified: Fri 12 June 2020, 6:28 pm
Finally it was time for the exhibition. Although there is no fixed schedule for the course from now on, there was actually a lot of work to be done.
For the exhibition, I re-created a short live demo of about one minute, starring my dear roommate.
The storyboard is almost the same as the previous video, but shifting between two different views worked really well in the previous version, letting the audience see the actual effect of the small display in the helmet more clearly. Although it is still a bit blurry due to shaking, I think the new UI makes it easier to recognise than the previous version.
Project code online
In order to make it easy for visitors to view the complete code of my project, and to avoid damaging the structure of the website (in fact, the code is quite long), I chose to host the complete code on GitHub. Only a few fragments are presented in the portfolio. The link is here.
As for rendering the code, I found a useful plug-in, Rainbow, which provides some commonly used editor themes for highlighting code in web pages. I ended up using the Monokai theme, which is one of my favourites, and it also fits the dark theme of the portfolio.
Live stream equipment
I purchased a mobile phone holder that can be mounted on a tripod, which allows me to demonstrate my prototype from a more flexible angle, especially for showing my small display.
Unexpected IDE breakdown
Unfortunately, my Windows system installed an update at noon on the day of the exhibition. I strongly suspect this patch broke my Arduino IDE; even the web Arduino IDE could not connect to the serial port. This made me very anxious, because it meant I might not be able to fix any last-minute problems. But in the end the exhibition finished without serious errors, and I was very lucky. In fact, my IDE still cannot be opened. Come on, Microsoft.
Although the exhibition was online, the event was generally very lively and even a little busy. I didn't even have much time to visit other people's projects besides running my own live demo, which is a pity.
Thanks to Nick for taking this screenshot for me, because I totally forgot about it.
Many visitors came to our channel; many of them were our former tutors or lecturers, and even my current bosses. It was actually quite nerve-racking to show them my work. Fortunately, the presentation went smoothly and there were no big bugs.
But I hadn't counted on the phone camera overheating. I had no prior experience using a mobile phone for a long live stream, and its overheating problem turned out to be more serious than I thought. In the second hour of the live stream, the phone almost froze and I couldn't even quit Discord. Strangely, other people could still see me on camera. As a result, when Clay arrived, I could only crouch in front of my laptop and nod my head up and down. But anyway, it kept working.
It seems that a thorough rehearsal can guarantee a good exhibition. I hope I will have the opportunity to do such a rehearsal next time.
This was my last exhibition of the master's period. Still, the online exhibition brought us a very novel experience. It was nice to see many familiar faces gathered in a virtual space to talk to each other.
I spent some time recording the results of the exhibition in the portfolio. In fact, most visitors did not provide much feedback; after all, an online exhibit lends itself to concept-related demonstrations rather than hands-on experience. But I still got an interesting suggestion. One visitor believed this helmet could be used not only for jogging but also as a meditation breathing device. If the device can eventually be made portable enough, this is a direction worth exploring: built-in modes could support breathing training in different usage scenarios, since users could carry the device anywhere. Such a change could also further integrate our group's explorations of breathing training in different directions, including Paula's use of AR effects in her art-installation concept.
It has been a remarkable semester; I'm glad to have spent this hard time with you all 🎉 .
Week 13 Journal
Sicheng Yang - Fri 12 June 2020, 5:19 pm Modified: Fri 12 June 2020, 5:21 pm
This week is the last week of class this semester. After dealing with other messy things, I put most of my energy back into this course, so I made a lot of new progress this week.
This week I made improvements to the prototype. The UIs that user testing found too information-dense to understand while jogging were redesigned this week. The previous design (images below) presented the next four steps of breathing at a time, with the aim of giving users a prediction of what was coming. In practice, however, this increased the user's cognitive load, which is especially harmful during exercise: the user cannot extract useful information.
But users generally thought the graphical visualization was a good idea. Synthesizing these conclusions, I decided to present only the current step on each screen, while using the pattern in which the graphics appear to give the user a prediction of the next step.
The new UI design is shown above. It uses a circle as a metaphor connecting the mouth and breath: a filled disc indicates that the user should inhale, while an outline circle indicates that the user should exhale. The different sizes show whether this is the first or second step, and I also used larger fonts on the sides to help users interpret the graphics through text. After I talked with my team members, they all thought it was a better solution, but I will conduct user tests to verify its actual performance.
Making a portfolio is actually a challenge. I have to admit that its production cycle was too short, and since I spent most of my energy writing content, I allocated less time to design.
The design originated from a cool photo of the helmet that I took against a door panel. As I mentioned in a previous journal, I did not hesitate to use it as the banner, and I then decided to build my website in a cyberpunk style.
I referred to a cyberpunk colour scheme in an attempt to present the title in a neon style, but in practice this did not seem to give the website a good visual effect. So I eventually gave up on that plan and decided to use yellow and black as the main colour scheme.
But at least I tried hard to improve the visual effect of the homepage. I added a fluorescent glow effect to the title; I think it is quite cool, and I hope some users will notice it.
Next week we will usher in the exhibition. It feels a little unreal; the semester is over in an instant. But looking at the prototype in my hands, I think I have done a lot of good work this semester. Looking forward to the exhibition next week.
Yifan Wu - Fri 12 June 2020, 4:32 pm Modified: Fri 12 June 2020, 5:07 pm
The feedback after exhibition
Photos from the exhibition site
Working under these conditions was a unique experience. The exhibition allowed us to show our concept in the intended environment.
We had to pay constant attention to whether the audience's attention was on the part being demonstrated. Basically, the audience's attention followed the rhythm of the demonstration.
There were also some unfinished aspects: the audience was not satisfied with the exploratory nature of the collection system.
Regarding exploration in the collection system, I had hoped to add more items to make the game more exploratory, while Zhuoran was concerned that too many game elements would cause teenagers to over-indulge in the game, contrary to the original intention of assisting with housework. After some negotiation, we decided to keep a limited set of items in the collection system. Although this took both the characteristics of teenagers and the exploratory side of the game into account, in fact neither side got the ideal result.
The audience also suggested that the interface should be understandable at a glance; for example, hints could be added to the interface to let users follow what is happening. The number of hits could be displayed above the monster's head, letting players see their score status in real time.
Some audience members asked why the play interface is not an AR interface; an AR camera should be used in the scene instead of an ordinary camera.
Some players reported that the game is too hard and the difficulty could be reduced appropriately. They might give up after losing the ball again and again, and excessive difficulty will dampen their enthusiasm for the game.
Zhuoran Li - Fri 12 June 2020, 4:28 pm Modified: Fri 12 June 2020, 4:33 pm
I didn't take any photos of the exhibit!!!
We set up the living room for the exhibit. We used two computer screens (one to control Unity, one for the live stream on Discord), two mobile phones (one working as the webcam, one with the accelerometer connected to Unity), and two iPads (one for Discord, one to show the simulated pictures) to support the prototype.
We invited our friends to the exhibit. They gave us some feedback, covering both the concept and the prototype.
We hadn't thought about people who live alone
Both of our team's concepts encourage all family members to join in the housework, so people who live alone cannot use our prototype.
Our audience questioned the AR view. We needed to explain it so that they could understand the prototype.
difficulty of the game
connection between the game and housework
get more audience
online exhibit - audience
It's quite awkward to talk to an audience online. We want to communicate with them, but they always keep silent, so I don't know whether I should carry on or wait for a response.
Unlike a normal exhibit, where each team member can explain the concept to several people, in the online exhibit we had to explain to everyone at the same time. So when we were halfway through explaining to the first visitor and a second visitor came in, we didn't know whether to continue or to explain the earlier part again.
Leading up to the exhibition day, I was feeling very overwhelmed that I had to explain and answer hard questions about my concept and approach. I was afraid that users might react very negatively towards my idea, but I think this was just my anxiety acting up. Reflecting on this past week, I was very happy with the outcome of my prototype and the overall experience of having this course being offered online. Initially, I was dreading having online courses, and the fact that my entire group dropped the course in week 5 was very demotivating. Despite contemplating on dropping the course, I’m happy that I didn’t and that I pushed through despite being very stressed about joining a new group at such a late stage. I was very blessed to have found a group that was exploring a similar problem space which made it very easy for me to assimilate myself into this new context. A huge plus that my new team was super friendly! We had an enjoyable time every week not only discussing course content but also chatting about really random stuff.
After the exhibition we had our final Discord call as a team to discuss the form of our team reflective report, outlining what we wanted to write about as a team, going off individually to word out our own sections before coming back together to integrate each component. Surprisingly, we finished the report relatively quickly, meaning that we'll have a full break on Friday to relax!
To the teaching team, thank you so much for all your hard work and support throughout the entire semester! Replying to slack messages 24/7, helping with very last min technical issues, having to speak to an entire class without response, and having a podcast with very few students isn’t easy! So, thank you so much for making the transition to online learning as smooth as it was. I rate this course 11/10 😊
OMA is designed to help children deal with stress within the context of the classroom during break times or in between classes. The main focus and goal of the concept were to help children develop stress management skills by invoking a sense of calmness and peace through colors and sounds, utilizing a natural interaction paradigm where users would use the sense of touch to interact with the concept.
The initial and ideal concept that was developed involved a large play mat that is pressure sensitive. Children would be able to apply pressure anywhere on the mat and a different musical note would play corresponding to the location and amount of pressure that was applied, allowing a multi-user experience where children are able to produce music together. Ideally, though multi-user interactions and functionality described above, the concept would also promote self-expression that might positively influence self-confidence and creativity along with the main intended goal of improving emotional intelligence.
Jen Wei Sin - Fri 12 June 2020, 2:19 pm Modified: Fri 12 June 2020, 3:38 pm
This week was all about getting everything together. The prototype I have been working on for the past few weeks looked to be on track for the exhibition, but there were a few minor components I had to fix up. One of them was the circuitry of my board: while prototyping I did not care too much about the physical form and presentation of the final delivery, so I spent some time tidying up the wires, wrapping them with electrical tape and securing them neatly to my main board. Besides that, I secured all the components together and edged my board with tape to give a more finished outcome, ready for the final presentation.
As part of the final delivery, an individual website portfolio needed to be created to outline the development of the concept. A large proportion of this week was dedicated to distilling 13 weeks of content into a concise yet insightful document for potential readers to explore. The main challenge for me, however, was coding the HTML pages. I recently had an assessment piece that utilized PHP for web dev, so I had to switch that part of my brain off and use pure HTML to code this portfolio. I also consider myself a pretty plain and uncreative individual, which made it hard to apply some of my flair to the individual portfolio. In the end, I decided to follow a very simple, minimalistic design throughout my webpage that used a lot of white space as content dividers and to make the page look less cluttered, as there was a lot of text to include. I tried my best to be as concise as possible without losing the value and insights I gained over the past semester, but we will see how that goes.
In the next week, I aim to be done with the major components of the final deliverable by Monday, allowing time for small changes over the next two days before the exhibition. I have already tested the prototype and it works for now so I am planning to leave it alone in hopes that I won’t break it before the exhibition. Besides that, I plan on repositioning my desk and computer to prepare for the demo on Wednesday to help with reducing clutter in my background.
The design of my website was inspired by the designer Sean Halpin (http://seanhalpin.io/), as I liked his simplistic yet fun approach to web design.
Hao Yan - Fri 12 June 2020, 1:53 pm Modified: Fri 12 June 2020, 3:12 pm
It took more than two months to get from inspiration to the exhibition. Because of the impact of the epidemic, we were forced to make many changes to our project: for example, the original samurai became a gunman, and the weapon changed from a katana to a laser gun. But we have largely completed the plan in our proposal, and we didn't build a castle in the sky.
This video shows how to use the device, for those friends who could not attend our exhibition, along with the optimizations we made so that visually impaired people can use it, and how the device works.
In fact, I had experience using Arduino last semester, but my aversion was severe back then because I didn't like programming. In this project, however, I was responsible for a lot of code-related work and had to learn programming. Through this project I know how to use Bluetooth and wifi modules to form a small Internet of Things, and I even understand now how smart homes work. Along the way we also made an intelligent watering device, which looks interesting.
Over the weekend, I have been prepping for the live exhibit on discord. I have managed to set up a spare room in my house just to hold my prototype on the wall. I also will have room to have my laptop camera set up as well. Hopefully, all goes to plan with that on Wednesday.
Also on Saturday, I did some filming for my website. I need some demos of the various interactions with Emily and how it worked. I decided to split up all the demos into 5 videos, this makes it quicker for the viewer of my portfolio to see the key interaction.
Below is just one of the demos, but the rest can be found on my portfolio. I thought the filming was fun; I got my dad and brother involved. It wasn't the best acting, but I guess that's all I have to work with (haha). The main point is that my message got across!
My portfolio is pretty much complete and ready for the exhibit. The only things left are adding the additional feedback sections and the team report once the exhibit is complete on Wednesday. I am pretty happy with the final look and layout of the website. I think I balanced the text and images evenly, making it not so overwhelming for the reader but still visually appealing, so they are interested in my final product.
The final things to complete this week will be: my team's final report, updating the sections on my portfolio website, and finally the reflective essay. I have started breaking down my essay into the sections I want and planning what I will write about. So I should be on top of that by Friday.
Week 13 - Journal
Edward Zhang - Tue 9 June 2020, 1:29 am Modified: Tue 9 June 2020, 1:33 am
Many restrictions were lifted on June 1; what do you most want to do? Summarize your concept in one sentence.
I really want to eat at a restaurant with many friends.
If you have a magic wand and can use it throughout the project, what do you want it to do? (Specifically, I know that all of you want it to "finish"-what is the one thing you really want it to accomplish?).
I really want users to experience more activities in my project. Imagine a game I recently played, Just Dance. I really like it; the length of one song is not only relaxing but also makes you very happy.
What is your most pressing problem now? (It doesn't have to be related to the project, it can be more widely related to courses, universities, life and the universe).
To graduate smoothly and get a good grade, so that I can start on my next plan.
It's finally the last week, and I'm really nervous now.
I redrew the map of the maze.
Then I took a formal demo video of my prototype
Then I also made my portfolio
Ready for the exhibit!
I am very grateful to the teaching team; most of you have been with me through my two years of master's study. Special thanks to Lorna: the first class I took when I came to UQ two years ago was your digital prototyping course. Now that two years have passed, I have really learned a lot. Thanks. I would also like to thank my team; although there was not a lot of face-to-face communication, they still gave me a lot of help.
Hao Yan - Tue 9 June 2020, 12:51 am Modified: Mon 15 June 2020, 5:15 pm
After the first prototype of voice control, we knew that the Google Voice service is not available in Australia, so I intended to use an IoT-based service for the voice recognition function. I checked some information about this kind of service and found that Blinker is a home IoT app similar to the Home app on iOS devices. We can use it to connect smart devices via Bluetooth or wifi, and it is highly editable, including its interface and function settings. So this software is much more suitable for our project than 'Arduino voice control'.
This time, I choose the JDY-16 Bluetooth module as the bridge connecting the different parts. It has several advantages, low power consumption，Bluetooth 4.0. These two advantages allow us to connect Bluetooth terminals more steadily. We don't need to worry about too much power consumption that can cause Arduino to drive too many modules (I have encountered too many modules on Arduino before, and Arduino cannot Stable power supply).
We did a lot this week, rewriting the voice-control code so that it can run without the support of Google voice services. After testing, voice commands can be successfully recognized.
More importantly, I only need a small amount of code to complete this function, which is very suitable for a novice programmer like me:
void button1_callback(const String &state) {
    if (state == BLINKER_CMD_ON) {
        digitalWrite(7, HIGH);  // relay on
    } else if (state == BLINKER_CMD_OFF) {
        digitalWrite(7, LOW);   // relay off
    }
}
The mention of 3D sound effects reminded me of ASMR. I am very impressed by the unique and realistic effect of that kind of audio, so I thought about using Adobe Audition to turn ordinary sounds into ASMR-like sounds. I tried putting two identical mono files into different channels. After synthesis, as long as the sound document is placed on the left channel, you hear different sounds from the left and right earphones when you put on headphones, so the user can recognise the position of the sound through the headset. This covers the left and right directions. Through Adobe Audition's analysis function (right part of the image), we can view the virtual position of the sound, and by adjusting some parameters we can get a sound that appears to come from in front of you. Based on these principles, we can make 3D surround sound.
Next, I will work with my teammates to adjust some details, hoping for a perfect performance at the exhibition.
Week 12 - Journal
Edward Zhang - Tue 9 June 2020, 12:47 am Modified: Tue 9 June 2020, 12:48 am
One sentence description of concept
My concept is to help college students relieve the pressure that comes from sitting at a desk for a long time.
Show us what you’ve been working on
From last week until now, my main work has been drawing the maze for the maze game I made, and building the balance board.
Exhibit in 2 weeks-main priority to make it feel “finished”?
For me, the prototype is basically complete once the main function of controlling the balance ball is realized, because this part is indeed the most complicated of the prototype's mechanical functions.
Questions about the annotated portfolio?
I don't have many doubts, because I have already drawn up the framework of my annotated portfolio.
When I first made the maze, I built it by hand, which turned out to be too much trouble, so I learned 3ds Max. Although learning the software took a while, the process was smooth, and for building simple 3D models it is much stronger than Unity. Afterwards I imported the maze into Unity. At first I didn't do much with it, but this week, when I formally tested the maze, the problem appeared: the ball kept passing through the walls. I tried a lot of things, without success.
Therefore, in the next process, I will rebuild the maze.
I have finished filming and editing my demonstration video!! I am really happy with how everything has come together and was stoked that I did not have to simulate any form, features or interactions in this video. I decided to narrate over the top to cater to my target users: visually impaired users. Aside from that, I believe the video actively conveys the new large-scale form, the updated audio feedback and the new interactions with the mat.
To date I have had 5 users come and evaluate my design (one of whom is actually colour blind, so bonus!!). I was able to evaluate Mixed against the objectives I detailed in the prototype document, and I was really happy with the outcomes. 100% of users understood interactions with Mixed without being prompted, and all users could generate complex artworks of their own volition. It was the audio feedback that could do with some work: 80% of users could identify a colour they had used in their artwork from a group of three random audio tracks, which was better than I had anticipated! But only 60% of users understood the audio equivalent of colour mixing. One even admitted they were listening for my (the facilitator's) movement in relation to the mat rather than understanding the frequencies themselves.
A recurring suggestion was to use this tool in music classrooms to help students with pitch and tone recognition. I really like this idea and believe it could definitely add to the cognitive benefits that Mixed aims to facilitate.
I am super proud of how my portfolio is turning out. I have designed it with accessibility in mind, so things like 'title' attributes are used frequently, along with alt text and a high-contrast aesthetic.
There are a few things I need to tweak tomorrow in order to have my portfolio ready to exhibit. The first is a link to a references page of sorts, and the second is that the styling on some screen sizes is a little off. I don't think this should be too much work.
Still to do
Download Unity onto a separate drawing tablet so that I can use my laptop for the Discord call
At the end of last week, I mentioned some issues I had with my prototype. Sadly, both the accelerometer and the bend sensor are broken, rendering my prototype unusable. I have spent a lot of time on this subject this semester, but most of it went into solving technical issues, which in the end stole countless hours from building a prototype we could use to gather data. Given that I've dedicated most of my time to PhysComp, I don't really have any more time to rebuild my prototype if I ever wish to finish my master's thesis. It's a sad way to end the semester, having your prototype break down. Luckily I have a couple of pictures, a short video and my test results to show for it. The test results are, in the end, why we made the prototype in the first place.
After Tuesday's stand-up, our team met to discuss how we should work on the team reflection. We ended up with a divide and conquer method, splitting the parts amongst ourselves. Some of us started to write a draft on the team reflection the very same day before we iterated on the text later during the week. Then we proceeded to check if our combined code base worked after everyone had updated their code. This didn't turn out as we expected, we found some new issues that hadn't been there before that didn't allow us to receive data from each other anymore. Moreover, the overall experience and functionality were rather janky and unreliable, as it jumped back and forth between the different states, seemingly at will. Thomas, Tuva and I spent many hours that day trying to figure out what the issue was but were unable to fix the issue. The reason for these issues in the first place is probably a lack of communication and planning out the system ahead of time, which likely would not have been an issue if the semester ran as normal where we could meet up and discuss things in-person. We did discuss a plan B in case we did not get it up and running again, as Thomas and I have our previous combined codebase, where sending and receiving worked on both ends and Tuva and Marie have their individual working code. However, if possible we would like to present one working codebase and functional prototype. For the rest of the week, given that Thomas and I had previously spent the most time on the combined codebase, Tuva volunteered to spend some time fixing the issue, as Thomas and I were falling a bit behind on the thesis.
I did spend some time finishing up my portfolio this week. I've been pushing ahead a bit to finish it ahead of time as I need to spend as much time as possible in the coming weeks on my thesis. Most of what I did on my portfolio for this week was to write down more content and make sure the website was fully responsive and accessible. Working on the portfolio took significantly longer than I expected it to, and it all feels a bit repetitive given that we are re-writing the very same things we have already written about here in the journals, the proposal and in the prototype delivery previously. Also, building the site took more time than expected as I'm not used to working without a framework. I could really see the differences in all the heavy lifting a framework does for you and the time you save as a result of this, but then again, the content is what took the most amount of time.
Finally, our prototypes work together as intended, or at least almost, which is good enough. My prototype in particular, for reasons that are beyond me, still struggles to send messages to the other balls. However, both Tuva's and Marie's balls can now send and receive messages after Tuva fiddled with the code over the weekend and somehow got it to work. Looking back, none of us are actually sure why it did not work before or why it works now, but that hardly matters anymore. The fact that it works means we have a proof of concept to show during the exhibition, which is a huge relief after all this effort.
As it turns out, I was able to finish my website on Friday, and it is up and running on the server with seemingly everything working as intended. It's not as visually pleasing as I would have liked, but the reality is that I simply have no time to perfect the visuals, as is often the case towards the end of the semester due to other assignments, in this case the daunting task of writing my thesis paper. That said, I'm happy with the content of the portfolio, as I provide a detailed walkthrough of how everything works, including the code. In addition, although I'm not entirely sure my team shares the same vision, I've included what I envisioned as the fully intended product: what I would have wished it could have been with more time and resources.
Finally, I would like to say that although we've had our bumps and hills, and although our prototype still has some simulated bits, I'm happy with where we have ended up, and I'm both excited and anxious to see how it will perform during the exhibition and how it will be received.
This week, we are in the final project sprint. Solomon and I had three meetings to discuss the map, conduct user testing, and solidify our system. Based on the user testing, we refined our tutorial section: the robot will introduce more details about the system, like the function of each block, the usage of the map, and the rules of the interaction. Besides, given the limited time, we used materials from our daily lives to build and decorate the map. Here is our map below:
We only decorated this map; there is another map drawn on its back, designed for people who want to seek out more of a challenge.
In the first meeting, we also fixed the wall-detection bug mentioned last week by adding a digital matrix map to the system. The digital map records the robot's current location and lists all the moveable paths, so that once the robot attempts to leave the main path, the system will force the robot to raise an error warning. The images below show the map class and the object (wall, road, and final target) settings.
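The idea of a digital matrix map can be sketched as a small grid plus a lookup function. This is only an illustrative sketch, not our actual map class: the cell codes, the `isMoveValid` helper, and the example grid are all hypothetical.

```javascript
// Hypothetical cell codes for the matrix map: 0 = road, 1 = wall, 2 = final target.
const ROAD = 0, WALL = 1, TARGET = 2;

// A tiny example grid; the real map is larger and drawn on physical material.
const map = [
  [1, 1, 1, 1, 1],
  [1, 0, 0, 0, 1],
  [1, 0, 1, 0, 1],
  [1, 0, 1, 2, 1],
  [1, 1, 1, 1, 1],
];

// Returns true when the robot may move onto cell (row, col);
// stepping off the grid or onto a wall triggers the error warning instead.
function isMoveValid(map, row, col) {
  if (row < 0 || row >= map.length || col < 0 || col >= map[0].length) {
    return false; // leaving the map entirely
  }
  return map[row][col] !== WALL;
}
```

With a check like this, the system only has to compare the robot's next position against the matrix to decide whether to raise the warning.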
There was another issue in the system-refining step: because the camera captures multiple images in a short period, the command list has to remove duplicates, which makes it impossible to input the same command twice in a row. Therefore, we created a new code block called 'Duplicate' that enables users to repeat the previous command. For example, when users want the robot to move forward three times, instead of showing "W" three times, they input "W", "Duplicate", "W".
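The expansion step behind the 'Duplicate' block can be sketched as a small pure function. This is an assumed illustration of the logic (the function name `expandCommands` is hypothetical, not from our codebase): each 'Duplicate' token is replaced by the command immediately before it.

```javascript
// Hypothetical sketch: expand 'Duplicate' tokens after the camera's
// de-duplication pass, so "W, Duplicate, W" becomes three forward moves.
function expandCommands(tokens) {
  const result = [];
  for (const token of tokens) {
    if (token === "Duplicate") {
      if (result.length === 0) continue; // nothing to repeat yet, ignore
      result.push(result[result.length - 1]); // repeat the previous command
    } else {
      result.push(token);
    }
  }
  return result;
}
```

This keeps the camera-side dedup intact while still letting users express consecutive identical commands.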
Here is our new tutorial section:
Zhuoran Li - Mon 8 June 2020, 1:39 am Modified: Fri 12 June 2020, 3:41 pm
Most of my focus was on building the website, and I also made some changes in Unity.
This is the final week, and these are my final tries in Unity.
The accelerometer in the mobile phone is used to detect the movement of the mop, so I installed the phone on the mop.
I used paper to build a simple support. It is not exactly beautiful, but it is useful.
The scene change is based on the function SceneManager.LoadScene("sceneName").
I also added a countdown before the ball is served. The numbers are displayed on a plane, and I used the Invoke function to control the timing of the countdown.
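Unity's Invoke schedules a method call after a delay, so a countdown is really just a list of (delay, label) pairs. Purely as an illustration of that timing logic (in the project this is C# inside Unity, and `buildCountdown` is a hypothetical name), the schedule can be sketched like this:

```javascript
// Hypothetical sketch of the countdown schedule behind the serve:
// buildCountdown(3) yields labels "3", "2", "1", then "Serve!".
function buildCountdown(seconds) {
  const steps = [];
  for (let i = 0; i < seconds; i++) {
    steps.push({ delay: i, label: String(seconds - i) });
  }
  steps.push({ delay: seconds, label: "Serve!" });
  return steps;
}

// Each step could then be scheduled with setTimeout here,
// or with Invoke("ShowLabel", step.delay) in Unity:
// steps.forEach(s => setTimeout(() => show(s.label), s.delay * 1000));
```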
To build the website for DECO7385, the main difference is that I changed the navigation list, and I used a home page to briefly introduce the project and myself.
On the concept page, I used a video to demonstrate the process of playing. To keep the page smooth to browse, I inserted the videos inside the text; each video is less than one minute, except the concept video.
The page is quite long, with many images and videos. Therefore, inside each section I added a sub-navigation so the audience can move through the content.
Actually, all of the content lives on the index page: I separate the content into several sections and use the CSS display property to control which one is shown.
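The show/hide logic boils down to mapping each section id to a display value. As a hedged sketch (the helper name `sectionDisplay` and the ids are hypothetical, not taken from my actual site), the decision can be written as a pure function:

```javascript
// Hypothetical helper: given all section ids and the one to show,
// return the CSS display value each section should get.
function sectionDisplay(sectionIds, activeId) {
  const display = {};
  for (const id of sectionIds) {
    display[id] = id === activeId ? "block" : "none";
  }
  return display;
}
```

In the real page, the returned values would be applied to the sections' style, e.g. with jQuery's show/hide.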
fadeIn is used to animate a section as it is shown, so the change is not too abrupt.
The section currently being shown gets a white background in the navigation bar. This is also controlled by JS; the key functions are addClass and removeClass, and the siblings function is used to target the other sections that are not shown.
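The addClass/removeClass pattern on the nav bar can be sketched without a DOM by operating on plain class-name lists. This is only an illustrative stand-in for the jQuery calls (`highlightNav` is a hypothetical name), showing how the current item gains the highlight class while its siblings lose it:

```javascript
// Hypothetical sketch: each nav item is a list of class names.
// The active item gets activeClass added; every sibling has it removed,
// mirroring $(item).addClass(...) and $(item).siblings().removeClass(...).
function highlightNav(items, activeIndex, activeClass = "active") {
  return items.map((classes, i) => {
    const rest = classes.filter(c => c !== activeClass); // removeClass on all
    return i === activeIndex ? [...rest, activeClass] : rest; // addClass on current
  });
}
```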