I conducted an additional user interview this week. The participant was an experienced jogger, and he introduced me to the “two-step, one-breath” breathing method, which provides new ideas for the project. He also said that in the sports app he uses, features such as exercise history and medals are very important to him, because they give him the motivation to keep going.
From the beginning I had the idea of linking running pace and breathing pace together. My original plan was to use a compass, but a compass is actually poor at capturing running motion, because it mainly measures rotation. Luckily, I was able to borrow an accelerometer from Lorna. The accelerometer is sensitive, but it produces more complex data than simpler devices. At Clay's suggestion, I tried a tilt switch as an alternative. The tilt switch is certainly very simple to use, and it performed well in hand-shaking tests, but when fixed to the body, the swing of running seems insufficient to trigger it. So in the next test I still want to use the accelerometer to detect exercise cadence. I hope it works.
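One common way to tame the accelerometer's complex data is to compute the magnitude of the three axes and count a step whenever it crosses a threshold, with a short refractory period so one footfall is not counted twice. A minimal sketch of that idea (the threshold and timing values below are placeholders I chose for illustration, not measured values from the prototype):

```javascript
// Count steps from accelerometer samples using a simple
// threshold crossing plus a refractory (debounce) period.
// Each sample: { t: milliseconds, x, y, z } in g units.
function countSteps(samples, threshold = 1.3, refractoryMs = 250) {
  let steps = 0;
  let lastStepAt = -Infinity;
  let above = false; // currently above the threshold?
  for (const s of samples) {
    const mag = Math.sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
    if (!above && mag > threshold && s.t - lastStepAt > refractoryMs) {
      steps += 1;
      lastStepAt = s.t;
      above = true;
    } else if (mag < threshold) {
      above = false; // re-arm once the signal falls back down
    }
  }
  return steps;
}

// Quick check with a synthetic signal: two clear spikes ~300 ms apart.
const samples = [
  { t: 0,   x: 0, y: 0, z: 1.0 },
  { t: 100, x: 0, y: 0, z: 1.6 }, // spike 1
  { t: 200, x: 0, y: 0, z: 1.0 },
  { t: 400, x: 0, y: 0, z: 1.7 }, // spike 2
  { t: 500, x: 0, y: 0, z: 1.0 },
];
console.log(countSteps(samples)); // → 2
```

The same logic could run either on the Arduino itself or on the PC side; the refractory period is what keeps a single noisy footfall from registering as several steps.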
Based on the user data, I decided to use the “two-step, one-breath” training method and designed a simple UI accordingly. It is critical that users can predict the next breath during exercise, so I adopted a scrolling line graph, similar to rhythm games: up means inhale, down means exhale, and the instruction is executed when it reaches the far left. However, because the graph advances based on step count, there is currently no animated transition between stages, which may make it hard for users to understand. Solving this is a high priority.
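Since the pattern is two steps per breath phase, the mapping from step count to instruction is straightforward, and the same mapping can supply the look-ahead the graph needs. The functions below are my illustration of the idea, not the actual prototype code:

```javascript
// "Two-step, one-breath": steps 0-1 → inhale, steps 2-3 → exhale, repeat.
function breathPhase(stepCount) {
  return Math.floor(stepCount / 2) % 2 === 0 ? "inhale" : "exhale";
}

// The scrolling graph can be driven the same way: look ahead a few
// steps to draw the upcoming up (inhale) / down (exhale) segments.
function upcomingPhases(stepCount, lookahead = 4) {
  const phases = [];
  for (let i = 0; i < lookahead; i++) {
    phases.push(breathPhase(stepCount + i));
  }
  return phases;
}

console.log(breathPhase(0)); // → "inhale"
console.log(breathPhase(2)); // → "exhale"
console.log(upcomingPhases(1)); // → ["inhale", "exhale", "exhale", "inhale"]
```

Driving the graph from this pure step-to-phase mapping is also what makes the animation problem visible: the display only moves when a step arrives, so some interpolation between steps would be needed for a smooth transition.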
How to fix the prototype on my body puzzled me for a while. The ideal design for this project would be something similar to Google Glass, but due to the size of the Arduino, that cannot be achieved at the current stage. So I looked for a reasonable way to mount it near the head, and finally found a way to fix it on a helmet. This method works well and is also suitable for sports.
Work to do
As mentioned earlier, the current prototype is fixed to the helmet fairly securely. However, the battery compartment has not yet been fixed, and the actual effect still needs to be tested. Another problem is mounting the display: the current method lets it shake, which is fatal for users in motion. I am still looking for a solution.
I am also exploring the possibility of completing the entire interaction without buttons, using only breath. This would bring the project closer to the theme of the body as controller, and at the same time make interaction more convenient for users during sports.
One of the inspirations for the final design of this project is Google Glass. It has always been one of my favorite projects, even though, partly for ethical reasons, it was not a success. I still think its design ideas are valuable: using glasses directly as a display is very cool and intuitive.
At present, I mount the display farther in front to help the eyes focus, but I am also curious how Google Glass manages to display at such a close distance. This is something that needs further investigation.
I also explored several methods for reading serial data on the PC using Python and Node. In the end, I found that using the SerialPort module in Node to read serial data and then using Socket.io to push updates to the front-end in real time is very convenient. This may be helpful for building and testing high-fidelity UI for this project, and I can also use this method to simulate an extended feature that manages exercise history data in a mobile app.
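The wiring itself depends on the serialport and socket.io packages, but the part worth sketching is turning incoming serial lines into structured events before broadcasting them. Assuming a hypothetical `key:value` line format from the Arduino (not necessarily what my sketch actually prints), a parser could look like:

```javascript
// Parse one line of serial output into a structured event.
// The "key:value" line format is a hypothetical protocol
// chosen for illustration.
function parseSerialLine(line) {
  const trimmed = line.trim();
  const sep = trimmed.indexOf(":");
  if (sep < 0) return null; // ignore malformed lines
  const key = trimmed.slice(0, sep);
  const value = Number(trimmed.slice(sep + 1));
  if (Number.isNaN(value)) return null;
  return { key, value };
}

// In the real setup this would be fed by SerialPort's line parser
// and broadcast with Socket.io, roughly:
//   parser.on('data', (line) => {
//     const event = parseSerialLine(line);
//     if (event) io.emit('sensor', event);
//   });

console.log(parseSerialLine("steps:42")); // → { key: "steps", value: 42 }
console.log(parseSerialLine("garbage"));  // → null
```

Keeping the parsing separate from the serial and socket plumbing also makes it easy to feed recorded lines into the front-end for UI testing without the hardware attached.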