Week 11

Hao Yan - Sun 24 May 2020, 10:42 pm
Modified: Mon 15 June 2020, 5:17 pm

Plan for the next two weeks

This week I'm starting to research voice control and sound effects. These are two essential parts of our project: voice control maximises accessibility for visually impaired users, who would not need any physical remote control and could complete the relevant operations with voice commands alone, while sound effects directly improve the user experience of the device. The central part of the safety alarm has been completed, so this time I mainly want to look into these two aspects.

Voice control

Regarding voice control, I considered using Siri as the terminal that receives the user's voice commands. Shortcuts in iOS 12 let us get things done with our apps, with just a tap or by asking Siri. In addition to running the shortcuts already available on an iOS device, we can use the Shortcuts app to create custom shortcuts, simplifying everyday tasks by combining steps across multiple apps.

I have successfully used Shortcuts to turn my desk lamp (not a smart lamp) on and off. So my current idea is to use an app that can remotely control the Arduino as our remote control, and then connect that app to Shortcuts. In this way, Siri voice commands can be bound to the buttons in the app, and a series of operations can be completed by voice alone. Based on this, what I need to do now is find a suitable app that can connect to the Arduino's Bluetooth module, and then write some code for the Bluetooth module to make it work. When these tasks are completed, our voice control should be ready to use.
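I haven't chosen the exact module or app yet, but a minimal sketch of the Arduino side might look like the following. It assumes an HM-10 BLE module (iOS apps can generally only talk to BLE serial modules, not classic Bluetooth ones like the HC-05) on pins 10 and 11, a relay driving the device on pin 7, and single-character commands '1'/'0' as placeholders for whatever the chosen app actually sends:

```cpp
#include <SoftwareSerial.h>

// Assumed wiring: HM-10 TX -> pin 10, HM-10 RX -> pin 11, relay on pin 7.
SoftwareSerial bluetooth(10, 11); // RX, TX
const int RELAY_PIN = 7;

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  bluetooth.begin(9600); // HM-10 default baud rate
}

void loop() {
  // Read one command character at a time from the app over Bluetooth.
  if (bluetooth.available()) {
    char command = bluetooth.read();
    if (command == '1') {
      digitalWrite(RELAY_PIN, HIGH); // placeholder "on" command
    } else if (command == '0') {
      digitalWrite(RELAY_PIN, LOW);  // placeholder "off" command
    }
  }
}
```

Once a Shortcut can make the app send those characters, Siri effectively becomes the remote control.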

Sound effects

The initial plan assumed that participants would wear eye masks during testing, so we don't want them to wear headphones as well: the ears are also an important organ for judging direction, and covering them would interfere with that. Instead, we want to play sound through speakers as much as possible. As it happens, Bonnie and I both have a HomePod, a speaker consisting of eight drivers and a series of sensors. It can detect the surrounding environment based on where it is placed. With spatial awareness, it automatically analyses the acoustics and adjusts the sound based on its location. Direct sound, including the main vocals and instruments, is beamed to the middle of the room, while ambient sound is diffused into the left and right channels and bounced off the wall, so the entire space is filled with rich, well-defined tones. (From https://www.apple.com/au/homepod/)

In other words, we may only need these two speakers to create a very realistic sound effect. What we need to do, then, is use Adobe Audition to create some simulated surround sound. As can be seen from the following figure, in Audition we can set the left and right channels of an audio clip, and with some adjustments a mono clip can become multi-channel, or move from the left channel to the right. In this way the user actually hears a sound travel from left to right, just like someone running past in front of them. Audition can also analyse the position of the sound, so we can fine-tune the parameters against the analysis results to make the final effect more realistic.
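To make the panning idea concrete, here is a small stand-alone C++ sketch of the same effect: a generated test tone is swept from the left channel to the right with a constant-power pan and written to a stereo WAV file. This only illustrates the maths behind what Audition's pan automation does; the tone, duration, and file name are arbitrary, and it assumes a little-endian machine:

```cpp
#include <cmath>
#include <cstdint>
#include <fstream>
#include <vector>

const double kPi = 3.14159265358979323846;

int main() {
    const uint32_t sampleRate = 44100;
    const double durationSec = 3.0;
    const int numFrames = static_cast<int>(sampleRate * durationSec);

    // Generate a mono 440 Hz test tone and pan it left -> right over time.
    std::vector<int16_t> samples;
    samples.reserve(numFrames * 2);
    for (int i = 0; i < numFrames; ++i) {
        double t = static_cast<double>(i) / sampleRate;
        double s = 0.3 * std::sin(2.0 * kPi * 440.0 * t);
        double pan = t / durationSec; // 0 = fully left, 1 = fully right
        // Constant-power pan law: loudness stays steady across the sweep.
        samples.push_back(static_cast<int16_t>(s * std::cos(pan * kPi / 2.0) * 32767));
        samples.push_back(static_cast<int16_t>(s * std::sin(pan * kPi / 2.0) * 32767));
    }

    // Minimal 16-bit PCM stereo WAV header (little-endian fields).
    uint32_t dataSize = static_cast<uint32_t>(samples.size() * sizeof(int16_t));
    uint32_t chunkSize = 36 + dataSize;
    uint32_t fmtSize = 16, byteRate = sampleRate * 4, sr = sampleRate;
    uint16_t audioFormat = 1, channels = 2, blockAlign = 4, bitsPerSample = 16;

    std::ofstream out("pan_demo.wav", std::ios::binary);
    out.write("RIFF", 4);
    out.write(reinterpret_cast<char*>(&chunkSize), 4);
    out.write("WAVEfmt ", 8);
    out.write(reinterpret_cast<char*>(&fmtSize), 4);
    out.write(reinterpret_cast<char*>(&audioFormat), 2);
    out.write(reinterpret_cast<char*>(&channels), 2);
    out.write(reinterpret_cast<char*>(&sr), 4);
    out.write(reinterpret_cast<char*>(&byteRate), 4);
    out.write(reinterpret_cast<char*>(&blockAlign), 2);
    out.write(reinterpret_cast<char*>(&bitsPerSample), 2);
    out.write("data", 4);
    out.write(reinterpret_cast<char*>(&dataSize), 4);
    out.write(reinterpret_cast<const char*>(samples.data()), dataSize);
    return 0;
}
```

The cosine/sine gain pair keeps the perceived loudness steady as the sound crosses the centre, which is why it reads as smooth movement rather than a dip in volume.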

Based on these ideas, I will try to complete a sample in the next two weeks. If it is successful, I will invite some students to do a user test.