final-design 1

Hao Yan - Tue 9 June 2020, 12:51 am
Modified: Mon 15 June 2020, 5:15 pm

After testing the first version of the voice-control prototype, we learned that the Google Voice service is not available in Australia. I therefore decided to build the voice-recognition feature on an Internet of Things service instead. After looking into the options, I found Blinker, a home IoT app similar to the Home app on iOS devices. It can connect to smart devices via Bluetooth or Wi-Fi, and it is highly customisable, from the interface down to the function settings. This makes it a much better fit for our project than 'Arduino voice control'.

This time, I chose the JDY-16 Bluetooth module as the bridge connecting the different parts. It has two main advantages: low power consumption and Bluetooth 4.0 support, which together let us keep the Bluetooth connection stable. We also don't need to worry about the module drawing too much power when the Arduino is driving several other modules (I have run into this before: with too many modules attached, the Arduino cannot supply power stably).
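Before wiring the JDY-16 into the Blinker code, it helps to check that the module responds at all. The sketch below is just a quick serial passthrough I would use for that check; the pin numbers and baud rate are assumptions (JDY-16 boards can ship with different default baud rates), not the exact values from our build.

#include <SoftwareSerial.h>

// Assumed wiring: JDY-16 TXD -> pin 10, JDY-16 RXD -> pin 11
// (through a voltage divider, since the module is a 3.3 V device).
SoftwareSerial bt(10, 11); // RX, TX

void setup()
{
  Serial.begin(9600);   // Serial Monitor
  bt.begin(9600);       // check your module's configured baud rate
}

void loop()
{
  // Forward data both ways so AT commands can be typed from the Serial Monitor
  if (bt.available())     Serial.write(bt.read());
  if (Serial.available()) bt.write(Serial.read());
}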

We did a lot this week, rewriting the voice-control code so that it runs without the support of the Google Voice service. After testing, voice commands can be recognised successfully.

More importantly, I only need a small amount of code to complete this function, which suits a novice programmer like me.

code sharing


void button1_callback(const String &state)
{
  if (state == BLINKER_CMD_ON)
  {
    digitalWrite(7, HIGH);         // relay on
    BLINKER_LOG("Toggle on!");
    Button1.color("#FFFFFF");      // update the button colour in the app
    Button1.print("on");           // push the new state back to the app
  }
  else if (state == BLINKER_CMD_OFF)
  {
    digitalWrite(7, LOW);          // relay off
    BLINKER_LOG("Toggle off!");
    Button1.color("#FFFFFF");
    Button1.print("off");
  }
}
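For completeness, the callback above only works inside the usual Blinker scaffolding. Below is a minimal sketch of what that looks like, assuming Blinker's BLE mode; the button key "btn-abc" is a placeholder for whatever key the button is given in the Blinker app, not our actual value, and the serial wiring should be checked against the library's BLE examples.

#define BLINKER_BLE
#include <Blinker.h>

BlinkerButton Button1("btn-abc"); // key must match the button created in the Blinker app

void button1_callback(const String &state)
{
  // same body as shared above
}

void setup()
{
  pinMode(7, OUTPUT);                // relay pin
  Blinker.begin();                   // start Blinker in BLE mode
  Button1.attach(button1_callback);  // run the callback when the app button is pressed
}

void loop()
{
  Blinker.run();                     // keep the connection and callbacks alive
}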

The mention of 3D sound effects reminded me of ASMR. I am very impressed by how unique and realistic that kind of audio feels, so I thought about using Adobe Audition to turn ordinary recordings into ASMR-like sound. I started by placing two identical mono files on different channels. After mixing, as long as the sound file sits on the left channel, you hear it from the left earphone when you put the headphones on, so the user can recognise the position of the sound through the headset. That covers the left and right directions. Through Adobe Audition's analysis function (right part of the image), we can view the virtual position of the sound, and by adjusting a few parameters we can place a sound in front of the listener. Based on these principles, we can build up 3D surround sound.
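Audition hides the arithmetic behind its panner, but the left/right principle is simple enough to show in a few lines. This is only an illustrative sketch (not code from our project) of constant-power panning, which weights a mono signal into left and right channels so the listener hears it coming from a chosen direction:

#include <cmath>
#include <cstdint>
#include <vector>

// Map a mono buffer to interleaved stereo, weighting the channels by the pan position.
// pan = -1.0 is fully left, 0.0 is centre, +1.0 is fully right.
std::vector<int16_t> panMonoToStereo(const std::vector<int16_t>& mono, float pan)
{
    const float kPi = 3.14159265f;
    float angle = (pan + 1.0f) * 0.25f * kPi;   // map pan to the range 0..pi/2
    float leftGain  = std::cos(angle);          // constant-power gains: as left falls,
    float rightGain = std::sin(angle);          // right rises, keeping loudness steady

    std::vector<int16_t> stereo;
    stereo.reserve(mono.size() * 2);
    for (int16_t sample : mono) {
        stereo.push_back(static_cast<int16_t>(sample * leftGain));   // left channel
        stereo.push_back(static_cast<int16_t>(sample * rightGain));  // right channel
    }
    return stereo;
}

Front/back placement needs more than channel gains (filtering and small delays as well), which is where Audition's analysis view helps us tune the parameters by ear.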

In the coming days, I will work with my teammates to adjust the remaining details, hoping for a polished performance at the exhibition.