This week I tried to implement the speech recognition feature for my prototype in Unity, since I found it too hard to build a new system from scratch to collect voice.
I decided to work on one of the functions of our group project: speech recognition. The goal of this feature is to detect negative words spoken by users and light up the petal.
However, at this stage I have only managed to recognize everything the user says and transcribe it on the screen.
I built this function in Unity and installed the Speech SDK for Unity so that I could use the laptop's microphone to collect voice.
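The missing step between transcription and lighting the petal is matching the transcript against a list of negative words. As a rough sketch of that matching logic (written in Python rather than Unity's C#, and with a placeholder word list, not a real sentiment lexicon), it could be as simple as:

```python
# Placeholder list of negative words; a real prototype would use a
# proper sentiment lexicon instead of this made-up set.
NEGATIVE_WORDS = {"sad", "angry", "hate", "tired", "lonely"}

def contains_negative_word(transcript: str) -> bool:
    """Return True if any word in the transcript is on the negative list."""
    # Lowercase and strip simple punctuation before splitting into words.
    cleaned = transcript.lower().replace(",", " ").replace(".", " ")
    return any(word in NEGATIVE_WORDS for word in cleaned.split())

# A True result would be the trigger for lighting the petal.
print(contains_negative_word("I feel so tired today"))   # True
print(contains_negative_word("What a lovely morning"))   # False
```

The same keyword-check idea would carry over directly to a C# script attached to the recognizer's result callback in Unity.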
Creating this stage demonstration made me consider potential problems that may occur in the individual project. There are only three main technical functions in the group project, and we divided the work by the different functions that appear in the overall team concept. However, this may lead to three very similar outcomes, because we are all using the same technical functions. Thinking about this, I realized that our team project is not yet complete: there is no response to the negative emotions once they have been detected. I am going to think about this gap next week.