As our team chose the Anki Vector as the intelligent character for our project, it is essential to understand how it works and how we can re-program the robot. Here is our new member: Ninja Vector.
Anki Vector already comes with several functions integrated with its app, such as voice commands, facial recognition, and interactions with its cube. In addition, there is an official SDK (Software Development Kit, https://developer.anki.com/vector/docs/index.html) that lets users customize their own Vector using Python.
It is still a little hard for me to fully understand the original Python files, and the only functions I can program for now are basic movements, audio playback, and finding the charger. Although I can't follow the code in detail, I can read its structure and imitate its functions from the tutorial files, which is my current approach.
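A minimal sketch of the kind of script this involves, based on the official `anki_vector` SDK tutorials (the movements and phrase here are illustrative, not from our actual project code; it needs a configured Vector on the same network to run):

```python
import anki_vector
from anki_vector.util import degrees, distance_mm, speed_mmps

def run_demo():
    # Connect to the robot; Robot() reads the serial from the SDK configuration.
    with anki_vector.Robot() as robot:
        # Basic movements: leave the charger, drive forward, turn in place.
        robot.behavior.drive_off_charger()
        robot.behavior.drive_straight(distance_mm(100), speed_mmps(50))
        robot.behavior.turn_in_place(degrees(90))

        # Audio: have Vector speak a phrase with its built-in voice.
        robot.behavior.say_text("Hello, I am Ninja Vector")

        # Charger finding: Vector searches for and docks with its charger.
        robot.behavior.drive_on_charger()

if __name__ == "__main__":
    run_demo()
```

The `with` block handles connecting and disconnecting, so each behavior call can be tried one at a time while imitating the tutorial files.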
As for the future plan, I will keep exploring the SDK and try to build object-recognition and animation functions.
I found a useful AR tutorial:
This tutorial introduces Vuforia, which lets us upload any image as a tracking target and combine it with Unity to achieve tracking and display. Since the video was published in 2017, some procedures have changed: newer versions of Unity integrate Vuforia directly. The detailed information is indicated on this site: