Vision
This application allows the robot to use its camera and the Google Cloud APIs to take a photo, analyze it, and say what it shows.
First we need to import the libraries we are going to use, including the Google Cloud API client libraries.
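The exact modules depend on the implementation; for a Python sketch that follows the steps below, the imports might look like this (requests, google-auth, google-cloud-translate, and Pillow are assumptions, not confirmed by the original project):

```python
import base64       # encode the photo for the Vision JSON request
import os           # set GOOGLE_APPLICATION_CREDENTIALS
import subprocess   # call espeak and fswebcam

import requests                               # HTTP call to vision.googleapis.com
import google.auth                            # load the service-account credentials
import google.auth.transport.requests
from google.cloud import translate_v2 as translate  # translate the result to Spanish
from PIL import Image                         # show the photo at the end
```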
Explaining the code.
First we load the service-account credentials from the JSON key file, whose path is stored in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
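A minimal sketch of that step, assuming a Python implementation; the file name `credentials.json` is a placeholder, not the real key file from the project:

```python
import os

# Point GOOGLE_APPLICATION_CREDENTIALS at the service-account JSON key file
# so the Google client libraries can find it.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "credentials.json"
```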
Then we open a connection to the Vision API at vision.googleapis.com.
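One way to do that is to turn the credentials into an OAuth token for the REST endpoint with the google-auth library (a sketch under that assumption, not the original code):

```python
import google.auth
import google.auth.transport.requests

# Load the credentials pointed to by GOOGLE_APPLICATION_CREDENTIALS and
# fetch an OAuth access token for the Cloud Vision REST endpoint.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

VISION_URL = "https://vision.googleapis.com/v1/images:annotate"
HEADERS = {"Authorization": f"Bearer {credentials.token}"}
```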
We also use the Translate API so we can translate the result into Spanish, setting Spanish as the target language.
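With the official google-cloud-translate client, this could look like the following sketch (using the client library rather than a raw REST call is an assumption):

```python
from google.cloud import translate_v2 as translate

# The Translate client also reads GOOGLE_APPLICATION_CREDENTIALS.
translate_client = translate.Client()

def to_spanish(text):
    # Translate the given text into Spanish ("es" is the target language).
    result = translate_client.translate(text, target_language="es")
    return result["translatedText"]
```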
We use espeak so the user knows when the robot is about to take the photo (see the sketch after the next step).
We take the photo with the fswebcam command.
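A sketch of these two steps, calling espeak and fswebcam through subprocess; the spoken phrase, resolution, and file name are placeholders:

```python
import subprocess

PHOTO_PATH = "photo.jpg"  # placeholder file name for the captured image

# Announce in Spanish that the photo is about to be taken, then capture it
# with fswebcam (fixed resolution, no banner overlay).
subprocess.call(["espeak", "-v", "es", "Voy a tomar la foto"])
subprocess.call(["fswebcam", "-r", "640x480", "--no-banner", PHOTO_PATH])
```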
We send the photo to Google's cloud as a JSON request and wait for the result.
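Assuming the request goes to the REST endpoint defined earlier (reusing VISION_URL, HEADERS, and PHOTO_PATH from the sketches above), the photo is base64-encoded and wrapped in the JSON body the Vision API expects:

```python
import base64

import requests

# Base64-encode the captured photo and wrap it in a label-detection request.
with open(PHOTO_PATH, "rb") as image_file:
    content = base64.b64encode(image_file.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

# POST the JSON to the Vision API and wait for the response.
response = requests.post(VISION_URL, headers=HEADERS, json=body)
result = response.json()
```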
We keep only the first result and ignore the next four, because the first one has the highest confidence score.
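Continuing the sketch, the first label annotation in the response is the one with the highest score:

```python
# The Vision API returns label annotations sorted by score, so the first
# entry is the most likely description of the photo.
labels = result["responses"][0].get("labelAnnotations", [])
best_label = labels[0]["description"] if labels else "unknown"
```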
We use espeak so the robot can say what it believes the object is.
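Putting the previous pieces together, the best label is translated to Spanish and spoken with espeak (a sketch; the sentence template is an assumption):

```python
import subprocess

# Translate the best label to Spanish and have the robot speak it.
spoken = to_spanish(best_label)
subprocess.call(["espeak", "-v", "es", f"Creo que es {spoken}"])
```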
Finally, we show the photo we took and the app finishes.
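For example, the captured photo could be displayed with Pillow before the app exits (Pillow is an assumption; the original may use a different viewer):

```python
from PIL import Image

# Open the captured photo in the default image viewer, then the app ends.
Image.open(PHOTO_PATH).show()
```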