Project #5 Artificial Companion

LAUREN, Lauren McCarthy


The relationship between humans and robots has been one of the most important topics since the Industrial Revolution in the 18th century. Western fantasies like to anthropomorphize robots out of a fascination with, and fear of, being replaced. We've all seen movies, TV shows, or click-bait articles positioning humans as "us" and robots as "them" — will they steal our jobs? Will they guard our homes and provide emotional and care labor for our children and the elderly? Will they realize their oppressed position one day and take revenge? Will they live on when we are gone?

In reality, as AI and machine learning are rapidly developed to make robots "smarter", we need to be asking better questions, not just about the power relations between humans and robots, but also between humans and humans. Does Alexa really care whether someone says thank you to them? If robots are taking your job away, who is designing those robots, and who should really be held accountable for your lost job? Can we imagine new relationships between humans and robots that go beyond the fantasy of the dominant and the subordinate?

For Project #5 you will imagine a new human-robot relationship by designing an artificial companion. This is the final project for Code 2, which means you now have the option to draw on your ability to generate dialogue, use the camera and include facial recognition, add a DOM interface, and process images in surprising ways. For the remainder of the semester we will cover p5.speech, the Teachable Machine, and a few additional libraries listed under Resources. In essence, you are creating communication software that may or may not have an anthropomorphic appearance. Returning to the conversation we had in our very first class on input and output, the kinds of cues and expressions you end up designing will have a great impact on the meaning of your relationship.

Timeline & Deliverables

April 22: Submit an illustration + written proposal for your artificial companion to Canvas — be very specific about what your companion can do and cannot do.
April 29: Complete white-boxed version of the project.
May 6: Project due, present during class.

Submission Guidelines

Submit proposal and white-boxed version of your project to Canvas
Add your final project to your Glitch portfolio, and submit your project link to Canvas

Design Constraints

(1) You can determine the size of your canvas, but please be very intentional about the size you choose
(2) Your project should use p5.speech or the Teachable Machine

Required Readings

(1) The Automation Charade, Astra Taylor

Further Readings

(1) Robots, Race, and Algorithms: Stephanie Dinkins at Recess Assembly, Jacquelyn Gleisner
(2) Computing Machinery and Intelligence, A.M. Turing
(3) Chatbots: Principles, Methods, Ethics - a Noncomprehensive Reading List, Lee Tusman


Resources

(1) p5.speech, a p5 library that translates speech to text and text to speech
(2) The Teachable Machine, google creative lab
(3) ml5.js, friendly machine learning for the web
(4) Voice Commands Starter Template, Lee Tusman
(5) Vida, a simple library that adds motion detection and blob tracking to p5.js.
(6) Machine Learning for Artists, Gene Kogan


Example Projects

(1) Conversations with Bina48, Stephanie Dinkins
(2) Not the Only One, Stephanie Dinkins
(3) Robotica video series, New York Times
(4) Faith and Baby Faith, Ryan Kuo
(5) LAUREN, Lauren McCarthy
(6) Build the Love You Deserve, Fei Liu
(7) Bot projects by Kate Compton

Study Guide

Table of Contents

(1) p5.speech
(2) JavaScript toLowerCase()
(3) JavaScript find()
(4) switch Statement Pt.2
(5) Teachable Machine


p5.speech is a p5 library initiated by Luke DuBois. It enables you to add text-to-speech and speech-to-text features into your software, which could come in handy for building an artificial companion! Unlike previous libraries we've worked with, p5.speech isn't hosted on a CDN. This means you need to manually download the p5.speech.js file, upload it to your sketch folder, and add a link to the file inside your index.html <head> tag.
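For example, assuming p5.speech.js has been uploaded to the root of your sketch folder and your sketch file is named sketch.js (both file locations here are assumptions), the <head> of index.html might look like this:

```html
<head>
  <!-- p5.js loaded from a CDN as usual -->
  <script src="https://cdn.jsdelivr.net/npm/p5/lib/p5.js"></script>
  <!-- p5.speech.js uploaded manually into the sketch folder -->
  <script src="p5.speech.js"></script>
  <!-- your own sketch, loaded last -->
  <script src="sketch.js"></script>
</head>
```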

Speech Synthesis

Speech synthesis is a text-to-speech feature that translates a string into spoken words. You have a list of different voices to choose from, and some of them sound a lot more robotic than others.

To initiate speech synthesis in your sketch, you need to create a new p5.Speech object at the start of your sketch. From there you can call a series of methods, such as speak() —

Source Code
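A minimal sketch of that setup might look like this (the canvas size and greeting text are my own placeholders):

```javascript
// Create the speech synthesizer once, at the top of the sketch,
// so it is available everywhere
let myVoice = new p5.Speech();

function setup() {
  createCanvas(400, 400);
}

// Speak a phrase whenever the mouse is pressed
function mousePressed() {
  myVoice.speak('Hello, I am your companion.');
}
```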

Next, you can use listVoices() to print a list of the voices available in your browser, and setVoice() to select one.
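A hedged sketch of those two methods: voices load asynchronously, so this waits for the library's onLoad callback, and the voice name 'Alex' is only an example that may not exist in your browser:

```javascript
let myVoice = new p5.Speech();

function setup() {
  noCanvas();
  // Voices load asynchronously; wait for the library's onLoad callback
  myVoice.onLoad = showVoices;
}

function showVoices() {
  myVoice.listVoices();     // logs the available voices to the console
  myVoice.setVoice('Alex'); // pick one by name (varies by browser/OS)
}

function mousePressed() {
  myVoice.speak('Testing, one two three.');
}
```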

Speech Recognition

The opposite of speech synthesis, speech recognition is a speech-to-text feature that translates your spoken words into strings. To initiate speech recognition in your sketch, create a new p5.SpeechRec object at the start of your sketch, use start() to begin listening, and read resultString to fetch the most recently detected speech —
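Putting those pieces together, a minimal recognition sketch might look like this (the 'en-US' language tag is an assumption for English):

```javascript
// The second argument is the callback fired whenever speech is detected
let myRec = new p5.SpeechRec('en-US', gotSpeech);

function setup() {
  createCanvas(400, 400);
  myRec.start(); // asks for microphone permission and begins listening
}

function gotSpeech() {
  // resultString holds the most recent detected phrase
  console.log(myRec.resultString);
}
```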

Now there are two settings you can add to your speech recognition device. One is called continuous, which sets up a continuous stream between the browser and Google's cloud service. Setting this to true can raise some serious privacy concerns; nevertheless, it can also increase the stability of your software. I guess in this case you can't have your cake and eat it too. Another feature to pay attention to is interimResults, which decides whether the speech rec prints out a partial, incomplete reading of the speech —
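A sketch with both flags turned on, assuming start() accepts the two booleans in this order, as in the library's examples:

```javascript
let myRec = new p5.SpeechRec('en-US', gotSpeech);

// Keep the recognition stream open, and report partial guesses too.
// Both flags default to false; continuous streaming has privacy tradeoffs.
let continuous = true;
let interimResults = true;

function setup() {
  noCanvas();
  myRec.start(continuous, interimResults);
}

function gotSpeech() {
  console.log(myRec.resultString);
}
```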

Now that you have the foundation for speech recognition set up, we can dive into using the resultString to trigger some actions!

Source Code
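One possible sketch that listens for the word "circle" and draws in response; note that the exact-match comparison here deliberately has the capitalization weakness discussed in the next section:

```javascript
let myRec = new p5.SpeechRec('en-US', gotSpeech);
let drawCircle = false;

function setup() {
  createCanvas(400, 400);
  myRec.start();
}

function draw() {
  background(220);
  if (drawCircle) {
    ellipse(width / 2, height / 2, 100, 100);
  }
}

function gotSpeech() {
  // The browser may return "Circle", so this exact match can fail
  if (myRec.resultString === 'circle') {
    drawCircle = true;
  }
}
```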

JavaScript toLowerCase()

Sometimes p5.speech voice recognition interprets something you've said with a capitalized first letter. For instance, the speech recognition example above prints the word "circle" as "Circle". And since JavaScript is a case-sensitive language, this can cause some of your if statements to fail. Luckily you can use toLowerCase(), a JavaScript method that turns your string into all lowercase —

As you can imagine, if there's a toLowerCase() there might also be a toUpperCase(). This method isn't immediately useful for this project, just something to keep in mind if you ever need it in the future.
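A quick, runnable sketch of both methods on a string the recognizer might return:

```javascript
// The recognizer might hand back "Circle" instead of "circle"
let detected = 'Circle';

// Normalize before comparing, since 'Circle' === 'circle' is false
let lowered = detected.toLowerCase(); // 'circle'
console.log(lowered === 'circle');    // true

// The sibling method, for completeness
let shouted = detected.toUpperCase(); // 'CIRCLE'
```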

JavaScript find()

Next, let's move on to a more complex step and integrate a keyword-searching feature for your artificial companion. For instance, instead of only saying "circle", I could say "could you draw a circle for me please" and still have my companion recognize the keyword "circle" and understand that it should draw an ellipse on the screen. There are many different ways to do a keyword search in JavaScript; in this example we are going to focus on find(), a method that searches for a matching element inside an array.

However, the first step we need to take here is to split a string into an array of individual words. To do this we can use the JavaScript split() method —
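A sketch of split() on the example sentence:

```javascript
// A full sentence from the speech recognizer
let sentence = 'could you draw a circle for me please';

// split(' ') breaks the string apart at every space
let words = sentence.split(' ');

console.log(words);
// ['could', 'you', 'draw', 'a', 'circle', 'for', 'me', 'please']
console.log(words.length); // 8
```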

Our next step is to use find() to search for the keyword "circle" inside of the words array. Let's first create a local variable result, which stores the outcome of the search, followed by the idiosyncratic syntax of the find() method —

Source Code
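A hedged version of that search, with the callback written out long-hand:

```javascript
let sentence = 'could you draw a circle for me please';
let words = sentence.split(' ');

// find() runs the callback on each element until the callback
// returns true, then hands back that element
// (or undefined if nothing matches)
let result = words.find(function (word) {
  return word === 'circle';
});

console.log(result); // 'circle'

if (result === 'circle') {
  console.log('time to draw an ellipse!');
}
```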

What is happening here is that there is an argument function built inside of find(), and it's called on every single element of the array until it finds a matching keyword. You can think of this argument function as working like a boomerang — it gets thrown out toward the first element of the array, comes back when it finds nothing, and gets thrown out again toward the next element of the array. The search ends when it returns a value that matches the keyword. This means even if you've said "circle" twice inside a sentence, find() would only find the first one and the search would end.

Besides find(), there are other ways to search and sort words inside an array in JavaScript. Here is a summary borrowed from Mozilla's MDN web docs —

As you can see, the journey of JavaScript is fruitful and filled with adventures. Proceed safely and have fun!

switch Statement Pt.2

One of the common issues in bot creation is that you have to teach the bot to understand that there are many ways to say the same thing. For instance, to greet someone I could say "hi", "hello", "hey", or "howdy", and ideally the bot should understand those different words as having the same meaning and respond accordingly.

This is where the switch statement can come into play! When we omit "break" inside the switch statement, it basically works like a waterfall, so the detected string gets compared to multiple different cases until it finds a match —
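A sketch of that waterfall, wrapped in a function so it's easy to test (the reply strings are placeholders for your bot's own voice):

```javascript
function respond(input) {
  let reply;
  switch (input) {
    // No break after these cases, so they all fall
    // through to the same response
    case 'hi':
    case 'hello':
    case 'hey':
    case 'howdy':
      reply = 'Nice to see you!';
      break;
    case 'bye':
      reply = 'See you soon.';
      break;
    default:
      reply = "Sorry, I didn't catch that.";
  }
  return reply;
}

console.log(respond('howdy')); // 'Nice to see you!'
console.log(respond('bye'));   // 'See you soon.'
```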

Let's add one more feature here so that the bot can also tell the time ^_^

Source Code
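One possible way to do the time-telling piece, using JavaScript's built-in Date object (p5 also offers hour() and minute() helpers inside a sketch); the phrasing of the reply is my own:

```javascript
// Build a spoken-style time string from the system clock
function currentTimeString() {
  let now = new Date();
  let h = now.getHours();   // 0-23
  let m = now.getMinutes(); // 0-59
  return 'It is ' + h + ' ' + m + ' right now.';
}

// Pass this string to speak() so your companion can say it out loud
console.log(currentTimeString());
```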

Teachable Machine

For the Teachable Machine tutorials, please refer to this playlist.