ALT DOCS | DEPICTING A DATASET

This week I was instructed to craft a scene in Unity that evokes the New York Times' complete list, "The 282 People, Places and Things Donald Trump Has Insulted on Twitter."

Here’s a screenshot of my scene:

[screenshot of the scene]

To construct this scene, I purchased a $5 Trump.obj from TurboSquid and used Mixamo's animation library to make Trump cry. I then enclosed him in a circle of the people, places and things he's insulted on Twitter. The collage is composed of .png files applied to 2D sprites.
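For anyone curious how the ring fits together, here's a rough sketch of the kind of Unity script involved. The names, radius, and height are placeholders, not my exact scene code:

    using UnityEngine;

    // Sketch: spawn the collage sprites in an evenly spaced ring
    // around the scene's center (where the Trump model stands).
    public class InsultRing : MonoBehaviour
    {
        public Sprite[] insultSprites; // the .png images, imported as Sprites
        public float radius = 4f;      // how far the ring sits from center
        public float height = 1.5f;    // roughly eye level in VR

        void Start()
        {
            for (int i = 0; i < insultSprites.Length; i++)
            {
                // Evenly space each sprite around the circle.
                float angle = i * Mathf.PI * 2f / insultSprites.Length;
                Vector3 offset = new Vector3(Mathf.Cos(angle) * radius, height, Mathf.Sin(angle) * radius);

                GameObject go = new GameObject("Insult_" + i);
                go.transform.position = transform.position + offset;
                // Point the object's forward axis outward so the sprite's
                // visible face looks back at the viewer in the center.
                go.transform.rotation = Quaternion.LookRotation(new Vector3(offset.x, 0f, offset.z));

                go.AddComponent<SpriteRenderer>().sprite = insultSprites[i];
            }
        }
    }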

I thought it would be great to be able to look at a person, place or thing in the scene and hear Trump's tweets about it, recited in his voice: if you look at Ted Cruz or Whoopi Goldberg, your gaze triggers audio of Trump saying all of the insulting things he's tweeted about them. I don't think the same effect could be achieved with text; to me, audio in VR feels more impactful.
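In Unity terms, the interaction I'm imagining is just a raycast from the headset camera, something like this sketch (assuming each sprite gets a collider plus an AudioSource holding the relevant tweets read aloud):

    using UnityEngine;

    // Sketch of the gaze interaction: cast a ray from the center of
    // the VR camera's view each frame; if it lands on a collage sprite
    // that has an AudioSource attached, play it.
    public class GazeAudioTrigger : MonoBehaviour
    {
        public Camera vrCamera;        // the headset camera
        public float maxDistance = 20f;

        void Update()
        {
            Ray gaze = new Ray(vrCamera.transform.position, vrCamera.transform.forward);
            RaycastHit hit;
            if (Physics.Raycast(gaze, out hit, maxDistance))
            {
                // Each sprite needs a collider for the raycast to register.
                AudioSource insults = hit.collider.GetComponent<AudioSource>();
                if (insults != null && !insults.isPlaying)
                {
                    insults.Play();
                }
            }
        }
    }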

I first turned to https://clash.me/trump, a text-to-speech program that draws from a pool of Trump speeches on YouTube to build a corpus of words you can hear in Trump's voice. Unfortunately, it could only supply a handful of words; even "politician" was missing, which makes me suspect the program is simply out of date.

For today, the program simply opens with an audio mashup of insults Trump has lobbed verbally, in person. I don't feel this achieves my interactive vision for the project, and I'm considering working with AudioGrep to build the Trump speech corpus I would need.
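If I go that route: as I understand audiogrep's README, the workflow is to transcribe a folder of audio once (it uses CMU Pocketsphinx under the hood), then search the transcripts and stitch every match into a single clip. Roughly, with placeholder file names, and flags quoted from memory of the docs:

    # one-time: transcribe a folder of Trump speech audio
    audiogrep --input speeches/*.mp3 --transcribe

    # pull every utterance of a word into one clip
    audiogrep --input speeches/*.mp3 --search 'politician' --output-mode word --output politician.mp3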
