Each and every sound and note in this performance is inspired by your everyday objects, the things in your room. It's 2021, we're in a pandemic, and the Zoom fatigue is real. Conversations have become heavily reliant on one-to-one interactions and have lost all sense of their background. Machine learning and Snap have decided to stay at home, do something creative, and help make these Zoom calls more exciting.
Personally, my interest lies in consuming and creating music. I want to explore ways in which I can leverage my hobby of creating music as a starting point for bringing the background to the foreground.
The Big Idea - "Grow Up, Kid!"
Computer Vision has existed for a while now (1960-Present), and it's time it grew up and moved beyond giving us redundant information. Right now, it's capable of telling us, with enough confidence, that the table it is seeing is really a table. What it does not tell us is what other characteristics this table possesses. How does this table sound when a plastic chair collides with it? Is this table made of wood or something else? It needs to be more creative with its outcomes instead of telling us what we can already see. It's time the training wheels came off.
Everything around us is made of vibrating atoms, and sound itself propagates as a result of vibrations. How can we utilize the sounds created by objects in our surroundings, through actions such as object-human contact, object-object contact, and falling objects, to bring the environment into the foreground and create meaningful music?