Today, I met with my Processing professor, Michelle Harris. In addition to teaching Processing and other programming languages, Michelle is a performance artist who explores issues of gender, race and beauty in her performances through Processing, Leap Motion, Kinect and live video.
I sat down with Michelle to talk a bit about my thesis project, and what she thought of it. I was afraid that what I am presenting may not be “new” or “groundbreaking” enough. She put my fears to rest – even if it has been done before, I am bringing my own artistry to the table. (Whew!)
So I laid out a really rough diagram sketching the process and the interconnections between the various parts of the experience I want to put together.
I explained that I wanted the software to be the central brain – analyzing the music, working from a database of assets, and receiving environmental feedback from a VJ (via an iPad controller interface) and from kinesthetic inputs. “This is all very do-able,” Michelle assured me. She also explained that there is existing software out there that can handle some of the heavy lifting – database integration, music analysis, projection mapping, etc. Some of it is even free! So I need to investigate Isadora, vvvv, Max/MSP and Pure Data more. Michelle warned that Pure Data can be a bit difficult to deal with, so I will need to get my head around these applications.
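To make the “analyzing the music” part of that central brain a little more concrete, here is a minimal Processing sketch of the kind of thing I have in mind, using the Minim library’s FFT to turn an audio signal into data that could drive visuals. This is just a rough sketch of the idea, not my actual thesis code – the file name `track.mp3` is a placeholder for whatever asset ends up in the database.

```processing
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(640, 360);
  minim = new Minim(this);
  // "track.mp3" is a placeholder – any audio file in the sketch's data folder
  player = minim.loadFile("track.mp3", 1024);
  player.loop();
  // FFT sized to the player's buffer, so each frame of audio can be analyzed
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);
  // one bar per frequency band – the raw material visuals could react to
  stroke(255);
  for (int i = 0; i < fft.specSize(); i++) {
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}
```

The point isn’t the spectrum bars themselves – it’s that each frame of analysis produces numbers (per-band energy) that the rest of the system, the visual assets and the VJ inputs, could respond to.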
The important thing for me is to focus on the hard coding of the project – basically, getting it right in Processing and setting up visual assets that will work. So my first step is to get the code set up and moving. I will need to watch both of Josh Davis’ tutorials on using the HYPE framework again to get back into that…
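To jog my memory before rewatching those tutorials, the basic HYPE pattern looks something like the sketch below – initialize the stage, add a drawable, attach a behavior, and let `H.drawStage()` do the work each frame. The specific shape, colors, and oscillator settings here are just placeholder values, not anything from the tutorials themselves.

```processing
import hype.*;
import hype.extended.behavior.HOscillator;

void setup() {
  size(640, 360);
  // HYPE manages its own stage; autoClear repaints the background each frame
  H.init(this).background(#202020).autoClear(true);

  // a single drawable added to the stage (sizes/colors are placeholders)
  HRect rect = new HRect(100);
  rect
    .rounding(8)
    .fill(#FF3300)
    .anchorAt(H.CENTER)
    .loc(width / 2, height / 2);
  H.add(rect);

  // a behavior that animates the rect's rotation over time
  new HOscillator()
    .target(rect)
    .property(H.ROTATION)
    .range(-45, 45)
    .speed(1)
    .freq(2);
}

void draw() {
  // HYPE updates all behaviors and redraws the stage
  H.drawStage();
}
```

What I like about this setup is that the behaviors are decoupled from the drawables – which seems like a good fit for a system where music analysis or VJ input, rather than a fixed oscillator, ends up driving the animation.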