Monday, February 29, 2016

Syphoning Processing

As some of you may know, I recently discovered Processing.  My focus is opening up the Kinect to the Mac side of the house.  OpenNI is kinda hard to access, and the official Kinect SDK is Windows only.  There is a build of OpenNI being maintained at Structure, though at the time of this post, all of the versions are in beta.

Through developing that workflow, I discovered that you can build Syphon servers rather easily in Processing.  As some of you have discovered, Syphon servers are another way to bring content into MadMapper.  The Syphon library is easy to install and requires only a few lines of code to implement.  An example showing the lines you'll need to add is on GitHub.
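For reference, here's roughly what those lines look like: a minimal sketch using the codeanticode.syphon library (the server name is arbitrary; MadMapper will list whatever you pick):

import codeanticode.syphon.*;

SyphonServer server;

void settings() {
  // Syphon needs an OpenGL renderer (P2D or P3D)
  size(400, 400, P3D);
}

void setup() {
  // the name you pass here is what shows up as a Syphon source
  server = new SyphonServer(this, "Processing Syphon");
}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 50, 50);
  // publish a copy of the output window every frame
  server.sendScreen();
}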

I discovered last week that complex Processing sketches with multiple tabs of code can cause an otherwise working sketch to fail when the Syphon code is added.  After performing a number of face-palms and shouting a variety of expletives, I discovered that if all the code is in ONE tab, the Syphon server will work as expected.

From what I can tell, when using the sendScreen command, you're essentially sending a copy of the Processing output window to a Syphon server at the same time.  There might be some performance gains if that weren't the case, but then again, I was running this test on a 2014 MacBook Air... not exactly the normal use case for that particular machine.

There is another method that uses a "canvas" to feed the Syphon server, and it appears to work with most primitive graphical elements.  The sprites I made were not easy (for my Processing-newbie self) to send to a separate PGraphics object.  On the other hand, using this method you can have your sketch go only to the Syphon server, which may result in a higher FPS.
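If you want to try the canvas route, the library's sendImage method takes a PGraphics instead of the screen.  A rough sketch of that approach (same assumptions as above; your sprites may or may not cooperate):

import codeanticode.syphon.*;

PGraphics canvas;
SyphonServer server;

void settings() {
  size(400, 400, P3D);
}

void setup() {
  // a separate PGraphics object acts as the canvas
  canvas = createGraphics(400, 400, P3D);
  server = new SyphonServer(this, "Processing Syphon");
}

void draw() {
  canvas.beginDraw();
  canvas.background(0);
  canvas.ellipse(mouseX, mouseY, 50, 50);
  canvas.endDraw();
  // only the canvas goes out; nothing has to be drawn to the window
  server.sendImage(canvas);
}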

All that said, I'd suggest you build sketches however you need to, as far as organizing code goes.  When you're ready to add in the Syphon part, put together a separate version with all the code in one tab.

Looking forward to seeing the work this evening.

Sunday, February 28, 2016

Projection Mapping UTD ATEC Project 2 : Classroom Mapping
by:
Phillip Baker, Ayaz Ismail, Nathaniel Propp

Concept:

For this work we knew we wanted to focus on movement driven by sound. When considering designs for our geometric relief sculpture, the keyword we decided on was radiate. We created a central base and several corresponding smaller pieces to not only radiate off the body, but also direct the viewer's eye around the piece. In the previous assignment our design was driven by the shape of the human eye, and I believe that persisted through to this composition as well.

Once our structure was completed, we created several different presets to help us explore new tools and concepts.



Process:

The first step in this assignment was to create the relief sculpture. Phillip has extensive experience in building forms using Pepakura. We first started with some sketches and then created models in Maya and eventually Rhino. In 3D space we put our model through another iteration of design. We manipulated some points and faces to create a form that plays on the expectation of symmetry, but the asymmetrical realization really encourages the viewer's eye to move around the piece. 

Part of the form is the offshooting segments that radiate and imply rotational movement. To get these just right, we used the projector and Photoshop to trace the outline of the body and pinpoint the center pivot on the wall where the relief sculpture would hang. We then needed to trace only one radiating segment, duplicate it, and rotate the outline in discrete steps to evenly space out the consecutive segments. Using nails, we were then able to mount the relief sculpture easily and with great accuracy.
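For anyone curious, the same even-spacing idea can be expressed in code. A hypothetical Processing sketch of the concept (the segment shape and count here are placeholders, not our actual geometry):

int copies = 8;  // placeholder count, not our actual segment count

void setup() {
  size(400, 400);
  noLoop();
}

void draw() {
  background(0);
  translate(width/2, height/2);   // the center pivot
  for (int i = 0; i < copies; i++) {
    rotate(TWO_PI / copies);      // one discrete, even rotation step
    rect(60, -10, 80, 20);        // stand-in for one traced segment
  }
}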

The next part was conducting a spatial scan of the sculpture using MadMapper and a DSLR camera. This gave us an ideal shot of the piece from the projector's view, so we could easily compose content at home, away from the actual piece.

UV unwrapping the model was simple enough and allowed us to use the UVs as the mask. This enabled effective automatic masking that saved time and was very accurate.

At this point we were able to start gathering resources. Utilizing MadMapper's presets, we were all able to create several different compositions and easily switch between them.

After some content was generated, we imported some of it directly into MadMapper. Other content was combined and composed in Modul8, which gave us a lot of freedom with moving, looping, coloring, filtering, and manipulating our content to a whole new degree. This was then fed into MadMapper through Syphon.

With Ayaz's expertise we were able to incorporate live sound. Using a microphone, we analyzed any sounds or music and used them to drive some of our movement. Mapping speed and color to certain values such as bass and midtones really brought the piece to life.
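We did the analysis inside our VJ tools rather than in code, but the concept translates. A rough Processing sketch of the idea using the processing.sound library's FFT (the band ranges and mappings here are guesses, not our actual settings):

import processing.sound.*;

AudioIn in;
FFT fft;
int bands = 512;
float[] spectrum = new float[bands];

void setup() {
  size(400, 400);
  in = new AudioIn(this, 0);   // default microphone input
  in.start();
  fft = new FFT(this, bands);
  fft.input(in);
}

void draw() {
  fft.analyze(spectrum);
  // crude split: low bins as "bass," middle bins as "midtones"
  float bass = 0, mids = 0;
  for (int i = 0; i < 8; i++)  bass += spectrum[i];
  for (int i = 8; i < 64; i++) mids += spectrum[i];
  // drive color with the mids and scale (a stand-in for speed) with the bass
  background(0);
  fill(constrain(map(mids, 0, 1, 0, 255), 0, 255), 100, 255);
  float d = constrain(map(bass, 0, 1, 20, width), 20, width);
  ellipse(width/2, height/2, d, d);
}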

Finished Work




Additional Thoughts

We had a really difficult time processing live sound within Modul8, which was disappointing because so many of the elements could have been driven by the sound levels. In the future we hope to conquer this problem to create dynamic performances.

Monday, February 22, 2016

TouchOSC

Hope you all enjoyed this evening's quick run-through on controls.  I think you'll find that while there are innumerable things that can be controlled, many elements of your designs may not need them.  I think there's also a mode where you want granular control of every aspect of a piece of media while you're developing content and looks, and then there may need to be a whole other mapping geared toward playback/performance.

And as we saw during the demo, MadMapper has made adding controls pretty painless.  If anyone has a PS3/4 controller lying around, I'd love to see an effort made to hook up an analog joystick to the lighting controls for the 360 project in the atrium…

Here's the webpage, with links to the App Store and the editor download about halfway down the page.  There are decent instructions on how to sync a custom layout.  Essentially, once you have TouchOSC communicating with MadMapper, it's pretty easy to upload any layouts you have designed.

http://hexler.net/software/touchosc

Full disclosure: the TouchOSC layout builder app (a Java file) is pretty juvenile.  It lacks polish and is not particularly well optimized, in my opinion.  That said, it does function just fine, and once you get the hang of it, customizing layouts for an iPod or iPad is pretty elementary.  You will discover, I think, that when you're working out spacing and whatnot, you'll end up with a lot of dead references to controls that you didn't use.  I haven't noticed that it costs anything with regard to response or speed; I just think it would be nice if the program went back through and re-numbered the controls that "make the cut" to streamline the final product.

One other note:
Some of the drop-down menu type things in MadMapper are difficult for TouchOSC, or maybe MIDI in general, to deal with.  One case that comes to mind is the drop-down of line chase types.  There's not really an appropriate built-in control for that, but you could "hard-code" an array of buttons to recall each type of chase.  I've considered making a custom rotary knob as an option, but that seemed like more trouble than it was worth, when you consider that more often than not, once the line chase pattern is picked, it's unlikely to change.

Happy Coding!


Assignment 01: Classroom Mapping

Concept:

Our first assignment's concept evolved over three weeks. We chose to use a blank wall in our studio classroom, so we had to come up with ideas to make our project more engaging. Since our classroom felt more like a warehouse, we wanted to transform the space by creating a window and projecting an alternate world through it.

Next, we found a 1950s-style T.V. screen while searching for inspiration, so our minds jumped to the idea that shows are like another universe in themselves. The T.V./movie stars we watch on our screens more often than not are living double lives, and when these stars pass on, we still continue to connect with them in some way; thus, the fictitious worlds created by these actors and actresses are timeless.
When we had decided to go with this theme, we set out to select an actor or actress who represented this timeless quality. We scrapped the idea of the T.V. itself since it was too literal and came up with something more figurative: an hourglass shape.

Process:

We 3D-modeled an hourglass form in Maya. Then we exported it as an .obj and imported it into MadMapper. Next, we took a screenshot of the model and used that as a template to create our content in After Effects. While creating our content, we searched for gifs of our selected actor/actress from the 1900s. Then we masked the primitive shapes of the hourglass and positioned the gifs within the masks. The gifs were arranged from the early years to the later years. Finally, we hid the template image and rendered a video of the gifs with alpha for transparency.

In MadMapper, we imported the hourglass model and positioned the rendered gif compositions on top. To incorporate the universe theme, we layered multiple videos that represented a starry wormhole. We created a template for the lines and animated them to follow a path around the hourglass, outlining the shape of an infinity symbol. Since the top and bottom of the hourglass were flat, we found an image of the moon to place halfway behind the hourglass to produce the curved shape. To add extra detail, we added falling sand over the hourglass and reduced its opacity so it wasn't overbearing. Lastly, we found popular music-video footage that best represented the actors and actresses.


Final Result: