Updates 1/26/22-2/4/22
Projection
Should the projection start as one image, with one more added each subsequent minute until the end of the day, culminating in 1,440 images projected all at once? The images would not overlap; the grid grows as images are added. Or should the projection be a real-time video with a one-minute "trail window," showing the trail or path of the movement in a sliding one-minute window? I think I can accomplish either using TouchDesigner.
The video below shows a test of long-exposure images layered with a 20-second overlap. Each still is displayed for 30 seconds; at the 10-second mark, its opacity decreases from 100% to 0% over the next 20 seconds.
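The per-still opacity curve described above (30-second display, fade starting at the 10-second mark) can be written as a small function. This is just a sketch of the timing math, not tied to any particular playback tool:

```python
def layer_opacity(t, hold=10.0, fade=20.0):
    """Opacity (0.0-1.0) of a still, t seconds after it appears.

    Holds at full opacity for `hold` seconds, then fades linearly
    to zero over the next `fade` seconds (fully gone at t = 30s).
    """
    if t < hold:
        return 1.0
    if t < hold + fade:
        return 1.0 - (t - hold) / fade
    return 0.0
```

With a new still starting every 10 seconds, consecutive stills overlap for the full 20-second fade, matching the test in the video.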
Massive Bob
Modular design allowing either a speaker/LED combo or just the LED. This will make prototyping easier.
Installation: Before entering
Wall text: Take time to relish the present; do not move quickly through the space, as the longer you view, the more is revealed. Entrance to space is
XY Gantry
I looked into four different pre-made solutions for the gantry; they vary a great deal in cost, from $600 to $8k (haha). I also looked at repurposing an inexpensive laser-cutter system; however, these systems aren't much cheaper than a gantry system that has more flexibility. The laser system costs about $500, so for another $100 I would have something that is expandable and more versatile. The other upside to the gantry I'm looking at is the support community. I have already had some great guidance on how to make the system communicate with the sensors to create the movements. The XY gantry I think I have decided on is made by OPENBUILDS. I haven't decided 100% on this gantry, but I will make a decision by 2/7/22. Gantry link: https://openbuildspartstore.com/openbuilds-acro-55-20-x-20/
Call with mentor: Owen Foster
I spoke with friend/mentor/colleague/professor Owen Foster about the interactions and the overall idea behind the project. I explained and visualized a walk-through of the space and what the viewer would see, then asked him three questions:
- How would you want to interact with the space and/or the pendulum?
- What would you want to see as a projection? (three options)
- What would you want told to you before entering the space?
Feedback from Owen:
- Maybe use a heat sensor as one of the inputs; as more people enter the room, the temperature will rise, showing the effect that community has on a path or journey.
- Instead of velvet rope barriers could something on the floor be used? Suggested ash or sand piled in a circle.
- Have only ten artifacts, stacked with decreasing opacity, i.e.: the top image has 100% opacity and uses the "screen" layer style so that all black becomes transparent, the second image has 90% opacity, again with "screen," and so on until the last image in the stack has 10% opacity.
- The lowest image in the stack drops off when a new one is added. The attributes of each image are tied to its place in the stack, so as an image drops in the stack, it takes on that position's attributes.
- After I said that my thesis question is "Is your experience enough?", his question back to me was "Are you, you?"
- Possibly put quotes on the wall before the entrance to evoke a thought.
- Sound could be a metronome-type ticking to make viewers think about time.
- Further expanding the installation could include a "feel panel" for guests with low vision.
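Owen's ten-image stack can be sketched numerically. This is a rough sketch only, assuming each frame is a float array in [0, 1]: a "screen" blend keeps black transparent, and opacity falls 10% per step down the stack, with the oldest frame dropping off once the stack is full.

```python
import numpy as np

def screen(base, layer):
    """'Screen' blend: black pixels in `layer` leave `base` unchanged."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

def composite_stack(images, max_depth=10):
    """Composite the newest `max_depth` frames, newest on top.

    `images` is a list of float arrays in [0, 1], oldest first.
    The top frame is shown at 100% opacity, the next at 90%, and
    so on down to 10%; anything older drops off the stack.
    """
    stack = images[-max_depth:]            # keep only the newest ten
    out = np.zeros_like(stack[0])
    n = len(stack)
    for i, img in enumerate(stack):        # composite oldest first
        place = n - 1 - i                  # 0 = newest / top of stack
        opacity = 1.0 - place / max_depth  # 1.0, 0.9, ... 0.1
        out = screen(out, img * opacity)
    return out
```

Because each image's opacity comes from its position in the stack rather than from the image itself, every frame automatically "takes on the attributes" of its new place as it sinks, exactly as Owen described.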
After speaking with Owen, I thought a lot about the interactions, and about how they are tied to the output: do those interactions need to be explicit to the viewer in order for the pendulum to draw a connection to the artifact being projected in the next room? By changing the projection from every image captured to a stack of only ten, the evidence of your personal contribution to an image becomes much less important. I'm also looking into having the projection be a one-minute sliding window of the movement, so that viewers in the projection room see what is happening in real time.
Installation Space
I have requested space in the Media Commons room on the second floor of 370 Jay St; I'm still waiting on confirmation.
Code: how to use the data
I've been looking into how to take the raw data from the sensors and translate it into movement. I found some good information on the Grbl GitHub wiki; there are a few tricky things to work out, but I am scheduling office hours with Tom and Danny Rozin this week to tackle those issues. Link to the GitHub wiki: https://github.com/gnea/grbl/wiki/Grbl-v1.1-Interface
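One first sketch of the sensor-to-movement step is to map each sensor reading to a Grbl v1.1 jog command (`$J=`, documented on the wiki linked above) and stream it over serial. The function below only formats the command line; the pyserial usage and the port name in the comment are assumptions for illustration, not a tested setup:

```python
def jog_command(dx, dy, feed=1000):
    """Format a Grbl v1.1 relative jog line for one sensor reading.

    G91 = relative moves, G21 = millimetres; dx/dy are offsets
    derived from the sensor data, feed is the speed in mm/min.
    """
    return f"$J=G91 G21 X{dx:.2f} Y{dy:.2f} F{feed}\n"

# Hypothetical usage with pyserial (the port name is machine-specific):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
#     ser.write(jog_command(5.0, -3.2).encode("ascii"))
#     print(ser.readline())  # Grbl replies "ok" or "error:<n>"
```

Jog commands are appealing here because Grbl can cancel them mid-move, which should suit continuous sensor-driven motion better than queued G-code; that is one of the things I want to confirm with Tom and Danny.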
Past ITP Thesis Projects
https://itp.nyu.edu/thesis2019/#LucasKuenKule’aChung