mind and medium
Sunday, May 1, 2011
complete project
After spending far too long climbing around on ladders in the atrium, my installation is finally up and running. I fried my proximity sensor right before the open house on Friday, so the flytraps were opening and closing and changing colors on timers rather than through human interaction. I kept the non-working sensor out for the show, and I think a few people were able to convince themselves that they were somehow controlling the flytraps. Perhaps for them, this was more interesting than if the flytraps had actually been responding to their movement. I have since swapped out the fried long-range sensor for a short-range, but very sensitive, infrared proximity sensor. In the end, I was only able to have a few of the flytraps open and close with my servo motor; controlling all 10 would have required a few more servos, and it didn't make sense to purchase more motors for a temporary installation. If I were to do the installation again, I would definitely find a way to use less speaker wire and embed the sensors in the existing trace-paper installation. I also wish I hadn't fried my sound sensor... I need to be more attentive when wiring positive and ground. Overall, I am very happy with the outcome, and I am excited to give Arduino another go somewhere down the road.
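For anyone curious about the nuts and bolts of the sensor swap, here is a minimal Arduino-style sketch of the idea. The actual installation is driven through Firefly and Grasshopper, so this is only an illustration, and the pin numbers and threshold are assumptions rather than my real values:

#include <Servo.h>

const int irPin = A0;        // analog output of the short-range IR proximity sensor
const int servoPin = 9;      // servo signal wire
const int threshold = 400;   // assumed analog reading that counts as "someone is close"

Servo trap;

void setup() {
  trap.attach(servoPin);
  trap.write(10);            // start with the trap closed
}

void loop() {
  int reading = analogRead(irPin);   // for many IR sensors, higher readings mean a closer object
  if (reading > threshold) {
    trap.write(120);                 // open the trap when someone is nearby
  } else {
    trap.write(10);                  // relax back to the closed position
  }
  delay(50);
}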
Wednesday, April 27, 2011
changing the paper
The Personification of an Impartial Digital Agenda
Abstract:
Although energy efficiency is the economic and ethical justification for the advancement of digital interaction in architecture, I am interested in giving a digital product an identifiable personality.
If I can master the means and methods needed to create and program a digital work with personality, I will certainly have developed the technical skills to create something environmentally reactive as well.
In fabricating and programming a set of unique “digital” plants that react to the environment and interact with human passersby, I will become more aware of the embedded digital possibilities and environmental strategies of a project proposal.
Introduction
Architecture is a decidedly reactive entity; it exists only out of necessity for human shelter. Architecture, however, does not need to be limited to the reactive realm. When augmented with digitally controlled systems, the built environment can become interactive, personifying space.
Reactive digital systems, centered upon automating modern conveniences such as temperature control systems, fire and crime detection, and automatic teller machines, have been around for some time. Their usefulness and functionality are not debatable; these systems have clearly found their niche in contemporary culture. According to Usman Haque, these single-loop systems “provide us with a situation where a person is at the mercy of the machine and its inherent logical constructs. [We may get unexpected results (for example the machine tells us that it is out of cash), but the fact that the machine itself was selecting from a predetermined set of responses precludes any constructive interaction].” These single-loop, narrow-“minded” machines have no personality; their existence is justified strictly by their ability to serve a purpose.
Interactive architecture has come a very long way since it largely began in the 1960s with the cybernetician Gordon Pask and the architect Cedric Price. While the technology available today was not at the disposal of these pioneers, their foresight is still very much relevant. Gordon Pask spoke of how “rather than an environment that strictly interprets people’s desires, an environment should allow users to take a bottom-up role in configuring their environment in a malleable way without specific goals.” Cedric Price, whose work is credited as an inspiration for the reconfigurable Pompidou Center, was a strong advocate for what he called “anticipatory architecture.”
As technology continued to develop over the years, a drastic acceleration in the application of technological advancements occurred in the nineties. With the creation of such projects as Michael Mozer’s adaptive house, MIT’s Intelligent Room, and Bill Gates’s house, programming architecture to respond to the needs of its human inhabitants became a reality.
On any given day we come into contact with a plethora of digital devices designed to assist us as we go about our daily tasks. Smartphones, televisions, ATMs, home security systems, GPS units, laptops, iPads, and even our cars have become staples in our daily routines, but it can be argued that few, if any, of these are truly interactive. Usman Haque defines many of these systems as single-loop systems, where a device reacts only to human input. For example, Haque describes an ATM as a single-loop system because it only reacts to a human’s request to output money. A banking experience does not become an interactive, multi-loop system until the user comes into contact with a bank employee and perhaps strikes up a conversation about the weather or something else unrelated to the banking task at hand. Essentially, Haque is explaining that automating a task does not necessarily make it interactive. Making something react to a stimulus, interpret that input, perform a task, and then reinterpret the consequences of that task is what makes it interactive.
Today, digitally interactive architecture seems no more a reality than it was when it was originally conceived in the 1960s. We still don’t talk to our homes, nor do they speak to us. Contemporary examples of interactive architecture exist mostly in installations and expositions, and have not yet become mainstream applications in the built environment.
Technology has recently shown great promise in user tangibility and interactivity, especially with the advent of tablets and open-source applications. Programming is becoming much simpler, and the learning curve of new software is much less steep. Computer-controlled circuit boards such as the Arduino can be readily found online and easily programmed to automate any number of tasks; perhaps an open-source home automation library is not so far off in the future?
While sustainability and improved energy efficiency are certainly the future for digitally interactive and reactive architecture, I am interested in creating something digitally interactive through a programmed ‘personality’.
What if a digital product were freed of catering to human needs in a single-loop, reactive sort of way? What if the digital product were allowed to develop its own personal agenda and, subsequently, its own personality?
Project Description and Expected Outcomes:
I believe that sustainability is the future for interactive architecture. As we discussed in class, many future applications of digital technology will likely go towards improving the efficiency of existing appliances and devices. While I will not contest that this is a necessary and admirable goal, I am more interested in the kinesthetic opportunities interactive architecture presents to improve the qualities of space. I think this is where we as architects can excel, in an area that otherwise lends itself to being dominated by the technical know-how of engineers. As architects, we are supposed to be sensitive to our environments, so it should not be hard for us to find a niche in providing sensual, personalized interactive spaces that appeal to their inhabitants’ emotions while also being sustainable.
In the spirit of creating a personified, digitally interactive work that enhances the qualities of its space, I will attempt to create a project whose success is not evaluated purely by its technical performance. Instead, I conservatively plan to create an interactive project that offers no functional assistance to humans and simply enlivens the character of its environment. In doing so, I will still gain the basic technical skills to code, wire, and install an environmentally sustainable interactive device. For now, familiarizing myself with the required technology is accomplishment enough, and I just want to keep it simple and fun.
That being said, I have decided that through scripting and digital fabrication, I will try to capture and personify the reactive and interactive movements of a plant: the Venus flytrap. The Venus flytrap is not only reactive to the environment, with its sensitivity to sunlight and soil moisture content, but also highly kinetically interactive with potential prey. I wish to fabricate a number of these digital Venus flytraps and program each one with a different “personality.” Some will be shy, moving timidly, startling, and closing at the first detection of movement. Others will be bolder, requiring a high degree of detected movement before they close. All of them, however, will cycle through the following levels of agitation if enough stimulus is introduced (a rough sketch of this logic follows the list below):
Natural State: zero human stimulus
The Venus flytraps are calm and emit cool colors, pulsing between greens and blues. The agenda of the flytrap is to optimize the amount of light striking its petals, opening and stretching them towards the sunlight.
Slightly Agitated State: some human stimulus
Red is introduced into the green and blue pulsing colors, and some of the flytraps open to assume a predatory stance.
Predatory State: high levels of human stimulus
Red is now dominant and green is completely absent; blue pulses occur at an increased rate. All flytraps are now fully open, in a predatory stance.
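To give a sense of what this behavior might look like in code, here is a rough Arduino-style sketch of the three-state logic. The real color blending will be handled in Firefly and Grasshopper; the 0-100 agitation value, pin numbers, and thresholds below are all assumptions, purely for illustration:

const int redPin = 3, greenPin = 5, bluePin = 6;   // assumed PWM pins for a common-cathode RGB LED

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

// agitation: 0 = no human stimulus, 100 = constant stimulus
void showState(int agitation) {
  float pulse = (sin(millis() / 500.0) + 1.0) / 2.0;      // slow 0..1 pulse
  if (agitation < 20) {
    // Natural state: cool colors only, pulsing between green and blue
    analogWrite(redPin, 0);
    analogWrite(greenPin, int(255 * pulse));
    analogWrite(bluePin, int(255 * (1.0 - pulse)));
  } else if (agitation < 70) {
    // Slightly agitated: red creeps into the green/blue pulsing
    analogWrite(redPin, map(agitation, 20, 70, 30, 180));
    analogWrite(greenPin, int(150 * pulse));
    analogWrite(bluePin, int(150 * (1.0 - pulse)));
  } else {
    // Predatory: red dominant, green absent, faster blue pulses
    float fastPulse = (sin(millis() / 150.0) + 1.0) / 2.0;
    analogWrite(redPin, 255);
    analogWrite(greenPin, 0);
    analogWrite(bluePin, int(200 * fastPulse));
  }
}

void loop() {
  int agitation = 0;   // in the installation this would come from the proximity sensor
  showState(agitation);
  delay(20);
}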
I will be using Firefly, a plug-in for Grasshopper, to code the installation’s ‘personalities’; an Arduino board to wire its components; an ultrasonic transducer to record human proximity; a photoresistor; RGB LEDs; and servo motors. I have decided to hang the flytraps in Alumni Hall’s atrium, above the second story’s bridge that leads toward the vending machines. There is already an existing installation here, and I have decided that it would be best to wire the flytraps through it and attach them to its tensile structure.
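Reading the proximity sensor is the piece that would feed the agitation value sketched above. The following is a minimal sketch assuming a common trigger/echo ultrasonic module on assumed pins; the exact transducer I end up using may work differently:

const int trigPin = 7;   // assumed trigger pin
const int echoPin = 8;   // assumed echo pin

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  Serial.begin(9600);
}

long readDistanceCm() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);              // a 10 microsecond pulse triggers a measurement
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);   // echo travel time in microseconds
  return duration / 58;                     // rough conversion to centimeters
}

void loop() {
  Serial.println(readDistanceCm());         // closer readings would mean more agitation
  delay(100);
}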
Results (in progress)
My current approach towards the creation of a multi-loop system is flawed. I initially felt that there was a linear relationship between how many stimuli a project can react to and its level of interactivity. I now believe that if there are only one or two means of expressing a reaction, it does not matter how many inputs there are; the project will simply never be interactive. In my case, I felt that if sound, proximity, motion, and sunlight controlled the kinetics and emitted light of my flytraps, my project would approach the definition of interactivity. However, I never did order the sound sensor, because I began to realize that no matter how many different inputs control the color of light emitted from the LEDs, it will always be predictable. People may not understand exactly how all of the inputs affect the flytraps, but they assume that if they do something (wave, walk away, clap, stomp, jump, etc.), they will eventually find a way to change the LED or open a flytrap.
Academically, perhaps I have failed to create something interactive, but learning the technicalities of how to create something that reacts to its environment has been a complete success. I can see myself using this system again later in my professional career, or in the meantime just as a fun side project.
Works Cited (in progress)
Design Museum. "Cedric Price." http://designmuseum.org/design/cedric-price
Haque, Usman. "Architecture, Interaction, Systems." www.haque.co.uk, 2006.
Haque, Usman. "The Architectural Relevance of Gordon Pask." 4d Social: Interactive Design Environments. Wiley & Sons, 2007.
Kulkarni, Ajay. "Design Principles of a Reactive Behavioral System for the Intelligent Room." Artificial Intelligence, 2002.
Fox, Michael, and Miles Kemp. Interactive Architecture. New York: Princeton Architectural Press, 2009.
Monday, April 25, 2011
4.25 progress update
The revised file for the art building laser cutter allows for less wasted material, a smaller hanging apparatus, and more flytraps. The tabs on the fingers on the left were also made larger to interlock with the body pieces on the top right.
This is the original file I tested at the engineering building; the proportions of the hangers (on the top right) to the assembled whole were not pleasing, and the pieces were so large I could only fit 6 on my material. I was able to fit 12 on the revised file. The tabs on the fingers on the left worked surprisingly well, and I decided to try my luck and make them even larger on the next set.
I have finished writing my script that opens and closes the flytraps and changes their coloring when people walk by. Before the open house on Friday, I hope to have all 10 of these installed on the catwalk in the atrium, and I look forward to watching people walk by and experience them. As for my paper, I have decided to expand upon the theatrics of interactive architecture and the implications of personified or 'living' interactive architecture.
Wednesday, April 20, 2011
4.20 progress update
Here is a working prototype of the flytrap attached to the Arduino... A few kinks need to be worked out in programming its movement, but I now have the files ready to send to the laser cutter on Friday.
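One kink with this kind of setup, for instance, is getting the petals to sweep gradually instead of snapping open. Below is a small Arduino-style sketch of an eased sweep, assuming a servo on pin 9 and assumed open/closed angles; the final behavior will be programmed through Firefly, so this is only an illustration:

#include <Servo.h>

Servo trap;
const int closedAngle = 10;    // assumed closed position
const int openAngle = 120;     // assumed fully open position

void setup() {
  trap.attach(9);              // assumed servo signal pin
  trap.write(closedAngle);
}

// move one degree at a time so the petals sweep instead of snapping
void sweepTo(int target) {
  int current = trap.read();   // last angle written to the servo
  int step = (target > current) ? 1 : -1;
  for (int a = current; a != target; a += step) {
    trap.write(a);
    delay(15);                 // a smaller delay gives faster, jerkier motion
  }
  trap.write(target);
}

void loop() {
  sweepTo(openAngle);
  delay(1000);
  sweepTo(closedAngle);
  delay(1000);
}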