
Sunday, May 1, 2011

complete project




After spending far too long climbing around on ladders in the atrium, my installation is finally up and running.  I fried my proximity sensor right before the open house on Friday, so the flytraps were opening and closing and changing colors on timers rather than in response to human interaction.  I kept the non-working sensor out for the show, and I think a few people were able to convince themselves that they were somehow controlling the flytraps.  Perhaps for them, this was more interesting than if the flytraps had actually been responding to their movement.  I have since swapped out the fried long-range sensor for a short-range, but very sensitive, infrared proximity sensor.  In the end, I was only able to have a few of the flytraps open and close with my servo motor.  To control all 10, I would have needed to buy a few more servos, and it just didn't make sense to purchase more motors for a temporary installation.  If I were to do the installation again, I would definitely find a way to use less speaker wire and embed the sensors in the existing trace paper installation; I also wish I hadn't fried my sound sensor... I need to be more attentive with my wiring of positive and ground.  Overall, I am very happy with the outcome, and I am excited to give Arduino another go somewhere down the road.
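
For the record, the behavior of a single flytrap boils down to something like the sketch below.  This is only a rough standalone-Arduino equivalent of what my Firefly/Grasshopper definition actually does; the pin numbers, trigger threshold, and servo angles are placeholder assumptions, not the values from the installation.

```cpp
#include <Servo.h>

// Rough equivalent of one flytrap's behavior as a standalone Arduino sketch.
// Assumed values: analog IR sensor on A0, servo on pin 9, threshold of 400.
const int SENSOR_PIN = A0;   // short-range analog IR proximity sensor (signal pin)
const int SERVO_PIN  = 9;    // servo driving the flytrap petals
const int THRESHOLD  = 400;  // raw reading that counts as "someone is close" (assumed)
const int OPEN_ANGLE   = 150;
const int CLOSED_ANGLE = 30;

Servo trapServo;

void setup() {
  trapServo.attach(SERVO_PIN);
  trapServo.write(OPEN_ANGLE);   // start in the open, "waiting" position
}

void loop() {
  // For this type of sensor, a closer object produces a higher analog reading.
  int proximity = analogRead(SENSOR_PIN);

  if (proximity > THRESHOLD) {
    trapServo.write(CLOSED_ANGLE);  // snap shut when someone gets close
  } else {
    trapServo.write(OPEN_ANGLE);    // relax back open once they move away
  }
  delay(50);  // small pause to keep the servo from chattering
}
```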

Wednesday, April 27, 2011

changing the paper



The Personification of an Impartial Digital Agenda

Abstract:

Although energy efficiency is the economic and ethical justification for the advancement of digital interaction in architecture, I am interested in giving a digital product an identifiable personality.

If I can master the means and methods needed to create and program a digital work with personality, I certainly will have developed the technical skills to create something environmentally reactive as well.

In fabricating and programming a set of unique “digital” plants that react to the environment and interact with human passersby, I will become more aware of the embedded digital possibilities and environmental strategies of a project proposal.


Introduction

Architecture is a decidedly reactive entity; it exists only out of necessity for human shelter.  Architecture, however, does not need to be limited to the reactive realm. When augmented with digitally controlled systems, the built environment can become interactive, personifying space.

Reactive digital systems, centered on automating modern conveniences such as temperature control systems, fire and crime detection, and automatic teller machines, have been around for some time.  Their usefulness and functionality are not debatable; obviously these systems have found their niche in contemporary culture.  According to Usman Haque, these single-loop systems “provide us with a situation where a person is at the mercy of the machine and its inherent logical constructs. [We may get unexpected results (for example the machine tells us that it is out of cash), but the fact that the machine itself was selecting from a predetermined set of responses precludes any constructive interaction].”  These single-loop, narrow-“minded” machines have no personality; their existence is justified strictly by their ability to serve a purpose.

Largely beginning in the 1960s with the cyberneticians Gordon Pask and Cedric Price, interactive architecture has come a very long way.  While the technology available today was not at the disposal of these pioneers, the foresight of Pask and Price is still very much relevant.  Gordon Pask spoke of how “rather than an environment that strictly interprets people’s desires, an environment should allow users to take a bottom-up role in configuring their environment in a malleable way without specific goals.”  Cedric Price, often credited as an inspiration for the reconfigurable Pompidou Center, was very much an advocate for the notion of his so-called “anticipatory architecture.”

As technology continued to develop over the years, a drastic acceleration in the application of technological advancements occurred in the nineties.  With the creation of such projects as Michael Mozer’s Adaptive House, MIT’s Intelligent Room, and Bill Gates’s house, programming architecture to respond to the needs of its human inhabitants became reality.

On any given day we come into contact with a plethora of digital devices designed to assist us as we go about our daily tasks.  Smartphones, televisions, ATMs, home security systems, GPS units, laptops, iPads, and even our cars have become staples in our daily routines, but it can be argued that few, if any, of these are truly interactive.  Usman Haque defines many of these systems as single-loop systems, where a device reacts only to human input.  For example, Haque describes an ATM as a single-loop system since it only reacts to a human’s request to output money.  A banking experience isn’t an interactive, multi-loop system for a user until they come into contact with a bank employee and perhaps strike up a conversation about the weather or something unrelated to the banking task at hand.  Essentially, Haque is explaining that the act of automating a task does not necessarily make it interactive.  Making something react to stimulus, interpret that input, perform a task, and then reinterpret the consequences of that task does make something interactive.

Today, digitally interactive architecture seems no more a reality than it was when it was originally conceived in the 60s.  We still don’t talk to our homes, nor do they speak to us.  Contemporary paradigms of interactive architecture seem to exist best in installations and expositions, and have not yet become mainstream applications in the built environment.

Recent technology has shown great promise in user tangibility and interactivity, especially with the advent of tablets and open-source applications.  Programming is becoming much simpler, and the learning curve of new software is much less steep.  Computer-controlled circuit boards such as Arduino can be readily found online and easily programmed to automate any number of tasks; perhaps an open-source library for home automation is not so far off in the future.

While sustainability and improved energy efficiency are certainly the future for digitally interactive and reactive architecture, I am interested in creating something digitally interactive through a programmed ‘personality’.

What if a digital product were freed of catering to human needs in a single-loop, reactive sort of way? What if the digital product were allowed to develop its own personal agenda, and subsequently, its own personality?


Project Description and Expected Outcomes:

I believe that sustainability is the future for interactive architecture.  As we discussed in class, many future applications for digital technology will likely go towards improving the efficiency of existing appliances and devices.  While I will not contest that this is a necessary and admirable goal, I am more interested in the kinesthetic opportunities interactive architecture presents to improve the qualities of space.  I think this is where we as architects can excel, in an area that otherwise lends itself to being dominated by the technical know-how of engineers.  As architects, we are supposed to be sensitive to our environments, so it should not be hard for us to find a niche in providing sensual, personalized interactive spaces that appeal to their inhabitants’ emotions while also being sustainable.

In the spirit of creating a personified, digitally interactive piece that enhances the qualities of its space, I will attempt to create a project whose success is not evaluated purely by its technical performance.  Instead, I conservatively plan to create an interactive project that is free of functional assistance to humans and simply livens the character of its environment. In doing so, I will still gain the basic technical skills to code, wire, and install an environmentally sustainable interactive device.  For now, familiarizing myself with the required technology is accomplishment enough, and I just want to keep it simple and fun.

That being said, I have decided that through scripting and digital fabrication, I will try to capture and personify the reactive and interactive movements of a plant: the Venus flytrap.  The Venus flytrap is not only reactive to the environment, with its sensitivity to sunlight and soil moisture content, but also highly kinetically interactive with potential prey.  I wish to fabricate a number of these digital Venus flytraps and program each one with a different “personality.”  Some will be shy, moving timidly and startling shut at the first detection of movement.  Others will be bolder, requiring a high degree of detected movement before closing.  However, all will cycle through the following levels of agitation if enough stimulus is introduced:

Natural State: zero human stimulus

The Venus flytraps are calm and emit cool colors, pulsing between greens and blues.  The agenda of each flytrap is to optimize the amount of light striking its petals, opening and stretching them towards the sunlight.

Slightly agitated state: some human stimulus

Reds are introduced into the pulsing greens and blues, and some of the flytraps open to assume a predatory stance.

Predatory state: high levels of human stimulus

Red colors are now dominant, and green is completely absent; blue pulses occur at an increased rate.  All flytraps are now fully open, in a predatory stance.

I will be using Firefly, a plug-in for Grasshopper, to code the installation’s ‘personalities’; an Arduino board to wire its components; an ultrasonic transducer to record human proximity to the sensor; a photoresistor; RGB LEDs; and servo motors.  I have decided to hang the flytraps in Alumni Hall’s atrium, above the second story’s bridge that leads toward the vending machines.  There is already an existing installation here, and I have decided that it would be best to wire the flytraps through that installation and attach them to its tensile structure.
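
To make the three states above concrete, here is a minimal sketch of the same logic written as a plain Arduino program rather than a Firefly definition.  The pins, distance thresholds, and pulse timing below are assumptions for illustration only.

```cpp
#include <Servo.h>

// Assumed pins: ultrasonic trigger/echo on 7/8, RGB LED on PWM pins 3/5/6, servo on 9.
const int TRIG_PIN = 7, ECHO_PIN = 8;
const int RED_PIN = 3, GREEN_PIN = 5, BLUE_PIN = 6;
const int SERVO_PIN = 9;

Servo trapServo;

long readDistanceCm() {
  // Standard trigger/echo measurement for a two-pin ultrasonic sensor.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // microseconds; 0 means no echo
  return duration / 58;                            // rough conversion to centimeters
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  trapServo.attach(SERVO_PIN);
}

void loop() {
  long d = readDistanceCm();
  float pulse = (sin(millis() / 500.0) + 1.0) / 2.0;   // slow 0..1 pulse shared by the calm states

  if (d == 0 || d > 200) {
    // Natural state: cool greens and blues; in the full version the petals
    // would also track the photoresistor, shown here as a relaxed angle.
    analogWrite(RED_PIN,   0);
    analogWrite(GREEN_PIN, 150 + 100 * pulse);
    analogWrite(BLUE_PIN,  150 + 100 * (1.0 - pulse));
    trapServo.write(60);
  } else if (d > 80) {
    // Slightly agitated: reds bleed into the pulsing greens and blues, trap half open.
    analogWrite(RED_PIN,   120 * pulse);
    analogWrite(GREEN_PIN, 100);
    analogWrite(BLUE_PIN,  150 * (1.0 - pulse));
    trapServo.write(100);
  } else {
    // Predatory: red dominant, green absent, faster blue pulse, trap fully open.
    float fastPulse = (sin(millis() / 150.0) + 1.0) / 2.0;
    analogWrite(RED_PIN,   255);
    analogWrite(GREEN_PIN, 0);
    analogWrite(BLUE_PIN,  120 * fastPulse);
    trapServo.write(160);
  }
  delay(50);
}
```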


Results (in progress)

My current approach towards the creation of a multi-loop system is flawed.  I initially felt that there was a linear relationship between how many stimuli a project can react to and its level of interactivity.  I now believe that if there are only one or two means to express a reaction, it does not matter how many inputs there are; the project will simply never be interactive.  In my case, I felt that if sound, proximity, motion, and sunlight controlled the kinetics and emitted light of my flytraps, my project would ideally approach the definition of interactivity.  However, I never did order the sound sensor, because I began to realize that no matter how many different inputs control the color of emitted light from the LED, it will always be predictable. People may not understand exactly how all of the inputs affect the flytraps, but they assume that if they do something (wave, walk away, clap, stomp, jump, etc.), they will eventually be able to find a way to change the LED or open a flytrap.

Academically, perhaps I have failed to create something interactive, but learning the technicalities of how to create something that reacts to its environment has been a complete success.  I can see myself using this system again professionally later in my career, or in the meantime just as a fun side project.





Works Cited (in progress)

Design Museum. "Cedric Price." http://designmuseum.org/design/cedric-price

Haque, Usman. "Architecture, Interaction, Systems." www.haque.co.uk, 2006.

Haque, Usman. "The Architectural Relevance of Gordon Pask." 4dsocial: Interactive Design Environments. Wiley & Sons, 2007.

Kulkarni, Ajay. "Design Principles of a Reactive Behavioral System for the Intelligent Room." Artificial Intelligence, 2002.

Fox, Michael, and Miles Kemp. Interactive Architecture. New York: Princeton Architectural Press.

Monday, April 25, 2011

4.25 progress update






The revised file for the art building laser cutter allows for less wasted material, smaller hanging apparatus and more flytraps.  The tabs on the fingers on the left were also made larger to interlock with the body pieces on the top right.





This is the original file I tested at the engineering building; the proportions of the hangers (on the top right) to the assembled whole were not pleasing, and the pieces were so large I could only fit 6 on my material.  I was able to fit 12 on the revised file.  The tabs on the fingers on the left worked surprisingly well, and I decided to try my luck and make them even larger on the next set.



I have finished writing the script that opens and closes the flytraps and changes their coloring when people walk by.  Before the open house on Friday, I hope to have all 10 of these installed on the catwalk in the atrium, and I look forward to seeing people walk by and experience them.  As for my paper, I have decided to expand upon the theatrics of interactive architecture and the implications of personified or 'living' interactive architecture.

Wednesday, April 20, 2011

4.20 progress update



Here is a working prototype of the flytrap attached to the Arduino.... A few kinks need to be worked out in programming its movement, but I now have the files ready to send to the laser cutter on Friday.

Monday, April 18, 2011

4.18 progress update

In playing with my new IR proximity sensor, I found that I will only be able to track motion in a straight line from the sensor, not in a larger area as I had originally anticipated.  Therefore, I have decided to hang the digital Venus flytraps from a stairwell in Alumni Hall; this will ensure that people walk at the sensor in a straight line.  I was able to write a script that uses proximity data from the IR sensor to control the red, green, and blue values of the RGB LED.  This allows me to gesturally control color with my hands, just like the color sliders in Photoshop. I ordered 9 more RGB LEDs; I plan to have 9 total flytraps. I finally found a way onto the laser cutter schedule, so I will need to finalize my files this week.
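
The proximity-to-color mapping is roughly equivalent to the short sketch below, written as a standalone Arduino program instead of the Firefly definition I am actually using; the pins and the sensor's usable range are assumed values.

```cpp
// Assumed: IR sensor on A0, common-cathode RGB LED on PWM pins 3/5/6.
const int SENSOR_PIN = A0;
const int RED_PIN = 3, GREEN_PIN = 5, BLUE_PIN = 6;

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);                          // 0..1023
  int level = constrain(map(raw, 100, 600, 0, 255), 0, 255); // assumed usable range

  // One "slider" drives all three channels in different directions:
  // near reads warm/red, far reads cool/blue, green peaks mid-range.
  analogWrite(RED_PIN,   level);
  analogWrite(GREEN_PIN, constrain(255 - abs(level - 128) * 2, 0, 255));
  analogWrite(BLUE_PIN,  255 - level);
  delay(30);
}
```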

Tuesday, April 12, 2011

Midterm Update

Although some of this is posted elsewhere on my blog, I will be redundant here for the sake of organization.

Independent Project:
The Personification of an Impartial Digital Agenda

ABSTRACT

Although energy efficiency is economically and ethically the justification for the advancement of digital (inte)reaction in architecture, I am interested in applying  an identifiable personality to a digital product.

If I can master the means and methods needed to create and program a digital work with personality, I certainly will have developed the technical skills to create something environmentally reactive as well.

In fabricating and programming a set of unique “digital” plants that react with the environment and interact with human passersby, I will become more aware of the embedded digital possibilities and environmental strategies of a project proposal.  


INTRODUCTION

In class we debated the difference between interaction and reaction.  Architecture is a decidedly reactive entity; it exists only out of necessity for human shelter.  Architecture, however, does not need to be limited to the reactive realm. When augmented with digitally controlled systems, the built environment can become interactive, personifying space.

Reactive digital systems, centered on automating modern conveniences such as temperature control systems, fire and crime detection, and automatic teller machines, have been around for some time.  Their usefulness and functionality are not debatable; obviously these systems have found their niche in contemporary culture.  According to Usman Haque, these single-loop systems “provide us with a situation where a person is at the mercy of the machine and its inherent logical constructs. [We may get unexpected results (for example the machine tells us that it is out of cash), but the fact that the machine itself was selecting from a predetermined set of responses precludes any constructive interaction].”  These single-loop, narrow-“minded” machines have no personality; their existence is justified strictly by their ability to serve a purpose.

Largely beginning in the 1960s with the cyberneticians Gordon Pask and Cedric Price, interactive architecture has come a very long way.  While the technology available today was not at the disposal of these pioneers, the foresight of Pask and Price is still very much relevant.  Gordon Pask spoke of how “rather than an environment that strictly interprets people’s desires, an environment should allow users to take a bottom-up role in configuring their environment in a malleable way without specific goals.”  Cedric Price, often credited as an inspiration for the reconfigurable Pompidou Center, was very much an advocate for the notion of his so-called “anticipatory architecture.”
Price's Fun Palace. 1961

As technology continued to develop over the years, a drastic acceleration in the application of technological advancements occurred in the nineties.  With the creation of such projects as Michael Mozer’s Adaptive House, MIT’s Intelligent Room, and Bill Gates’s house, programming architecture to respond to the needs of its human inhabitants became reality.



MIT: Intelligent Room

Adaptive House



However, today digitally interactive architecture seems no more a reality than it was when it was originally conceived in the 60s.  We still don’t talk to our homes, nor do they speak to us.  Contemporary paradigms of interactive architecture seem to exist best in installations and expositions, and have not yet become mainstream applications in the built environment.
the temporary Blur Building: DS + R




installation: Digital Water Pavilion




Recent technology has shown great promise in user tangibility and interactivity, especially with the advent of tablets and open-source applications.  Programming is becoming much simpler, and the learning curve of new software is much less steep.  Computer-controlled circuit boards such as Arduino can be readily found online and easily programmed to automate any number of tasks; perhaps an open-source library for home automation is not so far off in the future.

While sustainability and improved energy efficiency are certainly the future for digitally interactive and reactive architecture, I am interested in creating something digitally interactive through a programmed ‘personality’.

What if a digital product were freed of catering to human needs in a single-loop, reactive sort of way? What if the digital product were allowed to develop its own personal agenda, and subsequently, its own personality?




PROJECT DESCRIPTION

In the spirit of creating a digitally interactive piece that is freed of functional assistance to its owner, I have decided that through scripting and digital fabrication, I will try to capture the movements, reactions, and interactions of a plant: the Venus flytrap.  The Venus flytrap is not only reactive to the environment, with its sensitivity to sunlight and soil moisture content, but also highly kinetically interactive with potential prey.  I wish to fabricate a number of these digital Venus flytraps and program each one with a different “personality.”  Some will be shy, moving timidly and startling shut at the first detection of movement.  Others will be bolder, requiring a high degree of detected movement before closing.  All the digital flytraps will open to the sun and attempt to optimize the amount of light hitting their petals. I haven’t yet figured this part out, but each flytrap will have a corresponding coloration that can fluctuate based on the underlighting of LEDs.

I will be using Firefly, a plug-in for Grasshopper; an Arduino board; an IR motion sensor; a photoresistor; RGB LEDs; and servo motors to realize my digital Venus flytrap garden.



WORKING BIBLIOGRAPHY

Design Museum. "Cedric Price." http://designmuseum.org/design/cedric-price

Haque, Usman. "Architecture, Interaction, Systems." www.haque.co.uk, 2006.

Haque, Usman. "The Architectural Relevance of Gordon Pask." 4dsocial: Interactive Design Environments. Wiley & Sons, 2007.

Kulkarni, Ajay. "Design Principles of a Reactive Behavioral System for the Intelligent Room." Artificial Intelligence, 2002.

Fox, Michael, and Miles Kemp. Interactive Architecture. New York: Princeton Architectural Press.













Red, green, and blue LEDs all illuminated with full light hitting the sensor.

As the light to the sensor begins to be blocked, the green light is the first to turn off.

With all light blocked to the sensor, the red light is the only one continuing to shine.

With the red, green, and blue light diffused into a semi-transparent object, I found that I could create any color in the RGB spectrum by incrementally turning on or off any combination of the LEDs.
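
For anyone curious, the experiment described in the captions above reduces to a few lines of Arduino code along these lines; the pins, the voltage-divider wiring, and the thresholds are assumptions rather than my exact setup.

```cpp
// Assumed: photoresistor voltage divider on A0 (brighter = higher reading),
// and three separate LEDs on pins 3, 5, and 6; thresholds are guesses.
const int LIGHT_PIN = A0;
const int RED_PIN = 3, GREEN_PIN = 5, BLUE_PIN = 6;

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  int light = analogRead(LIGHT_PIN);

  digitalWrite(GREEN_PIN, light > 700 ? HIGH : LOW);  // green needs the most light, drops out first
  digitalWrite(BLUE_PIN,  light > 450 ? HIGH : LOW);  // blue turns off next
  digitalWrite(RED_PIN,   HIGH);                      // red keeps shining even with the sensor covered

  delay(50);
}
```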




Monday, April 11, 2011

progress updates. week of 4.10

4.10

I've been searching for some "academic" articles on interactive architecture other than the ones we were assigned in class.  I must admit that I am more interested in the applications of interactive technology than in rhetorical arguments regarding its definition, so I haven't yet found much more than case studies.

4.11

I started the digital files for my pieces and parts to control with Arduino; I've decided laser cut plexi is the way to go for the body of my 'creature.'  I will upload some images of my Arduino set-up tomorrow.  My search for articles continues.  

Wednesday, April 6, 2011

progress update

4.06
http://www.acroname.com/robotics/parts/R48-IR12.html

I look forward to adding this to my Arduino collection: I want my project to be able to react to the proximity of passersby.


4.07
If I am going to make something interactive that can respond to more than one form of stimulus, the Venus flytrap is an excellent case study.  It responds not only to environmental conditions (humidity, sunlight, rainfall, and temperature) but also to movement and touch.




4.08

RGB LED changing (rainbow) colors, Arduino from Meinaart van Straalen on Vimeo.

While waiting for my proximity sensor to arrive, I'm trying to come up with a few variables to be controlled by distance data.  My Arduino board came with a few different colors of LEDs; color-coding someone's distance from the sensor with light could be fun.


4.09

I really appreciate the look of the digital fabrication group's laser-cut plexi... I purchased some clear acrylic today to create my creature.  Plexi underlit by the Arduino's LEDs should look vibrant.

Pranav Mistry Talk



While it seems this technology is still years away, it certainly seems promising.  There isn't anyone who wouldn't enjoy working on their "computer" wherever they please as they go throughout the day.  No one wants to be a slave to their computer monitor.  This would also be fantastic for eliminating the waste involved in producing computer screens, which inevitably become outdated every five years or so.

Monday, April 4, 2011

individual project. progress update


Now that I have proven to myself that I can control Arduino, I need to find an application for it.  While originally I was looking to create a digital flower that responds to light, I want to find something that is less predictable and responsive to more criteria.  The script above allows a servo motor to respond to the fluctuating light levels in a room, but I want to branch out and try some sensors that respond to proximity, touch, noise, and temperature. In class we discussed the possibilities of interacting with something digital, and I now want to create something that, unless interrupted, will do its own thing, as if it weren't simply created to cater to the user.
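
For reference, the light-to-servo behavior the script produces is roughly the following, rewritten here as a standalone Arduino sketch (my working version is a Firefly/Grasshopper definition); the pins and mapping range are assumed.

```cpp
#include <Servo.h>

// Assumed: photoresistor divider on A0, servo on pin 9, mapping range 100..900.
const int LIGHT_PIN = A0;
const int SERVO_PIN = 9;

Servo petalServo;
float smoothed = 0;

void setup() {
  petalServo.attach(SERVO_PIN);
}

void loop() {
  int raw = analogRead(LIGHT_PIN);          // 0..1023
  smoothed = 0.9 * smoothed + 0.1 * raw;    // low-pass filter so the petals don't jitter

  // Brighter room = petals swing further open.
  int angle = constrain(map((int)smoothed, 100, 900, 10, 170), 10, 170);
  petalServo.write(angle);
  delay(30);
}
```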

Here are some great examples of work with Arduino and proximity sensors:
http://natebu.wordpress.com/

Sunday, April 3, 2011

case studies

CNC:




Laser cutter:






3D Printer: RepRap, the self-replicating machine




Water Jet Cutter:


Robotic Bricklayer:

Tuesday, March 22, 2011

project. interactive architecture


Firefly for Grasshopper / Arduino from Jason K Johnson on Vimeo.

I am expecting an Arduino board in the mail tomorrow.  I am excited to try my hand with it via Firefly, an add-on to Grasshopper.  I also ordered a light sensor and a rotation motor, and I hope to be able to control the motor with data from the light sensor, similar to this example of light data being used to control aperture sizes in Grasshopper geometry.


Arduino + Grasshopper & Firefly from Rodrigo Medina on Vimeo.

Whereas in the video above physical data is used to manipulate arbitrary digital geometry, I would like instead to have physical data manipulate a physical object.  Ultimately, I want to build a sort of digitally controlled "flower," perhaps an homage to Philip Beesley, that opens and closes its petals based on the amount of light it receives.  Below is an example of a physical armature being controlled by physical data via the movements of a Wii remote.


Robofun with Arduino, Grasshopper, FireFly from Chris Wilkins on Vimeo.

Monday, February 28, 2011

invisible landscapes

take a look at this... a very interesting project that beautifully illustrates the spirit of connective digital networks.

Sunday, February 27, 2011

digital recycling

Since I chose to develop a perspective of one of my projects in my portfolio, I found it a bit difficult to break away from the work process I typically use in renderings.  Although I have a great appreciation for hand renderings, I typically shy away from drawing things by hand for presentation purposes, as I find Photoshop to be more efficient and forgiving.  That being said, there is a certain character indicative of a hand drawing that is rarely evident in a clean, polished digital rendering; for the digital recycling project, I wanted to create a process that would capture a certain roughness found in a hand rendering.

I began with a digitally rendered section of my project and printed it out without altering it in any way.  I wanted to soften the ground plane and fade it out towards the edges of the composition, so I cut out the digitally rendered ground and smeared in some graphite to fill it back in.


I then scanned the image in grayscale, as I found the values to be more exciting when they weren't competing with the blue tint of the digital rendering.  Since the image was now entirely desaturated, I brought some color back in Photoshop by adding foliage.  After printing again, I realized I needed to poche the section cuts and ground plane to allow the rendering to read as a section.

In Illustrator I cleaned up the linework and found that I preferred a yellow poche over my sharpie's light blue.  I had some crinkled trace paper on my desk and wanted to give the sky some texture, so I scanned in the sheet of wrinkled trace and overlaid it on top of the rendering.  As I stretched the smaller scanned image over the larger perspective, it took on a subtly noisy and pixelated appearance, which added another layer of texture to the image.

I then layered in some background trees and sky from some photographs I took at the site prior to construction.  Finally, in order to give the image some life and a human scale, I found in a newspaper two figures that I scanned in and dropped into the final composition.






I found a point of diminishing returns rather quickly: although I feel that each step added something to the overall character of the composition, in the end I would be satisfied with any of the images after the third iteration.  The most successful of the iterations was the texture added by the trace paper in the sky; this is a method I will probably use in the future.  It would also be interesting to apply this technique to the ground to create an appealing texture.

silk!

http://weavesilk.com/?9#new

Thursday, February 24, 2011

2.23.2011

 "the purpose of art is to engage people."  I don't think I've heard a better definition of art.  It is extremely difficult to create an all-inclusive definition of art, as art is not necessarily a universal, independent entity.  It is easy to approach art from a "finished product" stage, only looking at the surface level of a finished piece and analyzing it based on appearance or performance. 

What if it is the process of creation, something that is not expressed in the final product without background knowledge or research, that truly defines something as art?  In class we talked about the work of Clifford Ross, a photographer who builds his own camera equipment in order to capture large-format, extremely high-resolution photographs. NASA has adopted his techniques in order to direct his camera technology towards developing high-resolution images of astronomical events.

We also talked about last class's inquiry into controlled serendipity, focusing mostly on the ways in which a digital process facilitates a more "serendipitous" environment; I think Frank Gehry might disagree.  Although he is the figurehead behind the creation of Gehry Technologies and Digital Project, Gehry himself "doesn’t know how to use a computer, except to throw it at people."

Tuesday, February 22, 2011

controlled serendipity

In a creative profession, the fabled "happy accident" is an ideal way to stumble across something unexpectedly desirable. What if there were a way to create a design process that encourages these findings of controlled serendipity?

While by definition it seems a paradox to control something that is otherwise unexpected, facilitating a design process that is open to mutations has terrific possibilities.  It is important that a designer not work in a purely linear fashion, but instead keep going back and reworking past notions while constantly exploring new ideas as they arrive, rather than blindly developing the original direction.

Our digital recycling project should show how somewhat unexpected outcomes arrive in the design process when switching between media and alternating between digital and analog.  Oftentimes when stepping away from the computer and going back to sketching or hand modeling, you are brought back into tune with textures and materiality.  Textures are often glossed over on the computer screen, but becoming aware that the roughness of a certain paper or the striations in a basswood model are really improving the character of the project is something that could be defined as controlled serendipity.

On the digital side, algorithms such as Galapagos are a recent development in design software; it is an "evolutionary solver" for formal design explorations.  The software emulates evolutionary processes defined by user-controlled fitness parameters, and cross-breeds models over many generations to eventually arrive at a 'genetically' determined ideal model.  The algorithm intermittently introduces random mutations into the gene pool to allow for "happy accidents." While this sort of controlled serendipity definitely kills the romanticism of the "happy accident" found within a crumpled piece of paper, I'm sure it still has its applications to certain design processes.
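
To illustrate the idea outside of Grasshopper, here is a toy C++ sketch of an evolutionary solver in the same spirit; it is not Galapagos itself, and the fitness function and parameters are arbitrary stand-ins.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy evolutionary solver: evolve a single number toward a made-up fitness target.
double fitness(double x) {
  return -(x - 42.0) * (x - 42.0);   // stand-in fitness parameter: closer to 42 is "fitter"
}

double randomBetween(double lo, double hi) {
  return lo + (hi - lo) * (std::rand() / (double)RAND_MAX);
}

int main() {
  std::srand(1234);
  std::vector<double> population(50);
  for (double &gene : population) gene = randomBetween(-100.0, 100.0);

  for (int generation = 0; generation < 100; ++generation) {
    // Sort so the fittest candidates sit at the front of the vector.
    std::sort(population.begin(), population.end(),
              [](double a, double b) { return fitness(a) > fitness(b); });

    // Cross-breed: replace the weaker half with averages of two parents drawn
    // from the stronger half, with an occasional random mutation mixed in.
    for (std::size_t i = population.size() / 2; i < population.size(); ++i) {
      double parentA = population[std::rand() % (population.size() / 2)];
      double parentB = population[std::rand() % (population.size() / 2)];
      double child = (parentA + parentB) / 2.0;
      if (std::rand() % 10 == 0)                       // the occasional "happy accident"
        child += randomBetween(-10.0, 10.0);
      population[i] = child;
    }
  }
  std::printf("best candidate after 100 generations: %f\n", population[0]);
  return 0;
}
```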

Sunday, February 20, 2011

presentation : representation... a comparison

While the gradient between presentation and representation does not define a clear line where one becomes the other, in their extremes, presentation and representation are very much separate.  In order to create a differentiation, it is necessary to analyze the two terms outside of their action form: it is possible to present a series of representations, just as it is possible to represent in some form, an upcoming presentation.

In class someone mentioned an example of a painting of an apple.  Let's assume underneath this painting there is a plaque engraved with the word "apple."  To anyone viewing the painting, this would be an example of a representation; clearly the apple depicted only abstractly represents an apple, and is not actually itself, an apple. 

Conversely, what if the words on the plaque under the painting were changed to "painting of an apple"? 

This shows how the stated intentions of the communicator/artist/designer entirely define the term and associated differences between presentation and representation.  Ultimately, if the stated intentions of a designer and given product align, it is a presentation.  If they do not, it is a representation.

Tuesday, February 15, 2011

presentation : representation

Representation implies an absence of presence; an analogous substitute must be used to portray something that cannot be fully seen or otherwise understood.  Representing an idea does not require that the idea be fully developed; instead, it would be very possible to use similar case studies with a similar desired outcome, partially leaving it to the audience or user to piece together what the idea is really about.

On the other hand, the act of presenting an idea implies a more concise and controlled communication with an audience.  A presentation leaves much less to the audience to be interpreted than a representation.


 

Wednesday, February 2, 2011

Digital Perceptions



J.R. Campbell, director of Kent State's School of Fashion Design and Merchandising, speaks on the influence of digital technology on his work.  Although Campbell once thought that the design goals he held prior to his immersion into the digital realm would remain unchanged, he admitted that he lost his original aspirations to tell stories once his work became a digital product.  Instead of defending his original goals, Campbell expanded them to include technical advances such as engineering imagery that can run across seam lines, creating things that shift their level of focus, and establishing multiple layers of work.  Through his explorations, Campbell found that since the process of digital printing can really be simplified to just the print button, his only limitation was the cost of material.  As he moves forward with his research, his aspirations have shifted to creating something economically viable and to using the capabilities of digital printing to further express 'fabricness.'





other images of his work

Saturday, January 29, 2011

90's era digital art: Jean Pierre Hebert


["Blue-ism," Jean Pierre Hebert. 1995]

A co-founder of the Algorists in 1995, Jean Pierre Hebert began his explorations into algorithmic art and drawings in the mid-1970s.  He experiments with various media, from pencil and ink drawings to digital prints and sand and copper etchings.   In 1985 Hebert moved from his home in Calais, France to Santa Barbara, California, where his first exhibition, “Sans lever la plume,” presented his variety of ink drawings made with mechanical plotters.   Hebert has an eye for repetitive patterns in nature, especially the striations made by wind on sand and waves in water.  He developed the software he uses to create his compositions himself, for the specific purpose of creating forms to be plotted.  His drawings in the 90s were influenced by the work of Max Bill, Hokusai, Buddhist meditational art, and Zen art.



 

1.26.2011

It seems that technology can often be equated with novelty.  How often do you hear someone say "oh, I'm not very good with [cell phones][computers][digital cameras, etc.], I'm not very tech savvy."

What does it mean to be tech savvy? Or, more importantly, how do we socially define technology?  In class we discussed the technology of writing.  While handwriting was once a mystery to the masses, it would be a stretch for someone today to think of handwriting as a great technological advancement. Once a technology becomes widespread and deeply embedded in the character of a culture, its connotations as a piece of technology quickly fade.

This relationship between technology and novelty illustrates an emerging dilemma in art.  Whenever technology is used as a tool to create art, the first reaction by critics is always "Is this really art?"  As critics of digital art, we question the controlling influence of the artist, enforcing the ideal that art must be created by a skilled author.  The common conception of art is something that must be done by hand, something that the artist's pencil or brush entirely defines.  The authorship of an oil painting would never be brought into question.  However, digital art is not granted the same leniency; we question who the artist is, the computer or the user.  It takes time for the 'newness' of a technology to wear off before something truly creative and widely recognized as art can emerge and be appreciated.  It is easy for everyone to recognize the artistic skill of someone who works with charcoal or graphite; we have all picked up a pencil and used it to create something.  With all of the software packages and power of a computer, we cannot know what is controlled by some algorithm, what is a Photoshop brush, or what was intentionally and cleverly created by a 'digital artist.'  We need to understand the limitations of a particular software before we can again recognize the skill of the artist; in order for something to be art, it needs to be visually or mentally stimulating, and there needs to be a sense of wonder and awe for the skill of the artist.

Wednesday, January 26, 2011

postcard reflection



I chose to illustrate the “Digital Fabrication” lecture by Larry Sass from MIT.  Since digital fabrication is by nature controlled by a computer, it resounds as something clean and precise, yet simultaneously complex; these concepts embody the spirit of digital fabrication and were the focus of each of my compositions.

I first worked with Wordle and arrived at a 'finished' level of work within an hour.  After setting the parameters on color (light blues and whites), orientation, and establishing a hierarchy, I probably spent at least half an hour pressing the 'realign' button, which would give me a somewhat random arrangement within my stated limits.  I saved 20 different compositions I liked out of 60 or so total that were generated, and I found that there was a diminishing return on how much I liked any given composition over the next, so I decided to not spend any more time realigning and be happy with what I had.

Following Wordle's completion, I moved on to the Illustrator postcard.  While in Wordle the compositional outcome always fell within an expected range of organization and color scheme, this was not the case in Illustrator, where I had to start with a blank slate and everything was intentional. To create a digital 'structure' for this composition, I worked with a dot matrix to turn 'on' or 'off' the general reading of my text.  I spent nearly all of the two hours working on this, as I first had to experiment with how to create roughly legible text at three different scales on the same dot matrix.  In this composition, as opposed to the first, the hierarchy is defined not by scale but by hue.

The third postcard, which is hand drawn, was completed last and is probably farthest removed from the spirit of 'digital fabrication.'  By definition, an analog drawing is the farthest removed from the digital, and I didn't want to attempt to precisely draw binary or the like.  Instead, I wanted to create a free-flowing, dynamic piece that suggests some sort of self-assembly by a natural algorithm.  The composition is simply a collection of cellular components that twist and join to create a framework that possibly suggests it was created or conceived in a digital environment, or on Zaha's computer screen.  Although my hand-drawn postcard is probably farthest from the parameters of the project statement, I enjoyed working on it the most and spent all of the two-hour time allotment.

The introduction of Wordle in class played a large part in the development of the other two postcards, but in an adverse way.  The last thing I wanted to do in the following two compositions was take two hours to mimic something anyone can generate with Wordle in 5 minutes.  Instead of taking influences from Wordle, I turned in the complete opposite direction and sought to create something that would be extremely difficult or near impossible for Wordle to ever produce.

I feel a bit defeated in admitting that I think the Wordle postcard was, in the end, the most successful.  After working with Wordle I looked forward to the Illustrator and hand-drawn compositions, where I would have much more artistic freedom to do as I pleased.  However, I think it was within Wordle's strict framework and limited inputs that I was forced to be most creative, and therefore I had a more successful and clever solution.