
Poetry, polity and power Instructable

Poetry, polity and power is the culmination of two disparate ideas that I have been intrigued by this past semester. The first is about a new kind of interaction that is emerging between human beings and machines as technology gets integrated into everyday processes. The common perception that technology or computerized systems are objective is a myth: they embody the values and perspectives of the people who design them. The second is about the power of art and poetry, and how they can be used as dynamic tools for resistance.

Poetry, polity and power is an optimistic poetry generator that can be fed biased text (hate speeches, discriminatory policies, misogynistic statements) and removes words to create poetry that is hopeful and empowering. I wanted to create a computerized system that would automatically generate poetry from the source text, without human intervention. I see this project as a conceptual prototype that captures the essence, the value inherent in the idea, but needs further iterations to be fully realized.

In its current form, the generator would be more effective if it could respond to different source texts by activating different heating pads depending on which text was fed in. Future iterations include programming a system that can operate on its own; one possible way to do this would be to train machine learning algorithms on many such blackout poetry examples.
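Until such a model exists, the word-removal step can be approximated with a simple rule-based filter. This is a minimal sketch, not the project's implementation: the "hopeful" keep-list below is a hypothetical stand-in for what a trained model would learn to keep.

```cpp
#include <cassert>
#include <set>
#include <sstream>
#include <string>

// Rule-based stand-in for the proposed system: black out every word of the
// source text that is not on a "hopeful" keep-list, preserving the original
// word order (the essence of blackout poetry). The keep-list is a
// hypothetical placeholder for what a trained model would decide to keep.
std::string blackoutPoem(const std::string& source,
                         const std::set<std::string>& keepList) {
    std::istringstream words(source);
    std::string word, poem;
    while (words >> word) {
        if (keepList.count(word)) {
            if (!poem.empty()) poem += ' ';
            poem += word;
        }
    }
    return poem;
}
```

For example, feeding in "we will never accept hope or change" with the keep-list {"we", "hope", "change"} blacks out everything else and leaves "we hope change".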

The main challenges for this project were working with unfamiliar material that was inconsistent and would react differently on different days. It taught me the importance of experimentation. Powering the circuit using the wall wart was challenging too, mainly because I found very limited documentation on it.

I loved working on this project though because I realized how simple, basic materials, mechanisms and methods can be used to convey ideas.

Here is the pdf with the final slides.

Here is the link to my final Instructable. 

https://www.instructables.com/id/An-Optimistic-Poetry-Generator-Using-Thermochromic/

Fireflies Lamp – Objects and Memories – Final Project (Dario N)

Presentation Link:

https://www.dropbox.com/s/t2ac2prwip2iozi/Objects_%26_Memories_Final_Presentation_051018.pptx?dl=0

Concept and Goals:

Objects and Memories seeks to analyze and project the powerful relations between objects, humans and emotions, and how they connect to evoke memories, nostalgia and rituals, as influential axes for new experiences and associations. Design theorist Donald Norman highlights the importance of the “history of interaction”, the associations and values that people give to objects, and the memories they evoke, over appearance, aesthetics and utility/functionality, through the concept of “emotion rather than reason” (Norman 2005).

The theoretical framework is supported by the development of a lamp inspired by the magical tradition of catching fireflies in a jar, a playful and gestural ritual that allows users to ‘naturally’ control, illuminate, dim and turn off the light (see image below). This object/experience[1] is also meant to change the bias of seeing objects as purely sculptural artifacts, turning them into elements that fully engage the user and shifting the scheme from a static, contemplative “observer” to an “active user” and an experience.

[1] Object: Related to certain attributes such as materiality, physicality, form, functionality. Experience: Related to the attributes that are triggered by the human action.

Behavioral process of catching fireflies with the lamp

A very valid question was promptly asked at the beginning of the project by my fellow classmates and professors: What is this? Why are you doing this? An important decision taken in the initial phases was to identify the platform and the context in which the idea was to be located. When talking about “flying an LED” (image), the technical challenge I set as my goal, it is easy to imagine the response framed as a sort of installation in a museum or another similar context. This is a very interesting path with a lot of potential, no doubt, because these contexts allow the spatial exploration that can configure a completely immersive experience. One of my goals as an industrial designer, however, is to change the established paradigms and premises of a purely sculptural profession focused on aesthetic decisions. In addition to creating an object that can be accessible and affordable, potentially by millions of people, as I said in the introduction of this document, the ground of this project can be extrapolated to other contexts and audiences…as a unique experience in a museum, as a tool for learning and nature consciousness, as a visualization of a dystopian future where there is limited access to nature and memories. The spectrum for execution and context manifestation is, at the moment of presenting this document, an item open for re-interpretation.

From these principles, the project started with an analysis of the relationship between a potential audience (mostly children from 9 years old up to adults), the artifact and the experience. The balance of these three elements results in a well-designed object, where the user (observer or operator) represents the axis of the experience and is the one who has the answers and insights needed to justify the decisions about the artifact (Human-Centered Design).

Precedents:

Multiple mood-boards gave a first formal approach to what was intended to be shaped as a final product. The original mason jars and old oil lamps were taken as inspiration. All this was carried out along with an analysis of elements that intuitively transmit the action of “catching”, such as meshes, baseball catcher’s gloves and nets; these are relevant archetypes for analyzing actions translated into forms (Figure 11). Interesting references include the Infinity Mirror Rooms by artist Yayoi Kusama, a fascinating way of using mirrors and light to create an immersive experience of endless worlds, and the product “Dreamlights” by Fred and Friends, which showcases a similar experience of using light and movement (like a flying LED) and a clever way to hide the mechanisms and LEDs with frosted surfaces.

Mood-boards. Formal Inspiration

Infinity Mirror Rooms by artist Yayoi Kusama and “Dreamlights” by Fred and Friends

Description of the Project (Process and Interaction):

‘Objects and Memories’ seeks to go beyond the ‘completed’. I’m not presenting a finalized lamp, not even a completed conceptual body; my intention is to keep the boundaries open for future iterations and explorations. This project was meant to be inconclusive…an exciting segue to future possibilities.

The design process model that was followed in the course of the project covered 5 phases, which were developed around 3 main axes: 1. the achievement of a design concept that supported the experience and the artifacts, 2. a technical exploration based on the premise of how to make an LED fly, and 3. a detailed development of the artifact, which required formal and material exploration and the realization of the 3D parts that assemble the object. Each of these phases had several technical, conceptual and human-experience challenges. The intention of the project goes beyond an academic exercise: it aims to explore new interventions and experiences, as well as engaging in exciting technical experiments.

It is important to clarify that “User Testing” is a recurrent process throughout this development. The 3 axes named in the previous paragraph were developed concurrently, due to time constraints. Likewise, each phase fed and responded to the others simultaneously, so that progress was made in all the axes.

Process + Prototypes:

Initially, in the research phase, the boundaries related to the experience of catching fireflies were analyzed by describing specific objectives, actions and consequences. The objectives in this phase arose from observing children and adults catching fireflies in a ‘playful’ context, and the different ways in which they would catch them. The other important objective was to understand how users would interact with an artifact that has no instructions. In summary, in the process of catching fireflies we can identify 3 different paths. It is important to note that this experience is also cultural and depends on many other factors that go beyond the act itself. In other cultures, the archetypes used to catch fireflies range from nets to baskets. All of these icons, part of the vast objectual domain, have repercussions on the effectiveness of the memories, which are highly visual. For this particular creative exercise, I centered the analysis and results on the Western way of catching fireflies.

Behavioral User Tests

In this phase, 2D sketches were made and different formal languages were explored that responded to the references of the research phase (mood-boards + inspiration + archetypes). The final result is very similar to the mason jar, since this form invites the experience of catching and containing fireflies, has a base and neutral colors that do not compete with the light that the insects generate, and its lid is simple to use and creates direct communication with the product’s operating system. In the same way, 3D developments were created, which aimed to test scale and to support technical and functional exploration.

2D Sketches. Formal Exploration

The intention with the first prototype was to quickly visualize the idea and the concept, and to test the interest and reaction of the audience to the overall experience. The prototype is screen based, made from a series of 120 “modified” images in which it is possible to see the behavior and response of the hardware and the experience given the inputs of the user.

First Prototype. Look and Feel. GIF

In this phase it was necessary to approach the project from a technical exploration, considering different ways to resemble the light emitted by a firefly in a mason jar.

Possible Technical Approaches

The first technology explored was a matrix of LEDs, which consists of making a cloud of LEDs (soldered one by one) until the desired effect is achieved. This matrix was discarded since it requires a lot of space for wiring and hardware (hard-points), and it is complex to test and build inside the desired final object.

Fiber optics was another technology tested to make the effect of fireflies flying in a jar. This was not a good direction since the intensity of the light was not enough and it also required a complex matrix of LEDs at the base of the object. Lasers, projectors and a mechanical system were other alternatives that were evaluated, but finally the LED strip achieved the desired effect with variations in the speed and tonality of the light. This option posed an important challenge: the programmable LEDs drain a lot of current, so a very large battery pack was required, and it had to be assembled inside the artifact without breaking its form and optimal operation. Something important to consider is that I wanted to avoid the use of external wires, since this may negatively affect the nature and freedom of the traditional activity.
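That current drain can be estimated with a back-of-the-envelope budget. The sketch below assumes the usual worst-case figure of roughly 60 mA per NeoPixel-style LED at full white; the LED counts and brightness caps in the example are illustrative, not measurements from the project.

```cpp
#include <cassert>

// Back-of-the-envelope current budget for NeoPixel-style strips: each LED
// can draw up to roughly 60 mA at full white, full brightness. Given a
// supply rating (the pack here is 5 V / 2 A, i.e. 2000 mA), this estimates
// whether a given LED count and brightness cap are safe.
int maxFullWhiteCurrentMilliamps(int numLeds) {
    const int MA_PER_LED_FULL_WHITE = 60;  // worst-case draw per LED
    return numLeds * MA_PER_LED_FULL_WHITE;
}

bool supplyCanDrive(int numLeds, int brightnessPercent, int supplyMilliamps) {
    // Scale the worst-case draw by the brightness cap used in the code.
    long draw = (long)maxFullWhiteCurrentMilliamps(numLeds) * brightnessPercent / 100;
    return draw <= supplyMilliamps;
}
```

For example, 90 LEDs capped at 30% brightness draw about 1620 mA in the worst case, just inside a 2000 mA pack; the same strips at full brightness would not be.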

Parallel to this process, 3D development played a very important role in achieving tight scales, tolerances and dimensions that are close to reality, and in considering the hard-points imposed by the selected mechanisms.

3D Development. Working to package all the components

I also tested different effects corresponding to the different behavioral actions in the experience. Each effect entailed a different level of complexity: one effect with one strip, then one effect with two strips, and finally multiple effects with multiple strips (Figure 17). The alignment between the different sensors and the lights, the response and the feedback, also corresponded to an important technical challenge. Problems with accelerometer reliability, problems with the light sensor, RAM issues due to controlling multiple effects on numerous LEDs, and canvas and brush allocation problems (RAM) in the LED library were some of the many issues and challenges faced during the development process.
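One pattern that helps when juggling several effects on several strips is to compute each LED's brightness from the elapsed time instead of stepping animations inside blocking delay() loops. This is a minimal sketch of a single "firefly" pulse, not the project's actual effect code; the triangle-wave shape and the timing values are assumptions.

```cpp
#include <cassert>

// Non-blocking effect timing: derive each LED's brightness from the current
// time (millis() on the Arduino), so many strips and effects can be updated
// in one pass through loop(). This triangle-wave fade models one firefly
// pulse; period and phase values are illustrative.
int fireflyBrightness(unsigned long nowMs, unsigned long periodMs,
                      unsigned long phaseMs) {
    unsigned long t = (nowMs + phaseMs) % periodMs;  // position in the cycle
    unsigned long half = periodMs / 2;
    if (t < half) {
        return (int)(t * 255 / half);                // ramp up: 0 -> 255
    }
    return (int)((periodMs - t) * 255 / half);       // ramp down: 255 -> 0
}
```

Giving each simulated firefly its own period and phase makes the pulses drift apart naturally without any per-firefly delay.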

 

 

Code Logic Illustration

Construction Process:

I built most of the 3D components in 3D software and then had them manufactured on a CNC machine.

I used 2 clear acrylic tubes, one for the outside and one for the inside, where I wrapped the LED strips. I sandblasted the outside tube to hide the internal components.

I shaped the lid from a wooden block on the lathe. The lid has some magnets to activate the reed switch and to close the cap tightly against the body.

Materials:

For the Electronics:

  • x1 Arduino Uno
  • x1 Breadboard
  • x1 ADXL 345 Accelerometer
  • x1 Reed Switch or Magnetic Switch
  • x3 NeoPixel RGB strips
  • x3 330 Ω Resistors for the Strips
  • x1 1000 µF, 6.3V or higher Capacitor
  • x1 Battery Pack. 5V 2A
  • Jumper Wires

For the Jar:

  • Wood block of 2in x 6in x 6in for the lid
  • 2 clear cast acrylic tubes, one of 6″ OD for the exterior and one of 4″ OD to wrap the LED strips
  • ABS: most of the 3D components were sent out to be CNC-machined

Circuit Diagram:

 

Servo Motor – Xu (Week 8)

 

I want to use a potentiometer to control the servo motor: rotating the potentiometer anticlockwise makes the motor move.

Core components

Potentiometer
Servo Motor
Wires & jump wires
Arduino board
Breadboard

Circuit

How it works
Rotating the potentiometer anticlockwise makes the motor move. When the potentiometer is rotated fully clockwise, the motor stops.

Code

#include <Servo.h>

Servo myservo;  // servo object to control the servo motor

int potpin = A0;  // analog pin the potentiometer is connected to
int val;          // variable to read the value from the analog pin

void setup() {
  myservo.attach(9);  // servo signal wire on pin 9
}

void loop() {
  val = analogRead(potpin);         // read the potentiometer (0-1023)
  val = map(val, 0, 1023, 0, 180);  // scale it to the servo range (0-180)
  myservo.write(val);               // move the servo to that angle
  delay(15);                        // give the servo time to get there
}

Final

The video is a demo (or trailer) of my final project, and the images are the final view.

Online Link:

https://drive.google.com/open?id=1xcC8PSXATgWr8_nK9iuvLwP5t0UJ21i3

Concept + Goals.

I’m creating an interactive installation embedded with a learning function for teenagers (early adolescents) in the 12-18 age group, to intrigue them with color and painting, and to improve their creativity, imagination, experimental spirit and cognitive ability during the immersive experience.

Intended audience.

My target users are teenagers between 12 and 18 years old. Teenagers in this age group have more active learning and thinking abilities; they show more positive action toward learning, exploring and creating new things, and this is a golden age for innovation. I’m also targeting people who are interested in painting, color and creating things by themselves.

Precedents.

How teenagers interact with each other or with another group, and how they play with this installation and figure out the working process, is the main point I consider. Ideally, when teenagers encounter this installation, they are attracted by the function of inputting a color and then surprised when the shapes match that color. The project’s interactive functions should be simple to understand, and the tools friendly to move and use.

A project named HUBO shifts the perception of coloring from 2D to 3D; it satisfies children’s demands for curiosity, creativity, imagination and play. Over time the space becomes a creative, colorful scene of furry food, where each piece is a trace of interaction and experience.

Thinking about my project, I want to encourage teenagers to draw on the screen, going from blank to colorful step by step. I believe every painting created by children has its own story and special meaning. My project will provide a relaxed environment to support their creation.

For the reactions of children, I collected a lot of information from the Our Senses exhibition at the American Museum of Natural History, where I found this project in the Seeing area. The walls in this room were painted with multiple animals in different colors; when the light changes, only the images that absorb that color can be seen. For the interactive part, children explore by using the flashlight instinctively. This project inspired me a lot about which tool is friendly enough for my project, to let teenagers know it can be moved and used, not just displayed.

I got another reference from the Our Senses exhibition. As the image shows, a user can play with the puzzles and the digital screen gives them feedback on the result of machine learning. People enjoy the process of making puzzles while interacting with the screen. From what I saw in this installation, people are more passionate and engaged when they can see something react to their input.

“The Color of Smell” is an interactive tool project that enables painting with smell; it consists of a selection of smells, synthetic and natural, a smell-brush and a multitouch tabletop. This project draws different shapes based on the smell you input from objects, and this function really inspired me about how to surprise the user. So I want to classify a color input from the user, with each color range mapped to a specific brush.

“FABRIKA” is an app focused on customized pattern design: you can choose the shape, color, size, transparency, density and so on. Basically, everyone’s outcome is different.

Description of the project.

This project has three parts: a color sensor, a Wacom digitizer, and a projection on the wall. The installation is set up in a darkroom for a better user experience. When users come into the room they find some objects placed around the color sensor; they may try these first, then find other colors they need in the surroundings.
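The color-to-brush step can be illustrated with a simple classification rule. This is a hypothetical sketch, not the installation's actual code: the brush names and the dominant-channel rule are placeholders for however the real mapping was implemented.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the color-to-brush mapping: classify a raw RGB
// reading from the color sensor by its dominant channel and pick a brush.
// The brush names and the dominant-channel rule are illustrative
// placeholders, not the installation's actual logic.
std::string brushForColor(int r, int g, int b) {
    if (r >= g && r >= b) return "flame";  // reddish sample
    if (g >= b) return "leaf";             // greenish sample
    return "wave";                         // bluish sample
}
```

A real version would likely convert to hue and use more buckets, but the idea is the same: each color range the sensor reads selects a specific brush pattern.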

set up place arrangement

Generally, when people go into the showroom, they can understand how to use this project. I use a big digital screen to give them a better view of their drawing outcome, and I also played a short trailer for the project containing a simple introduction and the process of using it. When people found that the color samples could not satisfy their demands, they tended to find other objects outside the room.

Outcome

As the images display, users show their creativity and passion in choosing colors and drawing on the canvas with the changing brushes.

Feedback from the Major Major show

        • Yujie mentioned that I could add a white color with a circle shape to fake the effect of an eraser, and if I do not want people to use it, try to discourage them from using this function.
        • Some people want a copy of the drawing.
        • Some people were confused about when the drawing system restarts.

Results

Two main educational modes are used in many primary schools. Based on a large amount of research, one mode is following the outline of an image and practicing filling in the color; in the other, the teacher gives students a topic and then teaches them how to draw it. Neither of these modes focuses on improving creativity and imagination; they give students too many limitations.

When tweens participate in this project, they can choose a color to control and change the brush patterns, which encourages them to explore the surroundings closest to them.

Further Efforts

  • Scale and technical reform

I think scale is the main limitation of this project; a bigger scale could accommodate more people enjoying the collaborative artwork. The ideal number of people involved is 4 to 7; they would draw faster and interact better than the current setup allows.

  • Automatically save function and send by email function

Some people want to save their drawing and get a copy by email, so I will try to implement this feature as a further step.

  • Iteration: moving forward

As mentioned in the feedback from the Major Major show, I wish I could do more research in the physiological field to get more support, as well as more possibilities, for my concept. Currently, it is still quite a simple tool that people can play with. Moving forward, exploring how painting in the virtual world we are immersed in influences our perception of the real world would be a great direction for iteration.

final presentation slides:

Final Post

Group member: Xiaoyu, Weilin

Presentation slides link: https://docs.google.com/presentation/d/19OE7jbadUBkxaTjKeHg5-J255E9PJDDEYrXy4R5UUr8/edit?usp=sharing

 

Concept + Goals.

Autonomous Objects is our Major Studio 2 group project, and we decided to actually build these objects.

Concept: We are making a series of experimental prototypes that explore the ubiquitous but novel relationship between humans and objects. By reimagining the messages behind everyday objects, the project enables everyday objects to communicate and negotiate with us in their (possibly) preferred way.

Intended audience.

This is a project everyone can experience. We want people to rethink their relationship with everyday objects through this fun and playful interaction.

Precedents.

When Objects Dream, ECAL

BroomBroom

Objects Thinking Too Much @ICC

Objective Realities

On the Secret Life of Things

Description of the project.

Chair: Chairs also have work hours. They are happy to assist you during work hours, but they can also get cranky sometimes. They also reserve the right to refuse anyone when they are on a break.

Radio: A radio who lives in the past.

Lamp: A lamp who only wakes up at daytime.

Printer: A printer who needs a break occasionally.

Video documentation.

(Inside presentation link)

Materials list.

Chair: Velostat Pressure Sensitive Conductive Sheet, copper foil tape, LEDs, alligator clip with pigtail, 9V battery, Arduino battery adapter 9V

Radio: Vintage radio, Adafruit MP3 Shield, micro SD card, SD card reader, potentiometers, built-in speaker, perfboard

Lamp: Photocell sensor, two-channel relay , light bulb, lamp, 10k resistor

Printer: Mini thermal receipt printer, wires

Process + Prototypes.

Chair

Sketch:

Prototype:

Lamp

Printer

Radio

Circuit diagram.

Two potentiometers to A0 and A1.

The speaker connects directly to the music shield.

See the tutorial of MP3 Shield
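A hypothetical sketch of how the two dial readings could be interpreted in code: one selecting which archival track plays, the other setting the volume. The track count, the bucketing rule and the volume mapping are illustrative assumptions, not the project's actual firmware.

```cpp
#include <cassert>

// Hypothetical sketch of the radio's two dials: one potentiometer (A0,
// reading 0-1023) selects which archival track plays, the other (A1) sets
// the volume. Track count and mappings are illustrative assumptions.
int trackForPot(int potValue, int numTracks) {
    if (potValue < 0) potValue = 0;
    if (potValue > 1023) potValue = 1023;
    return potValue * numTracks / 1024;  // bucket the dial into equal zones
}

int volumeForPot(int potValue) {
    // The VS1053-based shield treats LOWER volume numbers as LOUDER, so map
    // the 0-1023 dial onto an attenuation from 100 (quiet) down to 0 (loud).
    return 100 - (potValue * 100 / 1023);
}
```

In a sketch, these values would feed the shield's play and volume calls once per loop, so turning the "tuning" dial jumps between past eras.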

Lamp:

Light bulb part:

Connect the lamp wires to relay: full tutorials can be found here

The Input pin connects to Pin6

If you want to know how light bulbs work, check this.

Photocell sensor part:

Connect one end of the photocell to 5V, the other end to Analog 0.
Connect one end of a 10K resistor from Analog 0 to ground.

If you want to learn more about photocell sensor, full tutorials can be found here.
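Given that wiring, the photocell divider on Analog 0 reads high in bright light and low in darkness. The lamp's "awake in daytime" behavior can then be sketched as a threshold decision with hysteresis so the relay does not chatter; the threshold values below are illustrative assumptions, not the project's actual numbers.

```cpp
#include <cassert>

// With the divider above, the A0 reading is HIGH in bright light and LOW in
// darkness. Since this lamp "only wakes up at daytime", the relay (driven
// from pin 6) should close when the reading is high. Two thresholds
// (hysteresis) keep the relay from chattering when the room light hovers
// near the switch point. The threshold values are illustrative assumptions.
bool lampShouldBeOn(int lightReading, bool currentlyOn) {
    const int BRIGHT_ON_ABOVE = 400;  // wake up once it is this bright
    const int DARK_OFF_BELOW = 300;   // go back to sleep once this dark
    if (!currentlyOn && lightReading > BRIGHT_ON_ABOVE) return true;
    if (currentlyOn && lightReading < DARK_OFF_BELOW) return false;
    return currentlyOn;  // in the dead band: keep the current state
}
```

In loop(), the result would simply be written to the relay's input pin with digitalWrite.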

Mini Thermal printer

Complete circuit and tutorials can be found here

Final Project Proposal – Xu (Week 10)

 

Concept Statement:

Tears, as a human bodily reaction, have a strong connection to personality and communication. I want to visualize how the human process of tears not only reflects aspects of personal identity but also mediates interactions between people, or between humans and nonhumans, like a language.

Project Description:

I am going to build an installation that makes artificial tears, controlled by machine learning and an Arduino.

Concept Diagram :

Schedule:

Phase 1: Week 11. Technical Research.

Machine learning part: how to classify and code in machine learning; is it possible to use existing machine learning code for my project?

Physical computing: how can I control the tears, and can they be generated as smoothly as real human tears?

Phase 2: Week 12. Project Development — Tears machine. 

Phase 3: Week 13. Project Development — Combine Machine Learning with Arduino. 

Finish machine learning training, then test on new data, such as an actor or actress in a movie. Combine machine learning with Arduino.

Phase 4: Week 14. Construction. 

Setting for Major Major. How to set the dummy head, video, tears machine, tubes together in order to express my concept.

Test it and documentation.

Phase 5: Week 15.  Presentation Preparation. 

Create 3D renderings of the final product for the final presentation (6 images). Photograph and video-record the user experience.

Material List:

  • Arduino UNO
  • Stepper Motor
  • Motor Shield
  • Jumper Wires
  • Power Supply

Precedents:

Tears collector

Yi-Fei Chen–Tears Gun

Wim Delvoye Cloaca—Poo Machine

Ernesto Klar – Invisible Disparities

 

Prototypes – Xu (Week 11)

Syringe pump

I made a syringe pump with the help of an online open-source design. The syringe pump pushes out small, accurate amounts of liquid, which makes the falling drops more similar to real human tears. The tears need to be controllable, and this machine can smoothly move the water forward and backward using an Arduino.

 

The components of this machine:

Arduino Uno R3, NEMA 17 stepper motor, Adafruit motor shield, mounting rail, threaded rod, shaft coupler, smooth rod, linear bearing, syringe, tubes, and some nuts; I also 3D-printed 4 components. The motor rotates the shaft coupler, the shaft coupler rotates the threaded rod, and the threaded rod drives the plunger mount forward and backward.
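The pump arithmetic (how far to step the motor for a given drop volume) can be sketched as follows. All of the numbers involved (steps per revolution, rod lead, syringe bore area) depend on the specific motor, rod and syringe, so the values in the usage example are illustrative assumptions.

```cpp
#include <cassert>

// Pump arithmetic: the stepper turns the threaded rod, the rod advances the
// plunger, and the plunger's cross-section converts travel into volume, so
// steps = volume / (bore area * rod travel per revolution) * steps per rev.
long stepsForVolume(double volumeMicroliters,
                    double boreAreaMm2,      // syringe cross-section area
                    double rodLeadMmPerRev,  // plunger travel per rod turn
                    int stepsPerRev) {
    // 1 microliter = 1 cubic millimeter, so travel (mm) = volume / area.
    double travelMm = volumeMicroliters / boreAreaMm2;
    double revolutions = travelMm / rodLeadMmPerRev;
    return (long)(revolutions * stepsPerRev + 0.5);  // round to nearest step
}
```

For example, with assumed values (a 100 uL teardrop, a 200 mm² bore, a 0.8 mm lead rod and a 200-step motor) the pump would need 125 steps per drop.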

Precedent:

Syringe Pump

Final Prototypes – Xu(Week 12)

Machine Learning
For the general machine learning steps, I first needed training data to train the network. I took pictures of myself crying and with a normal expression, around 2,000 images of each; these were the training data. The main technique used in my machine learning part is called a CNN (Convolutional Neural Network) (11), a widely used technique in deep learning. By studying the given dataset and the corresponding labels (cry or not), the network is able to learn the underlying pattern of an image regarding whether there is any characteristic of a crying person, for example, the shape of the mouth or the shape of the eyes.

Then, based on the trained model, I ran a new set of data to see whether the model was accurate. I chose a section of the movie Les Misérables, the song “I Dreamed a Dream”, a super emotional song in which the actress cries and sings at the same time. I wanted to use this song as my final new data to express the idea of how my facial classification reads the movie. I saved each frame of the movie and ran them through the model. The result showed which images were crying and which were normal.

Then I wrote the results into the stepper motor’s Arduino code, which expresses them by pushing the syringe forward or backward. If the current frame of the video is classified as crying, it triggers the syringe in the tears machine to push, so that the dummy head drips water and can be seen as crying.
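The frame-label-to-motor mapping can be sketched as below. The command names and the 3-frame debounce (requiring a few consecutive "cry" frames before pushing, to smooth out single-frame misclassifications) are illustrative assumptions about the hand-written sequence, not the project's actual code.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of turning per-frame classifier output into motor moves: a frame
// labeled 1 ("cry") maps to a forward push of the syringe and a 0 ("normal")
// frame to holding still. Requiring a few consecutive cry frames before
// pushing smooths out single-frame misclassifications. The command strings
// and the debounce length are illustrative assumptions.
std::vector<std::string> pumpCommands(const std::vector<int>& frameLabels,
                                      int consecutiveNeeded = 3) {
    std::vector<std::string> commands;
    int cryRun = 0;  // how many cry frames in a row we have seen
    for (int label : frameLabels) {
        cryRun = (label == 1) ? cryRun + 1 : 0;
        commands.push_back(cryRun >= consecutiveNeeded ? "PUSH" : "HOLD");
    }
    return commands;
}
```

The resulting command sequence is what would be baked into the Arduino sketch, one motor action per frame of the song.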

Final Project Documentation-XU

Name: The language of tears

For the psychological perspective on human tears, from my own crying experience, I tend to shed sad tears when I see something that resonates with my own sadness. I also tend to weep when somebody weeps in front of me, which always triggers the sympathy in my heart. People like to mimic other people’s facial expressions when communicating: it is easy to laugh within a group of people who like to laugh, and it is also easy to cry within a sad group. Emotions really communicate between people and influence other people’s feelings. I want to focus on these humanized interactions and build an emotional link between humans and machines.

Machines may not have real emotions themselves, but AI can help machines learn and do what humans teach them to do. I admit emotions are unpredictable and work as a key feature of being human. But human facial expressions and physical states can always be traced back to a human’s emotion. For instance, we can conclude that a person is crying from their facial expression, or from two glistening traces of water emerging under the eyes. So can AI: it learns what kinds of data express people’s emotions, such as facial expressions, and can mimic human reactions based on what we show on our faces.

Concept Statement:

By using machine learning, which can understand tears and mimic crying as a behavior, I want to build a poetic feeling of humanized communication between humans and machines.

Precedents: 

Tears Gun

Eindhoven graduate designs a gun for firing her tears

Artificial Shit Machine

Cloaca – Art(ificial) shit machine

Invisible Disparities

INVISIBLE DISPARITIES, 2011 – ongoing

Presentation slides and Process:

 

Future Iteration

Because of the technical problems of combining Python with Arduino, this project couldn’t run the whole process in real time. Instead, I wrote the machine learning results into the Arduino code. I will still try to figure out this technical problem in the future.

Also, after getting some feedback from the Major Major show, I think it would be very interesting not to use the dummy head but to put the tubes on my own face instead, with the tears machine helping me shed my tears. This project could become a wearable, artistic and personal device that may build a deeper connection between AI and humans.

I will keep exploring this topic, along with my interest in language and human expression, in the future. I hope I can explore these topics deeply in my thesis.