Category Archives: Assignments

10+11 week_Joyce

Project description

I am creating an interactive device that visualizes people’s interaction with online information, using a plant as a metaphor. Through interacting with the installation, people can understand how our interactions influence data, and rethink or imagine our possible relationships with data.

I am doing research on Internet information because I want to explore how humans would react if data appeared to be a living life form, with a lifespan, biological characteristics, and habits, in order to discover a new form of interaction that connects the two and encourages people to rethink their relationship with information.


Bionic design

Because I am redefining information by adding biological characteristics to it and giving it life, I started to look at bionic design and to investigate how design can derive inspiration from nature. Most bionic designs imitate the shapes and body structures found in nature, but some also learn from creatures’ special behaviors and their relationships with one another. Studio PSK designed a series of radio devices called “Parasitic Products”[4] that can interfere with electrical equipment, imitating what parasites do to their hosts in nature. The purpose of

“Parasitic Products” that can interfere with electrical equipment

the design is to highlight the importance of deviance as a way to instigate paradigm shifts in design. It is a good example of prompting people to rethink the function of products: design is not always a heroic discourse, and it often neglects the aggressive, predatory, and ruthless lifestyle typical of most organisms. From this project I discovered a possible bionic form for a product, and it also guided me toward a new way of exploring function: imitating a kind of biological relationship found in nature. I started to imagine possible biological relationships between data and people based on real ones, like parasitism, symbiosis, and saprophytism.

The natural metaphor of online information

Then I started to look at social media information, which has the closest connection with humans. It helps us build our public identity, and it relies on our input and attention to stay alive and active. I was inspired by Social Network Zoology by Chia-Hsuan Chou[5], which compares people’s behavior on social media to animals’ social behavior, such as hunting or courtship, in order to define your social media personality as a certain kind of animal. This work tells its story by creating vivid metaphors for human behavior on

“Social Network Zoology” compares people’s behavior on social media

the internet, which showed me that metaphor is actually a concise storytelling method. In my case, however, the focus is not on people’s social relationships online but on the interaction between people and the media. Based on this project, I decided to use visual metaphor to visualize people’s interaction with social media information. In order to create a narrative based on people’s relationships with social media, I observed what people usually do with social media information, how they react to it, and how these activities impact the information itself.


I am planning to make a plant-like product that represents a social media platform, for example Facebook. The posts you make on Facebook would be printed on leaves, and the plant’s growing speed would depend on the interaction of people passing by. The usual actions we perform on social media, like sharing, liking, and commenting, would each correspond to tearing off part of a post. Without people’s attention or interaction, the social media plant would die, which shows the relationship between social media and people: the former relies on input and attention from people, and the latter need to build their identity with the public. Bringing virtual actions into physical ones also vividly exaggerates the bond between the two.

TransSense: Environmental Interconnectedness

This work is titled TransSense: Environmental Interconnectedness. The exploration for this project started by looking into how technology can influence not only how we interact with fellow humans across the world, but also our environment. So much of our lives is dictated by family, friends, acquaintances, our surroundings, etc. How can we extract environmental data to create a sense of calmness in our lives?


Concept + Goals + Audience:

I started this phase of the project by asking myself:

  • How can information be translated in real time to the body?
  • How can it be sensed through physical output?

Much of how we communicate with the world is through cellular devices. When too much time is spent communicating through this form, it can create a disconnect between us and our physical surroundings. The aim of this project is to better integrate one with the world through a different form, one that doesn’t rely on screens.

So, I began to look at different forms of interaction, as well as what information we can globally receive from spaces. I have been interested in wearables for some time, and one of the hardest things I come across when developing wearable devices or garments is justifying the item when the same information can be derived from a phone or other mobile device. However, I find that certain forms of sensation and particular types of information lend themselves to the platform: the sensation being haptics, and the information being weather.

In order to sense a vibration, one must be touching the device that generates the feedback. Because clothing touches the human body for over 90% of the day, it lends itself to great advantages for intuitive connection. Clothing is also an item that changes based on the weather: if it’s cold, one may wear a sweater; if it rains, a raincoat; if it’s hot, a short-sleeve shirt. With this project I want to push that function even further. Why not allow our clothing to “speak” to us about the weather? In this case, I wanted not only to inform about wind patterns, but to replicate them.

There is also a poetic side to this project. Replicating wind on the body using information from anywhere in the world may allow one to feel more connected with a particular place from one’s past. For instance, by wearing a garment that replicates the live wind patterns of my hometown, I may be able to establish a mental connection with that location and fill a distant void. Of course, this is more poetic in nature, as I mentioned before; even if it is possible, much more testing and consultation with psychological specialists would be needed.

To state my goals clearly: I want to “shrink” the world for those who are away from the places they care about (my audience) by replicating the live wind patterns of that location, in order to provide comfort and a better communicative relationship with an environment that is becoming more cut off by the use of mobile technologies.

I chose this particular audience because I am someone who misses home from time to time, as many do, but can always make it back. I am developing this product as a means for people to cope with being away from a place they care deeply about.


One precedent I researched while working on this project was Rachel Freire’s Embodisuit. The suit acts as a wearable mesh, worn as an undergarment, that parses information the user is interested in. It uses heating, cooling, and haptic feedback to alert the wearer to various forms of changing data, such as the weather and person-to-person communication.

Description of the product:

This product is a wearable haptic garment that allows the user to receive up-to-date weather conditions via the Internet (IoT). The garment is embedded with 10 vibration motors, each connected to an Adafruit Feather M0 WiFi microcontroller. When connected to a nearby accessible WiFi network, the microcontroller pulls weather data by location from the OpenWeatherMap API. It then parses out only the information needed, in this case the wind data, and maps it to the vibration motors. Two pieces of information are mapped: the wind speed controls how much power is given to each motor, and the wind direction determines which motors are activated.
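The parse-and-map step can be sketched as follows. This is a Python stand-in for the Arduino C++ actually running on the Feather; the `wind.speed` and `wind.deg` field names are OpenWeatherMap’s real ones, but the 20 m/s ceiling and 0–255 PWM range here are assumptions for illustration.

```python
def wind_to_pwm(wind_speed_ms, max_speed_ms=20.0, pwm_max=255):
    """Map wind speed (m/s) to a motor PWM duty value, clamped to range."""
    speed = max(0.0, min(wind_speed_ms, max_speed_ms))
    return int(speed / max_speed_ms * pwm_max)

def parse_wind(api_json):
    """Pull only the wind block out of an OpenWeatherMap current-weather response."""
    wind = api_json["wind"]
    return wind["speed"], wind.get("deg", 0)

# Example response fragment and its mapping:
sample = {"wind": {"speed": 5.1, "deg": 240}}
speed, deg = parse_wind(sample)
pwm = wind_to_pwm(speed)
```

The same clamping would be done on the microcontroller before calling `analogWrite` on each motor pin.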

Because this garment is meant to replicate the wind, it was important to obtain a pulse effect across the body, as if a gust of wind were passing over the wearer. This meant that varying intensities of power would have to be given to each motor, and, as stated previously, the direction gathered from the API would dictate where on the body the gust begins. For instance, if the wearer is facing north and the API states the wind is coming from the north, then the front-facing motors are given full strength, the motors on the sides slightly less power, and the motors on the back none. How much power each section receives is modulated by the wind speed taken from the API.
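The gust logic above can be sketched like this, again in Python rather than the garment’s Arduino code. The evenly spaced ten-motor ring and the cosine falloff are my assumptions for illustration, not the author’s exact implementation.

```python
import math

# Hypothetical layout: 10 motors spaced evenly around the torso
# (angle 0 = chest, measured clockwise).
MOTOR_ANGLES = [i * 36 for i in range(10)]

def motor_intensity(motor_angle, wind_from_deg, base_pwm):
    """Scale one motor's PWM by how directly it faces the incoming wind:
    full power facing the wind, tapering at the sides, zero at the back."""
    diff = math.radians((motor_angle - wind_from_deg + 180) % 360 - 180)
    facing = math.cos(diff)   # 1 = into the wind, 0 = side-on, -1 = back
    return int(base_pwm * facing) if facing > 0 else 0

def gust_pattern(wind_from_deg, heading_deg, base_pwm):
    """Subtract the wearer's compass heading so the gust stays world-referenced."""
    relative = (wind_from_deg - heading_deg) % 360
    return [motor_intensity(angle, relative, base_pwm) for angle in MOTOR_ANGLES]
```

With the wearer facing north and a north wind, the chest motor gets full power and the back motors get none, matching the description above.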

To make this a dynamic experience, a magnetometer (digital compass) was initially included. The compass would keep track of the user’s rotation and remap the “pulse” to a different part of the body, as determined by the direction from the data set. I will speak to that further down.

Video Documentation:


Materials List:

Process + Prototypes:

This process started with coding. Before any making began, I wanted to be sure I could accomplish it in code with the electronics I had available. I started by coding the NodeMCU. I found this WiFi development board quick for connecting to the internet and grabbing information through the API; however, it was tedious when it came to programming the pins. The first thing I noticed was that there were no dedicated hardware PWM pins. To create the breeze effect, I felt it vital to use PWM. In addition, the number of pins available as outputs was limited due to the multiplexing of the pins: when WiFi mode was enabled in the code, several of the pins stopped outputting, and if I tried to connect one pin to another, the board would reset.

At that point I decided to use the Adafruit Feather, which was capable of PWM and WiFi connectivity without the multiplexing issue. Once I was able to map the API data to the LEDs I was using for testing, I added a digital compass to keep track of which direction the user was facing, so as to create a more dynamic experience of feeling the motors react to your rotation. However, the compass was prone to many disruptions, electrical interference in particular. When I placed it near my computer, the readings were quite sporadic and didn’t map well to 360-degree movement: readings could jump from 0 degrees to 230 degrees with no values in between. I believe this also has to do with the human body moving faster than the microcontroller can process. In the end, I decided it was best to focus my efforts elsewhere and save the compass for a future iteration.
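One way the sporadic compass readings could be tamed in a future iteration is wrap-aware smoothing plus simple outlier rejection, sketched here in Python (the 45-degree `max_jump` threshold is an invented example value, not something tested on the actual magnetometer):

```python
def accept_reading(prev_deg, raw_deg, max_jump=45):
    """Reject single-sample heading jumps larger than max_jump degrees,
    like the sporadic 0-to-230-degree leaps seen near the computer."""
    diff = abs((raw_deg - prev_deg + 180) % 360 - 180)
    return diff <= max_jump

def smooth_heading(prev_deg, raw_deg, alpha=0.2):
    """Exponentially smooth a compass heading, handling the 359-to-0 wrap
    so the filter never spins the long way around the circle."""
    diff = (raw_deg - prev_deg + 180) % 360 - 180
    return (prev_deg + alpha * diff) % 360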

I then began laying out the circuit and determining the best placement on the body. I tested various fabrics and conductive materials. The Eeontex stretch fabric was not conductive enough, so I switched to a conductive fabric tape, connected the vibration motors and sewable LEDs (the latter mostly for visual demonstration), and protected the circuit from shorts by placing a layer of insulating material on top.

Circuit Diagram:

Poetry, polity and power Instructable

Poetry, polity and power is the culmination of two disparate ideas that have intrigued me this past semester. The first is a new kind of interaction emerging between human beings and machines as technology gets integrated into everyday processes: the common perception of technology or computerized systems as objective is a myth; they embody the values and perspectives of the people who design them. The second is the power of art and poetry, and how they can be used as dynamic tools for resistance.

Poetry, polity and power is an optimistic poetry generator that can be fed biased text (hate speeches, discriminatory policies, misogynistic statements) and removes words to create poetry that is hopeful and empowering. I wanted to create a computerized system that would automatically generate poetry from the source text, without human intervention. I see this project as a conceptual prototype that captures the essence and value inherent in the idea, but needs further iterations to be fully realized.

In its current form, the generator would be more effective if it could respond to different source texts by activating different heating pads depending on which text was fed in. Future iterations include programming a system that can operate on its own; one possible way to do this would be to train machine learning algorithms on many such blackout poetry examples.

The main challenge of this project was working with unfamiliar material that was inconsistent and reacted differently on different days. It taught me the importance of experimentation. Powering the circuit from the wall wart was challenging too, mainly because I found very limited documentation on it.

I loved working on this project though because I realized how simple, basic materials, mechanisms and methods can be used to convey ideas.

Here is the pdf with the final slides.

Here is the link to my final Instructable.

Final Project (Week 12 and 13)_Alyssa

Final: Time’s Up


Concept + Goals: With this project, I sought to create an interactive exhibit that complements the ‘Me Too’ and ‘Time’s Up’ movements. These were my guiding design questions:

Continue reading

Twinkle Stare – Final Presentation Documentation

  • Your presentation
    Online Link: or
    PDF link:
    Pcomp Twinkle Stare_2
  • Concept + Goals.
    My concept was to create an IoT device that reconnects me in New York with my dog in Taiwan, so that I can feel her presence and recreate a moment we share together. My goal was to feel my dog’s presence over a long distance and somehow spend more time with her. This project is very close to me: I decided to build a long-distance device because my dog back in Taiwan is sick with cancer, so I want to build something that will actually work for us.
  • Intended audience.
    This project is mainly for myself, so I can spend more time in my dog’s presence over a long distance. However, I feel this device could also be for other dog owners who are in a long-distance relationship with their dogs and would like to feel more of their presence.
  • Precedents.
    Pillow Talk, by Little Riot, is a device that lets you hear the real-time heartbeat of your loved one over a long distance; it really inspired the sense of presence in my device. SoftBank in Japan also created a series of devices called Personal Innovation Act, Analog Innovation, which connect the older generation to the younger generation by translating the new technology we use into older forms, such as printing your social media updates as newspapers delivered to the mailbox every morning so your grandma can read updates about you.
  • Description of the project.
    After confirming my concept, I began to think about the technical aspects of my project. My device has two parts with different interactions. One part is my dog’s end: her doll, embedded with a pressure sensor she will lie on, and a speaker. The other part is a model of my dog in my room in New York, which has a face-tracking camera, a button, and a pressure sensor built into it. As I searched online for the details of how to make these interactions work, I was introduced to two tools that could help with the long-distance IoT connection. One is MESH, a set of seven block sensors, each with a built-in function such as tilt, LED, button, or motion, which makes prototyping and building Internet of Things projects easy. The other is IFTTT, a free web-based service for creating chains of simple conditional statements, called applets, that connect different applications and services over the internet. Both tools are extremely helpful for my IoT device; however, I wanted all of my interactions to work properly offline first. First, I figured out how to get face detection working with the camera. I used an Arduino to control a servo motor and connected it to OpenCV face detection in Processing, so the servo tracks a face as it moves left and right on the screen. Thankfully, after spending a few days studying open-source code examples online, I got face tracking working with no problem. Next was connecting the pressure sensor as a trigger, so that pressing it opens the face-tracking camera and starts tracking any detected face. This was a part where I was stumped and frustrated, because I could not get the code to work on my own. With some help from my peers, I got one pressure sensor to open the camera and start face tracking when pressed, and another pressure sensor to turn it off.
However, there was another problem I encountered with my code: it could only be run once. If I pressed the pressure sensor to turn it on again, the sensor could no longer read my pressure values, and I had to rerun both the Arduino and Processing sketches for it to work again. After some debugging, I discovered that part of the Arduino code caused the instructions to get stuck in a loop. After modifying my code it worked, though at times it was unstable. Given the time I had, I decided not to make the IoT connection happen but to focus on the interactions and the physical doll of my device. My whole framework for making this IoT device work is displayed in the diagram below; what I’m focusing on is the interactions on the left. Ideally, I would incorporate the MESH sensors and use the IFTTT webhooks service to communicate with my MESH sketch.
  • Video documentation

  • Materials list

    For the face tracking part: 

    Software Required
    Firmware Required
    Hardware Required

    For the physical making of the doll –

    • Fluffy Socks –
    • Polyester stuffing –
    • Sewing Kit – Amazon
    • Process + Prototypes.
      After making the technical part of my device work, I quickly moved on to the physical enclosure. It consists of two parts: the doll that my dog lies on, and the model doll that sits on my desk. I completely underestimated how hard it is to put anything physical together, even something cute and furry like a doll. Somehow, making a doll seemed simple to me; it wasn’t until I made a few attempts that I realized how much practice I needed. I thought of taking apart an actual doll, but I also wanted mine to be customizable, so I decided to make one myself. To start, I used fluffy socks as my main material and stuffed them with polyester fiberfill. I made a koala doll for my dog to lie on; this was easier because it is a small doll with no body. Then I moved on to making the model doll of my dog. With my limited experience, it was hard for me to make a model that looks exactly like her. One of my first versions could not stand up properly, so I made a stand fixed to an acrylic board and put it inside the doll so the model could stand by itself. I then glued the servo onto the board, covered it with polyester fiberfill, and put the sock fabric for the model’s head over it. After getting the body to sit properly on a flat surface, I moved on to the head. I put a web camera into the head and cut a small hole so the camera could peek out through the sock fabric. The web camera, however, did not work well hidden inside: it is very sensitive to the lighting, the distance, and the height at which you stand.
When testing with the web camera inside the head of the doll model, the camera had a hard time detecting faces and would jump between different shadows on the screen, causing spasms of quick movement. Another factor contributing to the unstable camera image was the fur of the sock material: a few fibers sticking out along the edge of the hole I cut disrupted the clarity of the image. Therefore, I made the hard decision to use my computer’s camera instead, to ensure the most stable and accurate face tracking. In the end, I was able to put together a functional model doll of my dog. The face-tracking camera is triggered by pressing on the other doll (when my dog lies on it), and you can turn it off by pressing on the model’s head. You can also press a button, one of the MESH button sensors, to play music through the speaker inside the doll.

Prototypes –


      • Pressure sensor did not work as well as I thought
      • Complexity of the code
      • Webcam did not work well behind the fabric + distance issues
      • Time management
      • Aesthetics: doll with many wires sticking out

      Future Iterations:
      • Refinement on the design/look of the doll with no wires with webcam built inside the doll
      • Making it work over local distance and wireless with bluetooth
  • Circuit diagram.

Continue reading

Final Documentation

Title of this project

Storytelling Soundsystem of “Silence Breaker : One Survivor’s Story”



In the midst of the current #MeToo movement, I would like to focus this design project on domestic abuse in particular. We are facing significant moments for women’s rights against sexual abuse, and there is a need to bring about more positive effects for society than mere attention. I chose domestic abuse against women and children for my design project because sexual abuse in the domestic sphere may be the most hidden and concealed kind.

One of the common myths associated with DA (domestic abuse) is that victims are helpless, passive, and fragile. However, survivors are often strong and use a number of coping strategies to manage their situation. But society’s traditional responses to survivors, such as victim blaming, stigmatization, and pathologizing, create obstacles that keep them from speaking out. Survivors also face risks such as retaliation from perpetrators and the loss of their life foundations. Through this design project, I would like to call on society to change, to the point of taking action.

Continue reading

Next Iteration & Playtesting Plan (Week 12)

Making a fabric speaker

  • Background: After careful consideration of the best way to deliver this story of domestic abuse, I chose to build a sound system with virtual visualization. I created the survivor’s house and character, with camera movement, in Unity. I built four spaces of her house: her bedroom, living room, dining room, and a large room they used as a study. I placed a soundtrack of her testimony in each of the four spaces, along with other ordinary domestic sounds I recorded separately. From this, four video experiences were created.

Continue reading


The video is a demo (or trailer) of my final project, and the images are the final view.

Online Link:

Concept + Goals.

I’m creating an interactive installation embedded with a learning function for teenagers (early adolescents) in the 12–18 age group, to intrigue them with color and painting, and to improve their creativity, imagination, experimental spirit, and cognitive ability during an immersive experience.

Intended audience.

My target users are teenagers between 12 and 18 years old. Teenagers in this age group have more active learning and thinking abilities; they take more positive action toward learning, exploring, and creating new things, and this is a golden age for innovation. I’m also targeting people who are interested in painting, color, and creating things themselves.


How teenagers interact with each other or with another group, and how they play with this installation and figure out how it works, is the main point I consider. Ideally, when teenagers encounter the installation, they are attracted by the ability to input a color, and then surprised when the shapes match that color. The interaction design should be simple to understand, and the tools friendly to move and use.

This project, named HUBO, shifts the perception of coloring from 2D to 3D. It satisfies children’s curiosity, creativity, imagination, and desire to play. Over time the space becomes a creative, colorful scene of furry food, where each piece is the trace of an interaction and experience.

Thinking about my project, I want to encourage teenagers to fill the screen step by step, from blank to colorful. I believe every painting created by children has its own story and special meaning. My project provides a relaxed environment to support their creation.

For children’s reactions, I collected a lot of information at the Our Senses exhibition at the American Museum of Natural History, where I found a project in the Seeing area. The walls of this room were painted with multiple animals in different colors; when the light changes, only the images that absorb that color can be seen. For the interactive part, children instinctively explore using a flashlight. This project inspired me a lot about which tool would be friendly enough to use in my own project, so that teenagers know it can be moved and used, not just displayed.

I also got a reference from the Our Senses exhibition where, as the image shows, a user can play with puzzles and a digital screen gives them feedback on the result of machine learning. People enjoy the process of making the puzzles and interacting with the screen. From what I saw at this installation, people are more passionate and engaged when they can see something react to their input.

“The Color of Smell” is an interactive tool that enables painting with smell. It consists of a selection of smells, synthetic and natural, a smell-brush, and a multitouch tabletop. The project draws different shapes based on the smell you input from objects, and this function really inspired me in how to surprise the user. So I want to classify a color input from the user and give each color range its own specific brush.
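The color-range-to-brush classification could be sketched like this (a Python sketch; the hue bands and brush names are invented placeholders, not the project’s actual mapping):

```python
import colorsys

def brush_for_color(r, g, b):
    """Classify an RGB sample from the color sensor into a hue band,
    each band mapped to a different brush shape (names are placeholders)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue = h * 360
    if s < 0.15 or v < 0.15:
        return "dots"          # near-grey or very dark samples
    if hue < 60 or hue >= 300:
        return "spikes"        # reds and magentas
    if hue < 180:
        return "leaves"        # yellows and greens
    return "waves"             # blues and cyans
```

Working in HSV rather than raw RGB makes the bands robust to brightness, so a dim red object and a bright red object select the same brush.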

“FABRIKA” is an app focused on customized pattern design: you can choose the shape, color, size, transparency, density, and so on. Basically, everyone’s outcome is different.

Description of the project.

This project has three parts: a color sensor, a Wacom digitizer, and a projection on the wall. The installation is set up in a darkroom for a better user experience. When users come into the room, they find some objects placed around the color sensor; they may try those first, then find other colors they need in their surroundings.

set up place arrangement

Generally, when people go into the showroom, they can understand how to use this project. I used a big digital screen to give them a better view of their drawing, and I also played a short trailer containing a simple introduction and the process of using it. When the color samples could not satisfy their demands, they tended to find other objects outside the room.


As the images display, users showed their creativity and passion in choosing colors and drawing on the canvas as the brushes changed.

Feedbacks from the Major Major show

        • Yujie mentioned that I could add a white color with a circle shape to fake the effect of an eraser, and that if I do not want people to use it, I should try to discourage them from using this function.
        • Some people want a copy of their drawing.
        • Some people were confused about when the drawing system restarts.


Two main educational modes are used in many primary schools. Based on a large amount of research, one mode is following the outline of an image and practicing filling in the color; the other is the teacher giving students a topic and then teaching them how to draw it. Neither mode focuses on improving creativity and imagination; both give students too many limitations.

When tweens participate in this project, they can choose colors to control and change the brush patterns, which encourages them to explore the surroundings closest to them.

Further Efforts

  • Scale and technical reform

I think scale is the main limitation of this project: a bigger scale could accommodate more people to enjoy the collaborative artwork. The ideal number of people involved is 4 to 7; they would draw faster and interact better than with the effect I have right now.

  • Automatic save and send-by-email functions

Some people want to save their drawing and get a copy by email, so I will try to achieve this function as my next step.

  • Iteration: moving forward

As I mentioned in the feedback from the Major Major show, I wish I could do more research in the psychological field to get more support, as well as more possibilities, for my concept. Currently, it is still quite a simple tool that people can play with. Moving forward, exploring how painting in the virtual world we are immersed in influences our perception of the real world would be a great iteration.

final presentation slides: