POV 3D Display

December 15, 2016
/ / /

FINAL || Instructor: Benedetta Piantella || Collaborators: Michael Simpson
Concept: We began our final project with the premise that we wanted to pursue a project incorporating our shared passion for architectural design. We imagined projects that would allow us to carve out a physical space using movement and gestures. As we delved deeper into what these ideas might look like in practice, we quickly realized that the level of abstraction created by the physical input made the resulting models too difficult for users to track mentally.

From there, we began thinking about drawing tools that leveraged existing architectural graphic techniques to let a user draw three-dimensional spaces. The device we envisioned would allow the user to create drawings in plan and elevation, which would then be integrated to create a 3D projection of the resulting space. However, as we continued to pursue this idea, we began to realize that the limitations of this kind of system outweighed the benefits.

These disparities made it clear to us that the adoption of abstract interfaces necessitates a display system capable of conveying complex volume data. To realize any of the ideas we wished to pursue, we would need a volumetric display capable of presenting real-time data that is both highly dynamic and high resolution. After these revelations, and some initial research surveying the landscape of volumetric displays, we decided to dedicate our final project to the pursuit of this technology.

Our initial thought was to create a display that would use strips of individually addressable LEDs and spin them around a center axle. We based this idea on our prior exposure to the technique of Persistence of Vision (PoV), in which a blinking light in motion traces out a light field; with a fast enough rate of motion, our eyes integrate that light field into a persistent image.

Before starting to design our own PoV system, Michael conducted extensive precedent studies to survey the state of the art. He quickly determined that there had been many attempts to create volumetric displays using PoV. Below is a table summarizing the results of these investigations.

We both found that the Interactive 360 Degree Light Field Display created at the University of Southern California’s ICT Graphics Lab offered the most compelling representation. A goal of that project was to create a display using commercially available devices. Unfortunately, as we began to pursue a similar implementation, we hit a major obstacle in trying to understand how to achieve the high projection frame rate (4,800 Hz) necessary for that device. Their implementation used a standard DLP projector, but with a modified hardware driver that used an FPGA as a middle-man between a graphics card’s 24-bit HDMI signal and the binary (1-bit) colorless frames which are actually displayed by the Digital Micro-mirror Device (DMD) inside all DLP projectors. Despite understanding the process the ICT Graphics Lab employed, implementation was not a straightforward task, and building the FPGA stage was outside the scope of our project.
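The arithmetic behind that frame rate is worth spelling out. A minimal sketch of our understanding (the 200 Hz input rate is an assumption for illustration, not a figure from their documentation): each 24-bit color frame can be split into 24 one-bit planes, so driving the DMD with raw bit-planes multiplies the effective binary frame rate by 24.

```python
# Sketch of the bit-plane frame-rate arithmetic behind the USC-ICT
# approach, as we understand it. INPUT_FPS is an illustrative assumption.

BITS_PER_PIXEL = 24   # standard 24-bit HDMI color depth
INPUT_FPS = 200       # assumed graphics-card refresh rate

# Each color frame decomposes into one binary frame per bit of depth.
binary_fps = BITS_PER_PIXEL * INPUT_FPS
print(binary_fps)  # 4800 one-bit frames per second
```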

At this point, we decided it was time to settle on a project design and to pursue that design with the remaining time we had. Michael had previously worked with LED matrices and proposed that a low-pitch (i.e., higher pixel density) matrix spinning between 900 and 1300 RPM would be able to create the PoV effect we had been looking for.
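A back-of-envelope calculation shows what those speeds imply for refresh rate. The slice count of 128 below is an illustrative assumption, not a measured parameter of our build: if each revolution is divided into that many angular slices, the matrix must be refreshed once per slice.

```python
# Back-of-envelope PoV timing: refresh rate needed so that each angular
# slice of a revolution gets its own frame. slices_per_rev is an
# illustrative assumption.

def slice_rate_hz(rpm, slices_per_rev=128):
    """Frames per second required at a given rotation speed."""
    revs_per_sec = rpm / 60.0
    return revs_per_sec * slices_per_rev

for rpm in (900, 1300):
    print(rpm, "RPM ->", slice_rate_hz(rpm), "Hz")
```

At 900 RPM this already demands roughly 1,920 updates per second, which is why panel refresh speed mattered so much to us.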


The final design takes two 32×64 LED matrices with 3 mm pitch (from Adafruit) and chains them together to create a single 64×64 LED matrix. The combined matrix is mounted on a center axle which is attached to a motor shaft. During the course of our investigation, we realized that a limitation of this sort of design (one circumvented by the USC-ICT project) is that it requires power and data to be provided to a spinning platform. This limitation can trivially be overcome at lower speeds by use of a slip ring. However, at speeds greater than 300 RPM, things start to become more difficult due to the design of the cheaper “capsule” style slip rings. While speaking to Ben Light, we were introduced to another style of slip ring known as a through-bore.

Through-Bore Slip Ring vs Capsule Slip Ring: Through-bore slip rings have a hole in the middle of the ring, with set screws intended to lock into place around a motor’s shaft. This style of slip ring is specifically intended for the purpose of our project and is rated to operate even at speeds much higher than we intended. It is not a readily available product in most stores and must be purchased directly from one of the manufacturers. As we believed this would be the key to enabling our project to work as intended, we made the decision to purchase one. Unfortunately, despite being guaranteed next-day delivery, the slip ring arrived one day late, leaving us unable to implement the project before our class. But, now with the ring in hand, we will continue to realize the project and will post again with results of the fabrication.

Communication with the Displays: Aside from the fabrication, we also needed software to communicate with the displays and to make them interactive. Our goal was to create an installation where the user would be able to control the pixels on the spinning display by means of a joystick, with linear potentiometers for adjusting values like color and blink rate, and a standard potentiometer for adjusting the speed of rotation.

A Raspberry Pi was used to communicate with the displays using C (Python was also an option) via Adafruit’s GFX libraries. However, for a more interactive experience, an Arduino Mega hosted a sketch that enabled control of the displays over serial communication. The sketch was derived from an old (and partially broken) SparkFun sketch meant for smaller displays. After some adjustments, the sketch was able to run on the Mega with the matrices in a 64×64 chained arrangement, providing a way to interactively manipulate the pixel information. This interface allows us to move a cursor around the displays and draw points, lines, rectangles, ellipses, polygons, and/or text on demand. This component of the project is fully working, but could not be demonstrated in PoV fashion due to the lack of a fast rotating platform.
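To give a flavor of what the serial interface looks like from the host side, here is a hypothetical sketch of the kind of protocol such a setup might speak. The opcodes and byte layout below are our invention for illustration, not the SparkFun original or our exact wire format:

```python
# Hypothetical serial drawing protocol for the matrix sketch.
# Opcodes and layout are illustrative inventions, not the real format.

COMMANDS = {"pixel": 0x01, "line": 0x02, "rect": 0x03, "text": 0x04}

def encode(cmd, *args):
    """Pack a drawing command as: opcode byte, argument count,
    then one unsigned byte per argument (coordinates fit in 0-63)."""
    return bytes([COMMANDS[cmd], len(args)]) + bytes(args)

# Draw a diagonal across the 64x64 chained matrix.
msg = encode("line", 0, 0, 63, 63)
print(msg.hex())
```

In practice each command would be written to the Mega over a serial port (e.g. with pyserial) and decoded byte-by-byte in the Arduino loop.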


Images above show successful control of motor speed, successful communication with two LED matrix displays via a combination of Raspberry Pi and Arduino Mega, and physical components for the first model, fabricated with the CNC mill and laser cutter.

Read More

Work in Progress: Social Media Sustained Ecology

December 7, 2016
/ / /

FINAL || Instructor: Allison Parish || Collaborators: Carlie Zhang

The flocks’ movements are generated by a variety of environmental factors. For both flocks, the individual agents push and pull with cohesion forces and separation forces to avoid collision. A general Perlin noise field helps the flocks seem to move with intent and purpose, and the field vectors also keep the flocks within the bounds of the screen. Where flock one generally moves with the field vectors, flock two’s main objective is to chase and collide with flock one. When food is generated on the screen, however, both flocks are drawn strongly to the food source to sustain their existence. Food is the strongest motivating force in this system, and individual agents will break from the larger group if food is in sight. The number of food particles generated is equal to the number of likes plus the number of characters found within the comments of the post.
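A minimal sketch of how those steering forces compose, in Python for readability (the actual project runs in p5.js, and the weights below are illustrative, with food deliberately the strongest):

```python
# Illustrative composition of the flocking forces described above.
# Weights are made up for the example; food is deliberately strongest.

def sub(a, b):   return (a[0] - b[0], a[1] - b[1])
def add(a, b):   return (a[0] + b[0], a[1] + b[1])
def scale(v, s): return (v[0] * s, v[1] * s)

def mean(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def steering(agent, neighbors, food, w_coh=1.0, w_sep=1.5, w_food=3.0):
    """Cohesion pulls toward the local center, separation pushes away
    from nearby neighbors, and food attraction overpowers both."""
    cohesion   = sub(mean(neighbors), agent)
    separation = mean([sub(agent, n) for n in neighbors])
    attraction = sub(food, agent)
    return add(add(scale(cohesion, w_coh), scale(separation, w_sep)),
               scale(attraction, w_food))

print(steering((0, 0), [(1, 0), (0, 1)], (2, 2)))
```

Each frame, the resulting force vector is added to the agent’s velocity, which is what makes an agent peel away from its flock when food appears nearby.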

Read More

POV 3D Display

December 1, 2016
/ / /

FINAL  || Instructor: Benedetta Piantella  ||  Collaborators: Michael Simpson

After taking a few steps back from our original project, we plan on using our final as a study into volumetric displays and their functionality. We struggled with the concept of drawing/sculpting in space without a physical feedback mechanism to inform users of their production. In order to understand how to interact in space, we found it necessary to develop a way to describe our 3D geometry that is accessible to users of these tools (i.e., without the abstract drawing methods of architectural design practice).

BOM & Scheduling 



Read More

Social Media Sustained Ecology

November 17, 2016
/ / /

FINAL  || Instructor: Allison Parish  ||  Collaborators: Carlie Zhang

Inspired by my childhood digital pet game, Tamagotchi, our project imagines a digital micro-ecosystem which feeds off social media activity: ‘posts’, ‘likes’, and ‘comments’. The system of intelligent-behavior particles, or flocks, swarms and feeds from food-source nodes which emit food particles and sustain the individual agents within the flock. The food nodes are taken from a user’s Instagram posts, and the food particles are representative of the posts and the comments on each post. Without sustained feeding of the flocking agents, the story takes a dark turn: the flock will start to exhibit abnormal behavior as agents within the flock begin to cannibalize each other. The first flock changes into a predatory pack, hunting down remaining agents from the original group until the food source returns.

The first example below shows flocking behavior, with the individual agents informing the group behavior and vice versa. The second example shows independent autonomous particles moving based on a Perlin noise flow field.

Targeting Flocking Behavior p5.js alpha editor link || Perlin Noise FlowField p5.js alpha editor link

Read More

3D Draw

November 17, 2016
/ / /

FINAL  || Instructor: Benedetta Piantella  ||  Collaborators: Michael Simpson


Using the orthographic drawing methods of traditional architectural practice to define space and form (plan, section, and elevation), we will construct drawn 3D forms by having users draw on the respective XY, XZ, and ZY planes. 2D lines drawn on the orthogonal surfaces will be projected and extruded to intersect with one another and inform the 3D shape.
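The projection-and-extrusion idea can be sketched concretely: treat each 2D drawing as a mask of filled cells on one plane, extrude each mask along its missing axis, and take the intersection of the three extrusions as the 3D form. The grid size and example masks below are illustrative, not part of the actual tool:

```python
# Sketch of the extrude-and-intersect idea: three 2D masks (plan and two
# elevations) become a 3D form via boolean intersection. Grid and masks
# are illustrative.

N = 4  # tiny voxel grid for the example

def extrude(mask_2d, axis):
    """Extrude a set of 2D cells into 3D voxels along the given axis."""
    voxels = set()
    for (a, b) in mask_2d:
        for k in range(N):
            if axis == "z":
                voxels.add((a, b, k))   # plan (XY) extruded vertically
            elif axis == "y":
                voxels.add((a, k, b))   # elevation (XZ) extruded in depth
            else:
                voxels.add((k, a, b))   # side elevation (ZY) extruded in X
    return voxels

plan  = {(x, y) for x in range(N) for y in range(N)}       # full square
front = {(x, z) for x in range(N) for z in range(x + 1)}   # stepped profile
side  = {(y, z) for y in range(N) for z in range(N)}       # full square

form = extrude(plan, "z") & extrude(front, "y") & extrude(side, "x")
print(len(form), "voxels in the intersected form")
```

Here the stepped front elevation carves the full block down to a terraced volume, which is exactly the behavior we want the drawn lines to have on the 3D shape.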

w.01_Parts and Materials

w.02_Experimentation and refinement of drawing tool in 3d.  Fabricate prototype in cardboard.

w.03_Fabrication and debugging

w.04_Documentation and Presentation

Read More


November 14, 2016
/ / /

FINAL  || Instructor: Benedetta Piantella  ||  Collaborators: Michael Simpson


This project is an investigation into bottom-up, ad hoc architectural design. Using depth-sensor cameras, we will map a user and the environment around them. Our digital interface will allow users to push and dig through physical space to carve and sculpt a virtual inhabitable dwelling. The project will allow users to explore their own needs and requirements for adequate shelter. If given a plot of land, what will one make?

Read More

Collage Animation Storyboard

November 9, 2016
/ / /

Instructor: Gabe Barcia-Colombo


My story is inspired by a multitude of things. I have been fascinated by Japan’s cultural relationship with nature and nuclear energy. The theme of human destruction and nuclear fallout has been prominent in Japanese art and design since the bombing of Hiroshima during WWII. One consequence of this psychological imprint has been the fantastical creation of gigantic monsters in science fiction movies, grown and mutated as a result of nuclear fallout. I also explored the themes of utopia and unlimited energy in architecture. The Metabolist architects of post-WWII Japan struggled with the idea of designing their country’s utopic future under the daunting possibility of nuclear war. Architects such as Isozaki Arata often portrayed Japan’s urban development within desolate landscapes of nuclear aftermath.

My story will follow a group of travelers who stumble upon a fantastical land of untapped wealth and resources. The nomadic group settles in this land and begins to build a civilization. Unhappy with the destruction of their home, the mythical creatures that inhabited the land before its destruction try to reclaim it.

Read More

Animation Study 1

October 29, 2016
/ / /


This is an animation study looking into how to imply depth and volume in a stop-motion animation. A camera array is a rig of multiple cameras that take simultaneous images of one subject from different perspectives. These images are composed in sequence and imply depth and volume from flat images. I tried to replicate the effects of a camera array with a single camera and a rotating subject. I should have guessed this, but the effect was not as anticipated: the vase of roses looks to be spinning instead of implying depth or volume.

Read More

Stop Motion Animation

October 28, 2016
/ / /

Instructor: Gabe Barcia-Colombo  ||  Collaborators: Daniella Garcia, Koji Kanao


From left to right: Exquisite Corpse Example, Artist Unknown; The Center, David Altmejd; Still Life with Dead Game, Frans Snyders

Inspired by the idea of an exquisite corpse, our group would like to make a sculptural still life with animated subjects which are consuming and penetrating one another. We are considering combining stop-motion animation techniques with bullet-time camera effects or a camera-array setup. We would like both the subject and the platform to be animated simultaneously to imply depth and volume.

Read More

BeatBOX: Part 3

October 27, 2016
/ / /

Midterm  || Instructor: Benedetta Piantella  ||  Collaborators: Ziyu He

Above: Play interface for BeatBOX. Three distinct sounds and their magnitudes are pushed onto a timeline, denoted by three distinct shapes with magnitude.

The final device pushes notes and tones onto a circulating timeline.

Above: Diagram of movement for sound generation.

Read More


October 20, 2016
/ / /

Midterm || Instructor: Benedetta Piantella || Collaborators: Ziyu He



Above: work by Semiconductor, an immersive sound visualizer of Earth’s tectonic-plate convergence. Below: work by Ryoji Ikeda for Honda.

Working with multiple inputs from an Arduino, we hope to make a device that is capable of creating a visual display while simultaneously calling on audio files to distort and compose sound.

Read More

Drag Doc: FISH

October 18, 2016
/ / /

Instructor: Marina Zurkow  ||  Collaborators: Laura Kerry, Lindsey Piscitell 


Drag is an art and a craft of transformation. Drag queens are male performance artists who use makeup, costume, and comedy to create provocative and alluring female caricatures. As the LGBTQ community becomes assimilated into mainstream culture, drag, once a strictly performative subculture within the gay community, is becoming something closer to mainstream entertainment with iconic celebrities. However, with mainstream assimilation, drag is often confused with queer, trans, and gay identities. Our group wanted to create a short documentary using the transformative craft of drag makeup as a lens for analyzing its purpose in personal identity and social ingenuity.

Read More


October 13, 2016
/ / /

This is an initial study into systemic coding. The distance between each pair of nodes is evaluated, and if two nodes are within a certain distance of each other, a connection is made. I am more interested in what these nodes can be, and what rules must be met for a connection to be made. The nodes could be anything: swarm particles, people, or sites. The reason for connection can be more complex and can be used to reveal relationships and visualize data.
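The rule itself is compact. A Python sketch of it (the actual piece runs in p5.js; the example coordinates and threshold are illustrative):

```python
# Connection rule from the constellation study: every pair of nodes
# closer than a threshold gets a connection. Example data is illustrative.

from itertools import combinations
from math import dist

def connections(nodes, threshold):
    """Return all node pairs whose distance is under the threshold."""
    return [(a, b) for a, b in combinations(nodes, 2) if dist(a, b) < threshold]

nodes = [(0, 0), (3, 4), (10, 10)]
print(connections(nodes, 6))  # only the first two nodes are within range
```

Swapping `dist` for any other pairwise predicate, such as shared attributes between people or sites, is exactly the generalization described above.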

Constellation p5.js alpha editor link

Start by clicking in black space…

Read More

Synthesis Day: Serial Communication

October 10, 2016
/ / /

Synthesis Day ||  Collaborators: Danni Huang


Danni and I built off of a partially completed script. We initially wanted to make an “atomic model” with a changing number of orbiting electrons. We used our FSRs (and, later, photosensors) to make the radius and the number of electrons increase and decrease. We made an array of orbiting ellipses and set its size to 1000. The frequency of the orbiting ellipses created an unexpected illusion: it looks like a series of dilating rings composed of many smaller particles.

SuperNova p5.js alpha editor link

Read More