Intro to Physical Comp

POV 3D Display

December 15, 2016
/ / /

FINAL || Instructor: Benedetta Piantella || Collaborators: Michael Simpson
Concept

We began our final project with the premise that we wanted to pursue a project incorporating our shared passion for architectural design. We imagined projects that would allow us to carve out a physical space using movement and gestures. As we delved deeper into what these ideas might look like in practice, we quickly realized that the level of abstraction created by the physical input made the resulting models too difficult for the user to track mentally.

From there, we began thinking about drawing tools that leverage existing architectural graphic techniques to give a user the ability to draw three-dimensional spaces. The device we envisioned allowed the user to create drawings in plan and elevation, which would then be integrated into a 3D projection of the resulting space. However, as we continued to pursue this idea, we began to realize that the limitations of this kind of system outweighed its benefits.

These disparities made it clear to us that the adoption of abstract interfaces necessitates a display system capable of conveying complex volumetric data. To realize any of the ideas we wished to pursue, we would need a volumetric display capable of presenting real-time data that is both highly dynamic and high resolution. After these realizations and some initial research surveying the landscape of volumetric displays, we decided to dedicate our final project to the pursuit of this technology.

Our initial thought was to create a display that would use strips of individually addressable LEDs spun around a center axle. We based this idea on our existing exposure to the technique of Persistence of Vision (PoV), in which a blinking light in motion traces out a light field; with a fast enough rate of motion, this light field fools our eyes into integrating it into a persistent image.

Before starting to design our own PoV system, Michael conducted extensive precedent studies to survey the state of the art. He quickly determined that there had been many attempts to create volumetric displays using PoV. Below is a table summarizing the results of these investigations.

We both found that the Interactive 360 Degree Light Field Display created at the University of Southern California’s ICT Graphics Lab offered the most compelling representation. A goal of that project was to create a display using commercially available devices. Unfortunately, as we began to pursue a similar implementation, we hit a major obstacle in trying to understand how to achieve the high projection frame rate (4,800 Hz) necessary for that device. Their implementation used a standard DLP projector, but with a modified hardware driver in which an FPGA acts as a middle-man between a graphics card’s 24-bit HDMI signal and the binary (1-bit) colorless frames that are actually displayed by the Digital Micromirror Device (DMD) inside all DLP projectors. Despite our understanding the process the ICT Graphics Lab employed, implementing it was not a straightforward task, and implementing the FPGA in particular was outside the scope of our project.
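For context, the 4,800 Hz figure is consistent with the bit-plane decoding described above: assuming the incoming HDMI video runs at 200 frames per second (an assumed figure for illustration, not something established here), unpacking each 24-bit color frame into 24 one-bit DMD bit-planes yields 4,800 binary frames per second:

```python
# Back-of-the-envelope check on the DMD bit-plane frame rate.
# VIDEO_FPS is an assumed input rate, not a figure from our project.
BITS_PER_PIXEL = 24   # one binary (1-bit) DMD frame per bit plane
VIDEO_FPS = 200       # assumed frame rate of the incoming 24-bit signal

binary_fps = VIDEO_FPS * BITS_PER_PIXEL
print(binary_fps)  # 4800
```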

At this point, we decided it was time to settle on a project design and to pursue it with the time we had remaining. Michael had previously worked with LED matrices and proposed that a low-pitch (i.e., higher pixel density) matrix spinning between 900 and 1,300 RPM would be able to create the PoV effect we had been looking for.
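As a rough sanity check on those rotation speeds, they can be translated into a per-slice refresh budget. The 128-slice angular resolution below is a hypothetical figure chosen for illustration, not a spec from our design:

```python
# Timing estimate for a PoV display spinning at the proposed 900-1300 RPM.
# The angular resolution (128 slices/revolution) is an illustrative assumption.

def slice_rate_hz(rpm, angular_slices):
    """How many distinct angular slices must be drawn per second."""
    revs_per_second = rpm / 60.0
    return revs_per_second * angular_slices

for rpm in (900, 1300):
    rate = slice_rate_hz(rpm, 128)
    print(f"{rpm} RPM -> {rate:.0f} slice updates/s "
          f"({1e6 / rate:.0f} us per slice)")
```

At 900 RPM this works out to roughly half a millisecond per slice, which gives a feel for how fast the matrix must be refreshed.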


The final design takes two 32×64 LED matrices with 3 mm pitch (from Adafruit) and chains them together to create a single 64×64 LED matrix. The combined matrix is mounted on a center axle attached to a motor shaft. During the course of our investigation, we realized that a limitation of this sort of design (one circumvented by the USC-ICT project) is that it requires power and data to be provided to a spinning platform. At lower speeds this limitation can trivially be overcome with a slip ring. At speeds greater than 300 RPM, however, things become more difficult due to the design of the cheaper “capsule” style slip rings. While speaking with Ben Light, we were introduced to another style of slip ring known as a through-bore.

Through-Bore Slip Ring vs. Capsule Slip Ring: Through-bore slip rings have a hole in the middle of the ring with set screws intended to lock it into place around a motor’s shaft. This style of slip ring is made specifically for the kind of use our project requires and is rated to operate even at speeds much higher than we intended. It is not a readily available product in most stores and must be purchased directly from one of the manufacturers. As we believed this would be the key to enabling our project to work as intended, we made the decision to purchase one. Unfortunately, despite our being guaranteed next-day delivery, the slip ring arrived one day late, leaving us unable to implement the project before our class. Now, with the ring in hand, we will continue to realize the project and will post again with the results of the fabrication.

Communication with the Displays: Aside from the fabrication, we also needed software to communicate with the displays and to make them interactive. Our goal was to create an installation in which the user could control the pixels on the spinning display by means of a joystick, linear potentiometers for adjusting values like color and blink rate, and a standard potentiometer for adjusting the speed of rotation.

A Raspberry Pi was used to communicate with the displays in C (Python was also an option) via Adafruit’s GFX libraries. For a more interactive experience, however, an Arduino Mega hosted a sketch that enabled control of the displays over serial communication. The sketch was derived from an old (and partially broken) SparkFun sketch meant for smaller displays. After some adjustments, it ran on the Mega with the matrices in a 64×64 chained arrangement, providing a way to interactively manipulate pixel information. This interface allows us to move a cursor around the displays and draw points, lines, rectangles, ellipses, polygons, and/or text on demand. This component of the project is fully working but could not be demonstrated in PoV fashion due to the lack of a fast rotating platform.
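The exact byte format of the SparkFun-derived sketch isn’t reproduced here, but a serial drawing protocol of this kind might frame its commands along these lines (the opcode values and field layout are illustrative assumptions, not the actual protocol):

```python
import struct

# Hypothetical framing for serial drawing commands sent to the matrix
# sketch. Opcodes and field layout are assumptions for illustration only.
CMD_PIXEL = 0x01   # set a single pixel
CMD_LINE = 0x02    # draw a line between two points

def encode_pixel(x, y, r, g, b):
    """Pack a 'set pixel' command: opcode, coordinates, then RGB color."""
    return struct.pack("BBBBBB", CMD_PIXEL, x, y, r, g, b)

def encode_line(x0, y0, x1, y1, r, g, b):
    """Pack a 'draw line' command: opcode, two endpoints, then RGB color."""
    return struct.pack("BBBBBBBB", CMD_LINE, x0, y0, x1, y1, r, g, b)

packet = encode_pixel(10, 20, 255, 0, 128)
print(packet.hex())  # 010a14ff0080
```

On the host side, each packet would then be written to the Arduino’s serial port (e.g., with a serial library), and the sketch would decode the opcode and dispatch to the matching GFX drawing call.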


Images above show successful control of motor speed, successful communication with the two LED matrix displays via a combination of Raspberry Pi and Arduino Mega, and the physical components of the first model, fabricated with the CNC mill and laser cutter.

Read More

POV 3D Display

December 1, 2016
/ / /

FINAL  || Instructor: Benedetta Piantella  ||  Collaborators: Michael Simpson

After taking a few steps back from our original project, we plan on using our final as a study of volumetric displays and their functionality. We struggled with the concept of drawing/sculpting in space without a physical feedback mechanism to inform users of what they were producing. In order to understand how to interact in space, we found it necessary to develop a way to describe our 3D geometry that is accessible to users of these tools (i.e., without the abstract drawing methods of architecture-related design practices).

BOM & Scheduling 



Read More

3D Draw

November 17, 2016
/ / /

FINAL  || Instructor: Benedetta Piantella  ||  Collaborators: Michael Simpson


Using the methods of orthogonal drawing found in traditional architectural practice to define space and form (plan, section, and elevation), we will construct drawn 3D forms by having users draw on the respective XY, XZ, and ZY planes. 2D lines drawn on the orthogonal surfaces will be projected and extruded to intersect with one another, and those intersections will define the 3D shape.
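The projection-intersection idea can be sketched in a few lines: treat the model as a voxel grid and keep a voxel only if all three of its orthogonal projections land inside the corresponding 2D drawing. The grid size and the sample drawings below are illustrative assumptions, not part of the actual tool:

```python
# Minimal sketch of projection-intersection: a voxel (x, y, z) is solid
# only if its projections onto the XY, XZ, and ZY planes all fall inside
# the corresponding 2D drawing. Grid size and drawings are illustrative.
N = 4

# Each drawing is a set of filled 2D cells on one orthogonal plane.
plan_xy = {(x, y) for x in range(N) for y in range(N)}   # full square plan
elev_xz = {(x, z) for x in range(N) for z in range(2)}   # low slab elevation
elev_zy = {(z, y) for z in range(N) for y in range(N)}   # full square side

solid = {(x, y, z)
         for x in range(N) for y in range(N) for z in range(N)
         if (x, y) in plan_xy and (x, z) in elev_xz and (z, y) in elev_zy}

print(len(solid))  # 32 voxels: a 4x4 footprint extruded 2 cells high
```

This is essentially a shape-from-silhouette construction, which also hints at the method’s limitation: only forms expressible as intersections of extrusions can be produced.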

w.01_Parts and Materials

w.02_Experimentation and refinement of the drawing tool in 3D. Fabricate a prototype in cardboard.

w.03_Fabrication and debugging

w.04_Documentation and Presentation

Read More


November 14, 2016
/ / /

FINAL  || Instructor: Benedetta Piantella  ||  Collaborators: Michael Simpson


This project is an investigation into bottom-up, ad hoc architectural design. Using depth-sensing cameras, we will map a user and the environment around them. Our digital interface will allow users to push and dig through physical space to carve and sculpt a virtual inhabitable dwelling. The project will allow users to explore their own needs and requirements for adequate shelter. If given a plot of land, what will one make?

Read More

BeatBOX: Part 3

October 27, 2016
/ / /

Midterm  || Instructor: Benedetta Piantella  ||  Collaborators: Ziyu He

Above: Play interface for BeatBOX. Three distinct sounds and their magnitudes are pushed onto a timeline, denoted by three distinct shapes scaled by magnitude.

The final device pushes notes and tones onto a circulating timeline.

Above: Diagram of movement for sound generation.

Read More


October 20, 2016
/ / /

Midterm || Instructor: Benedetta Piantella || Collaborators: Ziyu He



Above: work by Semiconductor, an immersive sound visualization of Earth’s tectonic plate convergence. Below: work by Ryoji Ikeda for Honda.

Working with multiple inputs from an Arduino, we hope to make a device capable of creating a visual display while simultaneously calling on audio files to distort and compose sound.

Read More

Synthesis Day: Serial Communication

October 10, 2016
/ / /

Synthesis Day ||  Collaborators: Danni Huang


Danni and I built off of a partially completed script. We initially wanted to make an “atomic model” with a changing number of orbiting electrons. We used our FSR (and, later, photosensors) to increase and decrease the radius and the number of electrons. We made an array of orbiting ellipses and set its size to 1,000. The frequency of the orbiting ellipses created an unexpected illusion: it looks like a series of dilating rings composed of many smaller particles.

SuperNova p5.js alpha editor link

Read More

iOS 10

September 29, 2016
/ / /

iOS 10 was released about two weeks ago. With its release came many new features, some useful and well thought out, but many useless, poorly designed, and purely spectacle. Understandably, from a marketing standpoint, Apple is forced to produce an iOS that appears to offer more and is consequently “worthwhile” to download to make its users’ lives “better”. It is a part of their new product line, and the new interface must scream, “Look what we did! We have done so much more for you! Our product is better now!” However, I find that many Apple users and enthusiasts, myself included, are becoming frustrated with simple tasks, such as unlocking their iPhones. After reading an excerpt from Donald Norman’s The Design of Everyday Things and finding myself frustrated with the new iOS, I find myself questioning many of the claims made in Bret Victor’s A Brief Rant on the Future of Interaction Design.

The previous iOS interface had users unlock their iPhones either by lightly touching the home button, which read their Touch ID fingerprint, or by entering their passcode into the numerical keypad, accessible by simply swiping right across the screen. Now, with the new iOS 10, if I gently touch the home button with my thumb, my screen stays the same aside from a slight change in an incredibly small padlock icon at the top-center of my screen, which flips from “locked” to “unlocked”. If I swipe right across my screen, the numerical keypad that was once there has been replaced by a completely new widget screen. To access the numerical keypad, I must touch the home button and wait for my Touch ID to fail a few times before the keypad pops up to save the day. Why the sudden learning curve?! Why is the new software that is supposed to make our lives easier delaying me from catching more Pokemon? My muscle memory can’t keep up with the new changes, and consequently I find myself, two weeks later, still trying to unlock my iPhone in ways that no longer work.

This is where Bret Victor and Donald Norman come into play. Donald Norman stresses the importance of simple, clear, visible, and easily understood design. Bret Victor demands that we look to other forms of interaction and not limit ourselves to the touchscreen interfaces that companies like Apple have forced upon us. The thing is, the touchscreen interface that Bret Victor finds limiting has now become a part of our interactive vernacular. It is ingrained in our muscle memory and sets a precedent for how we interact with the technology placed before us.

The problem with a screen is that there are no buttons, no indicators, and no labels or instructions to tell us what to do, so we approach all screens the way we have learned. When given a screen, we usually try to interact with it the same way we would with our iPhones. When I approach a human-scaled touchscreen monitor, whether at the mall or on the ITP floor, if I want to zoom out, I automatically try to use two points of contact, ‘pinching’ closer together. If that doesn’t work, I step away awkwardly, dumbfounded, before I troubleshoot something else. But why should I troubleshoot something that should be intuitive and easy? Why must I relearn every digital device that I approach? Different phones have different interfaces, and if I am told to interact with a Motorola touchscreen phone or some other brand, I usually find myself lost.

The point of my rant is this: I don’t entirely agree with Bret Victor’s encouragement to rethink and explore alternative modes of interaction. Although I understand the need for researchers to explore and investigate the multiplicity of physical interactivity, as designers, I encourage the industry to think about users through the philosophy of Donald Norman. Simple, straightforward, transparent designs create better, more intuitive physical interactivity for users. We are designing for users, not for our inflated creator-egos. We can fantasize about and be aroused by spectacular and sometimes absurd interactive design, but in the end we should make that spectacle accessible. Accessible, meaning that users do not face a learning curve every time they encounter an interactive product. As in Donald Norman’s examples, a vernacular develops with the utility of objects, and the same goes for interactive designs.

Obviously, there are industry standards and practices, such as copyright bullying, that force designers to make impractical but unique designs. However, I stress that as designers, we should recognize when good designs work and for what reasons. Sometimes that reason is an institutionalized vernacular set by the successful designs that came before.

Read More

What is Physical Interaction?

September 14, 2016
/ / /

Chris Crawford defines interaction as “a cyclic process in which two actors alternately listen, think, and speak.” Using his definition as a starting point, I would suggest that physical interaction is the correspondence between a participant/user, the first actor, and either their physical environment or objects within that environment, the second actor(s). Successful physical interaction happens when the object or environment is capable of doing or providing something that fulfills the participant’s wants or needs to a certain degree. As Chris Crawford might also add, this interaction must be capable of happening as a series of correspondences influencing or modifying the user’s behavior; otherwise, the response is limited to primitive reactions.

I would say that most digital technology is not interactive. Consider a digital camera. The camera may provide a means to capture the user’s environment, but it does not think or respond. The camera may “listen” to the user’s command and take a momentary snapshot of the user’s environment, and it may “speak” by producing the captured image on the screen, but the camera does not process or “think” in order to produce an image. Digital cameras have software and operate on miniature computers to process the light information they receive from the lens into a digital image, but is the processing of this information “thinking”? On the other hand, consider an iPhone, a digital device capable of much more sophisticated physical interactivity. If you tilt an iPhone, the screen will rotate and swivel to an “upright” position, but I believe Chris Crawford would define this as a reaction, not an interaction. Neither the swivel of the screen on an iPhone nor the reproduction of my environment as a digital image on my camera can modify or influence my behavior. There is no interaction in either case. Interactivity lies in the sophistication of the apps used by the user. Physical interaction with a device happens when the user acts and the device requires the user to react or respond in return. To be honest, I don’t know of any apps or devices that are truly interactive and not just responsive. I’m sure there are a few apps out there that are fairly interactive, but I don’t know of any off the top of my head. I could suggest that Google Maps and Pokemon Go are more interactive. I mean, if Google Maps tells me there is traffic and reroutes me, I will listen. If Pokemon Go says there is a Charizard close by, I will go on the hunt. But still, in either case, I have very limited influence as a user.

Right now, I don’t know how to define interaction design…

The Art of Interactive Design by Chris Crawford; A Brief Rant on the Future of Interaction Design

Read More