Physical Design, Bill of Materials and Schematics
Nov 14, 2017 15:38 · 698 words · 4 minutes read
After doing play testing for Presence, I concluded that:
- Trying to make this a meditative experience would be challenging. It will be difficult enough to get the interaction and gaze tracking right, so instead I will create an experience that simply reacts to the user's gaze.
- There should be near-instantaneous, clear visuals showing where the user is gazing, instead of making a column and its surroundings turn blank gradually.
- Having the gaze represented in the y-axis while only rotating columns in the x-axis can be achieved with a spiral design similar to Daniel Rozin’s Twisted Strips.
I came up with this design:
Here the columns can be rotated to create a wavy effect and represent, on both the x and y axes, where the user is gazing.
Dowels & Form Factor
Based on the above design, there would be twenty wooden dowels oriented vertically, with a servo motor connected to each dowel to rotate it 180 degrees. The question is: how big a dowel should be used, and how long should it be?
Costs are a big factor. I went through all of the hardwood round dowels sold on HomeDepot.com that are at least 1” in diameter to get their cost per inch:
| Round Hardwood Dowel | Price | Diameter | Height | Price per Inch |
I would want the design to be an even square. The best tradeoff of size versus cost is the 1-1⁄4”x96” dowel, which can be cut into three 32” pieces. Twenty 1-1⁄4” columns occupy 25” of a 32” width (matching the height), leaving 7” of free space. With 19 gaps between columns and one at each end, that is 21 gaps of 7⁄21”, or 1⁄3” each. A camera would be embedded into the top of the frame. I rendered this in Vectorworks with the design as a texture:
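The layout arithmetic above can be double-checked with a quick sketch (all dimensions come from the text; nothing else is assumed):

```python
# Check the frame layout: 1-1/4" dowels cut from 96" stock.
STOCK_LENGTH = 96                # inches per dowel from the store
PIECES_PER_DOWEL = 3
COLUMN_HEIGHT = STOCK_LENGTH / PIECES_PER_DOWEL   # 32" per column

NUM_COLUMNS = 20
DIAMETER = 1.25                  # 1-1/4" dowel
TARGET_WIDTH = COLUMN_HEIGHT     # square frame: width equals height

free_space = TARGET_WIDTH - NUM_COLUMNS * DIAMETER  # 32 - 25 = 7"
num_gaps = NUM_COLUMNS + 1       # 19 between columns + one at each end
gap = free_space / num_gaps      # 7/21, i.e. 1/3" per gap

print(COLUMN_HEIGHT, free_space, gap)
```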
Here is a wiring schematic. Its format is based largely on examples from Daniel Rozin.
In the current setup, a Linux desktop with a decent GPU would read from the camera and predict the gaze with a neural network. The desktop would communicate over serial with an Arduino, which would in turn send serial instructions to the servo controller.
Ideally this would run on an NVIDIA Jetson TX2, which would replace the desktop and be seamlessly embedded into the installation. In that case, the Jetson could be connected directly to the servo controller via serial, with no Arduino needed.
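A minimal sketch of the sending side of that serial chain. The `channel,angle` line format here is hypothetical — the real wire format depends on the servo controller's protocol — but it shows the shape of the code that would run on the desktop (or Jetson):

```python
def servo_command(channel: int, angle: int) -> bytes:
    """Encode one servo move as a hypothetical 'channel,angle\\n' line.

    The actual byte format depends on the controller's serial protocol;
    this is a placeholder to illustrate the pipeline.
    """
    if not 0 <= channel < 24:
        raise ValueError("the controller has 24 channels")
    if not 0 <= angle <= 180:
        raise ValueError("the servos rotate 0-180 degrees")
    return f"{channel},{angle}\n".encode("ascii")


def pose_commands(angles: list[int]) -> bytes:
    """One angle per column (20 servos), concatenated for a single write."""
    return b"".join(servo_command(ch, a) for ch, a in enumerate(angles))

# Usage sketch with pyserial (port name is an assumption):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 9600) as port:
#       port.write(pose_commands([90] * 20))
```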
Bill of Materials
Seven of those dowels, each cut into three, yield the twenty 32” columns (with one piece to spare). Each column needs a servo at one end and a ball bearing at the other to support the weight, so 20 of each. Here is a preliminary bill of materials:
| Servo City 24-channel servo controller | 1 | $49.99 | $49.99 |
| Plywood for frame | TBD | | |
| White Paint | TBD | | |
| Black Paint | TBD | | |
| Stenciling Material for Spiral | TBD | | |
I’d like to bring the cost down if possible.
Before buying all the materials for the full-size installation, it’s important to know how well the neural network model for predicting gaze performs at a larger physical scale.
I will build a Python script that connects to a webcam, detects gaze in real time using the model, and renders a dot on a larger screen exactly where the user is gazing.
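The core of that test script is just a mapping from the model's gaze prediction to a screen coordinate. A sketch, assuming the model outputs normalized (0..1) coordinates with the origin at the top-left — the capture-and-draw loop is commented because it depends on the model's actual interface:

```python
def gaze_to_pixel(gx: float, gy: float, width: int, height: int) -> tuple:
    """Map a normalized gaze prediction (0..1 on each axis, origin top-left)
    to a pixel coordinate on the display, clamped to the screen bounds."""
    x = min(max(gx, 0.0), 1.0) * (width - 1)
    y = min(max(gy, 0.0), 1.0) * (height - 1)
    return round(x), round(y)

# Sketch of the real-time loop with OpenCV (predict_gaze is the
# hypothetical model call; screen size is an assumption):
#   import cv2
#   cap = cv2.VideoCapture(0)
#   ok, frame = cap.read()
#   gx, gy = predict_gaze(frame)
#   cv2.circle(canvas, gaze_to_pixel(gx, gy, 1920, 1080),
#              12, (255, 255, 255), -1)
```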
If accuracy suffers significantly at larger distances, the camera may have to be placed in the middle of the frame to minimize the distance between it and any point in the installation. This is less ideal, as it would detract from the purity of the design and the columns.