For Playful Communication of Serious Research I have been working with Brett, Hannah and Xuedi to build a kiosk where you can explore how the brain processes vision. The project has been an intense fabrication journey, combining traditional woodworking, CNC routing, mold making, 3D printing, and laser cutting & etching. It’s still being finished, but this is what I’m presenting today.
Frame – Traditional Fabrication, Vinyl Printing
Hannah and I worked together to build the initial framing. She is the master of fabrication, and I learned a lot about building things out of wood. We cut the parts from 2x4s and plywood and then covered the plywood with masonite. The original design had a curve connecting the play area to the upright region. The masonite was unable to curve at the angle required, so we decided to make an angled piece instead.
I spent a significant amount of time hand planing the sides and front, matching the masonite with the existing wood. This was also my first time using a hand planer. It requires a lot of patience. I used the CNC router to cut out angled parts to make the front curve.
I was able to find a way to mitigate dust from the orbital sander with the magic of gaffers tape. I also learned that wood putty and I are not friends. At this point in the process, we determined that we couldn’t afford to use speed rail for the legs. Instead, Xuedi and I built wooden legs from 2x4s.
All of this work is covered with a vinyl sticker.
Play Area – CNC Routing, Laser Cutting & Etching
Once the copy was approved, I cut each of the different brain area shapes out of acrylic with Xuedi. Hannah painted the etched text. I tested multiple hole sizes to make sure that the electronics fit properly.
The play area, where users will plug brain cables in, is CNC routed to give a slight inlay where the acrylic parts will sit. There is also a hole cut so that the electronics can mount into the acrylic. I’m planning on using 3M VHB tape to secure the acrylic.
Brain – Mold
Initially we were planning to use resin to cast the brain. I took a trip to the Compleat Sculptor and learned that it was very possible the resin would melt the thermoplastic mold and the PLA Makerbot-printed visual cortex areas. Instead, we used Smooth-On Encapso K. This is the material used to make fake ice or simulated water in artificial flower vases. It’s very clear, making the visual cortex areas look great!
Brain Cables – 3D Printing & Dying
Initially we were planning to use AMS to print the brain connectors, the part that you hold onto when you plug a cable in. But the prices were too high, and the materials may not have been as strong as we wanted. Instead, we sent the model to Shapeways to be printed in White, Strong and Flexible, a laser-sintered nylon. We colored the final print using a combination of fabric dye and Sharpie.
In Interpretive Exhibition Design we learned that most exhibitions are in development for years. In fact, it might take two years to go from concept to execution. For our final, we developed an exhibition proposal in two weeks, including fabricating a scale model!
Working with Asli and Jasmine, we developed Sandy, the City & Climate Change intended for Gallery 77 at the American Museum of Natural History. Without budget as a concern, we worked to use emerging technologies to provide an immersive and personal experience for visitors.
Taking inspiration from IBM THINK and World’s Fair-style expositions, Sandy, the City & Climate Change is an exciting exploration of the future of New York City and climate change in general.
We are making great progress in making our wind turbine produce more energy.
Unfortunately, we also seem to be really good at making windy days suddenly calm. As we were testing our updates yesterday, we kept placing the turbine in a windy area only to find that the wind stopped suddenly.
The blades have been cut and bent to allow air to still be caught but not get stuck in the blade, slowing everything down. We think. It seems like this new design has been a good improvement.
Kinetic Energy to Electricity
The pulley system was not providing any advantages. In fact, we were seeing less open circuit voltage than when we were using acrylic gears.
We are testing making gears out of Delrin.
While we were testing, people in Washington Square Park and along the Hudson River pathway had very positive responses.
We are making big progress on the construction of our interactive exhibit.
Acrylic Laser Etching
I’ve tested laser etching the acrylic for the brain function areas. Hannah painted them so we could see how the text looks.
Fabrication of the Base Structure
I’m not an expert in this category at all. Hannah gets all the credit for figuring out how to make everything fit together. She was kind enough to teach me about her fabrication procedures.
CNC of Round Corners & Testing for Brain Areas
I used the CNC to cut the rounded corner parts and a jig for Hannah to use. Jigsaws and I don’t get along; that’s a long story for another night. I also cut tests to see how deep we should pocket the plywood to seat the acrylic.
Development of Handle for Plugs
I went to great lengths to obtain a student copy of SolidWorks (which they make incredibly difficult). I spent some time learning how to model and, for whatever reason, I find it easier than Rhino. After we learned that the ITP AMS fund was nearly depleted, we decided to construct a model for fabrication by Shapeways.
My modeling skills still need a bit of finesse. Xuedi ended up building the final model, which looks much better than my phallic version. The version on the top is hers. The version on the bottom is mine. We also decided to switch from magnets to 1/4″ phone connectors, so that accounts for the difference in the bottom.
I finally got the last 3 LED controllers in today. The only part left is the ribbon cable to connect them all together. The circuit is now working. Brett helped discover the need for pull-up resistors to make everything work. We are now using 1/4″ phone connectors. This is a much better solution because they are designed to be plugged in and removed and it’s easy to solder a resistor into the socket.
CNC pocket the plywood play area – Thursday
Laser etch and cut play area shapes – This weekend
Jon and I have been working to make incremental updates to the wind turbine. We’ve built new blades and acquired new materials to test.
During the excursion to Washington Square Park, we spent some time talking about what to do next with the turbine. While we would love to continue to explore uses of the waste air caused by the movement of the subway, it’s hard to test because we don’t want to upset the MTA or the police. Somehow the topic of the marathon came up, and cheering for runners. Jon came up with the idea of using wind to power a cheer for everyday moments. Sometimes when you’re walking down the street, a random compliment or bit of encouragement can make your entire day better. The wind-turbine-powered cheering machine is meant to be a playful and unexpected interaction.
Tyvek fabric – possibly going to experiment with blades made of a cloth like material, after our discussion in class with Anne-Marie
Rubber belt – waiting for McMaster to deliver belts to use instead of gears
Delrin sheet – to cut gears from if we have problems with the pulley system
Rhinoceros, the 3D modeling software many of us are using at ITP, is undoubtedly powerful. Especially on the Mac, it’s surprisingly easy to get started modeling whatever you can imagine. Because Rhino is built around a command-line interface, there is a powerful scripting language and SDK that lets you read and write files directly. Rather than having to learn RhinoScript as a completely new language, you can use Python to script in Rhino!
On the Mac, you’ll need to install a plugin to be able to execute Python code in Rhino. Unfortunately, there doesn’t seem to be a Rhino Python editor for the Mac. However, you can use Komodo (with rhinoscriptsyntax syntax highlighting) or Sublime Text to write Rhino Python code.
Alternatively, there are some tutorials online. But beware! They use the Windows version of Rhino, which could be confusing. It should all work the same, though, with the exception of the in-app Python editor.
A popular use for scripting in Rhino is making generative models. Specifically, Grasshopper can be used to develop generative algorithms with an interface reminiscent of Max. The sad part is:
Grasshopper doesn’t work on the Mac. You’ll have to install Windows to use it. Nonetheless, it can be used to make some interesting art, architecture or products. There are a number of examples available online.
Here are a few examples of things made with Grasshopper and Rhino:
I was able to try out this Spirograph example. It would take some significant time to be able to really utilize the power of this tool.
You can even combine Grasshopper work with Python code.
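For a taste of what this kind of scripting looks like, here is the math behind a spirograph-style curve in plain Python. The radii and pen offset below are arbitrary example values, and the Rhino call shown in the comment is just one way you might use the points; the function itself is pure math, so it runs anywhere.

```python
import math

def spirograph_points(R, r, d, steps=720):
    """Hypotrochoid traced by a pen at distance d from the center
    of a small circle (radius r) rolling inside a large circle
    (radius R). Returns a closed list of (x, y) points."""
    points = []
    for i in range(steps + 1):
        t = 2 * math.pi * i / steps
        k = (R - r) / r
        x = (R - r) * math.cos(t) + d * math.cos(k * t)
        y = (R - r) * math.sin(t) - d * math.sin(k * t)
        points.append((x, y))
    return points

# Inside Rhino you might then draw the curve with something like:
#   import rhinoscriptsyntax as rs
#   rs.AddInterpCurve([(x, y, 0) for x, y in spirograph_points(5, 3, 5)])
```

Changing R, r and d (and the ratio between them) is what produces the endless families of curves these examples show.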
I’m pretty excited about the design of our interactive exhibit for Playful Communication of Serious Research. It’s going to require a lot of time and ingenuity to make an amazing experience in four weeks. Because of the many different methods of fabrication that will be required to make this interactive, I want to focus my Digital Fabrication final working time on building the best exhibition possible. To make that happen, a number of different digital fabrication techniques will need to be combined.
The project is an interactive exhibit about how your brain processes vision. I’m building it with Brett, Hannah and Xuedi. We found a group of neurologists at NYU doing some awesome research and went to work trying to figure out a fun and interesting way to present the research to a broad audience.
Technical drawings of the experience:
Framing & Structural – Traditional Carpentry
The main framing will be primarily made of wood. Initially I thought we would build the framework out of aluminum t-slot rail, but Hannah is going to help me discover the wonders of traditional woodworking instead. General carpentry has never been my strongest skill set, and getting the framing of the kiosk right is critical for the rest of the parts to fit and work correctly. I believe the structure will be made of some type of hardwood, and the backing will be MDF, which we can paint & stick printed vinyl over.
Interaction Area – CNC Routing, Laser Cutting/Etching & Acrylic Painting
Laser Cutting & Etching & Acrylic Painting
The interaction focuses on six visual function areas (currently: faces, motion, depth, color, texture & direction). We are planning to use a different colored acrylic and a different shape for each area. I’m planning to cut the acrylic shape and a pass-through hole for the cable interface connection. I’m adding text to the shapes using a combination of raster etching and vector outlining around the characters. When the laser cutter is working again, I want to test the techniques discussed on the Ponoko blog.
The interaction area baseboard will be made of plywood. I’m planning on using the router to pocket out recesses for each of the laser-cut acrylic functional areas to fit into. I will also drill out holes for the electronic components to reside in. I’m a bit undecided on the exact method of integrating the electronics and a magnet. We are planning on either using resistance (possibly subject to current fluctuations) or infrared pulses (possibly subject to interference from ambient light) to detect the position of each of the brain connection wires. The baseboard will either be painted or covered with printed vinyl.
Brain Cable Interaction Device – 3D Printing & Embedded Electronics
3D Printed Handhold for Cable
Part of making the interaction feel satisfying is the sensation of plugging in and unplugging brain connection cables from different functional areas. We’ve been experimenting with light pipe, a magical material that is similar to a fiber optic but glows from the sides rather than just the end. At the rear of the installation will be a bright 1W LED that will match the color of the visual area the cable is plugged into. At the end of the cable the user touches, we are designing a grip that will feature a very satisfying magnetic click into place in the interaction board. We are hoping to 3D print the 4 handholds at AMS using the Objet, perhaps with a combination of two materials. Still to be determined: what the design looks like and how it secures the light fiber in its base. It might be neat if it’s hollow, made of two parts and screwed together. I’ve ordered magnets from McMaster to test, and we will develop the model this week.
3D Printed Brain
This one is confusing for me I must admit. Hannah is mostly spearheading this fabrication component. The idea is to combine an existing model with 3D color printed parts of the visual cortex. We need to talk to the folks at AMS about wall thickness and permeability to light if we try to embed LEDs inside.
There are six visual areas featured and 4 brain connectors, resulting in 24 different combinations of cable locations. We can easily detect that a cable is plugged in, but how can we figure out if it’s cable 1 or cable 4? We’ve developed two solutions, but aren’t sure which one will be better. The first is to combine an infrared LED at the light-source end of the light pipe and detect infrared pulses in each of the visual area connection points. When the LEDs arrive, I’ll test this method. The second option is to embed resistors into the handhold and check the resistance at each of the visual area connection points. This technique could be trouble if there are voltage fluctuations. Are there any other methods? RFID? It doesn’t seem cost effective, but if all else fails that might be the way we have to go.
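To sketch the resistor idea concretely: give each handhold a different identifying resistor and read it through a voltage divider at each connection point. The code below simulates the 10-bit ADC reading an Arduino would see and picks the nearest known cable. The resistor values are made-up examples, not our final parts, and a real build would want margins wider than any expected voltage fluctuation.

```python
def adc_reading(r_id, r_fixed=10_000, vcc=5.0, adc_max=1023):
    """Simulated 10-bit ADC reading across the fixed leg of a
    divider: Vcc -> r_id (inside the cable) -> node -> r_fixed -> GND."""
    v_node = vcc * r_fixed / (r_id + r_fixed)
    return round(v_node / vcc * adc_max)

# Hypothetical ID resistors (ohms) for the four cables.
CABLE_RESISTORS = {1: 1_000, 2: 4_700, 3: 10_000, 4: 22_000}

def identify_cable(reading):
    """Return the cable whose expected ADC reading is closest."""
    return min(CABLE_RESISTORS,
               key=lambda c: abs(adc_reading(CABLE_RESISTORS[c]) - reading))
```

With these example values the four cables land at readings roughly 930, 696, 512 and 320, so even a noisy reading snaps to the right cable.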
Panel Graphics – Large Format Printing & Vinyl Printing
Finally, there are the graphics. We are designing accompanying graphic panels, seen here as T1, T2, T3. The ‘How We See’ area is also a graphic, and there may or may not be a vinyl overlay on the plywood in the T4 area. I used to get my charts printed at the ad agency I worked for, but this is a lot more exciting. I’ve seen some type of matte/anti-graffiti coating on graphics at places like Disney. It looks very nice, but I can’t figure out what it’s called. Either way, we plan to get the graphics printed at AMS and may mount them on gator board or another similar material. The T3 area may be a vinyl print, securely attaching it to the platform.
We certainly have our work cut out for us. But this seems like the perfect opportunity to use some serious digital fabrication techniques to build a fun interactive experience.
It’s official. I will be working with Xuedi, Hannah and Brett to build an interactive exhibition based on the research of an awesome group of neurologists at NYU, all in the next four weeks. It might seem impossible to build an exhibit in this time frame, but with our combined fabrication, coding and design skills, we will bring the wonder of how your brain processes and understands what you see.
Members of the curatorial team focused on textures, and they developed ways to process images to see if a specific part of your brain ‘likes’ them. I’m butchering the complexity of the work, but it led to the core idea behind our design: what if you could see how different parts of your brain see things? Even after brainstorming a bunch of different ideas, we remained focused on figuring out a way to show the many different ways you ‘see.’
Brainstorming & Development
Using live video as a springboard, visitors can plug and unplug visual cables to compare how different areas ‘see’ in real time.
When visitors connect cables to the visual processing areas, they see a video of themselves filtered according to that area of the brain. For example, plugging in the color vision area allows the visitor to see the effect of color in what we see. Visitors can plug in to 4 areas simultaneously to compare how they process our sight.
In addition, a 3D model of the brain lights up the areas of the visual cortex we believe are responsible for processing vision.
I’ll be posting photos of the storyboard and our original concepts soon.
We are about to begin fabrication. It’s a big effort, using multiple tools and materials. Also in production: graphics and text panels. And experimentation with interface components.
I can’t help but be excited by the sudden proliferation of immersive attractions with trackless ride systems. This video is from Mystic Manor at Hong Kong Disneyland. It looks pretty awesome. The projections look amazing, and there looks like there is some really fun use of laser projections too.
And then there’s this at SeaWorld Orlando. It also looks like they pulled out all the stops and tried to create a really fun attraction. An interesting contrast can be seen between Mystic Manor and Antarctica: Empire of the Penguins, especially in the control of light and motion.
It looks awesome; however, the high ambient light gives away a lot about the experience. Compare that to Mystic Manor, which is undoubtedly a different type of experience. I imagine the SeaWorld experience is brighter so as not to scare the kids. It just also makes it feel a bit flat. Still fun and well done. I wish the exit theme song lasted through the entire exit of the vehicles.
A year delayed, the FriedrichLink app has finally been released. It sucks. It’s really a jQuery Mobile website in an app wrapper (bad) and it doesn’t work. I’ve yet to be able to activate the AC unit from anywhere, even home. It doesn’t tell you if a command was received, and it forces you to unlock the orientation of your phone. Even better, when you do unlock it, the app logs you out, often not remembering your user ID and password.
How do companies get away with this stuff? API. Want.
Tomorrow and Monday (Sunday December 16th 2-6 & Monday December 17th 4-8) I’ll be showing my Change You Can Believe In prototype coin collector at the ITP Winter Show. It shows you the worth of your change in the value of items & causes you care about. Come see me!
Making your own versus buying something already made… the conundrum.
Our apartment here in New York has one of those newfangled RFID entry systems. We don’t have a door man, so you wave your RFID tag in front of a reader at the door and it automagically unlocks. It’s all fine and good unless you have someone visit. You can make copies of your apartment key, but it’s pretty difficult to clone RFID tags.
The problem: unless your visitor wants to spend their entire time with you or you want to give your RFID tag up… someone will get locked out.
That got me thinking, how else could we let people in to the building?
The Possible Solutions
clone the RFID tag
intercept the in-apartment unlock button to remotely unlock
adopt a dog, train it to open the apartment door, walk downstairs and open the building door
Cloning the RFID tag seems to require effort and hardware beyond what I want to get involved with, and a dog doesn’t seem like the right thing to adopt right before grad school. Besides, I have Stanley!
A Google search shows other people have built similar systems. Essentially, you send a text message, magic happens in the series of tubes, and the door unlocks. I built a working prototype today.
My system would use an XBee at the button (it’s far from the internet box) to activate a transistor, which would activate a relay, effectively ‘pressing’ the button. This part works just fine, with a few technicalities. The complexity isn’t so much the hardware as the software that ties it all together.
You need a service to receive text messages, an intermediary web server to translate what the messages mean and send the command to the Arduino (it looks like you might be able to do this within Twilio), the Arduino plugged into ethernet to get the command with an XBee, and the XBee at the button. All in all, while I think I could eventually make this work, it would take a really long time.
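The middle "translate the message" step is small enough to sketch. Assuming a Twilio-style webhook that POSTs urlencoded `From` and `Body` fields, something like this (the whitelist numbers and the `OPEN` command name are made up for illustration) could decide whether to relay an unlock command to the Arduino:

```python
from urllib.parse import parse_qs

# Hypothetical whitelist of phone numbers allowed to unlock the door.
ALLOWED = {"+12125551234", "+12125555678"}

def handle_webhook(raw_form_body):
    """Parse a Twilio-style urlencoded POST body and return the
    command to forward to the Arduino, or None to ignore it."""
    form = parse_qs(raw_form_body)
    sender = form.get("From", [""])[0]
    text = form.get("Body", [""])[0].strip().lower()
    if sender in ALLOWED and text == "unlock":
        return "OPEN"  # the Arduino would pulse the relay on this
    return None
```

The hard parts this leaves out are exactly the ones that take time: hosting the server, securing it, and getting the command over ethernet and XBee to the button.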
Enter, Lockitron. It’s a hardware + service that came up during some deep Google searching.
It’s a hardware & software solution that does basically what I want to do. They even have a hardware device that has relays, just like what I was building. Sweet. And after chatting with the folks who created Lockitron it looks like I might be able to connect some contraptions to the other relays. Bubble Machine?
In the end, it seems like this will be the way to go for now. Perhaps after I learn more about the code end of software/hardware services at school, I can interface this with a future creation. [It looks like Lockitron uses Twilio after all!]
The Kinect went on sale today and I had a Best Buy gift card sitting around (since Christmas!) so I decided to pick one up. I’ve discovered a few things:
I have absolutely no idea what I’m doing when it comes to unix commands
Playing with GitHub for past projects totally paid off
It does some funky Windows emulation that I have no idea how it works; perhaps I’ll have to Boot Camp Windows 8 when it comes out?
Apple continues to believe you shouldn’t be trusted with your own computer, making it incredibly frustrating to find and edit files hidden in the bowels of the /usr folder
Kinect is kind of amazing. I can see why so many projects at the ITP show this winter/spring incorporated it. That said, it seems like a bitch to get it to behave, especially in an environment like a show
It’s now 3 AM, I have a cold, poor Stanley is in bed asleep waiting for me to join him… but the time just flew by… I can’t wait to start school.
I still remember my first visit to the new Hayden Planetarium after the construction of the new Rose Center for Earth and Space was finished in 1999. And now I regret not going back to see Passport to the Universe before the show was replaced.
The inaugural show, narrated by Tom Hanks, featured the new, one-of-a-kind Zeiss star projector custom designed for use at the Hayden Planetarium. It also introduced some amazing digital video projections, bass shakers in the seats and laser effects.
“We wanted to give New Yorkers the best sky in the whole world,” said Dr. Neil deGrasse Tyson, director of the planetarium, “because we owe it to them.”
It’s certainly a challenge to make a planetarium relevant and interesting to an increasingly science-phobic, ADD audience. And that’s exactly why there was something magical about this show… the presentation reminded me a lot of something you might see at Epcot at Walt Disney World.
Now living in New York, I was more than a bit excited to return to the American Museum of Natural History and the first thing I did was get tickets for the ‘Space Show.’ Many things about the design of the experience seemed unchanged from back in 2000, but as I watched the new pre-show I started to worry.
To enter the Planetarium, you board glass elevators and travel up into a holding area. This effort for a pre-show reminds me of something you might see at Disney. The simple act of transporting the audience out of the main museum into a mysterious dark room prepares the audience for an epic adventure into space.
I love this. It reminds me of the difference between a free-fall ride at Six Flags, where you walk right up to the ugly machine versus walking into a dilapidated old Hollywood Hotel and sneaking off into a dark and cold boiler room to board an elevator to the 5th dimension. You’re priming the audience to understand and appreciate the story.
Back at the museum, the pre-show has been upgraded to a show controller instead of a DVD player; that part is a win. The pre-show video is another story. Filmed outside the planetarium in the middle of the day, it clashes with the dark, mysterious nature of space and the dark pre-show area. What’s worse, it looks like it was filmed on an iPhone with an amateur crew. I felt the video lacked the epic wonder of space that the loading pre-show video exuded. All this in an age of Vimeo space videos of amazingness:
When it came time to enter the planetarium, I was sad to see that the doors are still manually opened and propped with bright orange door stops. This, I suppose, is more of a personal pet peeve than a travesty. However, a simple set of automatic doors would contribute to both the efficiency of the show and the fit and finish of the overall experience. I honestly thought maybe I should start a Kickstarter to buy the museum a set of automatic doors and a show controller.
Entering the planetarium is just as spectacular as I remembered. Thankfully the folks at the museum bucked the trend of the tipped dome theater, like what was installed at the California Academy of Sciences (another disappointing planetarium show). The Zeiss star projector hides underneath the floor, allowing you to walk quickly to find your seat.
As the current show Journey to the Stars began, I waited patiently for a favorite moment of the old show (that I remember 12 years later) when, in a burst of carbon dioxide or liquid nitrogen, the star projector dramatically rises from the floor. But that never happened. You see, the new show doesn’t use the Zeiss star projector at all.
How did we go from taking the hard way to this new star projector free show?
But in housing the Zeiss in its new dome, “We’ve gone against fashion and taken the harder way,” Ms. Futter said, “to yield a better presentation of the cosmos.” The museum rejected the recent worldwide trend among planetariums to seat visitors in one direction facing a tipped dome that doubles as a revenue-enhancing Imax screen, capable of attracting audiences to 3-D movies.
I know it must be insanely expensive to operate and maintain the star projector and elevator unit. But that’s what made the Hayden Planetarium unique. At this point, it’s no better than a tilted dome show for half the price (or free, as in the case of the California Academy of Sciences).
I don’t even want to get into the copywriting of the current show. Whoopi Goldberg’s inflection and tone sound like a mix of under-education and disinterest. Why do we continue to dumb down science? Apparently the world of Yelp disagrees with me; however, one reviewer seems to agree…
I’m not sure what’s going on at the American Museum of Natural History. But it seems like a shame to go through all the trouble of having Zeiss reengineer the Milky Way so that it can be seen on one Tuesday a month.
It looks like Stanley’s and my move to New York is nearing completion. The piano arrived today, the furniture from CB2 came last week and I finally finished the desk. You see, space seemed tight back in Santa Monica, so transitioning into 450 or so square feet has been kind of rough. We hardly brought any furniture and did a pretty good job at limiting the other items too. Still, we quickly ran out of space.
Here begins the adventure to find the perfect second room desk/storage combination. We had looked around at all the regular places and couldn’t really find anything that provided a good workspace & lots of storage & wasn’t super wide. The room is an odd shape to begin with, so things like EXPEDIT were just too damn wide. Then through the magic of the IKEA playhouse/maze/showroom, we met BRODER.
You see, BRODER is a storage system designed for garages and outdoor places. It’s made of metal and looks very industrial. And it’s relatively inexpensive. I think I’m pretty well versed in IKEA, and we broke a cardinal rule: ‘buy it when you see it.’ In a day, a single part of this magical & mythical storage system disappeared from inventory. We needed three BRODER posts, $10 each, and IKEA Brooklyn now had only 1. And the winning IKEA inventory system indicated it would be more likely that Mississippi would pass gay marriage before more would be in stock.
Well okay, easy enough, we can just get a post at another IKEA. Right? Right? WRONG. IKEA’s online stock system (designed by monkeys deemed too slow for medical testing) indicated the posts were gone from nearly every store within driving distance. Yet IKEA Pittsburgh has 165 of them. The same system said IKEA Paramus might have 7 or so. And no, IKEA does not ship between stores, and no, they wouldn’t ship the $10 part directly to me.
Have you ever tried calling an IKEA? The phone system makes it nearly impossible to get a human. No wait, it is impossible (unless you know the magic number). So I broke IKEA cardinal rule #2: ask for a Physical Stock Check before you rent a Zipcar and drive there.
When I got to IKEA Paramus, all 7 of the posts had either been picked up and misplaced by bad shoppers or lost in the warehouse. Ugh. But I got the magic number. What secret number, you ask? The extension for the Self Serve Warehouse. This magic number lets you connect directly to a real human, the folks who can check to see if things are really where they are supposed to be.
As I incessantly checked other IKEAs for the part, a magical 18 of them appeared at IKEA New Haven. Yes, the IKEA in Connecticut. See, I haven’t been back to Connecticut since I lived there; there is no reason to ever go to Connecticut. But I needed this $10 post. I learned that IKEA New Haven’s phone system is similar to Paramus’s and was able to do a Physical Stock Check. 17 of the posts were waiting for me.
Another Zipcar (er, the Hertz equivalent) and two and a half hours later, I was at IKEA New Haven. The posts were really there! Yay! So I picked up everything, almost. This IKEA was missing 2 of the T-foot pieces I needed, and the countertop didn’t fit in the car.
Nearly defeated, another Zipcar brought Stanley & me to IKEA Brooklyn, where I finally got the last two missing parts. Joy! A short trip to Home Depot for additional tools and we were set. Magical.
You can check out the video time lapse of the installation. Although there were a few minor setbacks (and a bent jigsaw blade, reminding me that as butch as I want to be, I’m not designed to use power tools unsupervised), it looks amazing & has loads of storage. There is even room for Stanley’s LEGOs!
Now I’ll have lots of space to work on projects & I’m one step closer to the start of ITP in the fall. Can’t wait!
I cross-posted on Instagram & Facebook, so you’ve likely already seen this… but the Bubble Alarm Clock has been a ‘hit’ with Stanley, and the Instructables write-up was featured on the front page! I seem to be in a bubble mania phase; we went to Toys ‘R’ Us today to look for other bubble toys to exploit.
Kids’ toys today are a sad lot. I was hoping to find a bargain on some random simple electronic toys to take apart but couldn’t find anything. There was a decent selection of bubble toys, and I ended up with the Gazillion Bubble Hurricane. (Yes, that’s really its name; they also have the Bubble Typhoon.) It does indeed produce a gazillion bubbles; however, the drive mechanism for the wands seems to be faulty.
I’m going to have to return it and try again tomorrow. After reading online, it looks like they are not making the machines like they used to; the old version let you screw in the bubble solution as an auto feeder!
This Gazillion Bubble company & Super Miracle Bubbles seem to have a monopoly on the bubble fun market. Who knew.
Anyway, I’m glad to have a finished Arduino project in daily use. I learned how to make a clock, use a transistor, use a relay, use an LCD and use GitHub for code. All in all, not a bad project.
Operation #bubblegun alarm made major progress today. Sketches for controlling the pump, wand wiper via servo and now time + alarm exist. To be combined and improved tomorrow! #arduino #alarmclock #MAKE #sparkfun (Taken with instagram)
This Eye-Fi based ‘internet of things’ camera seems pretty neat. I have a few Eye-Fi cards sitting around. I wonder what I could stalk… the mailman when he opens the mailbox?
Last night I went to Disneyland and went on Star Tours: The Adventures Continue, which impresses me more and more every visit. The queue is great and has lots of stuff to look at (it even uses projectors in a respectable way; I’m thinking about you, Winnie the Pooh), but the ride experience is even better. (I’m also curious how the integration works)
I love that the Rebel spy incorporated into the film is an actual guest onboard the Star Speeder. This just seems like the right way to make things interactive. Not to be derivative, but what if Mara inside Indiana Jones & the Temple of the Forbidden Eye actually could *see* if you looked into her eyes. I remember ducking down and hiding as a kid because I literally believed I would be doomed if I looked into her eyes.
But think about it, what if she singled out the few or many people that looked into her eyes with a laser burst or another effect. There have been so many new special effect technologies in the last few years, Indiana Jones could be even more amazing!
The installation of the XBee + Arduino powered garage door sensor was a success! This was my first time working with XBee, and I’m pretty happy about the way things came out.
The problem: because of the design of the house, it’s difficult to visually check if both garage doors are securely closed.
Although it might be a bit of overkill, this was an opportunity to figure out how to use XBee & install a sensor system.
My solution: Private twitter account that relays the status of both doors (open or closed).
In the garage:
Two magnetic proximity sensors mounted at floor level on each of the garage doors.
An XBee radio mounted on a SparkFun regulated breakout board with one digital pin connected to each of the sensors
In the house:
Arduino Uno with an ethernet shield & XBee radio mounted on a SparkFun breakout board.
The Arduino reports the status of the garage doors to a Pachube feed about every minute (the code for the Arduino includes a watchdog routine to reset the board if it gets stuck)
Triggers on Pachube automatically tweet when the door is opened & when it is closed.
In the future: Visual indicator of door status / push notifications instead of twitter / better handling of normal operation (short open/close for access/egress)
All in all, for my first try I think it went pretty well. The system is operating reliably. I wish Pachube made it easier to set more specific triggers. I’m not yet familiar enough with writing/hosting the triggers or adjusting the Arduino behavior to make the system wait longer before notifying me that a door has been open for an unusually long time.
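That "open an unusually long time" check is really just a small state machine over timestamped door readings. Here is a host-side sketch of the logic (the 10-minute threshold and the state names are my own example choices, not part of the installed system):

```python
def long_open_alerts(events, threshold=600):
    """Given (timestamp_seconds, state) samples where state is
    'open' or 'closed', yield the timestamp at which a door has
    been continuously open for at least `threshold` seconds.
    Alerts at most once per open period."""
    opened_at = None   # when the current open period started
    alerted = False    # whether we already alerted for it
    for ts, state in events:
        if state == "open":
            if opened_at is None:
                opened_at = ts
                alerted = False
            elif not alerted and ts - opened_at >= threshold:
                yield ts
                alerted = True
        else:
            opened_at = None
```

Short open/close cycles for access and egress reset the timer and never alert, which is exactly the normal-operation handling the current setup lacks.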
Tom is a master's candidate at NYU's Interactive Telecommunications Program interested in theme parks, engineered experiences & things that change color.
Intern: Exhibits & Media Software Development / American Museum of Natural History
Jr. Planner [Freelance] / TBWA\Media Arts Lab
Jr. Planner / TBWA\Media Arts Lab
Social Media Intern / Brandtailers
Supported the account services department with clients’ social media needs by identifying threats and opportunities in user-generated content surrounding clients’ brands. I also analyzed campaigns and translated anecdotal/qualitative data into recommendations and revisions.
Document Control Contractor / Walt Disney Imagineering
Organized & updated Slide Library and Show Quality Standards Library archives in preparation for relocation of collections.
Lead Strategist - Media Team / Circle Advertising, National Student Advertising Competition Team
I worked with a team of 50 others to develop and implement a media strategy that acknowledges and embraces that media includes all interactions between a brand and its audience. Our work contributed to Chapman University's Circle Advertising team winning 1st place at the NSAC District 15 competition.
Creative Intern: Information Research Center / Walt Disney Imagineering