craneium.net

brain matters . . .


Playa Time Lapse Project


Here is a link to the video!

Look at these amazing htHDR versions. (tutorial link)

Raw Image Dump

htHDR image :)

On Saturday, the 6th of August, three co-conspirators and I hiked up to the top of Old Razorback Mountain.  The hike is largely unremarkable, save for the fact that it overlooks the fine playa which serves as a home to burners everywhere. We had made this hike before to check on the Black Rock HAM radio repeater, a service offered by BRARA.  This time we had a different project in mind.  About two months earlier, I had been given my first DSLR camera (see the equipment list a bit further down), and had done several experiments with time-lapse photography around my new home at Langton Labs and during the 4th of Juplaya event.

The confluence of having seen this view of the playa from Old Razorback and my newfound interest in photography got the engineer gears turning in my head.  With the support of several friends, I decided to do a time-lapse video of the playa, bringing the viewer from its pre-Burning Man state, through the event, and past the entire cleanup and GTFO efforts.  This turned out to be a non-trivial, but extremely rewarding, undertaking.

Equipment Involved:

1 Canon T1i DSLR camera
1 Walmart Special $50 car battery
1 SunSaver Duo charge controller
2 LM317-based voltage regulation circuits (soldered board-free and coated in "Liquid Tape")
1 Eye-Fi Pro X2 8 GB Class 6 SDHC wireless flash memory card (EYE-FI-8PC)
1 HQRP AC power adapter and DC coupler kit, compatible with the Canon adapter
1 Powerfilm F15-1200 20 W folding solar panel charger
1 Slik heavy-duty tripod
1 cheap intervalometer


Planning and Setup:

The first thing we had to figure out was power.  In my early tests with the camera, the current draw averaged around 200 mA when taking photos every 10 minutes.  This meant that we had to supply no less than 33 amp-hours per week on the playa.  Wishing to cover the entire two-month span, this could have been accomplished with a small fleet of car batteries.  However, the idea of having to drag all that weight up 1,600 feet above the playa sounded terrible, so I instead opted for a single battery, supported by a backpacking solar panel and an accompanying charge controller.
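For the curious, the back-of-the-envelope power math looks something like the sketch below (the nine-week span is my rounding of "two months", and regulator losses and battery derating are ignored):

# Rough power budget for the camera rig, using the numbers above.
avg_current_a = 0.200            # average draw when shooting every 10 minutes
hours_per_week = 24 * 7

ah_per_week = avg_current_a * hours_per_week     # ~33.6 Ah per week
weeks_on_playa = 9                               # roughly the two-month span
total_ah = ah_per_week * weeks_on_playa          # ~300 Ah with no recharging

print(f"{ah_per_week:.1f} Ah/week, ~{total_ah:.0f} Ah for the full span")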

Next up was bandwidth.  Making a camera fire on a regular interval is pretty simple, but dealing with the gigabytes of generated data was less trivial.  I did not want to compromise the size or quality of the images, but even a 32 GB SD card would top out after a week of exposures at a reasonable rate.  Thankfully, we have friends in the area who generously support a radio-linked, point-to-point wifi connection back to Gerlach, which is mirrored onto the internet.  This, combined with the "bottomless memory" feature that the new EyeFi SD cards support, enabled us to keep the data flowing!
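The storage math works the same way; the per-image size and shooting interval below are illustrative assumptions, not measured values:

# Rough storage budget (illustrative numbers only).
mb_per_image = 15                # assumed size of a full-quality image, in MB
interval_minutes = 5             # assumed shooting interval
card_gb = 32

images_per_day = 24 * 60 / interval_minutes
gb_per_day = images_per_day * mb_per_image / 1024
days_until_full = card_gb / gb_per_day

print(f"{gb_per_day:.1f} GB/day -> card full in ~{days_until_full:.1f} days")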

After stringing all of the above equipment together in a logical fashion, we set up the repeater to turn on its wifi for several hours each day and tested everything.  Then we went home!

 

Annoying Reality:

EyeFi cards come with an extremely annoying, totally undocumented feature: they require a hardware reboot (or several) to delete photos.  While we had planned to run this without any support, I (and several friends) ended up taking several trips up that way to reboot the thing!  Be warned: EyeFi cards are not really optimal for this!  Some people have had luck with CHDK and EyeFi, but sadly the T1i is not scriptable :(

 

Photo Editing:

To go from 8,000 x 12-megapixel images (22 GB of data) to the movie you see here, I (Peretz) used:

1) Adobe Photoshop Lightroom - for batch photo editing
2) A time-lapse settings export plugin -- slightly modified from one found here http://www.pixiq.com/article/lightroom-timelapse-presets-now-updated-to-version-3 - to export two crop versions (zoomed in and widest setting) of the full video at ~30 fps.
3) iMovie - to add text, transition between the wide and zoomed-in views, tweak time (accelerating nights in the beginning and slowing time for the burns), and add credits.
4) A 6-core 3.5 GHz machine with 24 GB of RAM and an SSD, which made all of this go faster.

***

Now in more detail.  (By the way, I am assuming you are familiar with the basic features and views of Lightroom.)

1) In Lightroom, I created a new catalog and imported all the image files. I set the cache size to 10 GB in Lightroom's preferences to make editing go much faster after a long import.  When importing the files, I set "Render Previews" to 1:1.  This automatically caches previews of the photos at every zoom level and accelerates flipping through them later.

Since 8000 is too many photos to edit individually, I used Lightroom's batch editing features on groups of photos.

The first thing to do is to apply a crop to all of the photos.  To select the crop constraints, consider your export medium.  If it's YouTube, you want to manually select a 16:9 (widescreen) aspect ratio for your crop box.  Also, be sure not to crop below the resolution you intend to release at (whether that's 1080, 720, etc.).  Next, select all of your photos in library/grid view.  Go to develop view on one.  Apply the crop.  Now click the "Sync" button on the bottom right (if it's not there, go back to grid view, select all, and return) and select only "Crop".  This is the basic idea you will repeat on various groupings of photos, with various settings being synchronized.

Within the Library view, I used the Metadata Filter to group photos by exposure settings.  In our specific case, only the exposure time was automatically determined by the camera (the other settings, f-stop and ISO, were fixed).  So, for example, I could select all of the 30 s exposures and give them the keyword tag "night", tag all of the exposures faster than 1/50 as "day", and tag the rest as "medium".
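(As an aside, the same day/medium/night bucketing can be done outside of Lightroom.  Here is a minimal sketch, assuming the exifread Python package and a hypothetical playa_frames folder; the thresholds mirror the groupings above.)

# Bucket time-lapse frames by exposure time: 30 s = night, faster than 1/50 s = day.
import glob
import exifread

def exposure_seconds(path):
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    value = str(tags["EXIF ExposureTime"])       # e.g. "30" or "1/50"
    if "/" in value:
        num, den = value.split("/")
        return float(num) / float(den)
    return float(value)

buckets = {"night": [], "day": [], "medium": []}
for path in glob.glob("playa_frames/*.JPG"):     # hypothetical folder name
    t = exposure_seconds(path)
    if t >= 30:
        buckets["night"].append(path)
    elif t < 1 / 50:
        buckets["day"].append(path)
    else:
        buckets["medium"].append(path)

for name, paths in buckets.items():
    print(name, len(paths))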

For the first round, this grouping into three batches was sufficient.  After selecting all of the photos within one batch, I chose one representative photo and took it to develop view.  Primarily, for the day shots, I got rid of the dust spots and aberrations that accumulated over the month of exposure to the elements.  For the night shots, I cleaned up the dead pixels, and whatever else appealed to me aesthetically.  Each time, I applied the edits to the whole group using the "Sync" button.

As you can imagine, the transitions between day / mid / night were terrible, introducing jarring discontinuities.  So I went through, selected those transitions, gave them a keyword, and edited them manually, smoothing the transition over a few frames on either side.  I did have to scroll through 8,000 photos, but in grid view at the maximally zoomed-out setting, you can easily spot the discontinuities and manage this rather quickly.  In the end, I probably edited 30 archetypal photos in all, but then shared those edit settings with various (and sometimes overlapping) groups.

2) To export the photos, I went to the "Slideshow" view of Lightroom.  As suggested here http://www.pixiq.com/article/lightroom-timelapse-presets-now-updated-to-version-3 I imported a custom User Template, BUT I DID NOT ALTER THE VIDEO PRESET.  I actually used search and replace in a text editor to modify their 29.97 fps custom setting to 1920x1080 rather than the 1280x720 default.  Then I exported using Lightroom's standard 1080 video export setting.  (This allows you to override Lightroom's seeming 10-frames-per-second limit, though I'm not sure whether this matters in the long run, as the video program will make the final determination of frame rate when you speed up and slow down.)

I did this twice for two different crops, because I intended to switch between them.  (All of the photos and intermediate steps are available for download from the Dropbox.)  This took a long time: up until that point, Lightroom had only recorded changes and edits as metadata, and only during this export did it actually render and downsample the pictures.

3) I imported both videos as events into iMovie.  I'm assuming you don't need any how-tos from here, since the application is beginner friendly (and by that, I mean, to me).  I was going to fire up Adobe Premiere, but iMovie worked just fine.

4) Then I emailed my friend Sharps and asked him to compose a track!

Feel free to ask questions.  Think of this as a living document!

 

 

Kudos List:

Post-Processing : Peretz Partensky
Sound Composition : Sharps / Saedos Records (http://sharpsbeats.com)
Hardware Providers: Todd Huffman and "Safety" Phil Lapsley

Hike Team:

Ted Blackman, Galit Sorokin, Giggity, Ryan "Flophouse" Matthews, Todd Huffman, Cody Daniel

Special Thanks:

BRARA Ham radio group (http://blog.cq-blackrock.org/)
Ranger Keeper (bandwidth down from the mountain)
Dropbox (for hosting raw images)
Langton Labs
Hackpad (https://hackpad.com)

 

 

Last Updated on Tuesday, 07 February 2012 01:41
 

The Making of an Effigy (Kinect + Architecture + Fire = Fun!)


I know what you are here for, so let's start with some porno for pyros:

Photo credit Derrick (Donut) Peterson

February 11 was my first introduction to the Burning Flipside community.  Thanks to the suggestion of a good friend (you know who you are!), I found myself huddled by a tiny wood stove in a freezing cold warehouse, staring at some diagrams that looked something like the following:

Credit to Dotti Spearman (Pretend)

I had never been privy to a large build like this before; I had some idea of the amount of work that was in store for us, but not a clear idea of exactly what I had to add.  During the initial planning weeks, I spent a good amount of time talking to Dotti (Pretend), our rock-star architect and DaFT lead, and had a fun idea. . .

In parallel to this project, I had been playing with the Microsoft Kinect, a device designed to sit atop a television set and sense the movement and gestures of Xbox game players.  Right around this time, libfreenect and its supporting Python/Cython code was hot off the presses and extremely glitchy.  I had been prototyping code to map caves in the Austin area, most significantly Airman's Cave, but in my testing it did a reasonable job on convex objects as well.  I suggested that we might be able to use the Kinect to scan an actual model and thus incorporate a true human form into the design.  She was ecstatic, and over the next week, I got my code on . . . hard . . .

  1. I wrangled down the libfreenect Python/Cython drivers as well as the Python wrappers for vlfeat
  2. I wrote code to co-register the depth and image cameras (fundamental matrix estimation)
  3. Coded up a capture routine to record the depth and image camera information at about 15 frames per second and dump it to disk via PyTables (I added post-process compression later; a stripped-down sketch of this step follows the list)
  4. Wrangled the depth camera information into .ply-based meshes (including depth culling, quality pruning, and aberration correction)
  5. Incorporated a SIFT-based keypoint detector (from the vlfeat library) to estimate camera pose changes and transform the output meshes accordingly
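To give a flavor of step 3, here is a stripped-down sketch of that capture loop.  It assumes the libfreenect "sync" Python wrappers and PyTables; the real code also handled timestamps, compression, and error recovery:

# Minimal Kinect capture loop: grab depth + RGB frames and append them to
# extendable HDF5 arrays on disk.  Frame rate and duration are illustrative.
import time
import numpy as np
import freenect
import tables

with tables.open_file("capture.h5", mode="w") as h5:
    depth_arr = h5.create_earray(h5.root, "depth", tables.UInt16Atom(), (0, 480, 640))
    rgb_arr = h5.create_earray(h5.root, "rgb", tables.UInt8Atom(), (0, 480, 640, 3))

    target_fps = 15
    for _ in range(target_fps * 180):            # roughly 3 minutes of capture
        depth, _ts = freenect.sync_get_depth()   # 11-bit depth map as uint16
        rgb, _ts = freenect.sync_get_video()     # 8-bit RGB image
        depth_arr.append(depth[np.newaxis])
        rgb_arr.append(rgb[np.newaxis])
        time.sleep(1.0 / target_fps)

freenect.sync_stop()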

Additionally, I threw together an extremely ghetto steady-camera stand from parts found on the hack shelf at the Austin Hackerspace, including a monster transformer as a counterweight, a metal spoon handle, and a sheet-metal laptop tray.  The lovely and talented KT served as the model for our scan and posed extremely patiently during the whole process.  Here are KT and I (also at the hackerspace):

Photo credit to Dotti (Pretend)

Here is a shot through the eyes of the Kinect (Depth is false colored on the right, and visual image on the left):

Below is the computed mesh that came from that image above on the left:

We did three scans, each taking about 3 minutes and generating ~1.5 GB of uncompressed data.  The second scan turned out the best (I missed a significant section of her back on the third), and the post-processing fun began.  I algorithmically estimated the camera's position through time in each subsequent frame, then transformed and output ~150 .ply mesh files.  From these I selected, hand-cleaned, smoothed, and aligned about 30 using Meshlab, a tool with a much faster global ICP algorithm.  Coloring all the overlapping meshes differently gives a feel for the complexity of the assembly:

(~20 Million polygons when finished)

The meshes were merged and imported into SketchUp.  There they went through a couple of geometric operations, were augmented with hands and feet (features which were below the attainable resolution of the Kinect), and were sliced into ~200 sections per form.  From the SketchUp design, these cross-sections and a half-ton of salvaged plywood were fed into a CNC router (a ShopBot, graciously hosted by Dave Umlas and Marrilee Ratcliff of the epic "Fire of Fires" Temple).

The output was several trucks full of scrap, >100 pounds of sawdust, and ~400 pieces of wood which we came to affectionately refer to as "the lady bits."  The collation, transport, and assembly of these pieces presented a unique horde of challenges.  I cannot emphasize enough the efforts of the more than two dozen people involved in their assembly, but the end effect was simply stunning.  Here you see our charming model next to her wooden embodiment, 5.5 times her size.

Photo credit Derrick (Donut) Peterson

In keeping with the nature of burn events, the true beauty of this structure was brought out in the flames.

Photo credit Derrick (Donut) Peterson

There is really nothing like seeing thousands of hours of human effort invested in the conception, design, and fabrication of something so ephemeral.

Photo credit Derrick (Donut) Peterson

In closing, I want to say thank you to everyone involved.  As all our work was reduced to ash and cinders, I felt more connected to the Austin community than I ever did before.

I have heard the Siren's call of effigy design, and I know it will be my turn soon.  When that time comes, I hope I can create something even half as beautiful and moving as Dotti did for us!  Happy burn, and thanks again!

 

More Links:

Like DaFT on facebook

Watch It Burn! (make sure to select the "HD" option)

Last Updated on Sunday, 05 June 2011 20:22
 

ipython install Gutsy/Hardy


Installing ipython for cloud/grid computing on Intrepid is a little different from the process one must go through for Gutsy/Hardy. Starting from a completely fresh Ubuntu Intrepid install, the following Ubuntu/Debian packages are needed:

$ sudo apt-get install build-essential libssl-dev python-setuptools python-dev

From there we will use "easy_install" to get the relevant Python modules we need to work with.

$  sudo easy_install foolscap nose pexpect pyopenssl sphinx

Then, because I am a bit paranoid, I download the ipython source and run its configure option to make sure that it can find all the dependent libraries.

$ wget http://ipython.scipy.org/dist/ipython-0.9.1.tar.gz
$ tar xfz ipython-0.9.1.tar.gz
$ cd ipython-0.9.1/
$ ./setup.py configure

As mentioned in a previous article, you ought to see something along the lines of:

Twisted: 8.1.0
Foolscap: 0.3.2
OpenSSL: 0.8
sphinx: 0.5.1
pygments: 1.0
nose: 0.10.4
pexpect: 2.1

Now that we know the installer can find everything, I install via easy_install (this makes upgrading easier later as new versions come out):

$ cd ..

$ rm -rf ipython-0.9.1/
$ sudo easy_install ipython

From there you should be up and running!

The following is a good test of whether everything is functioning properly:

$ ipcluster -n 4
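To double-check the engines from Python (this is the 0.9-era IPython.kernel client API; adjust if your version differs), something like the following should report four engine IDs:

# Quick check that the engines started by ipcluster are reachable.
from IPython.kernel import client

mec = client.MultiEngineClient()
print(mec.get_ids())           # expect [0, 1, 2, 3]
mec.execute("import socket")   # run a trivial statement on every engine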
Last Updated on Thursday, 17 December 2009 06:52
 

Solidification in Space


Update!  The Arizona Daily Star featured an article about our experiment!

KVOA also had a news spot.

I was offered the opportunity to help coordinate two solidification experiments onboard the International Space Station as part of my master's research.  (All photos courtesy of NASA.)

The experiments are a collaboration between American and European scientists (MICAST and CETSOL), and all experiments take place on the Materials Science Research Rack (MSRR), part of the Materials Science Lab (MSL).

The experiment itself consists of an aluminum alloy sample which is directionally solidified under carefully controlled conditions.  The focus of the research has nothing to do with developing fantastic materials for space-age applications; in fact, it is quite the opposite.  The alloy under study is an aluminum 7% silicon alloy, much more similar to the material you would find in your engine block than to something on a space shuttle.  Density-driven flow arises in hundreds of engineering situations, but in solidification it is extremely important.  As in most materials, changes in temperature lead to changes in density.  Changes in the density of a liquid lead to flow in the liquid, commonly called convection, a gravity-driven phenomenon.  In something simple like a pot of boiling water with your macaroni noodles swirling around, this is largely inconsequential, but in the solidification of metal, the moving liquid carries heat and alloy components and can fundamentally change the solidification process.  These currents play a large, and poorly understood, role in the introduction of defects and the resulting material properties of the solidified metal.  The absence of gravity on the ISS allows us to develop metallic microstructures in the absence of this flow, better understand the forces and phenomena at work, and ideally help improve materials processing right here on Earth.
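For those who like a number to hang this on: the strength of buoyancy-driven convection is commonly characterized by the Rayleigh number, Ra = g * beta * dT * L^3 / (nu * alpha), where g is gravitational acceleration, beta the thermal expansion coefficient, dT the temperature difference, L a characteristic length, nu the kinematic viscosity, and alpha the thermal diffusivity.  Because Ra scales directly with g, moving the experiment to orbit essentially switches the buoyancy term off and leaves diffusion (plus any residual accelerations) as the dominant transport mechanism.  The solutal analogue of Ra matters here too, since the silicon rejected at the solidification front also changes the liquid density.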

Coming back to the experiment, the sample itself is about the size of a drinking straw (9 x 255 mm) and is encased in an alumina and tantalum casing collectively referred to as the sample cartridge assembly (SCA).

This assembly is housed in a vacuum-sealed stainless steel housing called the sample protection container (SPC), or as the astronauts call them, "Toilet Plungers."

About a day before the sample is processed, it is removed from its container.

Then it is mounted in the low gradient furnace (LGF).

The cartridge is screwed into place, and the entire assembly is closed up.  The furnace chamber is then evacuated and held at a high vacuum for the majority of the day to ensure that the sample and chamber are completely degassed and as a "leak check."

The next part is the segment that I spent roughly the last 1.5 years of my life planning, modeling, and generally stressing out about.  Progressive furnace heaters are slowly turned up to specified temperatures, and the sample is slowly inserted into the hot zone of the furnace.  After the sample reached the deepest part of the furnace, it was held for some time while the PID controllers equilibrated the temperatures.  After that time passed, the sample was slowly withdrawn for a fraction of its length at a low speed (fractions of a millimeter per second); near the middle, the rate was increased dramatically, and the remainder was extracted until solidification was completed.
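To make the pull profile a bit more concrete, here is a toy sketch of a two-rate withdrawal as position versus time.  The rates and lengths are purely illustrative placeholders, not the actual flight parameters:

# Toy two-rate withdrawal profile: withdrawn length of the sample vs. time.
# All numbers are illustrative placeholders, NOT the real experiment values.
slow_rate_mm_s = 0.02         # "fractions of a millimeter per second"
fast_rate_mm_s = 0.2          # the later, dramatically increased rate
slow_length_mm = 50.0         # length pulled at the slow rate
fast_length_mm = 75.0         # remaining length pulled at the fast rate

t_slow = slow_length_mm / slow_rate_mm_s
t_fast = fast_length_mm / fast_rate_mm_s

def position_mm(t):
    """Withdrawn length after t seconds from the start of the pull."""
    if t <= t_slow:
        return slow_rate_mm_s * t
    return slow_length_mm + fast_rate_mm_s * min(t - t_slow, t_fast)

for t in (0, t_slow / 2, t_slow, t_slow + t_fast):
    print(f"t = {t:7.0f} s -> {position_mm(t):6.1f} mm withdrawn")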

Conceptually, this is pretty simple: heat, melt, solidify, and cool, right?  There are a lot of finer details that I will spare you, but basically the objectives of our research were somewhat at odds with the design of the furnace, so I spent quite a bit of time staring at a glowing rectangle, using modeling and simulation to give us the answers we needed regarding the physics of the experiment.

All in all, things went without a hitch or even a hiccup!  Our first sample run began and successfully completed on Tuesday, February 2nd, just before 9 in the evening.  I want to thank my PIs, Dr. Erdmann and Dr. Poirier, Frank Szofran at NASA, and the MSRR/MSL controllers John, Patrick, and Dave who made this all possible!

 

Full Photo Album:

[Gallery: MSL CETSOL and MICAST sample cartridge assemblies, the MSRR rack, an SCA schematic, and video captures from processing.]

Last Updated on Saturday, 22 May 2010 20:31
 

CUDA Textbook


The CUDA textbook chapters found here could have saved me a lot of learning pain.  They are really pretty exceptional compared to everything else I have found.

Last Updated on Monday, 07 December 2009 01:08
 