Web maps, such as Google Maps, normally don’t print well: their resolution is far lower than typical print resolution, and various other unwanted text and elements print along with the map. While the unwanted elements can be cropped out, the only fix for the low resolution is to render a higher-resolution image (or use vectors). Formerly, this required installing GIS software and finding a suitable data source. Print Maps changes that by leveraging Mapbox GL JS and OpenStreetMap data to render print-resolution maps in the browser. After the user selects the map’s size, zoom, location, style, resolution, and output format (PNG or PDF), Mapbox GL JS is configured as if it were running on a very high pixel density display and is used to render the map output. To use Print Maps, visit printmaps.org.
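The high-density-display trick can be sketched in a few lines. This is a minimal illustration of the idea, not Print Maps’ actual code; the function name and the 96 CSS-pixels-per-inch baseline are my assumptions.

```javascript
// Compute the pixel dimensions a browser must render for a given
// physical print size and resolution (e.g. an 8x10 inch map at 300 DPI).
function printPixelSize(widthInches, heightInches, dpi) {
    return {
        width: Math.round(widthInches * dpi),
        height: Math.round(heightInches * dpi)
    };
}

// In the browser, Mapbox GL JS sizes its canvas from
// window.devicePixelRatio, so overriding it before creating the map
// makes the library render at print density (sketch, assuming the
// library reads the value at map creation):
//
//   Object.defineProperty(window, 'devicePixelRatio', {
//       get: function () { return dpi / 96; }  // 96 CSS px/in baseline
//   });
//   var map = new mapboxgl.Map({ container: 'hidden-map', /* ... */ });
//   // Once rendering settles, the canvas can be exported as a PNG.
```

For example, `printPixelSize(8, 10, 300)` gives a 2400 × 3000 pixel render target, roughly ten times the pixel count of a typical on-screen map of the same physical size.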
The site’s source code is available on GitHub, as are the slides from my HopHacks presentation on the project.
Digital light wands, used to paint images in long-exposure photography, have been around for a few years, since the advent of cheap, controllable RGB LEDs. While freely available designs and code exist for such a device, I wasn’t happy with them, so I decided to design and build my own. The control electronics of the design I looked at used an Arduino Mega 2560 with a display-and-control shield as well as other components, which I found far too bulky. Furthermore, the design used a grossly underpowered voltage regulator rated for 1 A with an LED strip that draws upwards of 2.5 A. The LED strip itself, however, is one of the nicest available, with 144 individually controllable WS2812B RGB LEDs on a one-meter strip.
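A back-of-the-envelope check shows why the 1 A regulator is so badly undersized. The per-LED figure below is my assumption (WS2812B parts are commonly quoted at roughly 60 mA with all three channels at full brightness), not a number from the original design files.

```javascript
// Estimate total strip current in amps: LED count times per-LED draw
// (in mA) times a brightness scaling factor, converted from mA to A.
function stripCurrentAmps(ledCount, mAPerLed, brightness) {
    return ledCount * mAPerLed * brightness / 1000;
}

// 144 WS2812B LEDs at full white: far beyond a 1 A regulator.
var fullWhite = stripCurrentAmps(144, 60, 1.0);  // ≈ 8.6 A
// Even dimmed to ~30% brightness, the draw is around 2.6 A,
// consistent with the "upwards of 2.5 A" figure above.
var dimmed = stripCurrentAmps(144, 60, 0.3);
```

Under these assumptions, even a heavily dimmed strip exceeds the regulator’s rating by more than a factor of two.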
Two years ago, I took a large set of photos at the George Peabody Library. Among them were a very wide-angle image looking up at the skylight and an image looking down; the latter was taken with a point-and-shoot camera suspended between the sixth-floor railings with the help of fishing line and office supplies. Unfortunately, its field of view was much too narrow, leaving the stacks out completely, so the photo fell short of my vision for it and didn’t complement the photo looking up. I needed to use my DSLR and fisheye lens, but the previous method could never have supported them. I recently revisited the idea, however, and finally got the photo I wanted.
Over the past five months, I have continued working on Pannellum and just released version 2.1.0. This release includes a number of improvements: a loading bar for equirectangular panoramas, “inertia,” configuration from Photo Sphere XMP data, more descriptive error messages, and more documentation. The loading bar is something I’ve wanted since the beginning but never had a good way of implementing before, since Image objects don’t fire progress events. To add support for Photo Sphere XMP data, I needed access to raw image data that isn’t available through Image objects either. While researching that, I learned that XMLHttpRequest Level 2 supports Blob objects, which allow the raw image to be transferred; previously this was possible but was a bad idea, as it involved some ugly character-code hacks. This extended functionality is a few years old at this point, but since the name didn’t change, plenty of information sources are out of date. Configuration from Photo Sphere XMP data seems to work, but without an Android device capable of creating such images, I was unable to test a number of corner cases. The improved error messages should resolve one of the most commonly reported issues (why a large panorama doesn’t work), since Pannellum now explicitly says that the image is too big and lists the device’s largest supported image size. Also of note, I renamed a few configuration parameters for consistency; this breaks some existing configurations, but it will be better going forward. Lastly, numerous bugs were fixed.
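The XMLHttpRequest Level 2 approach that enables both the loading bar and the raw XMP access can be sketched as follows. The function name is illustrative, not Pannellum’s actual API; the key points are `responseType = 'blob'` and the `progress` event, which plain Image objects never fire.

```javascript
// Load an image as a Blob with progress reporting, via XMLHttpRequest
// Level 2. The resulting Blob exposes the raw bytes (for reading XMP
// metadata) and can be displayed with URL.createObjectURL() and an
// <img> element.
function loadImageWithProgress(url, onProgress) {
    return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.responseType = 'blob';  // raw bytes; no character-code hacks
        xhr.onprogress = function (e) {
            if (e.lengthComputable) {
                onProgress(e.loaded / e.total);  // fraction for a loading bar
            }
        };
        xhr.onload = function () {
            if (xhr.status === 200) {
                resolve(xhr.response);  // a Blob
            } else {
                reject(new Error('HTTP ' + xhr.status));
            }
        };
        xhr.onerror = reject;
        xhr.send();
    });
}
```

Because `responseType` lives on the same `XMLHttpRequest` object as the original API, searches for it tend to surface the older string-based workarounds, which is the out-of-date-documentation problem mentioned above.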