Recently, I’ve been looking at various fiducial markers for computer vision applications. While some of these markers, such as AprilTags, have readily available reference implementations, others have no published code. One of the markers I was looking at, Pi-Tags, has no reference implementation, but it does have a third-party implementation in the form of a Robot Operating System (ROS) module, cob_fiducials; this is great for robotics applications using ROS, but problematic otherwise. Since I wasn’t interested in using the detector with ROS, I separated it out into a standalone library. I also modified the detector, adding a function that just returns the image pixel coordinates of the detected markers instead of calculating their poses, and adding the ellipse refinement step1 that was mentioned in the Pi-Tag paper.2
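The refinement boils down to fitting an ellipse to the edge points of each detected circle and taking its center as the sub-pixel marker coordinate. The actual code is C++ inside the detector; purely as an illustration of the math (not the library's API), a general conic can be fit to edge points by least squares and the ellipse center read off from the conic coefficients:

```python
import numpy as np

def conic_center(xs, ys):
    """Fit a conic A x^2 + B xy + C y^2 + D x + E y + F = 0 to edge points
    by least squares (smallest singular vector of the design matrix) and
    return the center of the fitted ellipse."""
    M = np.column_stack([xs*xs, xs*ys, ys*ys, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(M)
    A, B, C, D, E, _ = Vt[-1]  # conic coefficients, up to scale
    # The center is where both partial derivatives of the conic vanish:
    # 2A x + B y + D = 0 and B x + 2C y + E = 0.
    den = 4*A*C - B*B
    return (B*E - 2*C*D) / den, (B*D - 2*A*E) / den
```

This plain algebraic fit is only a sketch; careful implementations use an ellipse-specific constrained fit (e.g. Fitzgibbon's method) for robustness on noisy edges, but the center extraction from the conic coefficients works the same way.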
Although the OSD3358 system-in-package, the so-called BeagleBone on a Chip, is a BGA package, it turns out to be surprisingly easy to solder. Using a stencil, solder paste, and a hot plate, it ended up being easier to solder than some QFN and fine-pitch leaded packages I’ve soldered via the same method, since the OSD3358’s ball pitch is wider and the PCB pads have solder resist between them.
Unfortunately, I didn’t take any photos while soldering them.
A few years ago, I wrote about a method of using a modified scanner to scan large documents in segments. While this led to high quality results, it is a very slow and tedious process. More recently, I’ve had a decently large number of maps and documents to digitize but didn’t care so much about the quality and had neither the time nor the patience to scan them using my previous method. Instead, I turned a small conference room into a very large makeshift camera stand. After removing a tile from the drop ceiling, a small wooden beam was placed on the ceiling grid, straddling the hole where the tile was removed. A DSLR camera was then attached to this beam, pointing straight down, with the camera tethered to a computer via USB. A table was placed under the camera. The document to be digitized was placed on the table, and a sheet of glass was placed on top of the document to keep it flat.1 The fluorescent tubes were removed from the closest ceiling light fixtures to remove glare from the glass; the same number of bulbs were removed from each side of the camera and table to keep the lighting consistent. Once everything was set up, documents were quickly photographed, with captures triggered using the computer. Once finished, lens distortion was removed from the images, and the images were cropped and level corrected. While the results weren’t nearly as nice as the scanner-based method, they were good enough for what I needed them for, and it was much, much faster. An example result is below.
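The distortion-removal step amounts to remapping each pixel through the lens's distortion model. A minimal sketch using a one-parameter radial model (the coefficient k1 here is a stand-in; real tools read it from a lens profile for the specific lens and focal length):

```python
import numpy as np

def undistort_radial(img, k1):
    """Remove simple radial (barrel/pincushion) distortion.
    Maps each undistorted output pixel back to its distorted source
    location x_d = x_u * (1 + k1 * r^2), sampling nearest-neighbour.
    k1 is lens-specific; this sketch just takes it as a parameter."""
    h, w = img.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # normalized coordinates relative to the image center
    xn, yn = (x - cx) / w, (y - cy) / w
    r2 = xn*xn + yn*yn
    xs = np.clip((xn * (1 + k1*r2)) * w + cx, 0, w - 1).round().astype(int)
    ys = np.clip((yn * (1 + k1*r2)) * w + cy, 0, h - 1).round().astype(int)
    return img[ys, xs]
```

In practice I'd reach for an existing tool (raw converters and Hugin both ship lens-profile-based correction); the sketch is only meant to show what that correction is doing.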
Unfortunately, I neglected to photograph the camera setup.
Back in September, I took apart the original Amazon Dash Wand, but now there’s a new version, so I took it apart as well. The new product number is PL46MN; the old product number is ORS3YV.1 The original wand was very similar to the first-generation Dash Button, and the second wand bears more than a passing resemblance to the second-generation Dash Button. As with the original wand, the new version is essentially a Dash Button with a barcode scanner and a larger, user-replaceable battery.
Although there are plenty of tools that work well for stabilizing regular video, there aren’t any good ones for stabilizing 360-degree video. As I was unable to find any freely available software that worked, I used various command-line tools from Hugin and FFmpeg. Although this worked, it was extremely slow and had some issues with the horizon drifting.1 I can’t really recommend the approach, but I figured I’d post the technique in case anyone finds it useful. Hopefully Facebook will open source their 360 video stabilization, since it seems much better.
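Whatever the tooling, the core per-frame operation in 360 stabilization is rotating the equirectangular frame to cancel the estimated camera rotation. That remap is simple enough to sketch in numpy (this illustrates the operation itself, not the Hugin/FFmpeg pipeline I used):

```python
import numpy as np

def rotate_equirect(img, R):
    """Rotate an equirectangular panorama by rotation matrix R,
    nearest-neighbour sampling. For each output pixel, compute its
    direction on the sphere, rotate it back into the source frame,
    and look up the corresponding source pixel."""
    h, w = img.shape[:2]
    lon = (np.arange(w) + 0.5) / w * 2*np.pi - np.pi
    lat = np.pi/2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # unit direction vector for every output pixel
    v = np.stack([np.cos(lat)*np.cos(lon),
                  np.cos(lat)*np.sin(lon),
                  np.sin(lat)], axis=-1)
    u = v @ R  # apply R^T to each vector: back into the source frame
    src_lon = np.arctan2(u[..., 1], u[..., 0])
    src_lat = np.arcsin(np.clip(u[..., 2], -1.0, 1.0))
    x = np.floor((src_lon + np.pi) / (2*np.pi) * w).astype(int) % w
    y = np.floor((np.pi/2 - src_lat) / np.pi * h).astype(int)
    return img[np.clip(y, 0, h - 1), x]
```

A stabilizer runs this with a per-frame rotation estimated from feature tracks (plus smoothing so the horizon doesn't drift); production code would also interpolate instead of nearest-neighbour sampling.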