Figure Caption Color Indicators

Earlier this year, I became aware of a feature in GitHub-flavored Markdown that displays a colored square inline when an HTML color code is surrounded by backticks, e.g., `#1f77b4`. Although I only recently learned of this feature, it dates back to at least 2017 and is similar to a feature Slack has had since at least 2014. When I saw this inline color presentation, I immediately thought of its applicability to figure captions, particularly in academic papers; as a colorblind individual, I can find it challenging to match colors referenced in figure captions to features in the figures themselves, due to difficulties with naming colors. Thus, I added similar annotations to the figure captions in my recently submitted paper, Two-year Cosmology Large Angular Scale Surveyor (CLASS) Observations: A First Detection of Atmospheric Circular Polarization at Q Band:

Fig. 2. Frequency dependence of polarized atmospheric signal at zenith for the CLASS observing site, both for circular polarization ($|V|$, shown in blue) and linear polarization ($\sqrt{Q^2+U^2}$, shown in orange). The light gray bands indicate CLASS observing frequencies, with the lowest frequency band corresponding to the Q-band telescope.

Fig. 5. Example binned azimuth profiles are shown…angle cut. The profile in blue is from a zenith angle of 43.9° and a boresight rotation angle of −45°, the profile in orange is from a zenith angle of 46.7° and a boresight rotation angle of 0°, and the profile in red is from a zenith angle of 52.8° and a boresight rotation angle of +45°.

The first caption refers to a line plot, while the second caption refers to a scatter plot with best-fit lines. These examples, as well as the underlining examples elsewhere in this post, display best in a browser that supports changing the underline thickness via the text-decoration-thickness CSS property; at the time of writing, this includes Firefox 70+ and Safari 12.2+ but not any version of Chrome. However, browser underlining support is still inferior to the underline rendered by LaTeX, so the reader is encouraged to view the figures in the paper.


Discernibility of (Rainbow) Colormaps

Earlier this month, the Turbo rainbow colormap was released and publicized on the Google AI Blog. This colormap attempts to mitigate the banding issues of the existing Jet rainbow colormap while retaining the advantages of its high contrast; note that Turbo is not perceptually uniform, so care should be taken where high accuracy is required, particularly for local differences. What particularly caught my attention was that the author attempted to address the color vision deficiency-related shortcomings of Jet. I am of the opinion that creating a colorblind-friendly rainbow colormap probably isn't possible, since the confusion axes of color vision deficiencies become problematic once hue, instead of lightness, becomes the primary discriminator in a colormap;1 this made me a bit suspicious of the claim and prompted further investigation on my part. While the author's attempt to consider color vision deficiencies in the creation of the colormap is laudable, it was unfortunately based on what I feel is a flawed analysis. Depth images visualized with the colormap were fed into an online color vision deficiency simulator, and the results were evaluated qualitatively by individuals with normal color vision; however, this particular simulator is, as best I can tell, based on an outdated technique from a 1988 paper2 rather than the more recent and accurate approach of Machado et al. (2009).3 Below, I attempt what I feel is a more accurate and quantitative analysis, which shows that Turbo isn't really colorblind-friendly, despite the attempt to make it so.


  1. It probably is possible to create a colorblind-friendly rainbow colormap for a particular type of color vision deficiency. However, creating such a colormap that simultaneously works for multiple types of color vision deficiencies as well as for normal color vision is likely impossible.

  2. G. W. Meyer and D. P. Greenberg, “Color-defective vision and computer graphics displays,” in IEEE Computer Graphics and Applications, vol. 8, no. 5, pp. 28-40, Sept. 1988. doi:10.1109/38.7759  

  3. G. M. Machado, M. M. Oliveira, and L. A. F. Fernandes, “A Physiologically-based Model for Simulation of Color Vision Deficiency,” in IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1291-1298, Nov.-Dec. 2009. doi:10.1109/TVCG.2009.113  
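
As a rough illustration of the kind of quantitative check described above, the following Python sketch simulates a color vision deficiency with the Machado et al. (2009) model as implemented in the colorspacious package and compares local perceptual step sizes along the colormap. The use of matplotlib's "turbo" colormap (present in newer matplotlib releases) and the choice of deficiency type and severity are assumptions for illustration, not the exact analysis performed for this post.

```python
# Sketch of a quantitative colormap check using the Machado et al. (2009) CVD model
# via the colorspacious package. The "turbo" colormap name, the deuteranomaly type,
# and the 100% severity are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from colorspacious import cspace_convert

cmap = plt.get_cmap("turbo")
rgb = cmap(np.linspace(0, 1, 256))[:, :3]  # sample the colormap, drop alpha

# Simulate a severe red-green color vision deficiency.
cvd_space = {"name": "sRGB1+CVD", "cvd_type": "deuteranomaly", "severity": 100}
rgb_cvd = np.clip(cspace_convert(rgb, cvd_space, "sRGB1"), 0, 1)

# Convert to the approximately perceptually uniform CAM02-UCS space and measure
# the distance between neighboring colormap entries; small steps mean locally
# indistinguishable colors for that viewer.
ucs = cspace_convert(rgb, "sRGB1", "CAM02-UCS")
ucs_cvd = cspace_convert(rgb_cvd, "sRGB1", "CAM02-UCS")
step = np.linalg.norm(np.diff(ucs, axis=0), axis=1)
step_cvd = np.linalg.norm(np.diff(ucs_cvd, axis=0), axis=1)

print("Median local step, normal vision: %.3f" % np.median(step))
print("Median local step, deuteranomaly: %.3f" % np.median(step_cvd))
```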


Pannellum 2.5

Pannellum 2.5 has now been released. As with Pannellum 2.4, this was a rather incremental release. The most noteworthy change is that equirectangular panoramas are now automatically split into two textures if they are too big for a given device, which means images up to 8192 px across (covering all consumer panoramic cameras) now have widespread support. There has also been a significant improvement in rendering quality on certain mobile devices (the fragment shaders now use highp precision), and support for partial panoramas has improved. Finally, there is an assortment of more minor improvements and bug fixes. See the changelog for full details. Pannellum also now has a Zenodo DOI (and a specific DOI for each new release).


Preliminary Color Cycle Order Ranking Results

Last month, I presented a preliminary analysis of ranking color sets using responses collected in the Color Cycle Survey. Now, I extend this analysis to look at color ordering within a given color set. For this analysis, the same artificial neural network architecture was used as before, except that batch normalization, with a batch size of 2048, was added after the two Gaussian dropout layers. Determining ordering turned out to be a slightly more difficult problem, in part because the data cannot be augmented as freely, since the ordering, obviously, matters. However, due to the way the survey is structured, with the user picking the best of four potential orderings, there are three pairwise data points per response. The same set of responses was used as before, ignoring the additional responses collected since the previous analysis was performed (there are now ~10k total responses).

To maximize the information gleaned from the survey responses, the network was trained in four steps. The process started with a single network and ended with a conjoined network, as before, except that the single network underwent three stages of training instead of one. First, the color set responses (the responses used in the previous analysis) were used to train the network for 50 epochs, to learn color representations. Next, the ordering responses, augmented with all possible cyclic shifts, were used to train the network for an additional 50 epochs, to learn internal cycle orderings. Then, the non-augmented ordering responses were used to train the network for another 100 epochs, to learn the ideal starting color. Finally, the last layer of the network was replaced, as before, to make a conjoined network, and the new network was trained for a final 100 epochs, again with the non-augmented ordering responses.
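
As a minimal sketch of the cyclic-shift augmentation used in the second training stage, the snippet below expands an ordered color set into all of its cyclic rotations; the array layout and names are illustrative assumptions rather than the survey's actual data format.

```python
# Minimal sketch of cyclic-shift augmentation for ordered color sets.
# The (n_colors, 3) array layout is an assumption for illustration.
import numpy as np

def cyclic_shifts(color_set):
    """Return all cyclic rotations of an ordered color set.

    color_set: array of shape (n_colors, 3), e.g., sRGB values in [0, 1].
    Returns an array of shape (n_colors, n_colors, 3), one rotation per row.
    """
    n = len(color_set)
    return np.stack([np.roll(color_set, -k, axis=0) for k in range(n)])

# A six-color ordering yields six equivalent cycles that differ only in starting color.
example = np.random.rand(6, 3)
print(cyclic_shifts(example).shape)  # (6, 6, 3)
```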


Preliminary Color Cycle Set Ranking Results

Since I launched my color cycle survey in December, it has collected ~9.7k responses across ~800 user sessions. Although the responses are not as numerous as I'd like, there is currently enough data for a preliminary analysis. The data are split between sets of six, eight, and ten colors in ratios of approximately 2:2:1; there are fewer ten-color color set responses because I disabled that portion of the survey months ago, to more quickly record six- and eight-color color set responses. So far, I've focused on analyzing the set ranking of the six-color color sets, for which there are ~4k responses, using artificial neural networks. The gist of the problem is to use the survey's pairwise responses to train a neural network such that it can rank 10k previously generated color sets; these color sets each have a minimum perceptual distance between colors, both with and without color vision deficiency simulations applied.

As inputs with identical structure are being compared, a network architecture that is invariant to input order, i.e., one that produces identical output for inputs (A, B) and (B, A), is desirable. Conjoined neural networks1 satisfy this property; they consist of two identical neural networks with shared weights, the outputs of which are combined to produce a single result. In this case, each network takes a single color set as input and produces a single scalar output, a “score” for the input color set. The two scores are then compared, with the better scoring color set of the input pair chosen as the preferred set; put more concretely, the difference of the two scores is computed and used to calculate binary cross-entropy during network training. The architecture of the network appears in the figure below and contains 2077 trainable parameters.

[Figure: artificial neural network architecture diagram]


  1. Bromley, Jane, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. “Signature verification using a ‘Siamese’ time delay neural network.” In Advances in neural information processing systems, pp. 737-744. 1994. 
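
For readers who want a concrete picture of the conjoined-network setup described above, here is a hedged Keras sketch: a single shared scoring network is applied to both color sets, the difference of the two scalar scores is passed through a sigmoid, and the result is trained with binary cross-entropy on the pairwise responses. The layer sizes and the six-color input shape are assumptions for illustration and do not reproduce the 2077-parameter architecture shown in the figure.

```python
# Hedged Keras sketch of a conjoined (Siamese) ranking network: one shared scoring
# model maps a color set to a scalar score, and the sigmoid of the score difference
# is trained with binary cross-entropy on pairwise preferences. Layer sizes and the
# six-color input shape are illustrative assumptions.
from tensorflow.keras import Model, layers

n_colors, n_channels = 6, 3

# Shared scoring network: identical weights are applied to both color sets.
inp = layers.Input(shape=(n_colors, n_channels))
x = layers.Flatten()(inp)
x = layers.Dense(32, activation="relu")(x)
x = layers.Dense(16, activation="relu")(x)
score = layers.Dense(1)(x)
scorer = Model(inp, score, name="scorer")

# Conjoined network: score both inputs and convert the score difference into the
# probability that color set A is preferred over color set B.
set_a = layers.Input(shape=(n_colors, n_channels), name="set_a")
set_b = layers.Input(shape=(n_colors, n_channels), name="set_b")
diff = layers.Subtract()([scorer(set_a), scorer(set_b)])
prob_a_preferred = layers.Activation("sigmoid")(diff)
conjoined = Model([set_a, set_b], prob_a_preferred)

conjoined.compile(optimizer="adam", loss="binary_crossentropy")
conjoined.summary()
```

Because the two branches share weights, swapping the inputs simply flips the sign of the score difference, which provides the input-order invariance described above.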
