Randomly Generating Color Sets with a Minimum Perceptual Distance

Earlier this year, I released a color cycle picker that enforces a minimum perceptual distance between colors, including color vision deficiency simulations, with the goal of creating a better color cycle to replace the “category 10” color palette used by default in Matplotlib and other data visualization packages. While the picker works well for what it was designed for—allowing a user to create a color cycle—it requires user intervention to create color sets or cycles.1 The basic technique—performing color vision deficiency simulations2 for various types of deficiencies, enforcing a minimum perceptual difference between the simulated colors in the CAM02-UCS3 perceptually uniform color space (with each type of deficiency treated separately), and enforcing a minimum lightness distance (for grayscale)—is still valid for the random generation of color sets; it just needs to be extended to randomly sample the color space.
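The acceptance criterion can be sketched as follows (a minimal illustration, not the picker’s actual code: the function name, threshold values, and dict layout are my own):

```python
import numpy as np

def far_enough(c1, c2, min_dist=20.0, min_j_dist=10.0):
    """Sketch of the distance check. c1 and c2 map a vision type
    ("normal", "deuteranopia", ...) to that color's CAM02-UCS
    (J', a', b') coordinates; the thresholds here are illustrative."""
    for vision in c1:
        # Perceptual distance: Euclidean distance in CAM02-UCS,
        # evaluated separately for each vision type.
        if np.linalg.norm(np.subtract(c1[vision], c2[vision])) < min_dist:
            return False
    # Grayscale legibility: minimum difference in lightness J'.
    if abs(c1["normal"][0] - c2["normal"][0]) < min_j_dist:
        return False
    return True
```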

To randomly sample the available RGB color space, I started with the excellent Colorspacious Python library, which is capable of doing the requisite color vision deficiency simulations and perceptual distance calculations. However, it’s too slow for what I wanted to accomplish, so I stripped the library down to the bare essentials and optimized it with the Numba JIT compiler. Since RGB to CAM02-UCS conversions are computationally expensive, but the 16.8 million possible 8-bit RGB colors easily fit in memory, the CAM02-UCS colors are precomputed for every possible color, both for normal color vision and for the three types of color vision deficiency. Since very dark and very light colors are poor choices for data visualization, only colors with J ∈ [40, 90] are used, leaving 13.1 million colors to sample from.
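In outline, the precomputation and lightness filter look something like this (a sketch only: a random stand-in replaces the real sRGB → CAM02-UCS conversion, which in my code is a Numba-compiled reduction of Colorspacious, and a small subsample replaces the full 2²⁴ colors so the sketch runs quickly):

```python
import numpy as np

# Subsample of 8-bit RGB colors; the real code converts all 2**24.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(100_000, 3), dtype=np.uint8)

def fake_rgb_to_jab(rgb):
    """Stand-in for the expensive sRGB -> CAM02-UCS conversion
    (done once per vision type in the real code)."""
    jab = rgb.astype(np.float64)
    jab[:, 0] *= 100.0 / 255.0  # map lightness J' roughly onto [0, 100]
    return jab

jab = fake_rgb_to_jab(rgb)

# Drop very dark and very light colors: keep only J' in [40, 90].
mask = (jab[:, 0] >= 40.0) & (jab[:, 0] <= 90.0)
usable_jab = jab[mask]
usable_rgb = rgb[mask]
```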

To generate a color set, a starting color is chosen at random. Then, each possible color is checked to see whether it is far enough away from the colors already in the set, in both lightness and perceptual distance, both for normal color vision and for those with color vision deficiency at the maximum chosen severity. Of the remaining colors, one is chosen at random, and the process is repeated until the color set contains the desired number of colors. This method has an advantage over rejection sampling: it is guaranteed to return, and it was found to be faster. After the color set is generated, it is checked at intermediate levels of color vision deficiency severity to ensure that the minimum perceptual distance requirement is met there as well; if the requirement is not met, the color set is thrown out. Checking a coarse color vision deficiency interval during set generation was tried but removed, since the performance penalty outweighed the gains from having to try again fewer times. With this method in place, it is now possible to randomly generate color sets of various sizes that meet various minimum perceptual distance and minimum lightness distance requirements. However, substantial computational resources are required to generate a large number of color sets.
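The greedy loop above might be sketched like this (illustrative names, not my actual implementation: `jab_by_vision` holds the precomputed CAM02-UCS coordinates per vision type, and a `None` return signals a dead end, after which the caller simply starts over):

```python
import numpy as np

def generate_color_set(jab_by_vision, n_colors, min_dist, min_j_dist, rng):
    """Greedy random sampling sketch. jab_by_vision maps each vision
    type to an (N, 3) array of CAM02-UCS (J', a', b') coordinates,
    row-aligned across vision types."""
    n = next(iter(jab_by_vision.values())).shape[0]
    candidates = np.ones(n, dtype=bool)   # colors still far enough away
    chosen = [int(rng.integers(n))]       # random starting color
    while len(chosen) < n_colors:
        last = chosen[-1]
        for jab in jab_by_vision.values():
            # Perceptual distance to the newest color, per vision type.
            dist = np.linalg.norm(jab - jab[last], axis=1)
            candidates &= dist >= min_dist
        # Lightness distance (for grayscale), under normal vision.
        j = jab_by_vision["normal"][:, 0]
        candidates &= np.abs(j - j[last]) >= min_j_dist
        remaining = np.flatnonzero(candidates)
        if remaining.size == 0:
            return None  # dead end: caller retries from a new start
        chosen.append(int(rng.choice(remaining)))
    return chosen
```

Because the candidate mask is narrowed cumulatively and never reset, each new color is automatically far enough from every previously chosen color, not just the latest one.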

Using this code, I’ve generated six-, eight-, and ten-color sets with what I think are reasonable minimum perceptual and lightness distances, where reasonable means that the colors are easy enough to tell apart while still allowing a reasonably large range of different colors to be used. Full deuteranopia, protanopia, and tritanopia simulations were used. For each configuration, 10 000 random sets were generated on a 28-core machine, a process that took from around nine hours for the six-color configuration to around three days for the ten-color configuration. The code and generated color sets are available in a repository on GitHub.

While the individual colors in the color sets are easy enough to tell apart, the colors and their combinations are not necessarily aesthetically pleasing. I’m currently working on something to address this shortcoming; details will follow in a subsequent blog post.

  1. A color set doesn’t have a defined order, while a color cycle does. 

  2. G. M. Machado, M. M. Oliveira, and L. A. F. Fernandes, “A Physiologically-based Model for Simulation of Color Vision Deficiency,” in IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1291-1298, Nov.-Dec. 2009. doi:10.1109/TVCG.2009.113  

  3. Luo M.R., Li C. (2013) CIECAM02 and Its Recent Developments. In: Fernandez-Maloigne C. (eds) Advanced Color Image Processing and Analysis. Springer, New York, NY. doi:10.1007/978-1-4419-6190-7_2  
