Robin Sonefors 26 December 2010

As someone who's primarily a programmer and not really a designer, I, like Eric, at first considered the first algorithm revolting. It is indeed flawed, and figuring out which colors are pathological for it is not a particularly challenging endeavor. However, I realized it will probably work alright, and in case there are more readers like Eric and me, I will provide an explanation that at least I find more satisfying than the one given in the article:

Most video formats are still encoded as luminance with added color channels. While this was originally done for backwards compatibility, it also turns out to compress really well, because the red, green and blue channels are very often quite similar: separate the channels of a YIQ or YUV picture and the luma channel will likely be the only one holding much information, whereas the separated channels of an RGB picture will all show pretty much the same shapes.

This must mean that most colors are "somewhat greyish": the channels in RGB are rarely that far apart. Exploiting this, as the 50% method appears to do, by assuming that the green and blue channels will be similar to the red will thus often give you somewhat sane results. The assumption does break sometimes, particularly for highly saturated colors such as #ff0000, but for those, either black or white text works. Sure, #7fffff and #800000 will give you really bad results, but other than those, it should work alright.
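To make the comparison concrete, here is a small sketch of the two approaches as I understand them: the 50% method reading only the red channel and thresholding at half brightness, versus a Rec. 601 luma check (the Y of YIQ/YUV) weighting all three channels. The function names are mine, and I'm assuming this is what the article's first algorithm does.

```python
def text_color_50(bg_hex):
    """The 50% method as I read it: look only at the red channel
    (assuming green and blue are similar) and threshold at half."""
    r = int(bg_hex[1:3], 16)
    return "#000000" if r >= 128 else "#ffffff"

def text_color_luma(bg_hex):
    """Rec. 601 luma, the Y channel of YIQ/YUV, weighting all three."""
    r, g, b = (int(bg_hex[i:i + 2], 16) for i in (1, 3, 5))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return "#000000" if y >= 128 else "#ffffff"

# Greyish colors agree; the pathological pair disagrees:
# #7fffff is nearly white, yet its red channel (127) says "white text";
# #800000 is nearly black, yet its red channel (128) says "black text".
for bg in ("#7f7f7f", "#7fffff", "#800000"):
    print(bg, text_color_50(bg), text_color_luma(bg))
```

On the pathological backgrounds the red-only shortcut picks exactly the wrong text color, while for "somewhat greyish" backgrounds the two methods agree, which is the point of the argument above.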