Quantizer improvements
The original versions of these quantizers landed a couple of months back, and they were fine. Since then, we've had several opportunities to iterate on them. This change lands improvements from those iterations. TL;DR: 18% faster, a big bug fix to WSMeans, and (hopefully) cleaner code.

- QuantizerMap indexes an image's pixels, 'unique-ing' them by reducing the pixel array to a map whose keys are colors and whose values are populations. This lets the other quantizers operate much more quickly: instead of working on each pixel individually, they can operate in bulk, once per distinct color.
- QuantizerWu uses flat arrays instead of 3D arrays and is more understandable, IMHO.
- QuantizerWsmeans has speed improvements and, most importantly, a big bug fix. When a k-means-based quantizer starts, it must first assign each pixel to one of the starting clusters. The original implementation assigned each pixel to the cluster closest to it. However, the algorithm terminates when no pixels moved during an iteration, and since every pixel started out in its closest cluster, the algorithm would terminate immediately: the cluster centers were never recomputed from their assigned pixels, and pixels never had a chance to move in response.
- Funnily enough, even though this _should_ make Wsmeans slower, since it now has to run more iterations, it is actually 16% faster.

Additionally, during review of this CL: an accidental dependency on the iteration order of a Set was introduced, causing inconsistent initialization of the mPoints array and thus inconsistent results from the quantizer. Removing the dependency on hash codes of float[] (which are identity-based), and avoiding Maps altogether, makes it easier to verify consistent results across runs.
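The color-to-population indexing described above can be sketched as follows. This is a minimal illustration, not the actual QuantizerMap code; the class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: reduce a pixel array to a color -> population map,
// so downstream quantizers iterate once per distinct color, not per pixel.
public class QuantizeMapSketch {
    static Map<Integer, Integer> countByColor(int[] pixels) {
        Map<Integer, Integer> populations = new HashMap<>();
        for (int pixel : pixels) {
            // merge() increments the count, starting at 1 for unseen colors.
            populations.merge(pixel, 1, Integer::sum);
        }
        return populations;
    }

    public static void main(String[] args) {
        int[] pixels = {0xFF0000, 0xFF0000, 0x00FF00, 0xFF0000};
        Map<Integer, Integer> populations = countByColor(pixels);
        System.out.println(populations.get(0xFF0000)); // 3
        System.out.println(populations.get(0x00FF00)); // 1
    }
}
```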
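The flat-array change in QuantizerWu amounts to replacing a 3D histogram with a single array plus an index function. A sketch of the idea, with an assumed cube side length (the real implementation's constants may differ):

```java
// Hypothetical sketch of flattening a 3D histogram, as in the QuantizerWu
// change. SIDE is an assumed value, not necessarily what the CL uses.
public class FlatHistogramSketch {
    static final int SIDE = 33; // assumed: 32 bins per channel + 1 for cumulative sums

    // Map 3D (r, g, b) bin coordinates to one flat index.
    static int index(int r, int g, int b) {
        return (r * SIDE + g) * SIDE + b;
    }

    public static void main(String[] args) {
        int[] weights = new int[SIDE * SIDE * SIDE]; // replaces int[33][33][33]
        weights[index(1, 2, 3)] += 5;
        System.out.println(weights[index(1, 2, 3)]); // 5
    }
}
```

One flat allocation also avoids the per-row object overhead and pointer chasing of nested Java arrays, which is where the speed win typically comes from.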
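The WSMeans bug is about loop ordering: if points start in their nearest cluster and the loop checks "did any point move?" before recomputing cluster centers, it exits on the first iteration. A minimal 1D k-means sketch of the fixed ordering (recompute centers first, then reassign); all names here are hypothetical, not the CL's code:

```java
import java.util.Arrays;

// Hypothetical 1D k-means sketch showing the loop order behind the WSMeans
// fix: centers are recomputed from the current assignment *before* testing
// whether any point moved, so a nearest-cluster initial assignment does not
// cause immediate termination.
public class KMeansLoopSketch {
    static double[] run(double[] points, double[] centers, int maxIterations) {
        int[] assignment = new int[points.length];
        for (int i = 0; i < points.length; i++) {
            assignment[i] = nearest(points[i], centers); // initial assignment
        }
        for (int iter = 0; iter < maxIterations; iter++) {
            recomputeCenters(points, assignment, centers); // move centers first
            boolean anyMoved = false;
            for (int i = 0; i < points.length; i++) {
                int best = nearest(points[i], centers);
                if (best != assignment[i]) {
                    assignment[i] = best;
                    anyMoved = true;
                }
            }
            if (!anyMoved) {
                break; // converged: centers are up to date and no point moved
            }
        }
        return centers;
    }

    static int nearest(double point, double[] centers) {
        int best = 0;
        for (int c = 1; c < centers.length; c++) {
            if (Math.abs(point - centers[c]) < Math.abs(point - centers[best])) {
                best = c;
            }
        }
        return best;
    }

    static void recomputeCenters(double[] points, int[] assignment, double[] centers) {
        double[] sums = new double[centers.length];
        int[] counts = new int[centers.length];
        for (int i = 0; i < points.length; i++) {
            sums[assignment[i]] += points[i];
            counts[assignment[i]]++;
        }
        for (int c = 0; c < centers.length; c++) {
            if (counts[c] > 0) {
                centers[c] = sums[c] / counts[c];
            }
        }
    }

    public static void main(String[] args) {
        double[] points = {0, 1, 9, 10};
        double[] centers = {0, 100}; // second center starts far from all points
        System.out.println(Arrays.toString(run(points, centers, 10)));
        // The first center still moves to the mean of its points; with the
        // buggy ordering it would have stayed at 0.
    }
}
```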
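One way to avoid depending on Set/Map iteration order or float[] hash codes, as the review fix describes, is to keep distinct colors as a sorted int array so the traversal order is a pure function of the input. A sketch under that assumption (not the CL's actual code):

```java
import java.util.Arrays;

// Hypothetical sketch of the determinism fix: derive distinct colors as a
// sorted int[] instead of iterating a Set or a Map keyed by float[],
// whose hash-based order can differ between runs.
public class StableOrderSketch {
    static int[] distinctSorted(int[] pixels) {
        int[] colors = pixels.clone();
        Arrays.sort(colors);
        int unique = 0;
        for (int i = 0; i < colors.length; i++) {
            if (i == 0 || colors[i] != colors[i - 1]) {
                colors[unique++] = colors[i]; // keep first of each run
            }
        }
        return Arrays.copyOf(colors, unique);
    }

    public static void main(String[] args) {
        int[] pixels = {0x00FF00, 0xFF0000, 0x00FF00};
        // Same input always yields the same order, regardless of hashing.
        System.out.println(Arrays.toString(distinctSorted(pixels)));
    }
}
```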
This also improves speed slightly, from 55 ms to 39 ms (tested on sunfish, over the first 9 wallpapers in the Landscapes, City Scapes, and Art categories, averaged).

Bug: 189931209
Test: Ran performance tests with VariationalKMeansQuantizer, the previous Celebi (Wu + Wsmeans) quantizer, and the new Celebi (new Wu + new Wsmeans) quantizer, over 100 iterations. Wu speed is roughly the same; Wsmeans is 18% faster. Verified quantizer output is stable for the same input pixels, run 100,000 times for each of two wallpapers.
Change-Id: I3324d29860c098ea1fd602b8d4197837e732f4f1