# C[omp]ute

Welcome to my blog, which was once a mailing list of the same name and is still generated by mail. Please reply via the "comment" links.

Always interested in offers/projects/new ideas. Eclectic experience in fields like: numerical computing; Python web; Java enterprise; functional languages; GPGPU; SQL databases; etc. Based in Santiago, Chile; telecommute worldwide. CV; email.

© 2006-2015 Andrew Cooke (site) / post authors (content).

## Efficient Entropy Estimates for Sequences of Large Values with OpenCL

From: andrew cooke <andrew@...>

Date: Sat, 8 Oct 2011 21:43:34 -0300

I need to estimate the entropy of sequences of large numbers in OpenCL.

The basic idea is to find how often each number occurs and then use the
Shannon formula (-sum p ln(p)).  But doing that naively in OpenCL is hard -
the numbers are too large to use a simple array (where we increment array[n]
each time we find value n).

(By "large number" I mean that they are represented as an arbitrary number of
32 bit ints).

The sequence length is much shorter than the maximum number and, although the
numbers are large, certain values are likely to repeat often.  So one approach
is to generate a lookup table that associates each number with an index and
then histogram the indices.  But I cannot see how to do that efficiently or
easily in OpenCL.

The best idea I have had so far is to use modular arithmetic.  For example,
take the numbers modulo 2^8 and then increment a 256-entry array.  But modulo
2^n simply discards bits past n, and the numbers are actually "bit patterns",
so I don't want to do that.  So it seems better to work modulo some prime
value (or perhaps several).
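That bucketing step might look like this (plain C for clarity; the names are
mine, and in an OpenCL kernel the increment would need to be an atomic_inc
on a shared table to avoid races between work items):

```c
#include <stdint.h>

#define NBUCKETS 17  /* a small prime modulus, as suggested above */

/* Count how often each residue class occurs.  In an OpenCL kernel
   the increment would be atomic_inc(&counts[values[i] % NBUCKETS]). */
void histogram_mod(const uint32_t *values, int n, uint32_t counts[NBUCKETS]) {
    for (int i = 0; i < NBUCKETS; ++i) counts[i] = 0;
    for (int i = 0; i < n; ++i)
        counts[values[i] % NBUCKETS]++;
}
```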

Thanks to the magic of modular arithmetic it seems to be relatively easy to
convert large (multi-int32) numbers to their modulo:

(a0 + 2^32 (a1 + 2^32 (a2 + ...))) % n
= ((a0 % n) + (2^32 % n) * ((a1 % n) + (2^32 % n) * ((a2 % n) + ...))) % n

and if I use a prime less than 2^16 (I am thinking a value like 17 might be
enough!) then I can do the above in 32 bit ints without any problems.
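A sketch of that reduction in C (function and variable names are mine; the
limbs are least significant first, matching a0, a1, ... above):

```c
#include <stdint.h>

/* Reduce a0 + 2^32*a1 + 2^32^2*a2 + ... modulo n by Horner's rule,
   working from the most significant limb down.  Requires n < 2^16:
   then r and base are both below 2^16, so r * base is below
   2^32 - 2^17, and adding a value below 2^16 cannot overflow
   a 32-bit unsigned int. */
uint32_t mod_big(const uint32_t *limbs, int nlimbs, uint32_t n) {
    uint32_t base = (uint32_t)((1ULL << 32) % n);  /* 2^32 % n */
    uint32_t r = 0;
    for (int i = nlimbs - 1; i >= 0; --i)
        r = (r * base + limbs[i] % n) % n;
    return r;
}
```

As it happens, 2^32 % 17 is 1 (since 16 = -1 mod 17, 2^32 = 16^8 = 1), so
with n = 17 the residue is just the sum of the limbs modulo 17.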

I hope that makes sense.  Anyone have any better ideas?  I don't need entropy
exactly - all I want is a number that indicates "how statistically random"
some sequence is in a way that is reasonably smooth and sensitive (since it
will be used as a fitness measure in a GA).  And no, I don't really have any
clues about what a good statistical measure would be, but it would be nice if
it could reject a simple counter (which entropy cannot: a counter never
repeats a value, so its entropy is maximal - so there is certainly room for
improvement).

(Background - the values are the binary state of a system that generates
"rhythms"; the system is deterministic and tends "naturally" to generate
simple repeating patterns - the aim is to evolve away from that.)

Andrew

I just remembered - radix sorting is probably the "right" way to do this.
Andrew