
C[omp]ute


© 2006-2015 Andrew Cooke (site) / post authors (content).

Matching DNA Update - Faster Java Code

From: "andrew cooke" <andrew@...>

Date: Sat, 20 Sep 2008 18:54:39 -0400 (CLT)

I have just finished implementing the main core of the algorithm outlined
here - http://www.acooke.org/cute/Identifyin0.html - directly in Java and
it runs in about 8 seconds!

There were two main problems.  First, inferring how Postgres did an
efficient search and, second, implementing that without using too much
memory (my first attempt exhausted the heap, so I now have a slight
tradeoff, which uses a sort to avoid creating more memory structures and
so adds a log term to the big-O).  It's easiest to describe both together,
by outlining the final solution, but in practice the development had two
distinct steps.

So, as in the prototype code, I generate candidate pairs by matching small
fragments of the DNA.  More exactly: I take 25 fragments, each 8 bits,
from each individual and I categorise two individuals as a candidate pair
if they have at least 3 fragments in common.

So, in pseudocode, I do the following:

 generate a table of fragments[individual_idx][fragment_idx]
 generate a table of counts[individual1_idx][individual2_idx] = 0

 for each column of fragments in turn:
   sort the table column containing the fragments;
   scan the sorted column:
     for all fragments with the same value:
       increment the counts associated with the pairs of individuals
               that share that fragment value;
     if any count == 3:
       if the "bit distance" between the pair is < 3000:
         add the pair for that count to the graph;

And I need to repeat this 6 times with different sets of fragments (the
number of identified pairs after each set is 8116, 9623, 9935, 9988, 9998,
9999).

Instead of sorting each column of fragments the scan could be direct, but
that would need a separate memory structure to record which individuals
were associated with which values (for this amount of data I suspect the
log term pays for itself in the simplification (reduced constant cost)
that the sorted data introduces).

Also, Java has no direct support for sorting bytes (the fragments) with
keys.  I could have wrapped everything in objects, but it was more compact
(and probably faster) to bit-pack the DNA fragment and the individual
index together in a single integer (obviously the DNA has to occupy the
more significant bits for the sorting to give the correct order).
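The packing can be sketched like this (ROW_BITS and the class name are my
illustrative choices, not necessarily what the real code uses).  Keeping
the row index to 23 bits leaves the packed value non-negative, so a plain
signed sort orders primarily by fragment value:

```java
public class PackDemo {
    // 23 bits for the row index keeps the packed int non-negative,
    // so Arrays.sort() orders primarily by the 8-bit fragment value.
    static final int ROW_BITS = 23;
    static final int ROW_MASK = (1 << ROW_BITS) - 1;

    // fragment (hash) in the high bits, individual index in the low bits
    static int pack(byte hash, int row) {
        return ((hash & 0xff) << ROW_BITS) | row;
    }

    static byte unpackHash(int packed) {
        return (byte) (packed >>> ROW_BITS);
    }

    static int unpackRow(int packed) {
        return packed & ROW_MASK;
    }
}
```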

I am going to look for a graph library now to finish this off.

8 seconds is pretty good.  When I started out I was looking at many hours;
even the optimized Python/SQL code took 30 min...

Andrew

PS  My initial attempt at searching the hashes was to do a depth first
search trying to find common fragments for each pair in turn.  While this
would have fitted well within a constraint programming framework (see my
posts here over the last week or two when I was looking at Choco and
Gecode) it was, in retrospect, completely stupid - a huge amount of time
is spent exhaustively searching irrelevant pairs.  The direct scan
described above is much more efficient, but it's not yet clear to me how
the two approaches are related.  Is there some way in which the direct
scan with counting is a dual of the search?  Or does some kind of
optimisation of the search eventually reduce it to the scan?  I don't see
how either of those pan out, but haven't looked at the CP techniques in
any detail yet.

Core Routine

From: "andrew cooke" <andrew@...>

Date: Sat, 20 Sep 2008 19:07:14 -0400 (CLT)

public int search()
{
  byte[] counts = new byte[GenomePair.hashSize(population.size())];
  int[] scratch = new int[population.size()];
  // for each fragment in turn:
  for (int column = 0; column < nHashes; ++column) {
    // pack into an integer
    for (int row = 0; row < population.size(); ++row) {
      scratch[row] = pack(hashes[row][column], row);
    }
    // sort so that individuals with the same hash are grouped together
    Arrays.sort(scratch);
    // for each group
    for (int row = 0; row < population.size();) {
      // get the hash for the group
      byte hash = unpackHash(scratch[row]);
      Set<Integer> allMatching = new HashSet<Integer>();
      // note the first individual
      allMatching.add(unpackRow(scratch[row]));
      // for each additional individual
      while (++row < population.size() &&
          unpackHash(scratch[row]) == hash) {
        int higher = unpackRow(scratch[row]);
        // for each pair
        for (int lower: allMatching) {
          GenomePair pair = new GenomePair(lower, higher);
          // if we have sufficient hits, check the distance
          if (++counts[pair.hashCode()] == nMatches
              && population.connected(pair, cutoff)) {
            graph.add(pair);
          }
        }
        // extend the current set so that we generate all pairs
        allMatching.add(higher);
      }
    }
  }
  // this should tend to population.size()-1
  return graph.size();
}

Perfect Hash

From: "andrew cooke" <andrew@...>

Date: Sat, 20 Sep 2008 19:10:22 -0400 (CLT)

I should explain that I am abusing GenomePair.hashCode() - the
implementation returns a contiguous index from 0 over all possible pairs.
So all pairs are distinct and there are no gaps.  The total number of
values is given by hashSize().
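For unordered pairs (lower, higher) with lower < higher, the standard
trick is a triangular-number index; the scheme looks something like this
sketch (my illustrative class, not the actual GenomePair source):

```java
public class PairIndex {
    // Contiguous index over all unordered pairs (lower < higher):
    // pairs (0,1), (0,2), (1,2), (0,3), ... map to 0, 1, 2, 3, ...
    static int hashCode(int lower, int higher) {
        return higher * (higher - 1) / 2 + lower;
    }

    // total number of pairs for a population of size n
    static int hashSize(int n) {
        return n * (n - 1) / 2;
    }
}
```

Since the index is a bijection onto [0, hashSize(n)), a flat byte[] of
that size can hold one counter per pair with no collisions.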

At some point I'll change the name.  Originally I was using HashMaps of
these...

Andrew

Same Results

From: "andrew cooke" <andrew@...>

Date: Sun, 21 Sep 2008 20:05:00 -0400 (CLT)

I added the final graph code (using JGraphT, which seems quite capable)
and the results are, as expected, identical to the earlier work.  I also
tried some variations on the numbers of matches and hashes (but not the
fragment size, which is hard-coded at 8 bits, ie bytes, in this version) -
the code is much more stable than the Python/SQL implementation under
these changes (I now suspect Postgres was switching algorithms depending
on predicted memory usage), and the values chosen aren't particularly
critical.

I'm considering sending it off to the company that posted the problem, but
they only accept submissions that are employment applications, so it seems
a bit silly (I'm not looking for a job, and won't move to Boston...).

Andrew
