


© 2006-2013 Andrew Cooke (site) / post authors (content).

Countdown Timers (eg Download)

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 13:21:37 -0400 (CLT)

The Problem

Yesterday I downloaded a file over BitTorrent (Colbert wasn't as funny as
people said).  It took most of the day.  The "time remaining" value was
almost useless - the value fluctuated wildly.  I started to wonder why.

Watching the other values displayed, it soon became clear that "time
remaining" was calculated from the instantaneous download rate.  But the
download rate varied hugely - from 0 ("infinite time remaining") to around
100 KB/s.

A value based on the average download rate over the whole period would have
been more useful.  But that could be misleading if the download rate varies
(perhaps the 100kB/s peer left the cloud at some point, for example).  And
it could also be misleading at the end of the download, when you really do
care about the instantaneous rate.

The standard compromise to deal with these problems is to take a weighted
average of the recent rates (I won't worry for the moment about whether we
want to average rates or their inverse) - an exponential weighting is easy
to obtain from a "rolling average".
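A rolling average with exponential weighting can be sketched in a few lines. This is a minimal illustration (the function name and the alpha value are my own choices, not from the original post); note how the average lags behind a sudden jump in rate, which is exactly the "catch-up" problem discussed below.

```python
def ema_update(avg, sample, alpha=0.25):
    """One step of an exponentially weighted rolling average.

    High alpha tracks the latest sample closely (responsive);
    low alpha gives a smoother, longer-term average.
    """
    if avg is None:          # first sample initialises the average
        return sample
    return alpha * sample + (1 - alpha) * avg

# Rates in KB/s: three ticks at 30, then a jump to 100.
avg = None
for rate in [30, 30, 30, 100, 100, 100]:
    avg = ema_update(avg, rate)
# avg is still well below 100 - the weighted average plays catch-up.
```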

That assumes a certain model for the system - that it is governed by a
single rate that slowly evolves over time (that is the best explanation I
can see, at least).  But looking at what was happening with BitTorrent,
that didn't seem to be correct.

Instead there seemed to be several different modes - 0, 15, 30, 60, 100 KB/s
- and the system transitioned between them.  Perhaps it reflects the number
of peers.  If the transitions are fast enough they average out; if they are
slow enough then the weighted average will adapt to each in turn; that
leaves a region where the simple model above will always be wrong -
playing catch-up.

Another problem with the weighted average is that one probably wants
different weightings at different stages in the download's lifetime.  A
long-term average is appropriate for most of the download, but a more
responsive value is appropriate near the end.

I also felt that the way the information is presented could be improved,
but I'll address that later.

An Alternative Model

We could model what I saw above as a series of different states, with
transitions between them.  This sounds simple enough, but there are a
number of problems in practice.

The leading problem is how to identify a state.  A download monitor would
have an API that is called when data is downloaded (presumably).  In other
words, calls are made irregularly and may indicate various values
(different size packets).

The most serious consequence of this irregularity is that if the download
stalls or becomes very slow, then the API is not called for a long time.
It seems that the monitor needs its own thread (or rather, a regular
"clock tick"; a separate thread with a timer would be the simplest
implementation in Java, but probably not in many C-based GUIs, I suspect).

After toying with various approaches I believe that the following approach
is best: the API accumulates the values of any calls made between cycles.
This gives a regular, unbiased signal (unlike interleaving "zero" values
on clock ticks with intermittent API calls).  The integrated value over
each cycle identifies the state.
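The accumulate-and-drain idea can be sketched as below. The class and method names are hypothetical, and I drive the clock by calling tick() by hand rather than from a timer thread; in practice a threading.Timer or GUI timer would call it.

```python
import threading

class DownloadMonitor:
    """Accumulates bytes reported by irregular API calls and emits one
    integrated sample per clock tick (zeros included when nothing
    arrived - a sketch, not the post's actual implementation)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._accumulated = 0
        self.samples = []          # one value per tick

    def data_received(self, nbytes):
        """Called by the transfer code whenever a chunk arrives."""
        with self._lock:
            self._accumulated += nbytes

    def tick(self):
        """Called on a regular clock; drains the accumulator."""
        with self._lock:
            value, self._accumulated = self._accumulated, 0
        self.samples.append(value)
```

Because data_received only adds to a counter and tick drains it, irregular calls between two ticks are integrated into a single regular sample.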

While I think this is the best solution, it introduces another problem - for
slow downloads (or short cycles) the system introduces "ghost" zero states
that mask any transition between "real" states.  For example, consider a 1
second cycle with a system that receives 10 units every minute (as a single
API call).  If that system then jumps to 20 units every minute we will model
that as a transition from 10 through zero (repeated zeros) to 20, rather
than a transition from 10 to 20.

We need either second order correlations (hard) or additional structure in
the model.  One fix is to have a separate "zero" for each state: 10 and
zero(10) transition to 20 and zero(20).

Transition Probabilities

So far I have ignored transition probabilities.  We can treat each state as
having a fixed lifetime (one cycle) and include transitions to "self" (and
cycles via the associated zero, which hopefully will come out in the wash)
to indicate that the state remains the same across ticks.

With a constant lifetime for states, probabilities are proportional to
transition counts.
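Estimating the probabilities from counts is straightforward. A minimal sketch (the function name is mine; states here are just the quantized per-tick rates):

```python
from collections import defaultdict

def transition_probabilities(states):
    """Estimate transition probabilities from a sequence of observed
    states, one per clock tick.  With a constant one-tick lifetime,
    probabilities are simply normalised transition counts, including
    transitions to "self"."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: n / total for b, n in row.items()}
    return probs

# Mostly stable at 30 KB/s, with one excursion to 100.
p = transition_probabilities([30, 30, 30, 100, 30, 30])
```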

There are two useful things we can do with transition probabilities.

First, we can solve for the expected download rate looking forwards.  This
is an average over all future possibilities (to infinite time) and can be
calculated (I think) via a simple matrix solution (it's just a bunch of
simultaneous equations).  Inverting a matrix might be expensive with many
states (even if they are grouped, which I will come to in a moment), but I
have a hunch that the solution can also be approximated efficiently by a
simple iterative process (the rate of state x is calculated from all others;
and repeat) - that looks like it will be stable and could allow a solution
to adapt as probabilities drift with time.
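One way to realise the iterative idea - and this is my reading of it, not necessarily what the post intends - is power iteration: repeatedly push a distribution over states through the transition matrix until it settles, then average the per-state rates under that distribution to get the long-run expected rate. No matrix inversion, and the loop can simply continue running as the probabilities drift.

```python
def long_run_rate(rates, P, iterations=200):
    """Approximate the expected long-run download rate of a Markov
    chain by power iteration.

    rates -- per-state download rates
    P     -- row-stochastic transition matrix, P[i][j] = prob(i -> j)
    """
    n = len(rates)
    dist = [1.0 / n] * n               # start from a uniform guess
    for _ in range(iterations):
        # one step: new distribution = old distribution times P
        dist = [sum(dist[i] * P[i][j] for i in range(n))
                for j in range(n)]
    return sum(d * r for d, r in zip(dist, rates))
```

For example, two states at 0 and 100 KB/s where the 0-state persists with probability 0.9 and the 100-state with 0.7 spend three quarters of the time stalled, giving an expected rate of 25 KB/s.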

That sounds cool enough, but it may still be a complicated way to
calculate the value you'd get by simply averaging over states (with
appropriate weighting).  Does the extra complexity buy anything?  I think
there may be some use if probabilities change - it's not yet clear to me
whether "expiring" transition probabilities is equivalent to using a
running average.
Second, we can predict the expected behaviour (and explore the variations)
in the short term.  We can tell the difference between a stable state
(with a high "self" probability) and an unstable one - for the stable
state we can make an accurate prediction of near-term download speeds.
This seems to be a significant advantage over other schemes - it occupies
a middle ground between global average and latest value.


Grouping States

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 16:06:33 -0400 (CLT)

Grouping states simplifies / speeds calculations.  Dynamically regrouping
seems problematic (re-assignment after throwing away information).  So
either atomic values should be accumulated and then grouped for each
analysis (which sounds expensive) or we need a "fixed" system that is
sufficiently general for all cases.

An extensible "buckets" system would have to be logarithmic to be
manageable.  I guess an increase of 10% bin-on-bin might be about right,
giving sufficient resolution while keeping sizes sufficiently small.

Perhaps better would be an ordered tree where new values descend until
they are within 10% of an existing value (whose value might shift
accordingly?).  That suggests bucket width might depend on popularity, but
it raises the resampling problems again.

Hmmm.  The tree will have overlap/shadowing issues unless bins are
regular.  So we're left with a tree that implements sparse buckets.

GUI (Clocks)

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 16:23:05 -0400 (CLT)

"Progress" is typically shown as a bar, but "time remaining" doesn't seem
to have an equivalent standard representation.

It seems to me that you could show the appropriate information on a clock
face.  Different scales (few minutes, many minutes, few hours, many hours,
etc) could be handled by selecting seconds, minutes or hours as units
(perhaps indicated by the clock hands, or in text) and by showing either a
whole dial or a quarter dial.

The time remaining could be marked as an arc from 12, but you could also
factor in current time and have an arc starting from when the process
started, changing colour at the current time, and finishing at the
expected finish time.

Comment on this post