# C[omp]ute

Welcome to my blog, which was once a mailing list of the same name and is still generated by mail. Please reply via the "comment" links.

Always interested in offers/projects/new ideas. Eclectic experience in fields like: numerical computing; Python web; Java enterprise; functional languages; GPGPU; SQL databases; etc. Based in Santiago, Chile; telecommute worldwide. CV; email.

© 2006-2015 Andrew Cooke (site) / post authors (content).

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 13:21:37 -0400 (CLT)

The Problem
-----------

Yesterday I downloaded a file over BitTorrent (Colbert wasn't as funny as
people said).  It took most of the day.  The "time remaining" value was
almost useless - it fluctuated wildly.  I started to wonder why.

Watching the other values displayed, it soon became clear that "time
remaining" was calculated from the instantaneous download rate.  But the
download rate varied hugely - from 0 ("infinite time remaining") to
100kB/s.

A value based on the average download period would have been more useful.
But that could be misleading if the download rate varies (perhaps the
100kB/s peer left the cloud at some point, for example).  And it could
also be misleading at the end of download, when you really do care about
the instantaneous rate.

The standard compromise to deal with these problems is to take a weighted
average of the recent rates (I won't worry for the moment about whether we
want to average rates or their inverse) - an exponential weighting is easy
to obtain from a "rolling average".
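The rolling average mentioned above can be sketched as an exponentially weighted moving average; the function name and the `alpha` parameter are my own choices, not from the original:

```python
def ema(samples, alpha=0.3):
    """Exponentially weighted moving average of rate samples.

    Higher alpha weights recent samples more heavily, making the
    estimate more responsive but noisier; lower alpha smooths more.
    """
    avg = None
    for x in samples:
        # First sample seeds the average; later samples blend in.
        avg = x if avg is None else alpha * x + (1 - alpha) * avg
    return avg
```

With `alpha=0.5`, the samples `[0, 100]` average to 50; older values decay geometrically.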

That assumes a certain model for the system - that it is governed by a
single rate that slowly evolves over time (is the best explanation I can
see, at least).  But looking at what was happening with BitTorrent, that
didn't seem to be correct.

Instead there seemed to be several different modes - 0, 15, 30, 60, 100 kB/s
- and the system transitioned between them.  Perhaps it reflected the number
of peers.  If the transitions are fast enough they average out; if they are
slow enough then the weighted average will adapt to each in turn; that
leaves a region where the simple model above will always be wrong -
playing catch-up.

Another problem with the weighted average is that one probably wants
different weightings at different stages in the download's lifetime.  A
long-term average is appropriate for most of the download, but a more
responsive value is appropriate near the end.

I also felt that the way the information was presented could be improved,
but I'll address that later.

An Alternative Model
--------------------

We could model what I saw above as a series of different states, with
transitions between them.  This sounds simple enough, but there are a
number of problems in practice.

The leading problem is how to identify a state.  A download monitor would
have an API that is called when data is downloaded (presumably).  In other
words, calls are made irregularly and may indicate various values
(different size packets).

The most serious consequence of this irregularity is that if the download
stalls or becomes very slow, then the API is not called for a long time.
It seems that the monitor needs its own thread (or rather, a regular
"clock tick"; a separate thread with a timer would be the simplest
implementation in Java, but probably not in many C-based GUIs, I suspect).

After toying with various approaches I believe that the following is best:
the API accumulates the values of any calls made between cycles.  This
gives a regular, unbiased signal (unlike interleaving "zero" values on
clock ticks with intermittent API calls).  The integrated value across
the cycle identifies the state.
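A minimal sketch of that accumulate-and-drain design (class and method names are mine; the original describes the idea, not an implementation):

```python
import threading

class RateMonitor:
    """Accumulates irregular data-arrival calls between clock ticks.

    on_data() is called whenever data arrives (irregularly, varying
    sizes); tick() is called by a timer thread once per cycle and
    drains the accumulated total, which identifies the cycle's state
    (zero if nothing arrived).
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._accumulated = 0

    def on_data(self, nbytes):
        # Called from the download code; thread-safe accumulation.
        with self._lock:
            self._accumulated += nbytes

    def tick(self):
        # Called on the regular clock tick; returns and resets the
        # total for the cycle just ended.
        with self._lock:
            total, self._accumulated = self._accumulated, 0
        return total
```

In practice `tick()` would be driven by a timer (e.g. `threading.Timer` or a GUI toolkit's timer), with the returned totals fed into the state model.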

While I think this is the best solution, it introduces another problem - for
slow downloads (or short cycles) the system introduces "ghost" zero states
that mask any transition between "real" states.  For example, consider a 1
second cycle with a system that receives 10 units every minute (as a single
API call).  If that system then jumps to 20 units every minute we will model
that as a transition from 10 to zero (repeated many times) to 20, rather
than a transition from 10 to 20.

We need either second order correlations (hard) or additional structure in
the model.  One fix is to have a separate "zero" for each state, so 10 and
zero(10) transition to 20 and zero(20).
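One way to realise the per-state zero is to tag each zero reading with the last non-zero state seen; the tuple representation below is my own sketch of the idea:

```python
def label_states(cycle_totals):
    """Convert raw per-cycle totals into state labels.

    A zero reading becomes ("zero", s) where s is the most recent
    non-zero state, so that a slow-but-steady download stalling between
    API calls doesn't look like a transition through a shared zero state.
    """
    labels, last = [], None
    for v in cycle_totals:
        if v > 0:
            last = v
            labels.append(v)
        else:
            labels.append(("zero", last))
    return labels
```

With this labelling, the 10-units-per-minute example above becomes `10, zero(10), ..., 20` instead of routing through an anonymous zero.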

Transition Probabilities
------------------------

So far I have ignored transition probabilities.  We can treat each state as
having a fixed lifetime (one cycle) and include transitions to "self" (and
cycles via the associated zero, which hopefully will come out in the wash)
to indicate that the state remains the same across ticks.

With a constant lifetime for states, probabilities are proportional to
transition counts.
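Since every state lives exactly one cycle, estimating the probabilities is just counting transitions and normalising per source state; a sketch (function name mine):

```python
from collections import defaultdict

def transition_probs(states):
    """Estimate transition probabilities from an observed state sequence.

    With a fixed one-cycle lifetime per state, each probability is the
    normalised count of observed transitions out of that state
    (self-transitions included).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}
```

For the sequence `10, 10, 20, 10` this gives P(10→10) = P(10→20) = 0.5 and P(20→10) = 1.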

There are two useful things we can do with transition probabilities.

First, we can solve for the expected download rate looking forwards.  This
is an average over all future possibilities (to infinite time) and can be
calculated (I think) via a simple matrix solution (it's just a bunch of
simultaneous equations).  Inverting a matrix might be expensive with many
states (even if they are grouped, which I will come to in a moment), but I
have a hunch that it can also be approximated efficiently by a simple
iterative process (the rate of state x is calculated from all the others;
rinse and repeat) - that looks like it will be stable and could allow a
solution to adapt as probabilities drift with time.
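One reading of that iterative process is power iteration: push a distribution over states through the transition matrix until it settles, then average the per-state rates under the settled (stationary) distribution. This is my interpretation, not the author's stated algorithm:

```python
def long_run_rate(probs, rates, iters=200):
    """Approximate the expected rate over the infinite future.

    probs maps state -> {next_state: probability}; rates maps
    state -> download rate.  Power iteration on the transition matrix
    converges (for a well-behaved chain) to the stationary distribution,
    under which we average the rates.
    """
    states = list(rates)
    dist = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        nxt = {s: 0.0 for s in states}
        for a in states:
            for b, p in probs.get(a, {}).items():
                nxt[b] += dist[a] * p
        dist = nxt
    return sum(dist[s] * rates[s] for s in states)
```

Because each sweep only reads the current estimate, the same loop can keep running as the probabilities drift, which matches the adaptive behaviour hoped for above.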

That sounds cool enough, but it may still be a complicated way to obtain
the value you'd get from a simple (suitably weighted) average of states.
Does the extra complexity buy anything?  I think there may be some use if
the probabilities change - it's not yet clear to me whether "expiring"
transition probabilities is equivalent to using a running average.

Second, we can predict the expected behaviour (and explore the variations)
in the short term.  We can tell the difference between a stable state
(with a high "self" probability) and an unstable one - for the stable
state we can make an accurate prediction of near-term download speeds.
This seems to be a significant advantage over other schemes - it occupies
a middle ground between global average and latest value.
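The near-term prediction can be sketched by propagating the current state forward a few cycles through the same transition probabilities (again my own sketch, sharing the `probs`/`rates` representation assumed above):

```python
def expected_rate_ahead(probs, rates, current, steps):
    """Expected rate `steps` cycles from now, starting in `current`.

    Pushes a point distribution on the current state forward through
    the chain.  A state with a high self-probability keeps the mass
    concentrated, giving a stable near-term prediction; an unstable
    state spreads the mass and the prediction reverts toward the mix.
    """
    dist = {current: 1.0}
    for _ in range(steps):
        nxt = {}
        for a, pa in dist.items():
            for b, p in probs.get(a, {}).items():
                nxt[b] = nxt.get(b, 0.0) + pa * p
        dist = nxt
    return sum(p * rates[s] for s, p in dist.items())
```

For small `steps` this tracks the latest value; as `steps` grows it approaches the long-run average - exactly the middle ground described above.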

Andrew

### Grouping States

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 16:06:33 -0400 (CLT)

Grouping states simplifies / speeds calculations.  Dynamically regrouping
seems problematic (re-assignment after throwing away information).  So
either atomic values should be accumulated and then grouped for each
analysis (which sounds expensive) or we need a "fixed" system that is
sufficiently general for all cases.

An extensible "buckets" system would have to be logarithmic to be
manageable.  I guess an increase of 10% bin-on-bin might be about right,
giving sufficient resolution while keeping sizes sufficiently small.
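The 10% bin-on-bin scheme can be sketched as a logarithmic bucket index (hypothetical helpers, assuming rate values of at least 1 unit):

```python
import math

def bucket(value, ratio=1.10):
    """Map a rate value (>= 1) to a logarithmic bucket index.

    Each bin is 10% wider than the previous one, so roughly 10%
    relative resolution with a manageable total number of bins.
    """
    return int(math.log(value) / math.log(ratio))

def bucket_bounds(i, ratio=1.10):
    """Lower (inclusive) and upper (exclusive) bounds of bucket i."""
    return ratio ** i, ratio ** (i + 1)
```

Covering 1 to 100kB/s at this ratio needs only about 50 bins, which supports the guess that 10% growth keeps sizes sufficiently small.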

Perhaps better would be an ordered tree where new values descend until
they are within 10% of an existing value (whose value might shift
accordingly?).  Which suggests width might depend on popularity, but that
raises the resampling problems again.

Hmmm.  The tree will have overlap/shadowing issues unless bins are
regular.  So we're left with a tree that implements sparse buckets.

### GUI (Clocks)

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 16:23:05 -0400 (CLT)

"Progress" is typically shown as a bar, but "time remaining" doesn't seem
to have an equivalent standard representation.

It seems to me that you could show the appropriate information on a clock
face.  Different scales (few minutes, many minutes, few hours, many hours,
etc.) could be handled by selecting seconds, minutes or hours as units
(perhaps indicated by the clock hands, or in text) and by showing either a
whole dial or a quarter dial.

The time remaining could be marked as an arc from 12, but you could also
factor in current time and have an arc starting from when the process
started, changing colour at the current time, and finishing at the
expected finish time.
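The arc geometry is straightforward; a hypothetical helper mapping the three times onto dial angles (the scale selection described above is reduced here to a single `dial_seconds` parameter):

```python
def clock_arc(start, now, finish, dial_seconds=3600):
    """Angles in degrees (clockwise from 12) for the progress arc.

    The arc starts when the process started, would change colour at the
    current time, and ends at the expected finish.  dial_seconds picks
    the scale: 60 for a seconds dial, 3600 for a minutes dial, etc.
    Times are in seconds.
    """
    angle = lambda t: (t % dial_seconds) / dial_seconds * 360.0
    return angle(start), angle(now), angle(finish)
```

A renderer would draw the start-to-now segment in one colour and the now-to-finish segment in another, on whichever dial the scale selects.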