
Countdown Timers (eg Download)

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 13:21:37 -0400 (CLT)

The Problem
-----------

Yesterday I downloaded a file over BitTorrent (Colbert wasn't as funny as
people said).  It took most of the day.  The "time remaining" value was
almost useless - the value fluctuated wildly.  I started to wonder why.

Watching the other values displayed, it soon became clear that "time
remaining" was calculated from the instantaneous download rate.  But the
download rate varied hugely - from 0 ("infinite time remaining") to
100kB/s.

A value based on the average rate over the whole download would have been
more useful.  But that could be misleading if the download rate varies
(perhaps the 100kB/s peer left the cloud at some point, for example).  And
it could also be misleading at the end of the download, when you really do
care about the instantaneous rate.

The standard compromise to deal with these problems is to take a weighted
average of the recent rates (I won't worry for the moment about whether we
want to average rates or their inverse) - an exponential weighting is easy
to obtain from a "rolling average".
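
For concreteness, here is a minimal sketch of such a rolling average in
Python (the class and the alpha value are my own choices, not taken from
any particular client):

    class RollingRate:
        """Exponentially weighted moving average of a download rate."""

        def __init__(self, alpha=0.1):
            self.alpha = alpha      # weight given to the newest sample
            self.rate = None        # current smoothed rate (bytes/s)

        def update(self, sample):
            """Fold one instantaneous rate measurement into the average."""
            if self.rate is None:
                self.rate = sample
            else:
                self.rate = self.alpha * sample + (1 - self.alpha) * self.rate
            return self.rate

        def time_remaining(self, bytes_left):
            """Estimate seconds remaining (None until we have a non-zero rate)."""
            if not self.rate:
                return None
            return bytes_left / self.rate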

That assumes a certain model for the system - that it is governed by a
single rate that slowly evolves over time (that is the best explanation I
can see, at least).  But looking at what was happening with BitTorrent,
that didn't seem to be correct.

Instead there seemed to be several different modes - 0, 15, 30, 60, 100
KB/s - and the system transitioned between them.  Perhaps it reflected the
number of peers.  If the transitions are fast enough they average out; if
they are slow enough then the weighted average will adapt to each in turn;
that leaves a region where the simple model above will always be wrong -
playing catch-up.

Another problem with the weighted average is that one probably wants
different weightings at different stages in the download's lifetime.  A
long-term average is appropriate for most of the download, but a more
responsive value is appropriate near the end.
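
A hypothetical way to get that is to let the smoothing factor itself
depend on progress (the endpoints here are arbitrary):

    def alpha_for_progress(fraction_done, slow=0.05, fast=0.5):
        """Smoothing factor: sluggish early in the download, responsive
        near the end.  The 0.05 / 0.5 endpoints are arbitrary."""
        return slow + (fast - slow) * fraction_done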

I also felt that the way the information was presented could be improved,
but I'll address that later.


An Alternative Model
--------------------

We could model what I saw above as a series of different states, with
transitions between them.  This sounds simple enough, but there are a
number of problems in practice.

The leading problem is how to identify a state.  A download monitor would
(presumably) have an API that is called when data is downloaded.  In other
words, calls are made irregularly and may report various amounts of data
(different packet sizes).

The most serious consequence of this irregularity is that if the download
stalls or becomes very slow, then the API is not called for a long time.
It seems that the monitor needs its own thread (or rather, a regular
"clock tick"; a separate thread with a timer would be the simplest
implementation in Java, but probably not in many C-based GUIs, I suspect).

After toying with various approaches I believe that the following is best:
the API accumulates the values of any calls made between cycles.  This
gives a regular, unbiased signal (unlike interleaving "zero" values on
clock ticks with intermittent API calls).  The integrated value across the
cycle identifies the state.
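
A sketch of that accumulator, assuming some timer drives tick() once per
cycle (the names are mine):

    import threading

    class DownloadMonitor:
        """Accumulates bytes reported by the download code; sampled once
        per cycle by a clock tick (eg a timer thread)."""

        def __init__(self):
            self._lock = threading.Lock()
            self._accumulated = 0

        def on_data(self, nbytes):
            """Called (irregularly) whenever a chunk of data arrives."""
            with self._lock:
                self._accumulated += nbytes

        def tick(self):
            """Called regularly by the clock; returns the bytes received
            during the last cycle (zero if nothing arrived)."""
            with self._lock:
                total, self._accumulated = self._accumulated, 0
            return total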

While I think this is the best solution, it introduces another problem -
for slow downloads (or short cycles) the system introduces "ghost" zero
states that mask any transition between "real" states.  For example,
consider a 1 second cycle with a system that receives 10 units every
minute (as a single API call).  If that system then jumps to 20 units
every minute, we will model that as a transition from 10 to zero (repeated
zero) to 20, rather than a transition from 10 to 20.

We need either second-order correlations (hard) or additional structure in
the model.  One fix is to have a separate "zero" for each state.  So 10
and zero(10) transition to 20 and zero(20).
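
A sketch of labelling ticks that way (naming is mine): each non-zero cycle
total is a state in its own right, and a zero cycle is labelled with the
last non-zero state it belongs to:

    def label_states(cycle_totals):
        """Map per-cycle byte counts to state labels, giving each rate an
        associated 'zero' state: 10, 'zero(10)', 20, 'zero(20)', ..."""
        labels = []
        last_rate = None
        for total in cycle_totals:
            if total > 0:
                last_rate = total
                labels.append(total)
            else:
                labels.append(f'zero({last_rate})')
        return labels

    # eg label_states([10, 0, 0, 20]) -> [10, 'zero(10)', 'zero(10)', 20]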


Transition Probabilities
------------------------

So far I have ignored transition probabilities.  We can treat each state as
having a fixed lifetime (one cycle) and include transitions to "self" (and
cycles via the associated zero, which hopefully will come out in the wash)
to indicate that the state remains the same across ticks.

With a constant lifetime for states, probabilities are proportional to
transition counts.
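
With that convention, estimating the probabilities is just counting and
normalising; a sketch:

    from collections import Counter, defaultdict

    def transition_probabilities(labels):
        """Estimate P(next | current) from a sequence of state labels.
        With a fixed one-cycle lifetime per state, probabilities are just
        normalised transition counts."""
        counts = defaultdict(Counter)
        for current, nxt in zip(labels, labels[1:]):
            counts[current][nxt] += 1
        return {state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
                for state, c in counts.items()}

    # eg transition_probabilities([10, 'zero(10)', 'zero(10)', 20])
    #    -> {10: {'zero(10)': 1.0}, 'zero(10)': {'zero(10)': 0.5, 20: 0.5}}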

There are two useful things we can do with transition probabilities.

First, we can solve for the expected download rate looking forwards.  This
is an average over all future possibilities (to infinite time) and can be
calculated (I think) via a simple matrix solution (it's just a bunch of
simultaneous equations).  Inverting a matrix might be expensive with many
states (even if they are grouped, which I will come to in a moment), but I
have a hunch that they can also be approximated efficiently by a simple
iterative process (the rate of state x is calculated from all the others;
rinse and repeat) - that looks like it will be stable and could allow a
solution to adapt as probabilities drift with time.
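
One concrete (hypothetical) way to write down those simultaneous
equations: let v(x) be the expected future rate from state x, with a
discount factor gamma - an assumption added here so that the
infinite-horizon sum is finite - giving v(x) = r(x) + gamma * sum_y
P(x,y) v(y).  That can be solved directly, or by exactly the
rinse-and-repeat iteration above:

    import numpy as np

    def expected_rates(P, r, gamma=0.99, iterations=200):
        """Expected (discounted) future rate from each state.

        P     : (n, n) row-stochastic transition matrix
        r     : (n,) rate associated with each state
        gamma : discount factor - an added assumption so that the
                infinite-horizon sum converges
        """
        # direct solve of the simultaneous equations v = r + gamma P v
        v_direct = np.linalg.solve(np.eye(len(r)) - gamma * P, r)

        # the same thing by simple iteration: recompute each state's value
        # from all the others, rinse and repeat
        v = np.zeros(len(r))
        for _ in range(iterations):
            v = r + gamma * (P @ v)

        return v_direct, v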

That sounds cool enough, but it may still be a complicated way to obtain
the value you'd get by simply averaging the states (with appropriate
weighting).  Does the extra complexity buy anything?  I think there may be
some use if probabilities change - it's not yet clear to me whether
"expiring" transition probabilities is equivalent to using a running
average.

Second, we can predict the expected behaviour (and explore the variations)
in the short term.  We can tell the difference between a stable state
(with a high "self" probability) and an unstable one - for the stable
state we can make an accurate prediction of near-term download speeds.
This seems to be a significant advantage over other schemes - it occupies
a middle ground between global average and latest value.
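
The near-term forecast falls out of the same transition matrix by pushing
the current state forward a few ticks; a sketch:

    import numpy as np

    def forecast(P, r, current_state, ticks):
        """Expected rate at each of the next `ticks` cycles, starting
        from the state with index `current_state`."""
        dist = np.zeros(len(r))
        dist[current_state] = 1.0
        expected = []
        for _ in range(ticks):
            dist = dist @ P          # distribution one cycle later
            expected.append(dist @ r)
        return expected

If the current state's "self" probability is high the forecast stays close
to the current rate; otherwise it relaxes towards the long-run average.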

Andrew

Grouping States

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 16:06:33 -0400 (CLT)

Grouping states simplifies / speeds calculations.  Dynamically regrouping
seems problematic (re-assignment after throwing away information).  So
either atomic values should be accumulated and then grouped for each
analysis (which sounds expensive) or we need a "fixed" system that is
sufficiently general for all cases.

An extensible "buckets" system would have to be logarithmic to be
manageable.  I guess an increase of 10% bin-on-bin might be about right,
giving sufficient resolution while keeping sizes sufficiently small.

Perhaps better would be an ordered tree where new values descend until
they are within 10% of an existing value (whose value might shift
accordingly?).  That suggests bin width might depend on popularity, but
that raises the resampling problems again.

Hmmm.  The tree will have overlap/shadowing issues unless bins are
regular.  So we're left with a tree that implements sparse buckets.
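
A sparse version of those 10% logarithmic buckets is easy to sketch with a
map keyed by bin index (a balanced tree would do just as well as the dict
used here; the zero states stay outside the bins):

    import math
    from collections import defaultdict

    RATIO = 1.1   # 10% bin-on-bin growth

    def bucket_index(value):
        """Logarithmic bin index for a (non-zero) rate."""
        return math.floor(math.log(value) / math.log(RATIO))

    def bucket_bounds(index):
        """Range of values covered by a bin."""
        return RATIO ** index, RATIO ** (index + 1)

    counts = defaultdict(int)         # sparse: only bins actually seen
    for rate in (10, 11, 30, 60, 100, 102):
        counts[bucket_index(rate)] += 1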

GUI (Clocks)

From: "andrew cooke" <andrew@...>

Date: Mon, 1 May 2006 16:23:05 -0400 (CLT)

"Progress" is typically shown as a bar, but "time remaining" doesn't seem
to have an equivalent standard representation.

It seems to me that you could show the appropriate information on a clock
face.  Different scales (few minutes, many minutes, few hours, many hours,
etc.) could be handled by selecting seconds, minutes or hours as units
(perhaps indicated by the clock hands, or in text) and by showing either a
whole dial or a quarter dial.

The time remaining could be marked as an arc from 12, but you could also
factor in current time and have an arc starting from when the process
started, changing colour at the current time, and finishing at the
expected finish time.
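
A minimal sketch of those arcs, using tkinter purely as an example toolkit
and made-up times:

    import tkinter as tk

    DIAL_MINUTES = 60   # full dial = one hour, minutes as the unit

    def angle(minutes):
        """Tkinter start angle (degrees anti-clockwise from 3 o'clock)
        for a position measured clockwise from 12 on a minutes dial."""
        return 90 - 360.0 * (minutes % DIAL_MINUTES) / DIAL_MINUTES

    def draw_progress_clock(canvas, bbox, started, now, finish):
        """Elapsed (grey) and remaining (blue) time as arcs on a dial."""
        canvas.create_oval(*bbox, outline='black')
        canvas.create_arc(*bbox, start=angle(started),
                          extent=angle(now) - angle(started),
                          style=tk.ARC, outline='grey', width=6)
        canvas.create_arc(*bbox, start=angle(now),
                          extent=angle(finish) - angle(now),
                          style=tk.ARC, outline='blue', width=6)

    root = tk.Tk()
    canvas = tk.Canvas(root, width=220, height=220)
    canvas.pack()
    # a download that started at 5 past, it is now 20 past, and it is
    # expected to finish at quarter to (times made up for illustration)
    draw_progress_clock(canvas, (10, 10, 210, 210), started=5, now=20, finish=45)
    root.mainloop()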
