
Re: [Cute] URI names - nice argument

From: "andrew cooke" <andrew@...>

Date: Fri, 9 Dec 2005 09:12:24 -0300 (CLST)

And here's the reason why voi:// is used...


On 2005 Dec 8, at 21.42, Doug Tody wrote:

> A given URI may resolve into multiple URLs pointing to multiple
> instances.

That's the difference!  I had completely forgotten about the
one-to-many resolution.

I'm working this through out loud here, Doug, for my benefit rather
than yours, as I imagine you've been through this already, and
because it might be useful (to me if no one else) to have the whole
argument in one place.

The underlying reason is that the resources in question are biggish.
This breaks the assumptions of the best practice/architecture
analysis in two independent ways:

1. The resources are replicated, and large enough that the client's
location on the network matters.

2. The size means that HTTP is probably not the best transport
mechanism, but instead GridFTP, or BitTorrent, or something else.

In both cases, the client can't be expected to make a good decision
about which source to use (because that will depend on details of the
national and intercontinental network, which will moreover change over
time), nor which protocol to use (which will also depend on network
environment and time).  A local resolver can be expected to know
these things, either by discovery or configuration.
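
To make that concrete, here is a rough sketch in Python (the
identifiers, mirrors and preference list are all made up -- this is
not the actual VO resolver) of what such a locally configured
resolver might look like: one abstract identifier maps to several
(protocol, URL) replicas, and the choice between them uses only
local knowledge.

# Hypothetical local configuration: the transports this site can use,
# in order of preference, and the known replicas for each identifier.
PREFERRED_PROTOCOLS = ["gridftp", "bittorrent", "http"]

REPLICAS = {
    "voi://example.org/survey/field123": [
        ("gridftp", "gridftp://mirror-eu.example.org/survey/field123"),
        ("http", "http://mirror-us.example.org/survey/field123"),
        ("bittorrent", "http://tracker.example.org/field123.torrent"),
    ],
}

def resolve(uri, preferred=PREFERRED_PROTOCOLS, replicas=REPLICAS):
    """Return a (protocol, url) pair for the most preferred transport
    that has a replica of uri; raises KeyError for unknown names."""
    candidates = replicas[uri]
    for protocol in preferred:
        for (proto, url) in candidates:
            if proto == protocol:
                return (proto, url)
    return candidates[0]  # fall back to whatever is listed first

print(resolve("voi://example.org/survey/field123"))
# -> ('gridftp', 'gridftp://mirror-eu.example.org/survey/field123')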

The assumption that's broken is the single, almost hidden, assumption
that the transport issue is solved -- `use HTTP'.  Even if that were
sorted out, and everyone decided that GridFTP (say) was the single
best transport, the analysis also assumes that there is a single
source -- a single DNS host -- for the resource; the replication in
(1) means that we're not assuming that.  That could also be worked
around by having a single DNS name resolve to multiple geographically
dispersed IP addresses (Google is well known to do this), but that is
technically complicated, therefore fragile, and still centralised.
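
As an aside, the one-name-many-addresses trick is easy to see from
any client.  A small sketch, standard library only; how many
addresses come back, and whether they are usefully dispersed, depends
entirely on the name's DNS setup and on the local resolver:

import socket

def addresses(hostname, port=80):
    # getaddrinfo returns one entry per address/socket-type combination;
    # keep just the distinct IP addresses.
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

print(addresses("www.google.com"))  # typically several addresses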

Even if the TAG (the W3C Technical Architecture Group, authors of the
Web Architecture document) acknowledged the first, HTTP point, its
response to this second point would be to point out that caching
already gives you the replication implicit in (1).  One of the good
features of HTTP is
that it is stateless, which means that it is very friendly to caches
and proxies, so you _can_ have a simple single source, and just rely
on caches to speed things up -- don't try to outsmart the network!
But the sizes undermine that argument, too: few places have the
resources to cache lots of multi-GB files, and if regional or
national centres were set up which could handle that, it would
require configuration cleverness to use them.  Thus the replication
is essentially a type of preemptive caching.
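
For what it's worth, the cache-friendliness being appealed to is just
the ordinary conditional GET: a client, or a shared cache sitting in
front of many clients, revalidates its stored copy and only
re-downloads the body when it has changed.  A rough sketch (the URL
and ETag are invented):

import urllib.error
import urllib.request

def revalidate(url, etag):
    request = urllib.request.Request(url, headers={"If-None-Match": etag})
    try:
        with urllib.request.urlopen(request) as response:
            return response.read()  # 200: changed, the full body comes again
    except urllib.error.HTTPError as error:
        if error.code == 304:
            return None  # 304: the cached copy is still good, no body sent
        raise

# revalidate("http://archive.example.org/survey/field123", '"abc123"')

A 304 costs next to nothing for a small resource; having to pull a
multi-GB body again on a 200 is exactly what the caches above can't
afford.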

On the other hand: I suppose there is still one case for using HTTP
with a (nominally) single source, along with a smart local proxy,
which spots when you're requesting a resource/source it knows about,
and satisfies those requests using (transparently) a separate network
of replicas and protocols.  That way, the client gets all the
simplicity, predictability and API advantages of using HTTP naively
(because that would work fine over a local network).  The proxy is
effectively acting as a resolver, but the client is interacting with
it using an extremely simple and possibly built-in protocol/API, and
so doesn't have to care.  Is there mileage in that?
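
A toy version of that proxy, just to see the shape of it -- the port,
path and replica table are invented, and the "separate network" here
is plain HTTP where a real one might use GridFTP or BitTorrent:

import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-site knowledge: which replica to use for each known
# resource (discovered or configured, and changing over time).
KNOWN = {
    "/survey/field123": "http://mirror-eu.example.org/survey/field123",
}

class ResolvingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        replica = KNOWN.get(self.path)
        if replica is None:
            self.send_error(404, "unknown resource")
            return
        # Fetch from the replica chosen for this site and stream it back;
        # the client only ever sees plain local HTTP.
        with urllib.request.urlopen(replica) as source:
            self.send_response(200)
            self.send_header("Content-Type",
                             source.headers.get("Content-Type",
                                                "application/octet-stream"))
            self.end_headers()
            while True:
                chunk = source.read(64 * 1024)
                if not chunk:
                    break
                self.wfile.write(chunk)

HTTPServer(("localhost", 8080), ResolvingProxy).serve_forever()

The client just asks localhost for the resource over ordinary HTTP;
swapping the urlopen call for a GridFTP or BitTorrent fetch is where
the real work would go.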

...but I think I'm going on at too much length now, so I'll shut up!

All the best,


compute mailing list

Comment on this post