# C[omp]ute

Welcome to my blog, which was once a mailing list of the same name and is still generated by mail. Please reply via the "comment" links.

Always interested in offers/projects/new ideas. Eclectic experience in fields like: numerical computing; Python web; Java enterprise; functional languages; GPGPU; SQL databases; etc. Based in Santiago, Chile; telecommute worldwide. CV; email.

© 2006-2013 Andrew Cooke (site) / post authors (content).

## Graphs, Maps and Trees

From: "andrew cooke" <andrew@...>

Date: Sun, 15 Jan 2006 10:18:17 -0300 (CLST)

Last night I read these essays on a possible future for the study of
literature:

I paid most attention to the first, which argued for an emphasis on the
study of texts as a population, rather than focussing on individual
readings.  This immediately suggests a more "scientific" approach, and the
title "scientist" is given to the author in this tribute (the source of
the links above) - http://www.thevalve.org/go

Now, in what follows, I suspect much can be explained by the difference in
emphasis between the social sciences (history, geography, economics) and
the natural sciences (physics, chemistry, biology).  As a graduate of the
latter, I find the former somewhat disturbing.

"Disturbing" rather than "wrong" largely because while it does, indeed,
feel wrong, it is very difficult to explain exactly why (I don't manage to
do so  here).

When I read the "Graphs" essay I had a couple of initial responses. First,
I was interested.  The aim is a violent cultural change in an academic
discipline (or the creation of a new one), and the argument is clear and
moderately convincing.

Second, I was amused and concerned.  The "science" that the article
illustrated seemed very immature.  Now any new discipline will suffer from
this, but my worry focussed (ironically) more on this individual text,
written by a "scientist".

You can see this most clearly in the way information is presented. The
graphs are poorly labelled and the presentation is crude.  For example,
the first illustration shows several curves which, it is claimed, are
similar apart from a translation along the x axis, yet they are presented
without normalisation in that direction; and they show what appears to be
exponential growth, yet are plotted on a linear y scale.
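To make the complaint concrete, here is a sketch with my own synthetic data (nothing from the essay): on a log y scale, exponential curves that differ only by an x translation become parallel straight lines, so the claimed similarity is something you can check rather than squint at.

```python
import numpy as np

# Entirely synthetic data (not the essay's): three exponential curves
# that differ only by a translation along the x axis.
x = np.linspace(0.0, 10.0, 200)
rate = 0.8
shifts = [0.0, 2.0, 4.0]
curves = [np.exp(rate * (x - s)) for s in shifts]

# On a log y scale each curve is a straight line.  Fitting a line to
# log(y) makes the claim checkable: the slopes should all equal `rate`
# (same growth), and the intercepts should differ by -rate * shift
# (a pure translation in x).
fits = [np.polyfit(x, np.log(c), 1) for c in curves]
slopes = [f[0] for f in fits]
intercepts = [f[1] for f in fits]
```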

Once this issue is raised in the illustrations it is apparent throughout
the text.  I did not notice any reference to statistical error; in a study
of a population of objects that is a remarkable omission.  This is not
just a problem of procedure - there are interesting questions that cannot
be asked without the appropriate framework.  For example, in the essay on
maps, I wondered whether there exist two-dimensional non-parametric
statistics (ie based only on relative position) that might allow analysis
of maps of "imaginary worlds".
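Such statistics do exist, at least in simple forms.  As a sketch (my own choice of tool - the classic Clark-Evans nearest-neighbour ratio, not anything suggested in the essays), the following depends only on the relative positions of a set of points, plus the area of the region they sit in:

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans ratio: mean observed nearest-neighbour distance over
    the value expected under complete spatial randomness.  R ~ 1 means
    random, R < 1 clustered, R > 1 regular.  (No edge correction - this
    is a sketch, not a production implementation.)"""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Pairwise distances; ignore each point's zero distance to itself.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / area)
    return observed / expected

rng = np.random.default_rng(0)
scattered = rng.uniform(0.0, 10.0, size=(200, 2))   # roughly random
clustered = rng.normal(5.0, 0.3, size=(200, 2))     # one tight clump
r_scattered = clark_evans(scattered, area=100.0)
r_clustered = clark_evans(clustered, area=100.0)
```

The ratio is unchanged by translating, rotating, or uniformly rescaling the map, which is the kind of invariance an "imaginary world" analysis would need.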

Perhaps the most worrying "analysis" was the emphasis on "cycles".  This
is a notoriously difficult subject - detecting periodicity in existing
data.  Several pages of wild speculation make no reference to even the
most basic analytic tools (eg power spectra).
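For comparison, the most basic tool is only a few lines.  This is my own synthetic example - the 25-step "cycle" below is invented purely for illustration - but it shows the kind of check that should precede any talk of periodicity:

```python
import numpy as np

# Synthetic series: a 25-step cycle (invented for illustration) plus noise.
rng = np.random.default_rng(1)
t = np.arange(200)
period = 25
series = np.sin(2 * np.pi * t / period) + rng.normal(0.0, 0.5, len(t))

# Power spectrum via the FFT.  A real cycle shows up as a sharp peak at
# frequency 1/period; if no such peak stands above the noise, the claimed
# periodicity is speculation.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(len(t), d=1.0)
peak = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin
estimated_period = 1.0 / peak
```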

But, as I have already asked, perhaps this is normal for the "soft"
sciences?  I have no idea.  What disturbs me is to what extent the
differences above are qualitative (the use of a different professional
vocabulary) rather than quantitative (allowing a different level of
confidence in the conclusions).

Two related topics make this particularly clear:

1 - Inferring models from existing data.  If you study some data and
draw some conclusion, then I, at least, am uncertain how you can test
whether that conclusion is correct.

2 - Little emphasis on testable hypotheses.

The problem with (1) is that the complete set of "what might be
interesting" in a set of data is impossible (I believe - although it seems
like it might be possible) to define.  We have standard tools that can say
whether a particular peak in a power spectrum is significant, but does
that significance include the fact that the data did not show any distinct
change on the date when my favourite pet goat died?  Probably not.  So what
happens when I look at some data and see that correlation?  In other words
- we don't approach data with a pre-determined list of hypotheses.  We are
not testing anything.
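A small simulation makes this concrete (my own sketch, not anything from the essays): a significance threshold that is exactly right for one frequency chosen in advance fails badly when every frequency is scanned after the fact - which is what "looking at the data and seeing a correlation" amounts to.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 128, 2000
# For white Gaussian noise the mean-normalised periodogram bins are
# approximately Exp(1), so this threshold gives a 5% false-alarm rate
# for ONE frequency chosen before looking at the data:
threshold = -np.log(0.05)

pre_chosen, scanned = 0, 0
for _ in range(trials):
    x = rng.normal(0.0, 1.0, n)          # pure noise: nothing to find
    p = np.abs(np.fft.rfft(x))[1:] ** 2  # drop the zero-frequency bin
    p = p / p.mean()
    if p[7] > threshold:     # one frequency fixed in advance
        pre_chosen += 1
    if p.max() > threshold:  # every frequency searched after the fact
        scanned += 1

pre_rate = pre_chosen / trials   # close to the nominal 5%
scan_rate = scanned / trials     # far above it: the search was not free
```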

And so (1) is tied to (2).  With all the problems that entails for
evolution, astronomy, etc.

This is interesting and incomplete and probably wrong in many places, but
I've run out of time (breakfast over - time for work!).  Perhaps more
later.

From painquale on Mefi - http://www.metafilter.com/mefi/48244

Andrew