
Compiling Recursive Descent to Regular Expressions

From: "andrew cooke" <andrew@...>

Date: Sat, 4 Apr 2009 09:20:37 -0400 (CLT)

I just finished some initial tests on "compiling" the recursive descent
parser in LEPL to a deterministic finite automaton (DFA) using regular
expressions.

There are some limitations, of course - I only change the lower parts of
the tree that match characters.  This is not quite as obvious as it may
sound, because my regular expression engine can handle arbitrary Python
objects, so regular expressions do not have to be made of letters.  But I
do need to write the conversion from matcher to regular expression for
each matcher, and currently I handle only And, Or, Any, Literal and some
calls to DepthFirst (which is the core repetition matcher).
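
To give a rough feel for what that conversion involves, here is a
hypothetical, stripped-down sketch - the matcher classes and to_regex
function below are simplified stand-ins of my own, not LEPL's actual
internals - that maps And, Or, Any and Literal nodes to regular
expression text:

  import re

  # Hypothetical, minimal matcher classes - just enough structure to show
  # the shape of the rewriting; real matchers carry much more state.
  class Literal:
      def __init__(self, text):
          self.text = text

  class Any:
      def __init__(self, chars=None):
          self.chars = chars   # None means "match any single character"

  class And:
      def __init__(self, *matchers):
          self.matchers = matchers

  class Or:
      def __init__(self, *matchers):
          self.matchers = matchers

  def to_regex(matcher):
      '''Convert a tree of character-level matchers to a regex string.'''
      if isinstance(matcher, Literal):
          return re.escape(matcher.text)
      if isinstance(matcher, Any):
          if matcher.chars is None:
              return '.'
          return '[' + re.escape(matcher.chars) + ']'
      if isinstance(matcher, And):
          return ''.join(to_regex(m) for m in matcher.matchers)
      if isinstance(matcher, Or):
          return '(' + '|'.join(to_regex(m) for m in matcher.matchers) + ')'
      raise TypeError('cannot rewrite %r as a regular expression' % matcher)

  # An optional sign followed by a digit, roughly:
  print(to_regex(And(Or(Any('+-'), Literal('')), Any('0123456789'))))
  # prints ([\+\-]|)[0123456789]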

But even that overstates the limitation, because those matchers account
for a large fraction of what is used in most parsers (LEPL provides many
more matchers, but they are sugar built on top of these).  In practice
the biggest problem is that arbitrary transforms (functions) can be
invoked on the results as they are generated.

I ameliorated the effect of actions by making composition explicit -
composite actions are now available for inspection internally as lists of
functions, and the regular expression rewriting engine makes use of this
to identify "add" (the function used to combine strings).
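
The underlying idea is simple: instead of composing functions opaquely
(so that all you can do is call the result), keep the composition as a
list that other code can walk.  A minimal sketch of that pattern, using
illustrative names (Composite and add here are my own stand-ins, not
LEPL's internal classes):

  def add(results):
      '''Combine a list of matched fragments into a single string.'''
      return [''.join(results)]

  class Composite:
      '''Keep composed actions as an inspectable list, not a closure.'''
      def __init__(self, functions):
          self.functions = list(functions)
      def __call__(self, results):
          for function in self.functions:
              results = function(results)
          return results

  actions = Composite([add])

  # A rewriter can now look inside the composition; if every function is
  # "add" then the matched characters are simply being joined, so the
  # subtree can safely be replaced by a single regular expression.
  can_rewrite = all(f is add for f in actions.functions)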

Another limitation is that the fastest regular expression engine gives
only a single greedy match.  But a second engine, using a pushdown
automaton, is nearly as fast (see results below) and provides all possible
matches.


Anyway, as an example, here is the regular expression that is
auto-generated for the Float() matcher:
([\+\-]|)([0-9]([0-9])*(\.|)|([0-9]([0-9])*|)\.[0-9]([0-9])*)([Ee]([\+\-]|)[0-9]([0-9])*|)
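
As a quick sanity check (mine, added here for illustration - LEPL
matches this with its own engines, not with re), that expression also
behaves as expected under Python's re module:

  import re

  FLOAT_REGEX = (r'([\+\-]|)([0-9]([0-9])*(\.|)|([0-9]([0-9])*|)\.[0-9]([0-9])*)'
                 r'([Ee]([\+\-]|)[0-9]([0-9])*|)')

  # re.fullmatch needs Python 3.4 or later.
  for text in ['1.2e3', '-0.5', '42', '3.', '.25E-2']:
      assert re.fullmatch(FLOAT_REGEX, text), text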


Note that the code would be even faster if people used the Regexp()
matcher to provide a regular expression directly (which uses Python's fast
"re" library), but then you start to lose some of the other advantages of
LEPL (you only get the greedy match, the syntax is uglier, reuse is
harder).

Even then, I could replace my "greedy" engine with Python's (and keep the
automatic rewriting).  In practice I don't do that because (1) the regexp
syntax I use is simpler and easier to target, and (2) my engine works with
streams of data, while Python's requires (as far as I can tell) that the
string be in memory.  In theory you can use my regexp engine to parse a
file that is larger than the memory available to Python; testing large
files is still on my todo list.
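
To make the contrast concrete (this is just my illustration - the file
name, the pattern and the chunk size are made up, and nothing here is
LEPL code): with re the whole text has to be read first, while a
streaming engine can pull characters on demand:

  import re

  def match_in_memory(path):
      # Python's re searches an in-memory string, so the file is read whole.
      with open(path) as source:
          data = source.read()           # loads the entire file
          return re.match(r'[0-9]+', data)

  def characters(path, chunk_size=4096):
      # A stream-based engine could instead consume characters lazily,
      # never holding more than a small buffer in memory.
      with open(path) as source:
          while True:
              chunk = source.read(chunk_size)
              if not chunk:
                  return
              yield from chunk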


Anyway, to the performance tests.  I used my standard expressions example,
but "spiced up" to add some complexity (yes, this improves the results
below).  So instead of matching integers I match float values (including
exponents).

The expression to match is '1.2e3 + 2.3e4 * (3.4e5 + 4.5e6 - 5.6e7)'

The results are (in arbitrary units):
Default config: 5.8
NFA (slower pushdown) regexp: 2.9
DFA (faster greedy) regexp: 2.8

So the parser is "twice as fast".  Note that this timing covers only
parsing - building the parser takes longer because of the extra rewriting
step (I haven't measured that, and it's not noticeable in use, but it
must cost something).
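
For anyone who wants to reproduce that kind of comparison, a harness
along these lines would do.  This is a sketch under assumptions:
parse_default, parse_nfa and parse_dfa are placeholders for parsers
built with the corresponding LEPL configurations (whose exact setup
calls are not shown in this post), and it prints seconds rather than
the arbitrary units above.

  from timeit import timeit

  EXPRESSION = '1.2e3 + 2.3e4 * (3.4e5 + 4.5e6 - 5.6e7)'

  def compare(parsers, repeats=1000):
      '''Time each named parse function on the same input.'''
      for name, parse in sorted(parsers.items()):
          seconds = timeit(lambda: parse(EXPRESSION), number=repeats)
          print('%s: %.2f s' % (name, seconds))

  # compare({'default': parse_default,
  #          'nfa regexp': parse_nfa,
  #          'dfa regexp': parse_dfa})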


In summary, the following aspects of LEPL's design helped here:
- Using a small core of matchers (with syntactic sugar on top)
- Exposing the DAG of matchers for rewriting before use
- Exposing composed actions to rewriting

Andrew
