Everything is a Ghetto

While reading this controversial link bait, consider buying my product/service

Transducers in SICP

I was re-reading the first two chapters of SICP at the weekend as I am starting to prepare SICP Distilled and I was pleased to see:

The value of expressing programs as sequence operations is that this helps us make program designs that are modular, that is, designs that are constructed by combining relatively independent pieces. We can encourage modular design by providing a library of standard components together with a conventional interface for connecting the components in flexible ways.

Modular construction is a powerful strategy for controlling complexity in engineering design. In real signal-processing applications, for example, designers regularly build systems by cascading elements selected from standardized families of filters and transducers. Similarly, sequence operations provide a library of standard program elements that we can mix and match.

While we know Rich finds his words in an etymology dictionary, I thought it interesting to see someone else arrive at the same word for a similar concept.

If you have not seen Rich’s intro to transducers, please do watch it, it’s excellent.

SICP Distilled - Closing Soon

Less than 2 days left!

Please check out my recent Kickstarter - SICP Distilled

Over 600 people have signed up now; I am really excited at the interest.

At first I figured that the course would be worth ~£45 and put up limited quantities building up to that amount. Very soon after the £20 price ran out, someone commented it was a shame they had missed out, and I realised artificial scarcity is bullshit, so I made all the rewards unlimited and invited people to pay less if they liked. A while later I put up a concession price for those less able to pay and again invited people to pay less (a friend commented on Facebook that I was a shit capitalist, a title I quite like).

I will be using some of the money raised to buy a camera and rent a room for a face-to-face course in London (tickets at reduced prices, venue/timings announced soon), which I’ll record and use to inform the screencasts and projects for the course. I will also be pulling in some friends to work with me on the interpreter and compiler parts; I have some really interesting ideas for ways to get the ideas across (hopefully including a minimal x86 or LLVM lisp compiler project).

Thanks to all who pledged to support the project; I now have a massive obligation to make it as awesome as I can, and the resources to do it.

Thanks again, Tom

PS: Retweets and other sharing are really appreciated in these final few days.

SICP Distilled

I launched a Kickstarter project yesterday to do an idiosyncratic tour of the best of Structure and Interpretation of Computer Programs in Clojure.

Please sign up if you are interested (below is copied from Kickstarter)


It is a great introduction to computation; Peter Norvig probably said it best:

To use an analogy, if SICP were about automobiles, it would be for the person who wants to know how cars work, how they are built, and how one might design fuel-efficient, safe, reliable vehicles for the 21st century. The people who hate SICP are the ones who just want to know how to drive their car on the highway, just like everyone else.

I was going to write more about why one should study SICP, but Kai Wu of Hacker Retreat did a stellar job here


It’s a long book with lots of exercises, and lots of people I know have started it, loved it, but somehow not finished.

Abelson and Sussman themselves highlight the important lessons of SICP in their paper Lisp: A Language For Stratified Design and I have my own favourite bits.

As the book itself is available online for free I want to make the perfect accompaniment to it - an ebook summarising the key ideas, short videos describing them, screencasts of solving some of the exercises, translation of the examples into Clojure, example projects, partial solutions for you to complete (similar to 4clojure and Clojure koans) and a place to discuss solving them with people.

Something more like the exercises for the Feynman Lectures on Physics, to be enjoyed alongside SICP, rather than completely replace it.

Maybe some ideas come out a little different in Clojure, or I take a slightly novel approach (hence idiosyncratic), maybe I miss something out (hence tour, sorry), but I hope we can have some fun along the way.

I’ve surveyed friends and people who signed up early about which bits they enjoyed, where they got stuck and how best to approach it.

People who join in on the Kickstarter get a reduced price and a chance to affect the direction of the project (and, let’s be honest, be test subjects).

Why Clojure?

  • A modern take on Lisp
  • Slightly fewer parens (though obviously you will come to love them anyway)
  • More immutability! (SICP gets as far as possible without mutation and quite rightly warns you to be careful when you start)
  • Target JVM or Javascript, and use their libraries
  • An excellent parser library
  • A more fully featured logic engine for us to peek at
  • Different concurrency models

What will I do?

You will:

  • Come to appreciate and use higher order functions
  • Build everything from (almost) nothing
  • Deeply embed DSLs into Clojure
  • Draw Escher pictures
  • Create datastructures
  • Learn why reduce is considered harmful (not SICP but I have some fun exercises planned)
  • Understand iteration and recursion
  • Create an object system
  • Make a compiler
  • Build a Lisp interpreter (or a mini version of your other favourite language)
  • Do some logic programming

Plus more!

Why Kickstarter?

Having the money up front allows me to judge interest, spend more on post-production of the videos, devote more time to it and create a way for people to collaborate and help each other along the way.

I want it to be free!

So do I! If I was independently wealthy I would love to spend all my days on this, but alas I am not. I do believe in freedom though, and promise to make all the resources free after 12 months.

To quote RMS:

The copyright system works by providing privileges and thus benefits to publishers and authors; but it does not do this for their sake. Rather, it does this to modify their behaviour: to provide an incentive for authors to write more and publish more. In effect, the government spends the public’s natural rights, on the public’s behalf, as part of a deal to bring the public more published works

How long will it take?

I will roughly follow the book’s five chapters, one a month, which gives people enough time to follow along. If we have a bunch of people going through at the same time, we will be able to benefit from interactions within the group as well.

However, I am often put off by artificial deadlines in MOOCs, so I want the material to stand alone (well, at least alongside the SICP text) for someone doing it by themselves at whatever pace they like.

101 Goals Day 200 Update

This is my (month-late) day 200 update of my second 101 Goals in 1001 days.


43 - Build a robot

Did my raspirobot at the weekend at NodeBots of London.

59 - Finish 4clojure

Finished the final few over Christmas.

In Progress

41 - Reasoned Schemer

Loving it, on chapter 8. 2014 is looking like the year of logic for me.

89 - Walk Capital Ring

Petra and I have decided to do it together, we did section 10 as it goes past our place.


I now have a Trello board for planning. Mostly I need to get cracking on the cycling, walking and swimming :-)

Check out the full list and let me know if you want to join in any.

Concurrency and Parallelism in Clojure

I was pleased to be speaking at the Clojure Exchange recently (at least it was recently when I started writing this blog post). I suggested to Bruce that a talk on concurrency and parallelism would be good: since most of the books were published, reducers and core.async have appeared, giving us even more options in Clojure, and we have a new runtime in ClojureScript (I also ranted that our books get it a bit wrong).

The goals of the talk were:

  • Give people solid working definitions of concurrency and parallelism
  • Refresher on reference types (and that you don’t automatically get parallelism with them)
  • Show some easy ways to get parallelism in your programs
  • Demo a nice pattern for serialising access to something non-threadsafe with an agent
  • Explain Reducers as a way of getting parallelism (and the idea reduce/fold itself is no good for it)
  • Introduce CSP via core.async as a way of programming concurrently (that sometimes runs in parallel)
  • Point people at people way smarter than me talking for a bit longer than I had about just one of the things I tried to cover

Slides are here, video here, but I will expand on a few ideas here too.

Concurrency is not parallelism

It has been said better by Rob Pike in this talk but the definition is:

Concurrency is the composition of independently executing ‘processes’

Parallelism is the simultaneous execution of (possibly related) computations


Concurrency is about the way we structure programs

Parallelism is about the way they run

The nice thing about this definition is it’s then easy to find examples of concurrent but not parallel (running multiple threads on a single core) and parallel but not concurrent (instruction-level parallelism, data parallelism), and even when concurrency and parallelism are both present but to a different extent (running 50 threads multiplexed across 4 cores).


Rob Pike’s definition is correct. Similar definitions are given by Joe Armstrong in Programming Erlang and on the C2 Wiki (both quoted as images in the slides).

I like the C2 Wiki one as it brings out the two main ways of doing concurrency: either message passing with no shared state, or managing access to the shared state.

I have the three main books on Clojure, so I took a look at how they introduce the idea.

Clojure Programming

Both definitions assume shared state, though the book captures the idea that parallel means running at the same time, and that to get the most from your hardware you want to avoid coordination overhead.

Joy Of Clojure

I don’t like ‘roughly the same time’ here, and again it seems to presume shared state.

Programming Clojure

I think that saying parallel programs execute concurrently is wrong (replace it with ‘at the same time’ and I think it is right) and the first bullet misuses ‘concurrent’ at least once.

This is not to say it is a massive deal and stops people understanding and using the features of Clojure the books go on to explain, just that until seeing Rob’s talk I don’t think I could have given a clear definition of what the terms mean.

Reference types

Again this was just a refresher and has probably been done better elsewhere.

All of the reference types hold a value and are updated by functions with the same signature

(<changer> reference f [args*])

All can be derefed with @reference or (deref reference)

The main differences between them are whether they are synchronous or asynchronous, and coordinated or uncoordinated.

  • Synchronisation - Does the operation block?
  • Coordination - Can multiple operations be combined in a transaction?
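
As a quick sketch of how this plays out at the REPL (a minimal example of my own, not from the talk):

```clojure
(def a (atom 0))
(swap! a + 5)            ; synchronous: returns the new value, 5

(def r (ref 0))
(dosync (alter r + 5))   ; synchronous and coordinated: must be in a transaction

(def ag (agent 0))
(send ag + 5)            ; asynchronous: returns the agent immediately
(await ag)               ; block until the queued action has run

[@a @r @ag]              ; → [5 5 5]
```

Note the shared shape: every update is "apply this function (plus extra args) to the current value", and every type is read the same way with deref.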

Atoms (Synchronous,Uncoordinated)


(swap! atom f)
(swap! atom f x)
(swap! atom f x y & args)


(compare-and-set! atom oldval newval)


(reset! atom newval)

Refs (Synchronous,Coordinated)

(Must be inside a dosync transaction)


(ref-set ref val)


(alter ref fun & args)


(commute ref fun & args)

Agents (Asynchronous,Uncoordinated)

send (executed on a bounded thread pool)

(send a f & args)

send-off (executed on an unbounded thread pool)

(send-off a f & args)

Using agents to serialise access to non-threadsafe resources

If we start multiple threads and write to the console with println from them, very quickly we start to get overlapping writes.

(defn start-thread [f]
  (.start (Thread. f)))

(defn loop-print [n]
  (let [line (str n ":**********")]
    (println line)
    (Thread/sleep (rand 5))
    (recur n)))

(defn -main []
  (dotimes [n 50]
    (start-thread #(loop-print n))))

One way around it is to wrap the contested resource in an agent

(defn write [w content]
  (doto w
    (.write content)
    (.write "\n")))
(def console (agent  *out*))

(def log-file (agent (io/writer "LOG" :append true)))

(defn loop-print [n]
  (let [line (str n ":**********")
        sleep-time (rand 5)]
    (Thread/sleep sleep-time)
    (send-off console write line)
    (when (= n 50)
      ;; we have a separate file log for the 50th thread
      (send-off log-file write (str "sleeping for " sleep-time)))
    (recur n)))

(defn -main []
  (dotimes [n 100]
    (start-thread #(loop-print n))))

Here the write function takes something writeable and a string, writes the string to it with a newline at the end, and then (importantly) returns the object (this is the behaviour of doto). This allows it to be passed to an agent: the value of the agent is always the writer, and because the agent serialises application of the functions sent to it, the writer is only written to by one function at a time.

The guidance for whether to send or send-off to an agent is whether the function does any IO: send-off is for IO-bound operations and runs on an unbounded pool, whereas send is for CPU-bound operations and runs on a fixed-size pool. Either way the message queue is unbounded and you might eventually suffer memory issues (this is one of the reasons core.async and CSP are better: they provide back-pressure and make you think up front about buffering and queue size).

Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.RuntimeException: Agent is failed, needs restart
	at clojure.lang.Util.runtimeException(Util.java:223)
	at clojure.lang.Agent.dispatch(Agent.java:238)
	at clojure.core$send_via.doInvoke(core.clj:1915)
	at clojure.lang.RestFn.applyTo(RestFn.java:146)
	at clojure.core$apply.invoke(core.clj:623)
	at clojure.core$send_off.doInvoke(core.clj:1937)
	at clojure.lang.RestFn.invoke(RestFn.java:442)
	at clojure_exchange.agents$loop_print.invoke(agents.clj:24)
	at clojure_exchange.agents$_main$fn__34.invoke(agents.clj:31)
	at clojure.lang.AFn.run(AFn.java:24)
	at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

Agents and STM

Thanks to Philip Potter for reminding me to mention that agents are integrated with STM, any dispatches made in a transaction are held until it commits, and are discarded if it is retried or aborted so you can use them to safely trigger side-effects from within transactions. See the docs for more.
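
A minimal sketch of that behaviour (the names balance and audit-log are made up for illustration):

```clojure
(def balance (ref 100))
(def audit-log (agent []))

(dosync
  (alter balance - 25)
  ;; this send is held until the transaction commits, so even if the
  ;; transaction retries, the log entry is only dispatched once
  (send audit-log conj {:withdrew 25}))

(await audit-log)
@audit-log  ; → [{:withdrew 25}]
```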

Delays and futures

> (defn slow-inc [n]
   (Thread/sleep 1000)
   (inc n))


Delay

  • runs code on deref, blocks
> (def d (delay (slow-inc 5)))
;; Returns immediately

> @d
;; Will take a second as this is when 'slow-inc' will run


Future

  • runs code in another thread (from a pool)
  • blocks on deref (till ready)
> (def f (future (slow-inc 5)))
;; Returns immediately as delay did but is already running slow-inc

> @f
;; Depends how quickly you can type
;; but if you took longer than a second it will be ready to return,
;; if not it will block till it is ready

So now we have a chance at some parallelism!

We can use future as a way to just kick off new threads, so we can use future where we had start-thread (and we gain a managed thread pool to run them on).

pmap and friends

;; Make a sequence of futures
> (def futures (doall (map #(future (slow-inc %)) [1 2 3 4 5])))
;; Returns immediately (we have to use doall to force evaluation as map is lazy)

;; Now to deref them
> (map deref futures)
(2 3 4 5 6)
;; Again should return pretty rapidly but at most a second


Here is a version of a function called pmap that captures that idea

(defn pmap [f col]
  (let [futures (doall (map #(future (f %)) col))]
    (map deref futures)))

So we can do

> (pmap slow-inc [1 2 3 4 5])

and it will take 1s or thereabouts.

The actual pmap looks like this, notice the warning that f needs to be expensive enough to justify the overhead

(defn pmap
  "Like map, except f is applied in parallel. Semi-lazy in that the
  parallel computation stays ahead of the consumption, but doesn't
  realize the entire result unless required. Only useful for
  computationally intensive functions where the time of f dominates
  the coordination overhead."
  {:added "1.0"
   :static true}
  ([f coll]
   (let [n (+ 2 (.. Runtime getRuntime availableProcessors))
         rets (map #(future (f %)) coll)
         step (fn step [[x & xs :as vs] fs]
                (lazy-seq
                 (if-let [s (seq fs)]
                   (cons (deref x) (step xs (rest s)))
                   (map deref vs))))]
     (step rets (drop n rets))))
  ([f coll & colls]
   (let [step (fn step [cs]
                (lazy-seq
                 (let [ss (map seq cs)]
                   (when (every? identity ss)
                     (cons (map first ss) (step (map rest ss)))))))]
     (pmap #(apply f %) (step (cons coll colls))))))

It also has a few friends

(defn pcalls
  "Executes the no-arg fns in parallel, returning a lazy sequence of
  their values"
  {:added "1.0"
   :static true}
  [& fns] (pmap #(%) fns))

> (pcalls function-1 function-2 ...)

(defmacro pvalues
  "Returns a lazy sequence of the values of the exprs, which are
  evaluated in parallel"
  {:added "1.0"
   :static true}
  [& exprs]
  `(pcalls ~@(map #(list `fn [] %) exprs)))

> (pvalues (expensive-calc-1) (expensive-calc-2))

Waiting for futures

You need to call shutdown-agents to exit when the agents/futures are done.

(taken from clojuredocs)

;; Note: If you leave out the call to (shutdown-agents), the program will on
;; most (all?) OS/JVM combinations "hang" for 1 minute before the process exits.
;; It is simply waiting for background threads, created by the future call, to
;; be shut down.  shutdown-agents will shut them down immediately, or
;; (System/exit <exit-status>) will exit immediately without waiting for them
;; to shut down.

;; This wait occurs even if you use futures indirectly through some other Clojure
;; functions that use them internally, such as pmap or clojure.java.shell/sh
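
So a -main that uses futures typically ends with shutdown-agents; a minimal sketch (with a slow-inc like the one above):

```clojure
(defn slow-inc [n]
  (Thread/sleep 1000)
  (inc n))

(defn -main []
  (println (pmap slow-inc [1 2 3 4 5]))
  ;; shut down the agent/future thread pools so the JVM can exit now
  ;; rather than hanging for about a minute waiting on them
  (shutdown-agents))
```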


Just pushed the bits I was playing with up to github

Next time

I hope that was interesting. I am going to stop for now as it has already taken 2 months to write this. I have already described CSP and core.async here and here so won’t cover that again, but I will try to describe reducers (and why reduce is considered harmful); to be honest, though, you would probably be better off just looking at my references from the talk (especially the Guy Steele talk).

101 Goals Day 100 Update

In the middle of the year I set myself another 101 Goals in 1001 days and added reminders into my calendar to do updates every 100 days (this one was actually due October 19th).


4 - Read Beowulf

I finally read all of Seamus Heaney’s translation, shortly after he died.

So. The Spear-Danes in days gone by
and the kings who ruled them had courage and greatness.
We have heard of those princes’ heroic campaigns.

There was Shield Sheafson, scourge of many tribes,
a wrecker of mead-benches, rampaging among foes.
This terror of the hall-troops had come far.
A foundling to start with, he would flourish later on
as his powers waxed and his worth was proved.
In the end each clan on the outlying coasts
beyond the whale-road had to yield to him
and begin to pay tribute.

Afterwards a boy-child was born to Shield,
a cub in the yard, a comfort sent
by God to that nation, He knew what they had tholed,
by long times and troubles they'd come through
without a leader; so the Lord of Life,
the glorious Almighty, made this man renowned.
Shield had fathered a famous son:
Beow's name was known through the north.
And a young prince must be prudent like that,
giving freely while his father lives
so that afterwards in age when fighting starts
steadfast companions will stand by him
and hold the line. Behaviour that's admired
is the path to power among people everywhere.

Shield was still thriving when his time came
and he crossed over into the Lord's keeping.
His warrior band did what he bade them
when he laid down the law among the Danes:
they shouldered him out to the sea's flood,
a chief they revered who had long ruled them.
A ring-whorled prow rode in the harbour,
ice clad, outbound, a craft for their prince.
They stretched their beloved lord in his boat,
laid out by the mast, amidships,
the great ring-giver. Far-fetched treasures
were piled upon him, and precious gear.
I have never heard before of a ship so well furbished
with battle tackle, bladed weapons
and coats of mail. The massed treasure
was loaded on top of him: it would travel far
on out into the ocean's sway.
They decked his body no less bountifully
with offerings than those first ones did
who cast him away when he was a child
and launched him alone out oer the waves.
And they set a gold standard up
high above his head and let him drift
to wind and tide, bewailing him
and mourning their loss. No man can tell,
no wise man in hall or weathered veteran
knows for certain who salvaged that load.

Goal 5 is to memorise the opening, so I chose the above extract (not quite done it yet).

The Fight With Grendel

The Last Survivor’s Speech

Beowulf’s Funeral

These and other Old English readings are available at The Norton Anthology Of English Literature

Hackercup 2014 - Qualifier

I had some fun doing it last year so I thought I would have a go again. This weekend was the qualifier. You can see the questions here

Square Detector

You want to write an image detection system that is able to recognise different
geometric shapes. In the first version of the system you settled with just being
able to detect filled squares on a grid. You are given a grid of N×N square
cells. Each cell is either white or black.
Your task is to detect whether all the black cells form a square shape.

  .##.
  .##.
  ....
  ....    => TRUE

  .##.
  .##.
  .#..
  ....    => FALSE

We basically find the bounding box of the black cells and check that it is square and that the count of black cells is the same as its area.

(defn square? [s]
  (let [size (count s)
        black-cells (for [y (range size)
                          x (range size)
                          :when (= \# (nth (nth s y) x))]
                      [y x])
        ys (map first black-cells)
        xs (map second black-cells)
        x-min (apply min xs)
        x-max (apply max xs)
        y-min (apply min ys)
        y-max (apply max ys)
        y-edge (- y-max (dec y-min))
        x-edge (- x-max (dec x-min))
        square-size (* y-edge x-edge)]
    (and (= x-edge y-edge)
         (= (count black-cells) square-size))))

If the input were bigger I would have found min/max in a single reduce, but I like the clarity here, and naming things is good, right?

Basketball Game

A group of N high school students wants to play a basketball game.
To divide themselves into two teams they first rank all the players in the
following way:

Players with a higher shot percentage are rated higher than players with a lower
shot percentage.
If two players have the same shot percentage, the taller player is rated higher.
Luckily there are no two players with both the same shot percentage and height
so they are able to order themselves in an unambiguous way. Based on that
ordering each player is assigned a draft number from the range [1..N], where the
highest-rated player gets the number 1, the second highest-rated gets the number
2, and so on. Now the first team contains all the players with the odd draft
numbers and the second team all the players with the even draft numbers.

Each team can only have P players playing at a time, so to ensure that everyone
gets similar time on the court both teams will rotate their players according to
the following algorithm:

Each team starts the game with the P players who have the lowest draft numbers.
If there are more than P players on a team after each minute of the game the
player with the highest total time played leaves the playing field. Ties are
broken by the player with the higher draft number leaving first.
To replace her the player on the bench with the lowest total time played joins
the game. Ties are broken by the player with the lower draft number entering first.

The game has been going on for M minutes now. Your task is to print out the
names of all the players currently on the field (that is, after M rotations).

M = 3
P = 2
Wai 99 131
Weiyan 81 155
Lin 80 100
Purav 86 198
Slawek 80 192
Meihong 44 109

Sort all the players by the ranking rule and you
get the list: [Wai, Purav, Weiyan, Slawek, Lin, Meihong]. This makes the two teams:
[Wai, Weiyan, Lin]
[Purav, Slawek, Meihong]
The game starts with Lin and Meihong sitting on the bench in their respective
teams. After the first minute passes it's time for Weiyan and Slawek to sit out
since they have the highest draft numbers of the people who played. After the
second minute passes Lin and Meihong will keep playing since they only played
one minute so far and it's Wai and Purav who have to sit out.

Finally after the third minute Lin and Meihong go back to the bench and all the
players currently playing again are:
Purav Slawek Wai Weiyan

Here we sort the players according to the rule, then get the two teams using take-nth starting at the first or second player. The order the players come onto the pitch is a bit tricky: the ones with the lowest draft numbers are on first, but then they leave in order of highest first, and the people on the bench come on lowest first. So the list of players in the order they come onto the pitch is (concat (reverse (take p team)) (drop p team)), which we cycle so it’s infinite, then use (partition p 1 cycled-team-list) to get a sliding window of P players moving up in ones. A speedup I did not do would have been using modular arithmetic for large M: both cycles repeat in the team size, so we could reduce M modulo their LCM rather than taking the Mth element of the infinite list directly.
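
To see the rotation trick in isolation, here is a quick sketch with one of the example teams (keywords standing in for player names, p assumed to be 2):

```clojure
(def team [:wai :weiyan :lin])   ; already in draft order
(def p 2)

;; the order players come onto the pitch, repeated forever
(def cycled (cycle (concat (reverse (take p team)) (drop p team))))

;; who is on the pitch after each minute (minute 0 first)
(take 4 (partition p 1 cycled))
;; → ((:weiyan :wai) (:wai :lin) (:lin :weiyan) (:weiyan :wai))
```

Read as sets, this matches the worked example: Wai and Weiyan start, Weiyan then Wai sit out in turn, and after minute 3 both are back on.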

(defn compare-players
  [[name p height] [name' p' height']]
  (if (= p p')
    (> height height')
    (> p p')))

(defn sort-players [players]
  (map first
       (apply sorted-set-by compare-players players)))

(defn f [[m p players]]
  (let [players (sort-players (map parse-player players))
        team1 (take-nth 2 players)
        team2 (take-nth 2 (rest players))
        mth (fn [team]
              (let [team-size (count team)
                    cycled-team-list (cycle (concat (reverse (take p team))
                                                    (drop p team)))]
                (nth (partition p 1 cycled-team-list) m)))]
    (str/join " " (sort (concat (mth team1)
                                (mth team2))))))


Tennison

You may be familiar with the works of Alfred Lord Tennyson, the famous English
poet. In this problem we will concern ourselves with Tennison, the less famous
English tennis player. As you know, tennis is not so much a game of skill as
a game of luck and weather patterns. The goal of tennis is to win K sets before
the other player. However, the chance of winning a set is largely dependent on
whether or not there is weather.

Tennison plays best when it's sunny, but sometimes of course, it rains. Tennison
wins a set with probability ps when it's sunny, and with probability pr when it's
raining. The chance that there will be sun for the first set is pi. Luckily for
Tennison, whenever he wins a set, the probability that there will be sun increases
by pu with probability pw. Unfortunately, when Tennison loses a set, the probability
of sun decreases by pd with probability pl. What is the chance that Tennison will
be successful in his match?

Rain and sun are the only weather conditions, so P(rain) = 1 - P(sun) at all
times. Also, probabilities always stay in the range [0, 1]. If P(sun) would ever
be less than 0, it is instead 0. If it would ever be greater than 1, it is instead 1.

I did not get this one right: I thought the tree was too big to actually sum all of the different ways Tennison can win, so I tried a Monte Carlo approach, which works but is too slow to converge on the precision required.

(defn normalise [p]
  (cond (< p 0.0) 0.0
        (> p 1.0) 1.0
        :else p))

(defn sim
  ;; ps sunny win
  ;; pr rain win
  ;; pi chance of sun
  ;; win: +pu with prob pw
  ;; lose: -pd with prob pl
  ([k ps pr pi pu pw pd pl]
     (sim (int k) ps pr pi pu pw pd pl 0 0))
  ([k ps pr pi pu pw pd pl wins losses]
     (cond (= k wins) 1
           (= k losses) 0
           :else
           (let [pwin (if (< (rand) pi) ps pr)
                 won? (< (rand) pwin)]
             (if won?
               (let [increase? (< (rand) pw)
                     delta (if increase? pu 0)]
                 (recur k ps pr (min (+ delta pi) 1) pu pw pd pl (inc wins) losses))
               (let [decrease? (< (rand) pl)
                     delta (if decrease? pd 0)]
                 (recur k ps pr (max (- pi delta) 0) pu pw pd pl wins (inc losses))))))))

(defn win-probability [& args]
  (let [n 1000000
        won (reduce +
                    (pmap (fn [_]
                            (apply sim args))
                          (range n)))]
    (/ won n)))

After looking at the solutions, they suggest a nice dynamic programming approach, though it still calculates every way Tennison can win; here is a Clojure version that I think is still a bit slow on the input. I could not find a way to use the memoize function for the recursive function.
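
For what it’s worth, one workaround (not used below) is to define the function with def so the recursive calls go through the memoized var; a minimal sketch with Fibonacci standing in for the recursion:

```clojure
;; Because fib is a var, the recursive calls inside the fn resolve to
;; the memoized wrapper, so sub-results are cached between calls.
(def fib
  (memoize
   (fn [n]
     (if (< n 2)
       n
       (+ (fib (- n 1)) (fib (- n 2)))))))

(fib 30) ; → 832040, fast thanks to the caching
```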

(defn win-probability2 [k ps pr pi pu pw pd pl]
  (let [lookup (atom {})
        F (fn F [w l p]
            (cond (== w k) 1
                  (== l k) 0
                  :else
                  (if-let [cached (@lookup [w l p])]
                    cached
                    (let [p (normalise p)
                          answer
                          (+ (* p ps pw                (F (inc w) l (+ p pu)))
                             (* p ps (- 1 pw)          (F (inc w) l p))
                             (* p (- 1 ps) pl          (F w (inc l) (- p pd)))
                             (* p (- 1 ps) (- 1 pl)    (F w (inc l) p))
                             (* (- 1 p) pr pw          (F (inc w) l (+ p pu)))
                             (* (- 1 p) pr (- 1 pw)    (F (inc w) l p))
                             (* (- 1 p) (- 1 pr) pl    (F w (inc l) (- p pd)))
                             (* (- 1 p) (- 1 pr) (- 1 pl) (F w (inc l) p)))]
                      (swap! lookup assoc [w l p] answer)
                      answer))))]
    (F 0 0 pi)))

Please let me know any improvements to Tennison you can think of; the code is on github.

Asynchronous Game of Life

I had the idea to do an asynchronous Game Of Life in the browser using clojurescript’s core.async. I thought it would be fun if there was no global state, no external time and no datastructures - have the cells be channels just signalling to their neighbours when they transitioned between dead and alive.

Go take a look at it here

I will assume you have seen my other post with resources to learn CSP and core.async

I think the implementation is true to the idea of Asynchronous cellular automata but fear it has bugs as the steady state often settles on something that is not a valid GoL position (in particular I often see clumps of alive cells with too many neighbours, all surviving)

The code is on github, you can play with it in cljsfiddle and see a full page version of it.

As it is only 100 lines I may as well walk through them here.

Fairly uninteresting initialisation: prob is the probability a cell is alive at the beginning. I made width, height and cell-size atoms because I reset them all on calling init later.

(def line-colour "#cdcdcd")
(def alive "#666")
(def dead "#eee")
(def width (atom nil))
(def height (atom nil))
(def cell-size (atom nil))
(def canvas (.getElementById js/document "world"))
(def context (.getContext canvas "2d"))
(def prob 0.5)

Standard canvas drawing stuff

(defn fill_sq [x y colour]
  (let [cell-size @cell-size]
    (set! (.-fillStyle context) colour)
    (set! (.-strokeStyle context) line-colour)
    (.fillRect context
               (* x cell-size)
               (* y cell-size)
               cell-size
               cell-size)
    (.strokeRect context
                 (* x cell-size)
                 (* y cell-size)
                 cell-size
                 cell-size)))

I thought it neat to have the draw function be a channel too; the timeout here sets the speed of the simulation via backpressure on the unbuffered channels.

(def draw
  (let [c (chan)]
    (go (loop []
          (let [[x y colour] (<! c)]
            (<! (timeout 10))
            (fill_sq x y colour)
            (recur))))
    c))

The hardest bit to explain (and probably where the remaining bugs are…)

cell takes an [x y] position and returns 2 channels: one for input, which neighbours use to inform it about their alive/dead transitions, and one to be told of new neighbours by being passed their input channels, so it's a channel of channels :-)

Each cell creates a single go block that listens on its channels for new neighbours and for state transitions from neighbours. Whenever something comes from a neighbour (either 1 or -1, signalling it becoming alive or dead) the cell updates its neighbour count, works out whether its own state transitions, and if it did, tells its neighbours and sends draw a signal too. It is all kicked off by the alive cells telling their new neighbours they are alive as they connect.

There is an asymmetry in the alive/dead transitions: a dead cell becomes alive only with exactly 3 alive neighbours, while an alive cell survives with either 2 or 3.
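That rule can be pulled out as a pure function (a sketch of my own; the cell go block below inlines the same logic):

```clojure
;; Conway's rule: alive next step iff exactly 3 alive neighbours,
;; or exactly 2 and already alive.
(defn next-state [state neighbour-count]
  (if (or (= neighbour-count 3)
          (and (= neighbour-count 2) (= state :alive)))
    :alive
    :dead))

(next-state :dead 3)  ;; => :alive  (birth)
(next-state :alive 2) ;; => :alive  (survival)
(next-state :dead 2)  ;; => :dead   (the asymmetry)
```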

(defn cell [[x y]]
  (let [new-neighbour (chan)
        input (chan)
        neighbours #{}
        initial-state (if (< (rand) prob) :alive :dead)]
    (when (= initial-state :alive)
      (fill_sq x y alive))
    (go (loop [neighbour-count 0
               state initial-state
               neighbours neighbours]
          (let [[val chan] (alts! [new-neighbour input])]
            (cond (= chan new-neighbour)
                  (do (when (= state :alive)
                        (>! val 1))
                      (recur neighbour-count state (conj neighbours val)))
                  :else
                  (let [neighbour-count (+ val neighbour-count)
                        new-state (if (or (= neighbour-count 3)
                                          (and (= neighbour-count 2) (= state :alive)))
                                    :alive
                                    :dead)
                        draw? (not= new-state state)
                        delta (if (= new-state :alive) 1 -1)
                        colour (if (= new-state :alive) alive dead)]
                    (when draw?
                      (doseq [n neighbours]
                        (>! n delta))
                      (>! draw [x y colour]))
                    (recur neighbour-count new-state neighbours))))))
    [input new-neighbour]))

Getting neighbours for a position

(defn neighbours [[x y] grid]
  (filter (comp not nil?)
          (for [dx [-1 0 1]
                dy (if (zero? dx)
                     [-1 1]
                     [-1 0 1])]
            (let [x' (+ dx x)
                  y' (+ dy y)]
              (get grid [x' y'])))))

This is the loop that kicks everything off (OK, I cheat and have a collection here). We loop through the [x y] positions, create a cell (i.e. an [input new-neighbour] pair) for each, and store the results in the cells map. Then we look up the neighbours of each cell and pass its input channel into their new-neighbour channels. This starts our cascade.

(defn draw-loop []
  (let [xys (for [x (range @width)
                  y (range @height)]
              [x y])
        cells (zipmap xys (map cell xys))]
    (go (doseq [[xy [input _]] cells]
          (doseq [[_ nn] (neighbours xy cells)]
            (>! nn input))))))

The init function just sets the atoms, grows the canvas to fill the space it is given, draws the empty grid, then kicks off draw-loop.

(defn ^:export init []
  (set! (.-width canvas) (.-clientWidth canvas))
  (set! (.-height canvas) (.-clientHeight canvas))
  (reset! width 75)
  (reset! cell-size (/ (.-clientWidth canvas) @width))
  (reset! height (/ (.-clientHeight canvas) @cell-size))
  (doseq [y (range @height)
          x (range @width)]
    (fill_sq x y dead))
  (draw-loop))

Hope you enjoyed it, let me know if I can clear up anything (and if you spot a bug).

core.async and CSP Resources

I compiled a list of Communicating Sequential Processes / core.async links for a friend after my Great British Node Conf lightning talk, so I thought I would share them here too.

Go talks

I still think the best intro to CSP is these talks from Rob Pike.

Particularly interesting is this point from Rob's IO talk:

The select statement is a control structure somewhat like a switch that lets you control the behaviour of your program based on what communications are able to proceed at any moment. In fact, this select statement is a key part of why concurrency is built into Go as a feature of the language rather than just a library. It's hard to do control structures that depend on libraries.

This is where the power of macros is apparent: we can (somewhat) easily add CSP to Clojure as a first-class citizen, alongside all the other great concurrency models the language has had since it was created. It is amazing that core.async works both on the JVM and in ClojureScript.


core.async in cljs from David Nolen
I can't resist including one example myself, as the programming model is so neat.

(let [c1 (chan)
      c2 (chan)]
  (go (while true
        (let [[v ch] (alts! [c1 c2])]
          (println "Read" v "from" ch))))
  (go (>! c1 "hi"))
  (go (>! c2 "there")))

Here we create 2 channels and 3 go blocks that should be thought of as acting concurrently. The first is an infinite loop pulling off both channels and printing each message received (alts! returns whichever result is ready from the list of channels provided); the other two simply send (with the >! function) the strings "hi" and "there" into c1 and c2 respectively.


Genetic Programming in Clojure With Zippers

This post is another update on my Lisp In Summer Projects project. I blogged before (1 2) about warm-up exercises but have not yet explained the goal: I intend to write an Artificial Life simulation where the organisms themselves are Lisp programs that run (via ClojureScript) inside people's browsers.

You may also want to check out a video of a talk I gave (in part) about this stuff for the London Clojurians; slides here.

After doing Genetic Algorithms, both in CoffeeScript and ClojureScript, the next step was to look at Genetic Programming. GP is the same as GA except that the genome is the AST of a programming language (Lisp makes this easier, as S-expressions are directly manipulable).
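As a toy illustration of programs-as-genomes (the names and target function here are mine, not the project's), a genome can be an S-expression over x, scored against a target function on some sample points:

```clojure
;; A toy GP genome: an S-expression over x.
(def genome '(+ (* x x) 1))

;; Run a genome by building and eval'ing a let that binds x.
(defn run-genome [genome x]
  (eval (list 'let ['x x] genome)))

;; Fitness: total error against the target x^2 + 1 over sample points
;; (lower is better, 0 is a perfect match).
(defn fitness [genome]
  (reduce + (for [x (range 5)]
              (Math/abs (- (run-genome genome x)
                           (+ (* x x) 1))))))

(fitness '(+ (* x x) 1)) ;; => 0
(fitness '(* x x))       ;; => 5
```

With a fitness function like this in hand, the GA machinery from the earlier posts carries over; only the genome representation and the breed step change.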

Take a look at the Wikipedia article before we continue with how I did it in Clojure using zippers.

As I said in the other post about GA, the basic idea is to:

  • find a way to encode a particular solution to the problem as a genome
  • start with an initial population of random genomes
  • work out their ‘fitness’ (i.e. how well they perform some task)
  • choose the next generation by selecting and ‘breeding’ them (with some mutation for novelty)
  • repeat until a good solution appears

In the GP world the first step changes: the initial genomes are valid programs rather than fixed-length byte sequences. The ‘breed’ function is more complicated too; where before we could get away with choosing a random split point and taking the left part of one genome and the right part of its mate (and vice versa), here we have to be careful about where we split and how we join the S-expressions, and the length is no longer bounded.

Enter The Zipper

In Clojure zippers are found in clojure.zip and I’ll quote the description given there:

A zipper is a data structure representing a location in a hierarchical data structure, and the path it took to get there. It provides down/up/left/right navigation, and localized functional ‘editing’, insertion and removal of nodes. With zippers you can write code that looks like an imperative, destructive walk through a tree, call root when you are done and get a new tree reflecting all the changes, when in fact nothing at all is mutated - it’s all thread safe and shareable. The next function does a depth-first walk, making for easy to understand loops.
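A quick taste of that API on an S-expression (clojure.zip ships with Clojure; the comments show what each expression returns):

```clojure
(require '[clojure.zip :as zip])

;; A zipper location at the root of the S-expression (+ 1 (* 2 3))
(def loc (zip/seq-zip '(+ 1 (* 2 3))))

;; Navigate: down into the list, then right twice
(-> loc zip/down zip/right zip/right zip/node)
;; => (* 2 3)

;; An 'edit' that reads imperatively, then zip/root recovers
;; a whole new tree - nothing was mutated
(-> loc zip/down zip/right (zip/replace 42) zip/root)
;; => (+ 42 (* 2 3))
```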

See also Wikipedia and FUNCTIONAL PEARL - The Zipper (Huet 97)
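Putting the zipper to work on the breed problem, subtree crossover can be sketched like this (my own sketch, not necessarily the project's exact code): zip/next's depth-first walk enumerates every location in a tree, and zip/replace plus zip/root splice a subtree in and hand back the new program.

```clojure
(require '[clojure.zip :as zip])

;; Every location in a tree, via zip/next's depth-first walk.
(defn locations [tree]
  (take-while (complement zip/end?)
              (iterate zip/next (zip/seq-zip tree))))

;; Subtree crossover: replace a random location in mum
;; with a random subtree taken from dad.
(defn crossover [mum dad]
  (let [loc     (rand-nth (locations mum))
        subtree (zip/node (rand-nth (locations dad)))]
    (zip/root (zip/replace loc subtree))))
```

Note the child can grow or shrink arbitrarily, which is exactly why GP genome lengths are unbounded.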