DASH7: the numbers game

Now that's a numbers game

Three points of number work from the recent DASH hunt. Where did London's conspirators spend your money? Did any cities do better or worse than others? And if your team scored 311 on the Novice track, what's that worth in Expert points?


London’s statement of accounts is as follows:

Income: £625

Printing puzzles & sending to UK: £280
Stationery and local printing: £45
Room hire: £90
Playtest expenses: £40
Props: £95

Expenditure: £550

Surplus: £75

The surplus has arisen because most of the puzzle paraphernalia was brought over in luggage, not sent through the post.

The surplus is available to assist other DASH locations this year; or as additional capital for DASH in any UK location (or anywhere it’s needed) next year.


Did the changes to the puzzles affect London’s scores? Or was London statistically better or worse than other locations at some puzzles?

As with work I did last year, the preferred test is a Student’s t-test. I use a test with two tails (I don’t know whether London will be higher or lower), and assuming equal variance (I have no reason to believe London teams were less diverse than other locations).{A}
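The test described above can be sketched in a few lines with SciPy. The scores here are made-up placeholders, not the real hunt data; only the test settings (two-tailed, equal variance) come from the text.

```python
# Minimal sketch of the per-puzzle comparison, assuming per-team scores
# are available as plain lists. All numbers below are illustrative.
from scipy.stats import ttest_ind

london_scores = [38, 42, 35, 40, 44]              # hypothetical London teams
elsewhere_scores = [41, 39, 45, 37, 43, 40, 42]   # hypothetical other locations

# Two-tailed Student's t-test assuming equal variance: these are the
# defaults for ttest_ind, matching the choices described above.
t_stat, p_value = ttest_ind(london_scores, elsewhere_scores, equal_var=True)

if p_value < 0.05:
    print(f"Significant difference (p = {p_value:.3f})")
else:
    print(f"No significant difference (p = {p_value:.3f})")
```

Run once per puzzle, per track, with the relevant teams filtered as in note {A}.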

The results? No significant differences on any puzzle. London's Expert teams scored a bit worse than other sides on the Monsters and Regarding (I) puzzles, but these differences could arise by chance. Novice teams also had no significant differences.

We can compare some other locations' performance on the Expert tracks (there aren't enough Novices to make city-by-city comparisons). Boston did very well on the first three puzzles, all quite wordy. Seattle loved the Tea puzzle, and did well on both legs of the Meta. Conversely, San Jose did poorly on Potions and the ultra-difficult second Meta. Santa Monica also struggled on the second Meta.

Congratulations to the setters for providing puzzles that were already international when I got them, and needed no more than a few little tweaks.


And so to working out the equivalent performances between Novice and Expert tracks. Again, this builds from work I did last year.

While it helps to know how much variation there was between puzzles, this will turn out to be a red herring for the task at hand.

Puzzle and introduction identical: Tea for Two
Puzzles identical, different introductions: Weighing Wands, Quidditch, Potions and Sabotage
Puzzles identical, different printed assistance throughout: Rita Skeeter
Puzzles identical, different time limit: Regarding (I)
Puzzles different: House Elves Help, Monsters

All puzzles had different Cluekeeper hints.

Here, I compare all Novice against all Expert teams. Again, a Student’s t-test is employed. Again, with two tails (I cannot rule out the prospect that Novice teams were given too many hints and scored better), and again assuming equal variance.

It’s no surprise that the Novice teams performed very significantly worse than Expert on all puzzles (except Regarding (I), where they had twice as long). It is very surprising to find the under-performance is very consistent.

Across the first six scored puzzles, where the content was broadly the same, Novice teams were always 4 or 5 points worse than Expert {B}. On Monsters, where Novice teams had some hedges indicated, they were 1 point behind Expert. And on Regarding (I), Novice teams finished 28 points ahead of Expert teams; they may have taken longer, but that was more than compensated in extra points.

The net adjustment is +5+4+5+4+5+4+1-28 = zero points.

We can also analyse the additional time required for Novice to bring their score up to that of Expert.

1 minute: Monsters
3 minutes: Rita, Tea
4 minutes: Wands, Potions, Elves
5 minutes: Quidditch
MINUS 22 minutes: Regarding

The net additional time? Two minutes across the whole day.
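Both net figures above are easy to check; the per-puzzle gaps here are simply transcribed from the lists in the text.

```python
# Per-puzzle Novice-vs-Expert gaps, copied from the figures above.
point_gaps = [5, 4, 5, 4, 5, 4, 1, -28]  # six scored puzzles, Monsters, Regarding (I)
time_gaps = [1, 3, 3, 4, 4, 4, 5, -22]   # Monsters, Rita, Tea, Wands, Potions, Elves, Quidditch, Regarding

print(sum(point_gaps))  # 0 points across the day
print(sum(time_gaps))   # 2 minutes across the day
```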

And what does this mean when trying to compare Novice with Expert scores? For all intents and purposes, they’re equal. Treat Regarding (II) as well-earned bonus points for the Expert track. Top Novice team Myers Brigands finished on a par with Wombats! and London’s own Moore or Lesk, in the global top 50.

I know the aim was to provide a comparable experience for Novice and Expert tracks. This year, more by luck than judgement, “comparable” has turned out to be “exactly the same”. (Except Novice didn’t take Regarding II.)

{A} Where a team started but did not finish a puzzle, I deem them to have taken the time and achieved the score recorded – this is likely to be 0, but may include partial credit. Where a team did not start a puzzle, or the scoreboard has no record, I exclude them from calculations for that puzzle.

{B} The additional points, or time, are those required to bring the probability that Expert and Novice are drawn from the same population above 0.05. Interpret additional points as “5 for starting the puzzle”; additional time as extending Par by this many minutes.
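One way to implement the search described in {B}: add a flat boost to every Novice score until the two samples can no longer be told apart at the 0.05 level. This is a sketch under that assumption, with invented scores; the function name and data are hypothetical, not from the hunt.

```python
from scipy.stats import ttest_ind

def extra_points_needed(novice, expert, alpha=0.05):
    """Smallest whole-point boost to every Novice score that lifts the
    two-tailed, equal-variance t-test p-value above alpha."""
    boost = 0
    while ttest_ind([s + boost for s in novice], expert,
                    equal_var=True).pvalue < alpha:
        boost += 1
    return boost

# Illustrative scores only, not real DASH data.
novice = [30, 32, 28, 31, 29, 33]
expert = [35, 36, 34, 37, 35, 38]
print(extra_points_needed(novice, expert))
```

The same loop works for the additional-time figure, extending Par a minute at a time instead of boosting points.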

We are looking for organisers, both in London and for the global event. Interested? Email gcs@playdash.org, leave a comment, or find me at Puzzled Pint London (East) on 14 July.
