DASH 7: “There’s never plenty of time”

[Image: cartoon of a permanently stopped watch]

Do not take the graphic as a dig, or as a suggestion that DASH 7 was in some way broken – that most absolute and damning term of game criticism…

A common theme in the commentary on DASH 7 was its quantity, as well as its undoubtedly very high quality. There was more material than people were expecting, possibly to the point where it strained the practical limits that players had to place on their day, and that’s where some of the relatively negative feedback has come from. This post concerns the Experienced players’ track only; primarily this is from inevitable self-centredness, though it’s worth noting that (provisionally) a convincing majority of players were on the Experienced track.

A phrase frequently used when describing the hunt in advance ran, roughly, to the effect of “We expect that most teams will solve all puzzles in 6-8 hours”, though the precise wording varied from location to location. Some locations announced specific wrap-up times in advance; others used phrases like “All teams across the world will be working on the same 10 puzzles over the course of a max of 8 hours”. It’s not completely clear where the notion came from that there would be an overall time limit of eight hours this year, including non-solving time, except possibly from expecting a repeat of last year’s hard limit in the absence of anything to set expectations otherwise. That said, this site probably propagated this incorrect notion; if so – whoops, sorry, genuine mistake.

The combined par time of the nine scored puzzles for DASH 7 was 5:45, very similar to the combined par time of 5:50 for the nine scored puzzles of DASH 6. However, as previously discussed, a reasonably representative total solving time (based on early, probably incomplete data) for a globally mid-table team rose from 5:10 for DASH 6 to 6:55 for DASH 7. Another way of looking at it is that the median score for DASH 6 was 411 and for DASH 7 was 349. True, DASH 6 had five minutes more par time and thus scores might be expected to be five points higher, but the wider point is that people were scoring far fewer bonus points than in previous years.
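To make the par-and-bonus arithmetic concrete, here is a minimal sketch in Python of an assumed, simplified scoring rule – one bonus point per full minute under par, with no further penalty for going over – rather than the actual DASH scoring formula:

```python
# Assumed, simplified scoring rule for illustration only: a solve earns the
# puzzle's base points plus one bonus point per full minute under par;
# going over par simply earns no bonus and is never penalised further.

def puzzle_score(solve_minutes, par_minutes, base_points):
    """Points for one puzzle under the assumed rule (times in minutes)."""
    return base_points + max(0, par_minutes - solve_minutes)

# A team beating par gains exactly one point per extra minute of par, which
# is why five minutes more total par would be worth roughly five points to
# a team that beats par on every puzzle.
print(puzzle_score(25, 30, 50))   # 55: five minutes under par
print(puzzle_score(25, 35, 50))   # 60: same solve, five more minutes of par
```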

In DASH 4, the par value was described as a “generous average solve time”; this year, that was rather less the case. Looking at the nine teams around the global median score (usual caveats: early, possibly incomplete data, subject to revision), in DASH 6 a typical team earned bonus points on seven (sometimes six) of the nine scored puzzles, whereas in DASH 7 a typical team earned bonus points on two, maybe three, of the nine. This is rather a crude analysis; a fuller one would also consider practice from earlier years. Nevertheless, the DASH 7 par values broadly didn’t feel like generous average solve times.

The very dear Snoutcast often used to mention the phrase “Everybody likes solving puzzles; nobody likes not solving puzzles”. From there, it’s not much of an extension to “Everybody likes solving puzzles; everybody likes solving puzzles and earning bonus points for doing so even more”. Teams used to having sufficient time to solve puzzles, and to frequently earning bonus points in previous years, may not have had their expectations adjusted to this year’s tougher standard. That doesn’t just cause “we’re not doing as well as we did last year” ill feeling; it can also cause “we might not have time to get all the fun from solving puzzles that we want before the hard time limit expires” worries, which may in turn lead teams to take sub-optimal decisions over their self-care, worsening their experience further.

There’s a very interesting discussion of the GAST scoring system on the Puzzle Hunters Facebook group at the moment. When the par times are sufficiently generous, the orderings by (highest) score and by (fastest) total solve time are identical; when they are not, some teams are arguably over-rewarded, or insufficiently punished, for relatively slow solves on some puzzles. This was arguably an issue even as high as the top ten this year.
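Under the same simplified, assumed rule as the earlier sketch, a hypothetical two-puzzle example shows how the two orderings can diverge once par stops being generous:

```python
# Same assumed rule as before: base points plus one bonus point per full
# minute under par, with no extra penalty for exceeding par.

def total_score(solve_times, pars, base=60):
    """Total score for one team across all puzzles (times in minutes)."""
    return sum(base + max(0, p - t) for t, p in zip(solve_times, pars))

pars = [30, 30]          # two puzzles, par 30 minutes each

team_a = [29, 29]        # steady solves: 58 minutes in total
team_b = [5, 90]         # one blazing solve, one very slow one: 95 in total

print(total_score(team_a, pars), sum(team_a))   # 122 points, 58 minutes
print(total_score(team_b, pars), sum(team_b))   # 145 points, 95 minutes
# Team B is 37 minutes slower overall yet outscores Team A: time over par
# on a puzzle can never cost more than that puzzle's bonus, so a very slow
# solve is, arguably, insufficiently punished. If par were generous enough
# that both teams beat it everywhere, score order and time order would agree.
```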

DASH has one of the hardest calibration issues of all puzzle hunts because it aims to cater to teams of so many different abilities, even among those who self-select for one level of difficulty or another. Previous DASHes may not have received the credit they deserved for making the balancing act work quite so well. So this all points to a question of where DASH should seek to target its activities.

Is the number of puzzles correct? Should the puzzles be shorter… or the same length, with longer par values? Would DASH be better served by the sort of quantity of content seen in previous years (i.e. a total solve time of 4½-5½ hours for median teams), or by a quantity similar to this year’s spread over a longer day? The considerable downsides of a longer day include that it might well put off potential players, potential GC members and volunteers alike, and that it might make finding appropriate locations even more difficult still. On the other hand, challenges as meaty as this year’s were an awful lot of fun!

This is a very INTP-ish “throwing things out there” sort of post, so perhaps it’s time to be a bit more concrete. It’s inevitable that calibration suggestions will turn out to be self-interested, though any self-interest here is subconscious; efforts have been made to eliminate conscious bias. For an eight-hour-overall-time-limit day, perhaps the calibration target should be that 75% of teams solve all the puzzles in their division of choice within 5½ hours of solving time, and that 80% of teams beat the par value for each puzzle.
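As a rough sketch of how such a target might be checked against playtest or prior-year data, here is a hypothetical helper; the function, its name and the data are invented for illustration and are not part of any real DASH tooling:

```python
import math

# Hypothetical calibration helper: pick the smallest whole-minute par value
# that at least `beat_fraction` of teams would have beaten, given observed
# solve times. Purely illustrative; real calibration needs far more care.

def par_for_target(solve_times_minutes, beat_fraction=0.80):
    times = sorted(solve_times_minutes)
    k = math.ceil(beat_fraction * len(times))   # teams that must beat par
    return times[k - 1] + 1                     # one minute over the k-th time

playtest = [18, 22, 25, 27, 30, 31, 35, 44, 52, 75]   # made-up solve times
print(par_for_target(playtest))   # 45: eight of these ten teams beat that par
```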

That said, tuning puzzle difficulty up or down is hardly an exact science, and playtest results are not necessarily reflective of how puzzles will turn out in real life. The whole process is the endeavour of fallible humans, after all; the puzzle community at large is truly grateful to those who submit puzzles, those who edit them, and those who make the selections and turn raw puzzles into complete hunts. The quality has once again been extremely high, even if the quantity was not what people had been led to expect.

It could be possible for a DASH to offer so little challenge to the fastest teams as to hurt their experience, so here’s an out-there suggestion to finish. Adding further levels of difficulty by writing more sets of puzzles adds very considerably to the workload – the BAPHL series of hunts offers two levels of difficulty, and this site isn’t aware of any hunt other than DASH that offers three, what with its brilliantly thoughtful junior track being another labour of love – but here’s a possibility that reuses existing material.

Consider the addition of a hardcore mode that shares its material with the Experienced track, but differs in how proactively it offers hints, and also limits team sizes to three. This could slow the best solvers down while hurting their experience only in the “it’s fun to solve in large teams” respect – and, if you’re that hardcore, you’re likely to have access to other events that will let you solve in larger teams anyway. It has also been shown that the best three-player teams can match the best larger teams!

Come and have a go if you think you’re hard enough

[Image: hands holding a question mark and an exclamation mark]

Here’s a treat, and hopefully it might get some discussion going. This site is proud to feature a guest post by Ed Roberts, proprietor of Breakout Manchester. Breakout Manchester is one of the busiest and most popular sites in the country, and Ed has travelled extensively, playing games around the country for research purposes (and because he, like everyone else, is a massive fan). Below is a starting-point for a possible ranking table of different games’ difficulties; if you agree or disagree with his rankings, please share your opinions in the comments. Different people will find different things difficult, of course, but if there’s any consensus of opinion, this would be useful for people deliberately looking for a relatively hard or relatively easy game. Thank you so much, Ed, and take it away!


So I’ve played a fair few escape games. Here, in my opinion, is how they rank from hardest to easiest. This is no indication of which games are good or bad; an easy game may be great, and so may a hard game. Likewise, an easy game may be awful, as may a hard one. This is also based on nothing more than my personal opinion.

I’ve never played the Scottish, Bath, Bournemouth, Cryptopia or Irish offerings, so I can’t comment on those. I’ve also ranked the Breakout game rooms where I believe they would sit. You will also notice that some games I escaped from are higher up the list than some others I didn’t escape from, for two reasons: some of the people I was with are better at these games than others – and, as with any game, some days bring good performances and other days bad ones.

Approaches to difficulty in exit games

[Image: GameCamp logo]

Yesterday, at GameCamp in London, Adrian Hon gave a talk on the exit game phenomenon. It’s not clear what GameCamp etiquette is – whether what gets said at GameCamp stays at GameCamp, whether things can be reported under the Chatham House Rule, or whether wider reports are permitted – but it would be great to hear more from the event. Failing that, Adrian discussed the genre towards the end of episode 41 of “The Cultures” podcast.

Some exit games take the approach that they will be very generous with the distribution of hints to their players, even making it clear that this is the policy right at the outset, in the discussion before players enter the room. As this was the approach taken by the first exit room site to open in the UK, this may well be the dominant approach nationally.

By contrast, other sites offer some games in which few or no clues are given. The pitfall there is that you either set the difficulty level relatively high and have very few people, or none, crack the room, or you set it relatively low and risk having people finish the game in less than half the permitted time, which can be something of a flat ending. There are plenty of other approaches, especially if players are prepared to enjoy the possibility of partial credit for solving some, but not all, of the game, but these appear to be less frequent.

Incidentally, the “deliberately very few winners” approach is the one taken by many of the games offered by SCRAP, considered the originators of the genre, among other operators. As an example befitting the Japanese origins of the “Nintendo hard” stereotype, SCRAP’s “What is the Real Escape Game?” page talks of a 2% success rate, and their Flickr photostream has a couple of photos of ongoing scoreboards suggesting victory rates not much higher than that. (Anecdotally, more recent games suggest an easing of standards to around 10%.) This site is not yet aware of any UK exit games that take quite such an extreme approach.

The issue of difficulty in exit games is an open one; there appears to be no consensus on a single correct approach. As different players want to face different challenges, this variety of approaches may well be a good thing for the world. The difficulty lies in matching potential players up with the right game for them. This web site will do whatever it can to help in this regard.

The Keyhunter site in Birmingham takes a particularly interesting approach here: it advertises its three games as having different levels of difficulty, and celebrates its teams’ successes on social media not only by the time they took but also by how many clues were needed. If you want the added challenge of completing an exit room with “0 hints used” decorating your performance, perhaps Keyhunter might be the right site for you. There may well be other games that offer the same option, and this site will make it clear where it’s available.