Wrap-Up
Hunt Philosophy
This event was heavily inspired by Patrick's Puzzlebox, an event by Patrick Xia where he wrote a puzzle a day, then ran them as part of a teammate social, with an unconventional scoring system. This event was in many ways a copy of that one, with Enigmarch as a forcing function for writing the puzzles. Thanks to Patrick for doing the first testsolves, and then thanks to everyone who showed up to the beta test: Austin, Catherine, Connor, Diyang, Eugene, Ivan, Liam, Nicholai, Olga, Robin, Rohit, and Tracey.
I approached the process of writing and hosting from the standpoint of minimizing work as much as possible. Among other things, this meant:
- The moment I came up with a solvable idea, start writing.
- Pay attention to solver experience, but if the constructed puzzle looks bad, just keep going.
- Testsolves are primarily for factchecking. Only nerfs, no buffs.
- Cut corners on postproduction. I'm sure some of you noticed how many postprods were screenshots of Google Sheets.
- Don't make the hunt site fancy.
The hunt site is a barebones version of gph-site, which I picked over tph-site since I didn't need any of the React machinery. I then further stripped out gph-site features I didn't want to deal with, like email, hints, and notifications. It still took more time than I expected to postprod, because of implementing Day 32, a custom scoring system for the beta test, and then a different scoring system for the public run.
One thing I learned during testing was that small things like copy-to-clipboard are surprisingly important in determining which puzzles get worked on under time pressure. Although the puzzles were all finished by March 31, I did spend April uncutting some postproduction corners.
Format
The format of this hunt was inspired by other mindsport events, in particular security CTFs, which commonly use dynamic scoring to reward solves on harder flags. In fact, a few use more aggressive decay curves that weight low-solve flags much higher. In the beta test, I liked that this encouraged teams to look at old puzzles even while new puzzles were getting unlocked. I didn't want the hunt to be decided by the meta because I felt the meta was much simpler than many of its feeders. Based on the survey feedback, people either preferred the dynamic scoring or didn't care. I wouldn't extrapolate from this too much though. On average, I think solvers just want to do puzzles.
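To make the dynamic scoring idea concrete, here is a minimal sketch of the kind of decay curve common in CTFs (modeled on the quadratic decay popularized by platforms like CTFd); the formula and constants are illustrative, not the ones this hunt used.

```python
# Illustrative CTF-style dynamic scoring: a puzzle's value decays from
# `initial` toward `minimum` as its solve count grows. The constants here
# are hypothetical, not taken from this hunt.

def dynamic_value(solves: int, initial: int = 500,
                  minimum: int = 100, decay: int = 50) -> int:
    """Quadratic decay: reaches `minimum` once `solves` hits `decay`."""
    value = ((minimum - initial) / decay**2) * solves**2 + initial
    return max(minimum, int(value))

print(dynamic_value(0))    # unsolved puzzles are worth the full value
print(dynamic_value(10))   # value drops slowly at first...
print(dynamic_value(50))   # ...and bottoms out at the minimum
```

In schemes like this, every solving team scores the puzzle's current (decayed) value, so low-solve puzzles stay worth more, which is what rewards teams for returning to hard, older puzzles.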
As for the time limit, this was partly logistics. Since I was the only person on admin, I wanted to limit how much time I would need to spend "on call". However, there were non-logistics reasons too. Given the scoring system, I didn't like the idea of competitive teams grinding interminably on the hardest puzzles of the hunt, especially because the hardest puzzles were more likely to be the messiest or least fun ones. The time limit was there to cap how long a team could spend in that state.
The aggressive time unlocking came about because I strongly felt that all teams should see all puzzles, but I still wanted some suspense about what would unlock next, plus the surprise of unlocking Day 32. I set the unlock pace to something I thought would outrun most solvers, to ensure teams always had a puzzle to solve. Congrats to the few teams who caught up to the time unlocks and proved me wrong!
My goal was that at least 1 team would fullsolve, but not much more than that, and that worked out. Nice! I'm honestly shocked it played out that way, given how little I was thinking about puzzle difficulty when writing.
Future
I'm not sure if I'm doing this again. Let's put it as a maybe. But I hope other people do it. I found that once I started, writing a puzzle a day was very achievable, and the lower standards made me more willing to construct genres I hadn't done before.
Stats
You can download a guess log here.
You can view a graph of team performances here, or if you prefer a big table, you can see the Bigboard here.