This is the fifth and final post on the making of Sub Rosa, which placed 6th in the 2015 Interactive Fiction Competition. As before, expect spoilers.

Mistakes & Missed Opportunities

The game was successful, for what it set out to be. We put in the time to make the implementation as solid as we could. From our previous games, we’d learned lessons about rushing, poor proofreading, and spending implementation time on the wrong things. All games have bugs, but we made a concerted effort to be as pleased as possible with the game before it went out. Some things I’d consider doing differently:

  • Not enough time was spent tweaking the end conditions. It turns out people usually finish their first playthrough with a lower score than I’d anticipated, so what I thought would be the default ending (94%-99%) was actually one of the rarer endings to be seen. This is adjusted in the post-comp release.
  • The game isn’t very dynamic. After things open up, the player just wanders about the house, does a bit of tidying, and then leaves. At one stage I envisioned having plot-related flashback sequences (like in Hunger Daemon) after getting each secret.
  • Another idea would be to make some of the rooms much messier. There’s a great sequence in the point-and-click adventure Toonstruck where the protagonist utterly destroys everything in a room in order to pick up one item. In Sub Rosa, there’s the button popping off, but even more environment-changing little catastrophes might have been better.
  • Mechanically, the game could have been more ambitious. There are a lot of interesting elements, but it isn’t advancing the medium all that far. That was okay for the initial envisioning of the game as a short treasure hunt, but if we’re to spend as much time on a game again, we should spend it doing more ground-breaking work.

Wrap up

The earlier parts of this retrospective were written before the results came in. I’m delighted to see the game came 6th out of 53 in the Interactive Fiction Competition 2015. One of our goals was to beat my previous best comp showing of 5th out of 27 with The Chinese Room (co-written with Harry Giles). While The Chinese Room had a better absolute ranking, we finished in a higher percentile (6th of 53 is roughly the top 11%, against the top 19% for 5th of 27), and our average rating of 7.21 beat The Chinese Room’s 7.03.

We are very much indebted to our testers (Neil Butters, Miguel Garza, Joseph Geipel, Andrew Schultz, Emily Short, Hanon Ondricek, Ryan Veeder, and Jim Warrenfeltz): testing is absolutely vital to the success of any game, and it’s handy to get a range of perspectives before releasing to the general public. Beyond that, the competition itself has thrown up lots of sticking points for players, which we’re addressing in the post-comp release. It was a surprise how many people decided to take the rug instead of looking under it (we deemed looking under rugs the canonical thing you do with them in these sorts of games). We only updated the game once during the comp, to fix a small disambiguation bug with the books caused by ‘read’ being taken as a synonym for ‘consult’. Some more unusual things have since come to light, but we’re confident that this is our least buggy comp game. Expect the post-comp release soon!
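For anyone curious what that sort of fix looks like: the post doesn’t show the actual source, but assuming an Inform 7 project (where READ is understood as EXAMINE out of the box), one plausible way to stop READ from colliding with consulting grammar is a sketch along these lines:

    [Hypothetical sketch: detach READ from its default EXAMINE binding
    so the parser no longer has to guess between two readings.]
    Understand the command "read" as something new.

    [Plain READ BOOK behaves like EXAMINE again...]
    Understand "read [something]" as examining.

    [...while READ ABOUT <topic> IN <book> maps to consulting,
    mirroring the Standard Rules' LOOK UP grammar.]
    Understand "read about [text] in [something]" as consulting it about (with nouns reversed).

The real fix may well have taken a different shape; the point is simply to give each phrasing one unambiguous grammar line so the parser never has to ask which action was meant.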
