Test Me: Debug functionality in my text adventure engine

Aside from all the game features I’ve implemented in my text adventure game so far, there is one area I have hardly touched upon: testing and debug features. With an IDE like PyCharm it is great to be able to step through your code, watch the contents of variables, and follow the flow of execution to make sure it does what it is supposed to do. But as my game grew, I decided to come up with a few features to help me cut through the lengthy debugging process for more complex gameplay features.

The most important debugging aid is the one I’ve been using since the first day of development: a line that prints all the recognized words in the command the player entered.


As you can see, the command here was Look under the bed. This line helps me quickly check whether all the command words have been properly recognized, but it also lets me verify that my parser logic is working as it should, because a command like Take the key from the table will result in this output.


It shows me that Take…from has been correctly recognized by the parser and converted into its own verb token, TakeFrom, that sets it apart from a regular Take command.
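To illustrate, here is a minimal sketch of what such a debug line might look like; the ParsedCommand structure and the exact output format are my assumptions, not the engine’s actual code:

```python
from dataclasses import dataclass

# Hypothetical token container; the real parser's data structure differs.
@dataclass
class ParsedCommand:
    verb: str
    noun: str = ""
    noun2: str = ""

def debug_line(cmd: ParsedCommand) -> str:
    # A single line listing every token the parser recognized.
    return f"[PARSER] verb={cmd.verb} noun={cmd.noun} noun2={cmd.noun2}"

# "Take the key from the table" collapses Take...from into one verb token.
print(debug_line(ParsedCommand(verb="TakeFrom", noun="key", noun2="table")))
```

Printing a line like this after every parse makes it obvious at a glance when a word was dropped or a verb phrase was mis-tokenized.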

This simple debug output has helped me on many occasions to identify problems with the parser, the parser logic, as well as the actual game logic.

But in all honesty, it is really just a very basic debug feature. You may recall that I mentioned the programming language Inform some time ago. It is a language that has been specifically designed for text adventures, or rather interactive fiction, as the genre is called these days.

Inform has a very cool feature for developers that helps debug even the most complex game situations: a Test Me function. Now, I am not all that familiar with Inform, but from reading up on it in the documentation, Inform’s Test function allows the programmer to define a set of commands to be tested for any object in the game. When triggered, these commands are executed just like regular user input, generating responses that can easily be reviewed for correctness. Inform goes even further: instead of merely generating the output, it can compare it against an expected output to ensure the object actually produces exactly the kinds of responses the developer intended, and flag those that don’t. Wow!

So the whole concept got me thinking, and I looked into how I could implement something like this in my own text adventure engine.

The first step for me was to implement a TestIt() method in every scenery and item object in the game, which means adding it to the respective base classes.
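A minimal sketch of that default method, assuming a common base class with a name attribute (the names here are placeholders, not the engine’s actual classes):

```python
class GameObject:
    """Hypothetical shared base class for scenery and item objects."""

    def __init__(self, name: str):
        self.name = name

    def TestIt(self) -> list[str]:
        # Default fallback: every object should at least be examinable,
        # so generate an Examine command followed by the object's name.
        return [f"Examine {self.name}"]
```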

As you can see from the code, all this really does is generate the command Examine followed by the object’s name. It is simply a default fallback for all objects at this point, and it makes sense because, clearly, we want to be able to examine any object in the game.

The much more interesting part comes when we create specialized versions of this function for specific objects. Let’s take, for example, the bed from my previous parser output and work with that.

I can now create a specialized self-test function for the bed that quickly runs a number of commands related to the bed so we can take a look and see if the output is correct for each. To do this, I simply add a custom TestIt() function to the bed object itself.
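A sketch of such an override; the specific commands are examples of my own, since the real list depends on what the bed actually supports:

```python
class Bed:
    """Hypothetical bed object; in the engine it would subclass the
    shared base class and override its default TestIt()."""

    def TestIt(self) -> list[str]:
        # Each entry is a command, just the way a player would type it.
        return [
            "Examine bed",
            "Look under the bed",
            "Sit on the bed",
            "Move the bed",
        ]
```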

As you can see, I am creating a list of strings where each entry is a command, just the way a player would enter it.

(Note: I originally had this implemented as one string with commands being separated by the new-line character \n but while writing this blog post, I realized how silly that was and I decided to change it into a list right there, right now. It just goes to show that thinking and re-thinking my programming problems and solutions while writing these blog posts is really helping me become savvier at approaching certain things.)

When I start my game now, I can simply enter the command Test bed and the game will automatically execute the test command sequence for me. Depending on whether there is a bed in the room or not, these will either generate useful responses, or error messages, both of which are equally important. In fact, I can force this sort of behavior by adding commands that move the player in and out of a room with the bed. That could look like this.
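A sketch of that sequence; the Go south direction and the repeated commands are illustrative assumptions about the map:

```python
class Bed:
    """Hypothetical bed object whose test sequence changes rooms."""

    def TestIt(self) -> list[str]:
        return [
            "Examine bed",
            "Look under the bed",
            "Go south",            # leave the room with the bed...
            "Examine bed",         # ...then repeat the same commands
            "Look under the bed",  # without a bed in sight
        ]
```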

As you can see, I am running through the same commands, then I tell the player to go south and do it all over again, one time with a bed in the room and the second time without the bed in sight.

As you add more functionality to an object, you can simply add the necessary command to the test function to make sure you can easily debug it and check its output as it grows. 
Why is this really helpful, you may ask? Once you start using it, you will realize that it can save you a lot of time testing and re-testing individual commands, especially as your game world and objects become more complex and begin to interact with each other.

Let’s assume that you have a puzzle of sorts revolving around a lantern. To make it work, a number of actions are necessary, such as filling it with oil first, then lighting it, and then doing something with it. An initial TestIt() function could look like this.
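Something along these lines; the command wording is an assumption, since the exact phrasing of the puzzle commands is up to the game:

```python
class Lantern:
    """Hypothetical lantern object for the oil-and-matches puzzle."""

    def TestIt(self) -> list[str]:
        # Run through the happy path of the puzzle in order.
        return [
            "Examine lantern",
            "Fill the lantern with oil",
            "Light the lantern",
        ]
```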

This is all well and good, but what if you don’t have anything to fill the lantern with? What if you don’t even have a lantern? What if you don’t have matches to light it? The test function would be useless 99% of the time, so we need to fix that, because, ideally, we want the test functions to work anywhere, anytime.

The solution is to provide a new debug function that allows us to obtain objects regardless of where the player is. A forced obtain, essentially. I decided to create a special keyword for this and call it purloin. With this power command at my disposal, I can then extend my test function to look like this.
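As a sketch, the purloin handler could simply yank an object into the player’s inventory from wherever it sits, and the extended test function then leans on it; the item names, data structures, and expected responses here are all my own illustrations:

```python
def purloin(name: str, rooms: dict[str, list[str]], inventory: list[str]) -> None:
    """Hypothetical debug command: force an object into the player's
    inventory no matter which room it currently sits in."""
    for contents in rooms.values():
        if name in contents:
            contents.remove(name)
            break
    inventory.append(name)

class Lantern:
    """Hypothetical lantern with a self-contained test sequence."""

    def TestIt(self) -> list[str]:
        return [
            "Purloin lantern",
            "Light the lantern",          # no matches yet: failure response
            "Purloin matches",
            "Light the lantern",          # no oil yet: another failure
            "Purloin oil flask",
            "Fill the lantern with oil",
            "Light the lantern",          # should finally succeed
        ]
```

Because every needed item is purloined inside the sequence itself, the test works anywhere, anytime, regardless of where the player happens to be standing.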

This runs through a series of commands, purloining some of the necessary items and then trying to light the lantern: first without matches, which should generate a particular response, then without oil in it, which should generate another specific response, and finally filling it and lighting it, and so on.

This kind of test allows me to exercise some of the most complex puzzles in the game, and it also makes it easy to confirm failure conditions and alternate responses, like trying to light the lantern without matches or oil, and to make sure the proper responses are being generated in each case.

For now, this is as far as I’ll take it, but in the future I might revisit this particular subject and perhaps make comparing the output against an expected response part of the functionality.
