MOPED Lessons: Optimization

Welcome to the second of five articles where we take a deeper dive into the principles that make up MOPED. Last time we talked about Modularity, and how it helps to structure your project into easily-manageable chunks. This time, we’ll be taking a closer look at Optimization—why do it, how to do it, and finding some method to the madness. Let’s get started.

Definition

Optimization – the action of making the best or most effective use of a situation or resource. In board game terms, this is best applied to the rules and parameters that the game uses.

As far as board games go, there isn’t some absolute ‘performance’ metric like there is with video games, so what counts as optimization might not be immediately obvious. For the most part, optimization in board games boils down to removing redundancy and reducing the complexity of rules, with minimal (or preferably, zero) changes to the actual functionality of the mechanics.

Details

One of the first things to talk about when discussing optimization is complexity. I’ve already done an article classifying some different types of complexity, but complexity doesn’t really need any fancy definition. We all intuitively understand what is complex, and what isn’t.

Now, to make this as clear as I can: complexity isn’t necessarily bad. Problems arise when there is complexity that either doesn’t contribute to making the gameplay itself more fun, or when the costs of that complexity outweigh the benefits. By far, one of the most important facets of complexity to try and optimize is tracking.

A lot of the time, minimizing the tracking involved in a game can drastically increase its ‘performance’, so to speak. The performance in question here is how well the game lets players stay focused without creating distractions. As I’ve mentioned before, every second spent tracking is a second spent not playing. The flow of the gameplay shouldn’t have to be interrupted just so players can do bookkeeping. There are a lot of methods of optimizing tracking, depending on what you need to track. You can do things like removing the least-used resources or rules, or merging/consolidating multiple smaller, less relevant mechanics into something that will be more coherent as a whole.

Oftentimes, the overhead cost of multiple smaller things adds up quickly. In comparison, a single, bigger thing to track (with the same number of tracking motions, but centralized) can be easier on the players, since there’s less jumping back and forth between different things. Keep things close to each other, both rules-wise and physically, in order to make the required tracking easier.
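To illustrate, here’s a minimal sketch in Python, with made-up resources and numbers, of consolidating several small trackers into one centralized track:

```python
from dataclasses import dataclass

# Hypothetical example: three small values, each adjusted by a different
# rule at a different moment -- three separate tracking motions per upkeep.
@dataclass
class ArmyBefore:
    morale: int = 3
    supply: int = 3
    fatigue: int = 0

    def upkeep(self) -> None:
        self.supply -= 1        # troops eat rations
        self.fatigue += 1       # marching is tiring
        if self.supply <= 0:
            self.morale -= 1    # hungry troops grumble

# Consolidated: the same events feed one centralized 'readiness' track,
# so there is a single dial to move instead of three.
@dataclass
class ArmyAfter:
    readiness: int = 6

    def upkeep(self) -> None:
        self.readiness -= 1     # one adjustment covers the whole upkeep drain
```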

There are a lot of ways of optimizing your game, and which ones you use depends completely on your needs, as well as what kind of optimizations the game can actually allow/handle. As an example, we can look at something like removing unused values/parameters.

Let’s assume you have a card game with something like four different numeric parameters on each card. During your playtests, you notice that one of those values has no real use or meaning in most matches played: in the current version of the game, it’s only ever used or referenced in very rare scenarios. But how often it’s used doesn’t matter, since it still appears on every single card. For all intents and purposes, it’s just taking up space on the vast majority of cards, and creating confusion for newer players when they look at it on the card itself.

Now, one of the easier solutions here is to outright remove that value from the cards. If it’s not used, it’s not needed. This is one of the tenets of optimization in general. Or, as a ‘softer’ version of that, you can remove it from everything except the cards on which it actually matters, though this is riskier, and when possible (at least for card games), it’s better to handle things like that in the rules text itself instead of just ‘hiding’ some otherwise global parameter.
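To make it concrete, here’s a hypothetical sketch (in Python, with invented parameter names and values) of what that change looks like in data terms:

```python
from dataclasses import dataclass

# Before: every card carries four numbers, but 'resonance' (a hypothetical
# name) only matters in very rare scenarios.
@dataclass
class CardBefore:
    attack: int
    defense: int
    cost: int
    resonance: int  # referenced by a handful of effects, printed on every card

# After: the global parameter is gone; the rare cards that cared about it
# spell things out in their own rules text instead.
@dataclass
class CardAfter:
    attack: int
    defense: int
    cost: int
    rules_text: str = ""

echo_mage = CardAfter(
    attack=2, defense=3, cost=4,
    rules_text="When Echo Mage enters play, gain 2 resonance.",
)
```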

This is just as important in the opposite direction: as well as removing existing unused rules and values, try to look ahead before adding something new. Here, I’ll call upon the YAGNI (You Ain’t Gonna Need It) principle. In its simplest form, YAGNI states that you should refrain from implementing a thing until you’re absolutely sure you’ll need it. Don’t just assume you’ll need something; be certain of it.

The same goes for consolidating rules or triggers. If you have things that happen at both the beginning and end of a player’s turn (or both turn and round triggers), it’s better to move those triggers/events to a single place in the turn, unless they absolutely must happen in separate timing windows because of the turn structure. Also, while on the topic of triggers, avoid delayed triggers. I cannot stress this enough: effects that cause triggers after the effect itself is done resolving (e.g. ‘whenever you X this turn…’ or ‘at the start of your next turn…’) are magnets for misplays and general tracking nightmares. They’re not always bad (or avoidable), but as a general rule of thumb, if you have a delayed trigger, try to minimize the time/actions between when it’s created and when it actually resolves.
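Here’s a minimal sketch of what consolidating timing windows means, using a hypothetical trigger registry in Python (the window names and example effects are invented):

```python
from collections import defaultdict
from typing import Callable

# Hypothetical trigger registry: effects register for a timing window.
TRIGGERS: defaultdict[str, list[Callable[[], None]]] = defaultdict(list)

def fire(window: str) -> None:
    for effect in TRIGGERS[window]:
        effect()

# Before: two timing windows per turn that players must remember to check.
def take_turn_split(act: Callable[[], None]) -> None:
    fire("turn_start")  # e.g. heal 1, ready exhausted pieces
    act()
    fire("turn_end")    # e.g. poison damage, hand-size check

# After: all periodic effects funneled into a single upkeep window,
# so there is one checkpoint instead of two.
def take_turn_merged(act: Callable[[], None]) -> None:
    fire("upkeep")      # everything periodic resolves here
    act()
```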

Aside: Delegating complexity to content

As something that falls between the Optimization and Modularity principles of MOPED, there is a concept that I’ve taken to calling ‘delegating complexity to content’. The important thing to note here is that the complexity you’re delegating is runtime complexity, or, to use non-software terms, the complexity of the game during actual gameplay. Well then, how can complexity exist outside of gameplay in the first place?

This is something that concerns designers exclusively, rather than the players. As with any optimization, if it’s done well, the players will never know it’s there. What it means is that the designer should build the game in such a way that most of the complexity, and most of the difficult or time-consuming processes, are taken care of during content creation rather than showing up during gameplay. The end results are handed to the players pre-processed, instead of the players spending time working them out themselves. This mostly applies to values that would otherwise need to be calculated dynamically in the middle of the game, or to edge cases for some rules or effects. Instead of having to calculate exactly what your army’s strength is based on the current number of soldiers mid-game, the designer can handle this through something like a lookup table, where the army’s strength has fixed values, and the players just need to count things and reference the corresponding entry.
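Here’s a minimal sketch of the idea in Python. The strength formula is invented for illustration; the point is that the designer runs it once during content creation, and players only ever touch the resulting table:

```python
# During gameplay (costly): players would have to do this math themselves
# every time strength is checked.
def strength(soldiers: int) -> int:
    return soldiers * 2 + soldiers // 5   # made-up diminishing-returns curve

# During content creation (cheap for players): the designer runs the formula
# once and prints the results as a reference table on the board or a card.
STRENGTH_TABLE = {n: strength(n) for n in range(0, 21)}

# In play: count your soldiers, look up the row. No arithmetic at the table.
army_strength = STRENGTH_TABLE[12]   # -> 26
```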

When it comes to tracking, referencing is always ‘cheaper’ (in terms of overhead cost) than calculation. Same with edge cases. If only a single card in a game creates some weird interaction, that card should handle its own edge cases in its rules text, rather than the game requiring a global rule to exist for that edge case. This is where having a decently modular game can help a lot, because you can delegate complexity to content with each of the modules individually.

Keep the core rules and system as simple as possible, and create the complex interactions through the specific content of the game, whether it’s cards, different playing boards etc.

Aside: Designing by default

There is also a point to consider when looking at optimization in the sense of plain redundant features: things in your game that either end up unused most of the time or, even when used, have such a tiny impact on the game that removing them wouldn’t really change much.

One major way these kinds of features can creep into your game is ‘designing by default’. Essentially, this amounts to putting a mechanic in your game because similar games (or most games in the genre) have those mechanics as well. Hegemon did a podcast episode-thing on this, and I suggest giving it a listen to familiarize yourself with the scenario.

Now, there’s a big difference between designing by default and actively choosing to put some common/default mechanic into your game. The latter means that you actually evaluated your options, checked if and how the mechanic in question would fit your game, and ultimately decided that it’s worth using. If most games in a genre use a certain mechanic, it’s probably because it’s been tried and tested to work for the genre’s ‘formula’. However, do keep in mind that precedent does not necessarily make something a good idea.

Benefits

• Better gameplay experience for players—minimizing or eliminating extraneous tracking and decision complexity
• Easier design and development—when there aren’t too many moving parts, looking into things at any level of detail is kept fairly straightforward and simple; it’s easy to get a proper picture of how things work and fit together
• Easier testing—same as the above: more streamlined games are easier to test, while too many moving parts can blow up the number of test cases exponentially
• A degree of fault tolerance—tying complexity to content rather than the core system, when applied correctly, means that only content breaks instead of core rules, and the game can recover from bad states

Costs

• It’s a process that usually doesn’t add anything to the game, and optimizing something without changing it in some way is rarely possible
• It’s potentially dangerous—the designer needs to take the time to ensure that changes made for the sake of optimization don’t break the parts of the system they’re trying to optimize
• It’s easy to overdo—again, complexity isn’t inherently bad, but extraneous complexity is; determining which is which is not trivial and requires a very thorough understanding of the system’s goals and internal workings
• The designer must take care not to cause any collateral damage to other rules/mechanics when optimizing something; regardless of the level of modularity, the game still has to work as a whole

Examples

As already mentioned, optimization is ‘invisible’ when done well. Since it’s an extremely low-level process that involves changing the actual rules and mechanics of the game, when the players interact with the end product, it’s more or less impossible to tell which decisions are specifically the result of optimization.

However, optimization (and especially the lack of it) is something that is definitely felt in-game. Ensuring your game flows well goes a long way towards improving the overall experience.

Optimizations themselves don’t have to be complicated or esoteric. As an example, let’s look at Dungeons & Dragons (DnD). Now, full disclosure, I haven’t played much DnD myself, but I’ve played enough to notice the following optimization: you always add values together, instead of doing subtraction (there might be subtraction in the system, I just haven’t seen any in my gameplay). This is a subtle thing, but it can help a lot. When it comes to doing quick math, people in general are much faster with addition compared to subtraction. The same is true for multiplication vs. division. So in a way, this method optimizes the calculations which happen very often during gameplay. Doing simple addition to get a value and then checking if it’s over some threshold is about as straightforward as this type of mechanic can get. Also, in a lot of scenarios, regardless of the game, things which use subtraction can instead be inverted to use addition. Flavor is easier to change or fix than rules in pretty much all cases.
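As it happens, D&D itself made this exact inversion over its history: older editions’ THAC0 mechanic had players doing subtraction, while modern editions use ascending armor class and pure addition. Here’s a rough Python sketch of the two checks (simplified and illustrative, not the full rules):

```python
import random

def d20() -> int:
    return random.randint(1, 20)

# Subtraction era (AD&D's THAC0): to find the roll you need, you subtract
# the target's armor class from your THAC0.
def hits_thac0(roll: int, thac0: int, target_ac: int) -> bool:
    return roll >= thac0 - target_ac

# Addition era (modern ascending AC): add your bonus, compare to a threshold.
# Mathematically equivalent, but everyone is faster at adding.
def hits_ascending(roll: int, attack_bonus: int, target_ac: int) -> bool:
    return roll + attack_bonus >= target_ac

print(hits_ascending(d20(), attack_bonus=5, target_ac=15))
```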

Another example, this time in the context of handling edge cases, comes from Magic: the Gathering. In card games, most cards break the rules in some way; that’s what creates the unique effects the cards have. When an effect creates some kind of edge case, it helps a lot if the text of the effect itself defines how to handle it. The following example shows what can happen when that’s done poorly, or not at all. There is an old MtG card, Rasputin Dreamweaver, with a unique effect that, to the best of my knowledge, hasn’t been put on any newer card.

(Card image: Rasputin Dreamweaver. 1994 was just a year into the existence of the game, and designs were experimental, to say the least.)

A seemingly innocuous effect: the last part of the card’s text says (in the most recently updated version of the rules text): “Rasputin can’t have more than seven dream counters on it.” That’s it. It doesn’t say what that implies, i.e. what to do if the card somehow ends up with more than seven counters on it.

Even though a simple case like this is intuitively easy to handle, in my opinion the game itself should provide an explicit way of resolving things. And MtG actually provides a well-defined rule for this edge case. But not on the card itself. Rather, it exists as part of the Comprehensive Rules. Specifically, rule 704.5r:

704.5r If a permanent with an ability that says it can’t have more than N counters of a certain kind on it has more than N counters of that kind on it, all but N of those counters are removed from it.

While the end result is the same, i.e. the edge case is actually handled by the game’s rules, requiring players to pull their attention away from the game and look at external sources in the middle of gameplay can mess up the experience a lot, even if that source is the rule book itself. Again, in this specific scenario, one most likely doesn’t need to look up the specific ruling to figure out how things work, but what happens when more complex effects create even worse edge cases? (The traditional MtG example being Humility + Opalescence.)

To sum up, the key takeaway from this example is: Information proximity is very important. The modules (cards) should handle their own edge cases. This helps the players keep their focus on the game and the decisions they make, rather than having to step aside and look at things outside the game.
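To put that takeaway in code terms, here’s a hypothetical sketch of a module handling its own edge case: the counter cap lives in the card’s own logic (mirroring what rule 704.5r does globally), so resolving it never requires leaving the table:

```python
# Hypothetical sketch: the card object owns its edge case, so the behavior
# travels with the card instead of living in an external rulebook.
class RasputinDreamweaver:
    MAX_DREAM_COUNTERS = 7

    def __init__(self) -> None:
        self.dream_counters = 0

    def add_dream_counters(self, n: int) -> None:
        self.dream_counters += n
        # "all but N of those counters are removed" -- handled right here,
        # in the module itself.
        if self.dream_counters > self.MAX_DREAM_COUNTERS:
            self.dream_counters = self.MAX_DREAM_COUNTERS
```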

Golden Hammer Rating

As a quick aside, it needs to be said that it is possible to test the ‘performance’ of tabletop games. In software, you can do test runs of a program to see which parts of it take the most time to process. This is called profiling and is used to zero in on the biggest performance bottlenecks, i.e. the areas which should be prioritized for optimization.

Profiling can be done on tabletop games as well, either through dedicated playtest sessions or just by keeping an eye out during normal playtests. Trying to optimize everything is usually not feasible, and can actually backfire or just waste a lot of time. When profiling, try to find what causes the most performance loss (i.e. wastes the most time) across the total gameplay session, and then optimize that bit first.

The benefit of profiling is being able to find the performance-critical parts of the system and invest most of the optimization effort in them. You want to guarantee you’re getting the maximum possible benefit out of each change you make.
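If you want to be systematic about it, the bookkeeping is trivial: a stopwatch during the session and a quick tally afterwards. Here’s a minimal sketch of that tally in Python, with made-up phase names and times:

```python
from collections import defaultdict

# Stopwatch readings from one playtest session: (phase, minutes).
# Phase names and numbers are invented for illustration.
session_log = [
    ("setup", 12.0),
    ("player turns", 38.5),
    ("combat math", 21.0),
    ("upkeep/bookkeeping", 17.5),
    ("scoring", 6.0),
]

totals: defaultdict[str, float] = defaultdict(float)
for phase, minutes in session_log:
    totals[phase] += minutes

# Rank phases by time spent; the top entries are the optimization targets.
for phase, minutes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{phase}: {minutes:.1f} min")
```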

With that said, time to get to the Golden Hammer™ rating for Optimization. As a refresher, this is a measure of how easy a principle is to overuse, regardless of other factors.

GOLDEN HAMMER RATING: 2/5


Optimization scores lower than Modularity on this front, mostly because it assumes you already have a functional system in place. Of course, things can break just as easily with optimization as with anything else (we’re still talking about making changes to rules), but in general, optimization shouldn’t introduce breaking changes. As long as you make things smoother without altering functionality, optimization is much harder to overdo. However, it’s good to take things cautiously and defer optimization to the later stages of the design process. That way, you can make sure you have a good idea of how the system behaves and have already identified at least some areas that can be optimized.

Conclusion

So, that’s part 2 of the MOPED deep-ish dives done. With these, I have to stop myself from either blabbering on too long or constantly getting sidetracked by related topics. Recently, I’ve been tinkering with the idea of doing voice recordings as companion pieces to my articles. That would let me more easily touch on topics that couldn’t make it into the article for the reasons stated above, and might even end up as full-length tirades (ahem, discussions), possibly with others. I’ll try the idea out and see what can be done about it in the near future. If it turns out well, I’ll probably add recordings retroactively to other articles as well.

That’s enough reading/writing for one article. Here are the tunes of the day. Enjoy!

This is InvertedVertex, signing off.
