This article describes and elaborates on four best practices of software engineering in the context of game development: developing iteratively, managing requirements, managing change, and continuously verifying quality.
Software development is plagued by many serious problems. Complexity is increasing, time to market is decreasing and quality requirements are skyrocketing. This puts great strains on the software development organization and the people in it. It is no longer possible to work harder; we must work smarter.
Best practices of software engineering are commonly observed, commercially proven approaches to successful software development. Developers that wish to compete on the bleeding edge of software technology should adapt these practices to the specifics of their own organization and projects.
The most complex part of any computer game is the software behind it. The software needs to incorporate gameplay mechanics, artificial intelligence algorithms, network protocols, real-time physics simulations, decompression and playback of music, filtering and mixing of sounds, 3D visualization and more. In addition to all of this, multiple platforms must often be supported. The software must also be packaged with huge amounts of data in the form of textures, models, animations, sounds, scripts etc. to create something fun and entertaining: an elusive mix of technology and art.
Considering all of this, it is not surprising that many games are released with poor quality, often requiring fixes in the form of software and/or content patches from day one. All of these complexities cause games to exceed their budgets and miss time to market by not days, but months and years. As a result, many game developers are fighting a losing battle to survive.
We maintain that a large number of these problems are the very same problems "traditional" software developers have faced and conquered by implementing best practices in their organizations.
Developing iteratively can be viewed conceptually as splitting up the development of the software into several miniature projects, called iterations. Central to this style of development is that each iteration continuously delivers a working executable that allows for proper assessment of current project quality and progress.
Each iteration is a micro version of the project as a whole, with each area of the final product represented at some level of abstraction, and as iterations progress, the quality of each aspect of the product increases. Each iteration incorporates the natural flow of working with requirements through analysis, design, implementation, test and final delivery of the product, as working software is the product of each and every iteration.
This flow repeats not only on a per-iteration basis but also on a project basis; the focus of the earliest iterations being more towards nailing down the fun of the game, often working in conjunction with focus groups and requirements, while later iterations focus more on detailing features of the product. This workflow can also be found on a daily basis, with a single developer's day starting with the acceptance of a task, investigation of the requirements of the task, perhaps doing some small analysis/design sketches, moving on to implementation, and finally testing and integrating the feature into the product.
The key driver behind iterative development is to reduce as much risk as possible as early as possible. In game development a major risk is that the gameplay is simply not entertaining enough. Iterative development enables the developer to discover such fundamental problems early in the project instead of towards the end, as a working executable (albeit at a very high level of abstraction) is available as soon as the first iteration is completed. The developer is thus able to change the design or even abandon the project before a huge part of the budget is spent.
As mentioned, a central aspect of each iteration is that it should result in a stable, functional, testable software system that is delivered to the end user. In game development this could mean delivering the product to key stakeholders and/or to the actual end user (perhaps represented by focus groups). The output of one iteration becomes the input of the next, with the most critical flaws and/or missing features becoming the goals of the next iteration. This way the developer has a chance to re-evaluate and steer the project in the most appropriate direction once per iteration, reacting to real experiences with a real working system.
One key effect of this procedure is that the game design and the understanding of the game design evolve with the project. In essence there is no longer any need for a complete design document before starting the project; indeed, creating such a document would most definitely be a waste of resources since the elusive property of fun simply must be tested and tweaked using a working software product. It is our continuing experience that no game design document survives contact with actual implementation, and accepting this fact and acting accordingly is essential.
Iterative development also lets you decide when the game is "good enough". In essence this means that you can stop the project when you feel that the resources spent on each iteration no longer yield sufficient return. The software is always "complete" at any particular point in development. This property of iterative development is very interesting and leads to an important realization: there is no point in worrying about unknowns in the project; you simply do the best you can to improve the game based on the state it is in today.
It is very common for projects to become completely unproductive, paralyzed by analysis, when they start worrying over speculative future problems and issues. Worrying about problems that in 90% of cases never actually happen is a waste of resources when what you really need to do is solve the 10% that do occur. You deal with concrete problems that actually exist, not with abstract problems that might occur.
Another key benefit of working iteratively is team motivation. Always having a product that steadily evolves and moves in the right direction is very important. The opposite -- working months or even years without seeing any progress -- is a certain project killer.
Iterative development is also the basis of many other best practices, making it probably the most important practice to understand, implement and refine.
Many projects fail simply because the system solves the wrong problem, meaning that when the software is finally delivered it fails to provide any benefit to the customer. Requirements need to be collected, analyzed, documented, tracked and organized. This is fundamental to any organization wishing to deliver on time and within budget. Good requirements should be a major driving force behind the project. Requirements need to be handled with care and respect. Failing to do this means almost certain project failure, even if all other practices are observed and practiced.
It is, however, important to stress that in an iterative project there is no dedicated "requirements phase". Nor is there a set of people (separate from the developers) who handle the requirements management tasks. Such a division is bound to create requirements that are never read or understood by the developers actually responsible for building the game. Instead, requirements management is an almost daily task for every team member.
In traditional game development, requirements come from the (lead) game designer, described in a big bible called the "game design document". This is the same as the "grocery lists" of old-time software development and will almost certainly not produce a fun game.
The "fun-ness" of a game is something that is very hard to nail down completely on a piece of paper; you simply need a working game to be able to evaluate, iterate and refine the requirements. The consequence of realizing this is that the big bible game design document is a waste of time and resources and a more lightweight and agile approach is needed.
Typically the game designer evaluates the latest iteration together with the focus group. This yields a number of new requirements, requirements that are no longer needed, and requirements that need to be altered. Ideally these are put into a tool and then prioritized, analyzed, etc., at the beginning of the iteration.
This also means that the game design is finished when the project ends and the game ships, as opposed to being completed before production actually begins. In essence we suggest the game designer move away from the role of "game creator" armed with a word processor or spreadsheet, towards being the person who makes sure the project gets sufficient feedback from the iterations to move in the right direction.
As a typical project starts off with more requirements and feature requests than can be implemented during a single iteration, a "backlog" of requirements exists immediately at the end of the first iteration. Also, the evaluation of each iteration will invariably invalidate or change existing requirements, as well as create new ones.
In our experience a good prioritization of this ever-increasing "pile" of requirements is critical to the quality of the game. This is because most of the time there are far more requirements than there is time to implement them, given the entire project time frame and budget. As a result, it is critical that the most important requirements get done first; ordering is significant.
This kind of prioritization (as is that of any significant "to-do" list) is a daunting task without good tool support, but nevertheless one of the most important tasks faced by the project leaders and game designers. Constant intelligent prioritization of requirements, based on customer expectations and risk, is a must if a quality game is to be created.
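As an illustration of this kind of ordering, here is a minimal sketch that ranks a backlog by customer value plus risk. The field names, scales and scoring rule are assumptions made for the example, not a prescription from the article:

```python
# Illustrative sketch only: rank a requirements backlog by customer
# value plus risk. Field names, 1-5 scales and the scoring rule are
# assumptions made for this example.
def prioritize(backlog):
    """Order the backlog so the most important items come first.

    Each item carries 'value' (customer benefit, 1-5) and 'risk'
    (uncertainty and cost of getting it wrong, 1-5). High-value,
    high-risk items surface early, so risk is burned down in the
    earliest iterations.
    """
    return sorted(backlog, key=lambda r: r["value"] + r["risk"], reverse=True)

backlog = [
    {"name": "polish menu art",      "value": 2, "risk": 1},
    {"name": "core combat loop",     "value": 5, "risk": 5},
    {"name": "multiplayer protocol", "value": 4, "risk": 5},
]

for req in prioritize(backlog):
    print(req["name"])  # core combat loop, multiplayer protocol, polish menu art
```

In practice the scoring would come from the game designer's evaluation sessions rather than fixed numbers, but the principle stands: make the ordering explicit and revisit it every iteration.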
At the foundation of iterative requirements management is the game vision. The vision is a short but information-packed artifact focusing on the whats, whys and whos of the game. It answers questions such as: what is the game about, what does the player actually do most of the time, why is the player doing these things, what is the surrounding game universe, and what feelings should the game convey to the player. These are described as top-level features. All other requirements are meant to support one or several of these features, creating a logical hierarchy.
As the vision is a very short artifact it can easily be read and understood by everyone on the team. It should be a major focus of the early iterations to establish a shared vision of the project. Everyone from business to development must have a clear, shared and unambiguous vision of the project. Failing to establish a concise vision is a sure sign that the project should be reevaluated and possibly even canceled as it is simply too unfocused, and hence risky, to continue.
Quality requirements have special weight in the vision. Games are especially rife with interesting quality requirements: not only technical ones like performance and platform independence, but in particular softer, more intangible requirements like fun-factor, replayability and the feelings of the player should play a major part. In the book 'A Theory of Fun for Game Design', Raph Koster describes a number of fun-factors to be considered. In a recent Gamasutra feature, Tom Hammersley also discusses quality requirements in the context of games.
It is also important that the organization has a clear picture of the game's business goals. These can be described in various ways, from units sold in a given period of time and net income to other economic measurements. They can also be focused on business development: a new developer could, for example, have a business goal of getting the game published by a certain set of publishers, getting a certain amount of media coverage, or getting a certain minimum review score. Thirdly, you can also have organizational goals, such as introducing a new technology or project practice, like learning to work truly iteratively.
Business and organization goals form high level evaluation criteria for the project.
It is very important to have a clear picture of who the person playing the game actually is. You should form one or several personas and look into questions like: when is the game played, where is it played, for how long, is it played together with friends, family, kids or a spouse, what other games does the persona play, and why does the persona play games at all? You should dig as deep as possible and not only answer basic questions like gender, age and level of education.
Getting into the head of your gamer and having a clear unambiguous vision will make it easier to evaluate high level changes and requirements by simply deciding if they support the vision and goals of your personas. Investigating personas carefully could also yield business opportunities where you can find niches that are not addressed by current products. For example:
"a real-time strategy game played at the office on lunch breaks in ten minute sessions..."
Change is inevitable, for a number of reasons. Given the ambiguity of requirements versus the specificity of code, it is very hard to deliver a perfect match to customer expectations on the first attempt (iteration). This leads to changes in the requirements, hopefully making them more concise and specific, and as a result the software must change as well.
"...you can't stop the change, Anakin..."
However, change is natural, and should not be viewed with fear. The very point of software is indeed for it to remain "soft" (as opposed to the "hardness" of hardware) and thus enable modification. Being a game developer who is able to change your software as requirements change is a good sign; you have software that is resilient to change, you listen to the customer and the customer is committed to the project. Change should in many respects be embraced.
Studying software engineering at Blekinge Institute of Technology in the 1990s, we were taught that software was an engineering discipline, much like planning and constructing buildings. A big part of that was about finding the requirements, analyzing and understanding the problem, designing an optimal solution and finally actually implementing and testing.
After becoming involved in Massive Entertainment, we found that a lot of the things they taught us in school were of limited applicability. This wasn't something that we came to realize quickly or painlessly; indeed we tried our very hardest to apply all the nifty "architectural and design pattern stuff" that we had learned in school to the game we were creating (the first Ground Control).
As the years went by it became more and more obvious that the central concept of planning software systems flew very much in the face of our everyday realities at work, mostly due to the fact that there was no way to factor in the reality of constantly changing requirements.
For a long time we (the programmers) felt that the fault was that of the game designers. "They haven't done their job!" we would say, "They don't think things through!" Occasionally this was true, but for the most part we slowly came to realize that we, the programmers, were building artificial limitations into our software.
This was shocking, because we figured that our superior Swedish software engineering skills had allowed us to rise above and create systems that were truly adaptable and in essence "future-proof". In a way we had all come to think of this as the whole point of being "Object Oriented"; we were big on "systems", we were big on "engines".
The limitations were based on our assumptions about how the software was supposed to function. This was in turn based on early drafts of the game design and on the requirements we extracted from the same. However, it became obvious that no matter how much we prepared our shiny software "systems" for change (that we knew would be coming), no matter how much abstraction we introduced to be able to handle "holes" in the game design, we always came up short.
One day in the latter stages of Ground Control II: Operation Exodus the lead programmer -- that's me, Johannes -- simply gave up on the old learning. The day-to-day situation at the office was such that requirements could change so quickly, the focus of the next big push could change almost daily, that we were gaining nothing by trying to adhere to an old plan and to anticipate where the software was going to go.
I didn't abandon my thoughts on what was good code on a small scale, and I don't think I totally abandoned the high-level architecture of the software either, but I know that I in general stopped trying to force earlier software designs to fit the changing requirements. It became OK to throw out code. It became OK to change interfaces that had been fixed for a long time.
In short order, a matter of days, something very wonderful happened. I started having more fun in my work, despite severe crunch-mode. The "pain" of having the game designers mess with my "software system" was quickly replaced with a joyful feeling of coding stuff that was "useful" and "to the point". I experienced a great resurgence of the sheer fun of programming, and I think that is very much due to me consciously letting go of my need to control the design of the software.
Instead of trying to be "future-proof", I could simply admit that I didn't and couldn't know everything beforehand, and let myself discover the way the software "wanted to be" in the same instant I coded it. Even more surprisingly and rewardingly, I started producing better code. It was smaller, it was more to the point, and it was even more optimized.
From that time onward, my approach to software engineering and to game development has changed in a fundamental way. I basically stopped trying so hard to anticipate the future, because I had learned that if something comes up that requires a change to the fundamental assumptions of the software I am writing, I am confident in my ability to change those assumptions. The result is code that is in better sync with the actual problem I am trying to solve. And we all know that in the field of game development that problem is apt to change all the time.
In hindsight this is not so surprising. Not if you are prepared to remember one of the main points of software: it is supposed to be soft. We had been treating an early technical architecture as something that was fixed (i.e. hard) when we should have been taking advantage of the very core aspect of software as opposed to hardware: you are supposed to be able to change it easily.
The holy grail of object-oriented software -- reuse -- is something that seems really great in concept, but in practice it tends to shoot you in the foot. Premature reuse is even worse than premature optimization, which is often cited as "the root of all (software) evil".
Becoming overly enamored with the potential for reuse gets you thinking about systems and frameworks, things that are general, flexible, pluggable, and generic, as opposed to creating real value and functionality in your program, which are things that are specific, to the point, and concrete.
On top of that, game designers and gamers always want special cases, things you had not anticipated -- things that make you angry, since they don't fit into your grand software design!
But how then are you to achieve software reuse? Instead of planning and designing for reuse, you should discover opportunities for reuse! The opportunity typically appears when you find yourself duplicating work you have already done; when it does, extract the actual commonality and put it somewhere reusable (i.e. in a common library). Truly reusable software elements are discovered, not designed.
Designing for reuse is more often than not based on a "coolness" factor, i.e. it "feels cool" to design and implement reusable systems. The fact of the matter is, however, that such design is always speculative, because you are anticipating things that you might need, or could do with the system you are creating. This is the worst kind of speculation, because you are going outside of the scope of the requirements of the application at hand. That is not the point of writing software. The point of writing software is to deliver useful systems that deliver business value (or entertainment value, or both) to the end customer.
Again: never design for reuse. That is speculative, and you are much better off writing code that you know you need right now than code you think that you might need tomorrow. Only when, whilst maintaining old software and/or writing new software, you realize that you are doing something that you've done before -- only then -- do you reuse some code.
Even then you have options: if you are still maintaining the code base from which you are re-using code, you factor out the commonality to a library and make both the old and the new software depend on this common library. If you are not maintaining the old software, you simply copy the source.
This view is based on a few factors. We all want to write successful and useful software; otherwise what is the point of writing it in the first place? The natural state of such software, due to its usefulness and softness, is a state of constant maintenance. The reason to extract commonality is to have less total code to maintain, which makes maintenance easier. This gives rise to reuse of the common code.
Another reason to reuse code is simply that software is trivial to copy (if you have access to the source), so why duplicate work effort that you can copy for free? It is still reuse, and a very common form of it, especially when it comes to copying an entire application. At the very least it gives you a much better starting point for the new application than starting with nothing at all. The decision to do this kind of application-wide reuse depends largely on how similar the new application is to the old, because you do not want to end up rewriting and adjusting more old code than you write new code.
Reuse can happen on small scales too, even within the same application. This is typical when you are just starting on a piece of new software. In the natural course of implementing customer value, you realize that you can factor out some commonality or parameterize some method in order to minimize the total size of the code base.
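A small sketch of such discovered reuse, with hypothetical damage routines standing in for code you notice you have written twice:

```python
# Sketch of discovered reuse; the damage routines are hypothetical
# stand-ins for duplicated code you actually observe in a code base.

# Before: two functions with the same logic, differing only in which
# resistance value they apply.
def apply_fire_damage(hp, amount, fire_resist):
    return max(0, hp - amount * (1.0 - fire_resist))

def apply_ice_damage(hp, amount, ice_resist):
    return max(0, hp - amount * (1.0 - ice_resist))

# After: the commonality is factored out once the duplication has been
# observed -- not designed for up front.
def apply_damage(hp, amount, resist):
    return max(0, hp - amount * (1.0 - resist))
```

The refactoring shrinks the code base without any speculation: the parameterized version exists only because two concrete call sites demanded it.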
All of the above is related to refactoring: restructuring your code to enhance maintainability and extensibility without changing its function, a very essential and natural part of software growth and adjustment. Reuse is part and parcel of the exact same mindset as refactoring, but people become confused by the (often only perceived) differences in scale and scope. Working with the software design through refactoring should indeed be part of every programmer's daily tasks, with the goal of making the code base as light and changeable as possible.
However, uncontrolled changes can easily push a project into chaos. Changes should not be allowed to creep into the software unchecked. Much like requirements, changes need to be collected, analyzed, documented, tracked and organized. A change request indeed has many aspects in common with a requirement, though change requests tend to be less detailed and place a greater emphasis on the motivation for a change. The primary purpose of a change request is to serve as a basis for evaluation. Typically you want a fast, triage-like evaluation ("toss, wait, treat") in order to quickly and efficiently focus on the changes believed to have the most positive impact on the project.
Game developers face an endless stream of changes and must, at the start of each iteration, be very careful to choose the "best" changes to handle; otherwise budgets are soon exceeded and very little is added to the product. Being able to embrace change means you have to travel light and be as agile as possible. Having a big speculative design document, or a complete upfront software design blueprint of the game, will certainly make you too heavy, and you will fail to make those critical changes that will evolve your game.
Since game developers aim at the moving target of "fun", games tend to change in a more radical way than traditional software products. It is therefore very important to track changes, the reasons for changes, and the results of changes.
This database of changes and their respective results could also be an interesting thing to refer to between projects; in essence, it would become a database of game design dos and don'ts. In this respect it becomes very important to be able to trace from a change request through requirements to a working copy of the software. You simply need to know which features are implemented in a certain build of your game; otherwise it will be very hard to correctly evaluate the impact of the changes you make.
In a truly iterative project the natural state of the software is to be under maintenance, not hidden from end users in some "in development" state. Changes to and additions of requirements constantly flow in, and the game slowly but surely converges towards its optimal goal. Amid all of these changes it is also important to keep the game vision in mind.
Some changes might at first lead to a game that evaluates more poorly than its predecessors. A big challenge is realizing if and when such a change will be truly beneficial, or whether introducing it was actually a bad call. Avoiding getting stuck in local optima requires both courage and humility, as well as good traceability and tool support.
As humans we tend to forget very quickly, and the permutations of implemented requirements quickly grow very large. This easily puts projects in a position where the same changes and requirements are tried and retried over and over, some iterations apart, with pretty much the same dismal result: a game design that runs in circles instead of moving forward.
Therefore it's very important to keep a change backlog recording the reasoning behind each change. This backlog provides very valuable information when selecting which changes to push and which to drop. Knowing whether something has been tried before, and with what result, is very powerful, yet often overlooked. Having access to the exact builds in which a particular change was part of the game, and being able to replay that version, also provides valuable information.
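A change backlog of this kind can be as simple as a keyed log. The record fields below (rationale, build, result) and the sample entry are illustrative assumptions, not a real tool's schema:

```python
# Sketch of a change backlog as a keyed log; the record fields
# (rationale, build, result) and the sample data are assumptions.
change_log = {}

def record_change(description, rationale, build, result):
    """Append one attempt at a change, tied to the build it shipped in."""
    change_log.setdefault(description, []).append(
        {"rationale": rationale, "build": build, "result": result}
    )

def already_tried(description):
    """Return prior attempts, so the team can see whether an experiment
    has already been run (and failed) some iterations ago."""
    return change_log.get(description, [])

record_change(
    "slower unit movement",
    rationale="combat felt too hectic",
    build="0.4.112",
    result="focus group rated pacing worse",
)
```

Before pushing "slower unit movement" again, a call to `already_tried` surfaces the earlier attempt, the build it appeared in, and its dismal result, which is exactly the information needed to stop the design from running in circles.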
The earlier problems are found, the better. It is therefore very important to verify quality as soon as possible. Software should be tested for functional and quality problems throughout the entire project; quality cannot be bolted on in a separate testing phase at the end.
Quality should be a major concern for everyone involved in the project every day. Continuously delivering top quality software is also a major contributor to team motivation, and lack of quality certainly drains the organization of commitment and courage to change.
Testing games is a particularly daunting task. Just consider functional testing of all possible permutations of different hardware, operating systems and API versions! Add to this quality requirements specific to games, as elusive as measuring the emotional response of test subjects, and the fact that working in testing or quality assurance is often regarded as a mere stepping stone towards more highly regarded positions (i.e. as a lowly and unimportant task).
The greatest risk of game development is the possibility of inadvertently developing a game that is not entertaining enough to be profitable. Game designers struggle to describe the entertaining aspects of the game in their game design documents, but "fun" is such an emergent aspect of a game that it can only be verified by actually playing the game, or a rough version of it.
If the game can be demonstrated to be fun without a lot of flashy content and special features, then it's a good bet that it will be even more fun and rewarding once these things are in place. On the other hand, if the core gameplay is boring or frustrating, no extra fancy content or functionality can save it. Game developers need to find out if their games are fun as early as possible.
Viewed in this light, as the primary means of verifying game "fun-ness", testing needs to be treated professionally and with great care. Since testing is such an important part of development, we propose that every developer be involved in testing, with quality assurance under the guidance of a small professional testing team.
The testing team has the responsibility of monitoring testing quality and advising developers on how to apply different testing techniques. This will make quality assurance a more integrated and daily task for every developer. Also, when hiring for your QA department, you should be wary of people viewing QA as a stepping stone towards some other career.
As the software changes and evolves, the testing procedures must keep up. It is essential to remember that testing is part of every iteration and a daily task for every developer on the project. Updating tons of testing documentation to keep up with the software can be a major task, and performing these tests over and over again is very time-consuming if done manually.
Good tools for automated test support and tracking are needed here; ultimately, testing the entire software should require no more than the press of a button. This means that the actual implementation of the game needs to be testing-friendly; testability could indeed be a quality requirement in its own right. This could range from using unit testing frameworks and test-friendly software architectures to recording and replaying user input to enable automatic creation of user scenarios.
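As a minimal sketch of the record-and-replay idea, assuming a deterministic update function and hypothetical input commands (none of this reflects any real engine):

```python
# Sketch only: a deterministic game-step function driven by a recorded
# input sequence. The update logic and command names are hypothetical.
import unittest

def update(state, command):
    """Tiny deterministic game step: move a unit along one axis."""
    if command == "move_right":
        return {"x": state["x"] + 1}
    if command == "move_left":
        return {"x": state["x"] - 1}
    return state  # unknown commands leave the state untouched

# A captured play session, replayed identically on every test run.
RECORDED_SESSION = ["move_right", "move_right", "move_left"]

class ReplayTest(unittest.TestCase):
    def test_recorded_session_reproduces_final_state(self):
        state = {"x": 0}
        for command in RECORDED_SESSION:  # replay the captured input
            state = update(state, command)
        self.assertEqual(state, {"x": 1})  # regression check on the outcome
```

Run with `python -m unittest` against the file; because the update function is deterministic, any behavioral regression shows up as a failed replay rather than needing a human to play through the scenario again.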
Testing is not the only way of assessing quality in software. Code quality inspections, walkthroughs and other preemptive techniques should also be implemented and refined to suit the organization. Such techniques are also good for spreading knowledge in the project team.
The requirements form the basis of the different test cases to be executed, and test cases should be written in parallel with the requirements. This both raises the quality of the requirements and makes testing activities a more integrated part of development. Care should also be taken not to duplicate information between requirements and tests.
Games are often open ended and the amount of input options is more or less endless. Finding good test cases based only on requirements is very hard and techniques like exploratory testing should be used to find both test cases and important input sets to be considered.
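One lightweight way to approximate exploratory testing is to generate random input sequences and check an invariant rather than a single expected value. The toy event handler and the 0-100 health invariant below are assumptions made for the sketch:

```python
# Sketch of exploratory-style randomized testing: feed random event
# sequences and check an invariant instead of one expected value.
# The event handler and the 0..100 health invariant are assumptions.
import random

def apply_event(health, event):
    """Toy game rule: healing and hits, clamped to the valid range."""
    if event == "heal":
        return min(100, health + 25)
    if event == "hit":
        return max(0, health - 40)
    return health

def fuzz_invariant(trials=1000, seed=42):
    """Replay many random sessions; fail loudly if health ever escapes
    the valid range. Returns True when every trial holds."""
    rng = random.Random(seed)
    for _ in range(trials):
        health = 100
        for _ in range(20):
            health = apply_event(health, rng.choice(["heal", "hit", "idle"]))
            assert 0 <= health <= 100, "health invariant violated"
    return True
```

The fixed seed keeps failures reproducible; any input sequence that breaks the invariant can then be promoted into a permanent, requirement-backed test case.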
Best practices of software engineering are observed approaches to successfully delivering software products. A software process for game development should therefore have the best practices of general software engineering as a base, and this base should then be tailored into a development process specifically aimed at the game development community. The process should also support further refinement on corporate and even inter-project levels.