Here’s something I want to understand better:
How do computer/video game studios do it? Where “it,” here, is
- successfully build and deliver very complex software at a relatively fast pace
- … when (I would imagine) a bug in virtually any component of that software has a fairly high probability of tanking the whole product (being ‘game-breaking’)
- … and the components have to be not just individually fast but capable of running reliably fast as a whole (some having to wait for the others’ outputs and so forth)
- … and do all that while building, not just the simplest bare-bones version of whatever core gameplay they’re trying to deliver, but (frequently) something that is packed with bells and whistles (“oh yeah there’ll be a crafting system and emergent NPC behavior patterns and random weather patterns and optional sidequests and … ”)?
Like, I keep having this experience where I go to work and talk to people about the design challenges involved in, I dunno, adding a new button to a UI or making the user interaction flow slightly more complicated, and there’s a general (and understandable) fear of making anything more complex or flexible than it has to be, because you lose guarantees and create new edge cases and stuff …
… and then I go home, and play a video game, and I’m interacting with a way fancier UI that works perfectly, with little contextual sub-menus popping up in battle and stuff. I’m interacting with a bunch of complex nested layers of application logic: the ones that determine who’s alive in battle and what they can do, the ones that determine what will happen when I talk to NPC X given that I have item Y and haven’t completed quest Z, the ones that layer a scripted animation on top of the custom body and clothes I’ve given to my character and the weapon they’re holding (which could be any one of 30). And I’m just like: how did people build this?
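(To make the NPC case concrete: my understanding is that this kind of branching is usually authored as *data* rather than code, with a small generic evaluator picking the right line. Here's a minimal sketch of that pattern; all the names — `GameState`, `Rule`, `pick_dialogue` — are illustrative inventions, not any real engine's API.)

```python
# Hypothetical sketch: data-driven dialogue conditions. Designers author
# condition tables; one small evaluator handles every NPC the same way.
from dataclasses import dataclass, field


@dataclass
class GameState:
    inventory: set = field(default_factory=set)
    completed_quests: set = field(default_factory=set)


@dataclass
class Rule:
    requires_items: set   # items the player must hold
    forbids_quests: set   # quests the player must NOT have completed
    dialogue: str

    def matches(self, state: GameState) -> bool:
        return (self.requires_items <= state.inventory
                and not (self.forbids_quests & state.completed_quests))


def pick_dialogue(rules, state, default="Hello, traveler."):
    # First matching rule wins; designers control priority by ordering.
    for rule in rules:
        if rule.matches(state):
            return rule.dialogue
    return default


rules = [
    Rule({"item_Y"}, {"quest_Z"}, "Ah, you carry item Y! Quest Z awaits."),
]
state = GameState(inventory={"item_Y"})
```

The point being: the combinatorial explosion lives in authored content, while the code surface that can actually break stays tiny.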
Is the answer just “re-usable and extensible game engines”? That still feels too much like a free lunch, though; any engine flexible enough to give you this much creative freedom is (presumably?) too generic to give you any guarantees that things won’t blow up in your face.
Maybe it’s “re-usable and extensible game engines” plus “tools for managing and automatically testing things you build in those engines”? Like really sophisticated integration tests that work for arbitrary components placed into a pre-established framework?
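(Sketching what I imagine such a framework-level test might look like — this is an assumed design, not any real engine's API: if every gameplay component plugs into one narrow interface, a single generic harness can smoke-test arbitrary components by running them in the engine loop and checking engine-level invariants.)

```python
# Hypothetical sketch: a generic integration-test harness over a
# pre-established component interface. Names (Component, Engine,
# smoke_test) are made up for illustration.
class Component:
    def update(self, dt: float, world: dict) -> None:
        raise NotImplementedError


class Engine:
    def __init__(self, components):
        self.components = components
        self.world = {"frame": 0}

    def tick(self, dt=1 / 60):
        for c in self.components:
            c.update(dt, self.world)
        self.world["frame"] += 1


def smoke_test(component_factory, ticks=1000):
    """Run the component in a fresh engine for `ticks` frames and
    verify it never crashes or corrupts shared engine state."""
    engine = Engine([component_factory()])
    for _ in range(ticks):
        engine.tick()
        assert "frame" in engine.world  # example engine invariant
    return True


# Any component honoring the interface gets the same test for free:
class WeatherSystem(Component):
    def update(self, dt, world):
        world["rain"] = (world["frame"] % 120) < 60
```

Under that assumption, the engine doesn't need to understand what your weather system *means* to verify that it behaves within the rules of the sandbox.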
Maybe it’s also the fact that (many, if not all) games basically get released once and then aren’t incrementally developed much after that? Maybe every large game builds up more and more technical debt as release nears, but that’s okay as long as you still make it past the finish line?
I know the game industry is infamous for working people inhumanely hard, and I do imagine that plays some role, but I don’t think it can explain that much here. Overworked people can write more code per week, sure, but they’re going to be worse, not better, at complex cross-team collaboration.
Another question, which I’m realizing is kind of latent in this one, is: how do game engines get created in the first place, as a matter of process?
The usual lore (see e.g. these two posts) advises against building things like engines in the course of building an application, not just because it’ll take more work, but because it’s hard to predict in advance what kind of flexibility you will need and which assumptions will stop being reliable. Which makes sense for building individual applications, but then when does the engine get made and extended, and by whom, and what procedures are in place to ensure this happens? Or do big game studios ignore this “usual lore” entirely? (In the terms of one of those posts, they are building the space shuttle, I guess.)