Game Design, Programming and running a one-man games business…

Speeding up my game from 59fps to 228 fps.

I recently saw a comment online that the ‘polls’ screen in Democracy 4 was horribly slow for a particular player. This worried me, because I pride myself on writing fast code, and on optimizing to a low min spec. The last thing I want to hear is that my game seems to be performing badly for someone. I thus went to work on improving it. This involved about 15 mins looking at the code, about an hour musing while trying to sleep, and about 20 mins coding the next day, plus an hour or more of testing and profiling. Here is what I did, and how I did it.

The game in question is the latest in my series of political strategy games: Democracy 4. It’s a turn-based, 2D, icon-heavy game that often looks a lot like a super-interactive infographic. Here is the screen in question, which shows the history of the opinion polling for all of the different voter groups:

There is actually quite a lot of stuff being drawn on that screen, so I’ll explain what is going on, in ‘painter’s algorithm‘ terms.

Firstly, there is a background screen, containing a ton of data and icons, which is displayed in greyscale (black & white) and slightly contrast-washed-out to de-emphasize it. This is non-interactive, and is there just to show that this is, effectively, just a pop-up window on top of the main UI, and you can return there by hitting the dialog close button at the top right. This is drawn really easily, because it’s actually a saved ‘copy’ of the backbuffer from earlier, so the whole background image is a single sprite, a quad of 2 triangles, drawn in a single draw call, filling the screen.

Secondly, I draw the background for the current window, which is very simple, as it’s basically a flat white box with a black frame around it. This is actually 2 draw calls (a TRIANGLESTRIP for the white, a LINESTRIP for the frame).

Then I draw a grid that represents the chart area, as a bunch of single-pixel lines. Thankfully, this is done efficiently, as a single big draw call.

Then for each of the poll lines I do the following:

  • A single draw call for a bunch of thin rectangles representing the faded out lines from pre-game turns
  • A single draw call for a bunch of circular sprites representing the dots for pre-game turns
  • A single draw call for the lines from player-turns (full color)
  • A single draw call for the dots for player turns (full color)

Then a bunch of other stuff, such as the filled-progress-bar style blue bars at the right (all a single draw call) and then text on them, and then the title, the buttons at the top, and the close button.

In total, this amounts to 102 draw calls for this screen. This is almost the best case though, because it’s possible for that grey bar at the bottom to fill up with a ton of icons too, which would make things worse.

Now…you may well be sat there puzzled, thinking ‘but cliff, who cares? 102 draw calls is nothing. My FragMeister 9900XT+++ video pulverizer can apparently do 150,000 draw calls a millisecond’. You would be correct. But let me introduce you to the woes of a ten-year-old laptop, and not a bad ten-year-old laptop either: a pretty hefty, pretty expensive HP EliteBook 8470p, which was inexplicably a gift to me from Intel (thanks guys!), and came with their HD 4000 graphics chip.

I appreciate free laptops, and the Intel GPA graphics debugging tools are incredible, but the horsepower of the actual GPU: not good at all. It could only manage 59fps on that screen. Assume some extra icons at the bottom, and assume a poorly-looked-after laptop with some crapware running in the background, and we could easily drop to 40fps. Assume a cheaper-model laptop, or one a few years older, and we suddenly have bad, noticeable ‘lag’. Yikes.

When you delve into the thrills of the Intel Graphics Frame Analyzer, the problem is apparent. This chip hates lots of draw calls. It hates fill-rate too, but the draw calls seem to really annoy it, so it’s clear that I needed to get that number down.

The obvious stupidity is that I am drawing 21 sets of lines individually instead of as a group. There was actually method to the madness though. The lines were composed of lines AND dots, and they were being drawn differently. Each thick ‘line’ between 2 dots is actually just a flat-shaded rectangle. To do this, I create a ‘sprite’ (just a quad of 2 triangles), with the current texture set to NULL, and draw it at the right angle. Draw a whole bunch of them and you get a jagged thick ‘line’. The dots, however, use an actual texture: a sprite that’s a filled circle. To draw them, I need to set the current texture to be the circle texture, then draw a quad where the dot should be.

Luckily I’m not a total idiot, so I do these in groups, not individually as line/dot/line/dot. So I draw 32 lines, then I draw 32 dots, and that’s 2 draw calls: one with a null texture, one with a circle texture. Actually it’s WORSE, because I make the mistake of treating the pre-game (simulated) turns as a separate line, because it’s ‘faded out’. This is actually just stupid. When you send a long list of triangles to a GPU, the color values are in the vertices; you don’t need to swap draw calls to change colors! But hey ho.
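To illustrate the point, here is roughly what such a vertex layout looks like. This is a generic sketch, not my engine’s actual declaration; the struct name and packing are illustrative:

struct BatchVertex
{
      float x, y, z;       // screen-space position
      unsigned int color;  // packed per-vertex RGBA; faded pre-game segments just use lower alpha
      float u, v;          // texture coordinates
};

Because the color rides along in each vertex, faded and full-color geometry can sit in the same triangle list and go out in one draw call.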

Clearly, when I wrote this code, I thought ‘well, one uses a null texture, and another uses a circle texture, so this needs to be 2 separate calls’. And because I can’t even make a single dotted line 1 call, no way can I make the whole thing 1 call, right?

This was stupid. I could have changed my ‘circle’ texture to be 2 images, a circle, and a flat white quad, and change the UV values so that the dots used the circle, and the rectangles used the flat white quad, but there was actually an even easier method. I just had both systems use the circle texture, but set up UVs on the rectangles so that they were using only the very very center of the circle sprite, which was effectively all white anyway…

By setting those UVs, I can avoid having to use a NULL texture. An all-white texture is exactly the same thing. This way, all the lines and dots are exactly the same thing: it’s just a big long list of sprites, which means just a big long triangle list, which means a single draw call. And because it’s just 1 draw call, that means it can merge with the next line, and the next. So instead of doing 21x2x2 = 84 draw calls, I can just do one. Just one. Awesome.

Amusingly, in my actual code, I was sloppy and didn’t even use the matching UVs perfectly, but it doesn’t matter:

/////////////////////////////////////////////////////////////////////////

void GUI_AnimatedGraphLine::RenderThickLines(V2* ppoints, int countpoints,
 float width, RGBACOLOR color)
{
      BaseSprite sp;
      // Sample only the very center of the circle texture, which is
      // effectively solid white, so these quads batch with the dot sprites.
      sp.SetUV(0.49f, 0.48f, 0.51f, 0.51f);
      sp.width = width;
      sp.SetVertexColor(color);
      // ...the rest of the function (omitted here) builds a rotated quad
      // between each pair of adjacent points and appends it to the same
      // vertex list as the dots, so it all goes out in a single draw call.
}

Just making this one change made a ridiculous difference to the frame rate for this screen, sending it from 59fps to 228fps, a huge win. It will be in the next update to the game.

It’s absolutely not the final word on speeding up that part of the UI though. I have some very un-optimised nonsense in there, to be honest. Those last little lines at the right of the chart, that connect the plotted lines to their matching text bars… those are done as a draw call separate from the main graph. Why? Simplicity of code layout, I guess. There is also a LOT of wasted CPU work going on. The position of all those lines and dots is calculated every frame, despite the fact that this window is not draggable or resizable, so they never change. I am 99% sure the game is always GPU-bound on screens like this, so I haven’t bothered looking into it, but it’s still a waste.

According to the GPA Frame Analyzer, the BIG waste of time in this screen is definitely the fact that I have massive overdraw going on with this whole dialog box. The window is a perfect, simple rectangle, so it’s easy to work out how much fill-rate was wasted drawing that background image behind it. There is zero transparency, so it would actually be fine to cookie-cutter out the top, bottom, left and right of this screen and then only draw the sections of the background image I needed. That could boost the FPS even more.
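As a rough sketch of what that cookie-cutter idea looks like (hypothetical function names, not my actual engine code), you would draw four strips of the saved background around the dialog instead of one full-screen quad:

// window is the dialog rectangle; screen is the full backbuffer rectangle.
// DrawBackgroundRegion() is a hypothetical helper taking left, top, right, bottom.
void DrawBackgroundAroundWindow(const RECT& screen, const RECT& window)
{
      DrawBackgroundRegion(screen.left,  screen.top,    screen.right, window.top);     // strip above
      DrawBackgroundRegion(screen.left,  window.bottom, screen.right, screen.bottom);  // strip below
      DrawBackgroundRegion(screen.left,  window.top,    window.left,  window.bottom);  // strip left
      DrawBackgroundRegion(window.right, window.top,    screen.right, window.bottom);  // strip right
}

Four slightly smaller draw calls instead of one, but a big saving in fill-rate, which is exactly what this chip is short of.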

In practice though, we always have to strike a balance between code readability and simplicity, and the goal of fast code. I could probably code all sorts of complex kludges that speed up the code but cause legibility hell. Arguably, I have made my code more confusing already, because there is no longer a simple bit of code where ‘the dotted line for this group gets drawn’. There is code that copies some data into a vertex buffer…and then you have to trust that I remember to actually draw it later in the frame…

Luckily I’m the only person looking at this code, so I don’t need to debate/explain myself to anybody else on this one. I’ll take happy players on old laptops over some minor inconvenience to me when following the code later.

I’m always surprised by how seemingly simple code can be slow on some GPUs. What’s your experience of hardware like this? Do you have an Intel HD 4000 GPU? What’s it like for games?

Optimization for fun!

I am well aware that my game Democracy 4 is not exactly slow, with huge framerate issues. However, optimization is fun! Or at least it should be, but in practice, getting profiling to work on remote PCs is not exactly easy. I have basically used every profiling tool imaginable and still have not found one that I think really does the job well…

I have basically wasted about an hour today trying to work out why I couldn’t get the Intel VTune Amplifier stuff to work with event-based profiling, and to get rid of this pesky error, clearly nonsense, about being ‘not able to recognize the processor’… until I finally realized that I actually have an AMD chip in my (relatively) new PC, so…yeah… That drove me to try out the AMD uProf profiler, which is something I had not used before.

It took me a moment to realize that this software, good though it is, does not suggest to you ‘hey, if you run me in administrator mode I will show you 50x more config options’, but luckily I worked that out. My first act was a brief run of Democracy 4, starting a new game then immediately going to the next turn. In the list of functions taking up all the time (and ignoring Windows system functions) I get this list:

Which is about what I would expect. The game is implemented as a custom-coded neural network structure, hence the terminology. Mostly everything is a neuron, and most of the processing is where each neural effect (the links between neurons) processes its equation, and then neurons do some math on their inputs and outputs.

The inner machinery of the neural network ultimately comes down to that top item there:

SIM_EquationProcessor::InterpretValue.

This is code that basically takes equations from the game’s CSV files, like this:

StateHealthService,0-(0.4*x),2

And actually calculates a value from that. There are 2,000 voters with about 10 connections each, pre-processed on a new game 32 times, so that’s 640,000 equations right there, plus all of the actual simulation stuff layered on top. In other words, that equation processor probably runs a million times on a new game, and each equation might have 5 values in it, so in the worst case 5 million values get interpreted when you click on ‘new game’.
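To make the equation format concrete: with x = 0.5, the expression 0-(0.4*x) evaluates to 0 – 0.2 = -0.2 (an illustrative value of x; the trailing 2 on that CSV line is a separate parameter, not part of the expression).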

Can I speed it up?

The first step is to see how stable these values are anyway, so I’ll do an identical profile run and check that the +/- errors on different profiling runs are small…

I think that’s pretty close. I definitely have numbers here that are in the same ballpark. So now let’s try some optimisations to speed this puppy up. Looking at the top function with a double-click gives me a whole bunch more data:

The function is much longer than this, but that’s mostly catering to relatively rare edge cases. Looking at the bits that actually have numbers on them shows pretty clearly that it’s pretty much all about the pesky strcmp() call. A separate piece of code has already parsed the full equation of 0-(0.4*x), so I have a bunch of char buffers for each variable, declared like this:

char Vals[MAX_VARIABLES][32]; // one parsed token per variable, e.g. "0", "0.4", "x"

The thing is, do I need the overhead of calling strcmp() when I am only really checking whether the first letter is x? Sadly I cannot JUST check that, because that would prevent us from having a named variable starting with an x. Let’s imagine this equation:

0-(0.4*xylophone)

Obviously not very likely, but theoretically possible. If the length of the buffer is 1, and the first letter is x, then that’s a hit. The question is: will inlining 2 manual checks be faster than a strcmp function call? Let’s replace that code with:

if(Vals[valindex][0] == 'x' && Vals[valindex][1] == '\0')

And check out the results:

Hmmm. Actually WORSE, as far as I can tell. So it looks like whether we strcmp or not, just checking the value of those two bytes at that point is slow, probably because that memory is not immediately to hand (a cache miss?). It’s notable that the code at line 232 is super fast by comparison, as it’s just checking a bool value we cached earlier. Maybe I should try that? When I parse the function, just keep a bool for each value, saying if it’s ‘x’ or not?
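Something like this, as a minimal sketch (the names BIsX and countvariables are illustrative, not my actual code):

// At parse time (runs once per equation), cache the check alongside Vals:
bool BIsX[MAX_VARIABLES];
for (int v = 0; v < countvariables; v++)
      BIsX[v] = (Vals[v][0] == 'x' && Vals[v][1] == '\0');

// In the hot path of InterpretValue(), the strcmp becomes a cached bool read:
if (BIsX[valindex])
{
      // ...substitute the current value of x here
}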

Whoahh. This looks like a pretty major speedup. 326 cycles versus 1,143. What the hell? Why didn’t I do this earlier? Let’s look at it line by line…

This is awesome. I then tried to make this code inline, but it didn’t seem to make things any better. I haven’t fully explored uProf yet, but it does do cool flame graphs:

Profiling UIs are great fun :D

Code bloat has become astronomical

There is a service I use that occasionally means I have to upload some files somewhere (who it is does not matter, as frankly they are all the same). This is basically a simple case of pointing at a folder on my hard drive and copying the contents onto a remote server, where they probably do some database-related stuff to assign that bunch of files a name, and verify who downloads it.

It’s a big company, so they have big processes, and probably get hacked a lot, so there is some security that is required, and also some verification that the files are not tampered with between me uploading them and them receiving them. I get that.

…but basically we are talking about enumerating some files, reading them, uploading them, and then closing the connection with a log file saying if it worked, and if not, what went wrong. This is not rocket science, and in fact I’ve written code like this from absolute scratch myself, using the WinINet API and PHP on a server talking to a MySQL database. My stuff was probably not quite as robust as enterprise-level stuff, but it did support hundreds of thousands of uploaded files (GSB challenge data), plus verification and download and logging of them. It was one coder working for maybe 2 or 3 weeks?
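For a sense of scale: the client side of an uploader like that really is just a handful of WinINet calls. Here is a bare-bones sketch, with a hypothetical host and endpoint and no error handling, just to show how little code the core of it needs (this is not the original GSB code):

#include <windows.h>
#include <wininet.h>
#pragma comment(lib, "wininet.lib")

// Minimal HTTP POST of a memory buffer to a server-side PHP script.
bool UploadBuffer(const char* pdata, DWORD datalen)
{
      HINTERNET hinet = InternetOpenA("uploader", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
      HINTERNET hconn = InternetConnectA(hinet, "www.example.com", INTERNET_DEFAULT_HTTP_PORT,
            NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
      HINTERNET hreq = HttpOpenRequestA(hconn, "POST", "/upload.php", NULL, NULL, NULL, 0, 0);
      BOOL bok = HttpSendRequestA(hreq, NULL, 0, (LPVOID)pdata, datalen);
      InternetCloseHandle(hreq);
      InternetCloseHandle(hconn);
      InternetCloseHandle(hinet);
      return bok == TRUE;
}

Add file enumeration, a hash for tamper-checking, and some retry logic, and you are still looking at one small exe, not thousands of files.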

The special upload tool I had to use today was a total of 230MB of client files, and involved 2,700 different files to manage this process.

You might think that’s an embarrassing typo, so I’ll be clear. TWO THOUSAND SEVEN HUNDRED FILES and 230MB of executables and supporting crap, to copy some files from a client to a server. This is beyond bloatware, this is beyond over-engineering, this is absolutely totally and utterly, provably, obviously, demonstrably ridiculous and insane.

The thing is… I suspect this uploader is no different to any other such software these days, from any other large company. Oh, and BTW, it gives error messages, and right now, it doesn’t work. Sigh.

I’ve seen coders do this. I know how this happens. It happens because not only are the coders not writing low-level, efficient code to achieve their goal, they have never even SEEN low-level, efficient, well-written code. How can we expect them to do anything better when they do not even understand that it is possible?

You can write a program that uploads files securely, rapidly, and safely to a server in less than a twentieth of that amount of code. It can be a SINGLE file, just a single little exe. It does not need hundreds and hundreds of DLLs. It’s not only possible, it’s easy, and it’s more reliable, and more efficient, and easier to debug, and…let me labor this point a bit… it will actually work.

Code bloat sounds like something that grumpy old programmers in their fifties (like me) make a big deal out of, because we are grumpy and old and also grumpy. I get that. But us being old and grumpy means complaining when code runs 50% slower than it should, or is 50% too big. This is way, way, way beyond that. We are at the point where I honestly do believe that 99.9% of the code in files on your PC is absolutely useless and is never even fucking executed. It’s just there, in a suite of 65 DLLs, all because some coder wanted to do something trivial, like save out a bitmap, and had *no idea how easy that is*, so they just imported an entire bucketful of bloatware crap to achieve it.

Like I say, I really should not be annoyed at young programmers doing this. It’s what they learned. They have no idea what high-performance or constraint-based development is. When you tell them the original game Elite had a sprawling galaxy, space combat in 3D, a career progression system, trading, and thousands of planets to explore, and it was 64k, I guess they HEAR you, but they don’t REALLY understand the gap between that, and what we have now.

Why do I care?

I care for a ton of reasons, not least being the fact that if you need two thousand times as much code as usual to achieve a thing, it should at least work. But more importantly, I am aware of the fact that 99.9% of my processor time on this huge stonking PC is utterly useless. It’s carrying out billions of operations per second just to sit still. My PC should be in super-ultra low power mode right now, with all the fans off, in utter silence, because all that’s happening is some spellchecking as I type in WordPress.

Ha. WordPress.

Computers are so fast these days that you should be able to consider them absolute magic. Everything you could possibly imagine should happen within the 60th of a second between refreshes. And yet, when I click the volume icon on my Microsoft Surface laptop (pretty new), there is a VISIBLE DELAY as the machine gradually builds up a new user interface element, eventually works out what icons to draw, has them pop in, and makes them go live. It takes ACTUAL TIME. I suspect half a second, which in CPU time is like a billion fucking years.

If I’m right and (conservatively) we have 99% wastage on our PCs, then we are wasting 99% of the computer’s energy consumption too. This is beyond criminal. And to do what? I have no idea, but a quick look at Task Manager on my PC shows a metric fuckton of bloated crap doing god knows what. All I’m doing is typing this blog post. Windows has 102 background processes running. My Nvidia graphics card currently has 6 of them, and some of those have sub-tasks. To do what? I’m not running a game right now; I’m using about the same feature set from a video card driver as I would have done TWENTY years ago, but 6 processes are required.

Microsoft Edge WebView has 6 processes too, as does Microsoft Edge itself. I don’t even use Microsoft Edge. I think I opened an SVG file in it yesterday, and here we are: another 12 useless pieces of code wasting memory, and probably polling the CPU as well.

This is utter, utter madness. It’s why nothing seems to work, why everything is slow, why you need a new phone every year, and a new TV to load those bloated streaming apps, which must also be running code this bad.

I honestly think it’s only going to get worse, because the big, dumb, useless tech companies like Facebook, Twitter, Reddit, etc. are the worst possible examples of this trend. Soon every one of the inexplicable thousands of ‘programmers’ employed at these places will just be using machine learning to copy-paste bloated, buggy, sprawling crap from GitHub into their code as they type. A simple attempt to add two numbers together will eventually involve 32 DLLs, 16 Windows services and a billion lines of code.

Twitter has two thousand developers. TweetDeck randomly just fails to load a user column. It’s done it for four years now. I bet none of the coders have any idea why it happens, and the code behind it is just a pile of bloated, copy-pasted bullshit.

Reddit, when suggesting a topic title from a link, cannot cope with an ampersand, a semicolon, or a pound symbol. It’s 2022. They probably have 2,000 developers too. None of them can make a text parser work, clearly. Why are all these people getting paid?

There was a golden age of programming, back when you had actual limitations on memory and CPU. Now we just live in an ultra-wasteful pit of inefficiency. It’s just sad.

Anatomy of a rare, and weird heisenbug in Democracy 4

Heisenbugs are bugs in code that seem to go away, or change, when you look at them in a debugger. They are the worst possible bugs, because they actively resist being found, or even observed, by the coder responsible. Any kind of bug that can be reproduced on a developer’s machine, in their development environment, can easily be analyzed, stepped through, and reasoned with. But heisenbugs are a nightmare. I just (I think) fixed one, and certainly fixed A bug, even if not this one. I thought you might enjoy the process…

First, some background. Democracy 4 is the fourth and latest in my series of political strategy games. They are text- and diagram-heavy, with a properly unique user interface, in 2D, and all of the code is custom, with a custom engine originally coded by me, with a Unicode text implementation and vector rendering system coded by Jeff from Stargazy Studios. This means all of the UI elements you take for granted in the middleware you use were custom coded by us. In this case, the bug is my fault, in code written by me. Here is a screenshot:

The key thing that matters there is the text block under ‘Food Stamps’. This is a policy screen in the game, and that text is the description. Depending on the length of the description (partly influenced by the language you play in), and the screen resolution, it might be that all the text fits in the available space, or it does not. That means sometimes there is a scrollbar to the right of that text, and sometimes it’s not there. As you would imagine, the scrollbar works just like any Windows scrollbar. That means it responds to mousewheels, and also to clicks on the (very small) top and bottom buttons, for example.

…anyway…

VERY rarely, and ONLY when I was playing the game outside the debugger, and only in very arcane sets of circumstances that seemed totally, utterly random… screens like this (but not just this one) would occasionally not let me click on some elements in the interface. They would not respond at all… BUT… that block of text would scroll upwards, like it was being scrolled through. It was like completely unrelated events (clicking somewhere else) activated a scroll function on a UI element I was nowhere near.

This was, of course, absolutely infuriating, because it was REALLY rare. And the times I tried crazily to reproduce it in the debug build, or even in the release build, but launched from Visual C++, it would not happen. And there were NO reliable steps to reproduce it. In fact, even quitting a screen when it DID happen, and then returning to that screen, would make it go away.

What the hell?

After a LOT of time, I realized two interesting things. Firstly, when it happened, the game DID make the click sound associated with a mouse click (so the game IS registering that I clicked somewhere), and the scrolling only happened with text inside a specific UI element called a GUI_TextContainer. Although I did not realize it at the time, it was also the case that it ONLY happened where that text container did NOT have a visible scrollbar.

The clue to the bug lies in a closeup of the scrollbars used:

Like all such bars, there is a clickable ‘scroll up’ button, a draggable bar with page-up/page-down regions above and below it, and a clickable ‘scroll down’ button.

When I created a new GUI_TextContainer, I also created a new GUI_ScrollingWindow, which is a container with all of those elements inside. Like any child element, I added it to the text container object with AddChild() to ensure it responds correctly and is processed, blah blah. Then, later, when the text window is initialized with its position and the relevant translated text, I do a lot of calculating to work out if that text *is* going to fit in the available space. If not, I set the scrollbar in the correct position on the right, and make a note that the text does not fit (setting BFits to false). Then, when drawing this UI element, I can skip a lot of nonsense I don’t need if the text fits, and render it simply as a word-wrapped string. If it does NOT fit, I draw the scrollbar, and draw the text in a different way, cropping the text render to the available space given the scroll position, and so on, blah blah.

All of this sounds reasonable. But it was a big mistake.

I was correctly NOT drawing the scrollbar if it was not needed (why bother! the text fits!), but I had pre-emptively added it as a child of the window anyway. Worse still, I had never initialized the position of the scrollbar and its sub-components AT ALL. This means that in debug / release-from-the-development-environment builds, all of the relevant variables in that window were getting set to zero, but in a live ‘customer’ environment, the initial positions of those sub-components could be ANYTHING.

So what could happen, really rarely, is that the ‘scroll up’ or ‘page up’ parts of the scrollbar might have coordinates like -32054,-5128,27140,41543. In this case, those clickable elements filled the whole screen… but were INVISIBLE. Thus, I have a huge UI mess-up, but I cannot see it, because my ‘quick and easy path’ that checks BFits doesn’t draw it.

So, very rarely, I create a screen with an invisible scroll-up button that fills the entire screen, and happily responds to every click with a scroll-up command. It even nicely plays the button click sound. Luckily, these buttons do not handle keys, so the escape key still quits the whole screen, and the next time it gets created, I’m probably lucky and the invisible bar has moved somewhere harmless.

I’ve changed the code now to be efficient, so that I do not even THINK about creating this scrollbar until I need it, and otherwise it’s NULL. I also make sure to safely delete it before creating it, in case somehow I go mad and initialize the same text container twice with different blocks of text. I could also have fixed it by initializing the position of the scrollbar safely to 0,0,0,0, but I like the neatness of ensuring I don’t have a pointless orphan UI element at any point.
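In sketch form, the fix looks something like this (simplified, with hypothetical member and helper names; the real class does rather more):

void GUI_TextContainer::SetText(const std::string& text)
{
      // Safely discard any scrollbar left over from a previous initialization
      // (the real code would also remove it from the child list first).
      delete PScrollbar;
      PScrollbar = NULL;

      BFits = CalculateTextFits(text); // hypothetical helper
      if (!BFits)
      {
            // Only create the scrollbar when it is actually needed, and position
            // it immediately, so no child ever carries garbage coordinates.
            PScrollbar = new GUI_ScrollingWindow();
            PScrollbar->SetPosition(GetScrollbarRect()); // hypothetical helper
            AddChild(PScrollbar);
      }
}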

This was a classic dumb error. I added an element when I shouldn’t have, and its member variables were not safe. My own dumb slackness. I document it here to illustrate how the weird unrelatedness of a bug (‘clicking down here scrolls text up there, sometimes, only in German, on a Wednesday, etc…’) eventually makes total sense.

Hundreds of thousands of people have played probably over a million hours of the game, with this bug in it. No code is bug-free, but if you do things right, the ones left in are pretty arcane.

Speeding up Democracy 4 simulation processing (atof is slow)

I just made a major speedup to the loading/next turn/new game code in Democracy 4, and thought I may as well share the gains with readers of this blog :D. It kind of makes me look a bit like an idiot to admit this is a big speedup (I should have known), but anyway, knowledge sharing (especially on optimization) is always good.

Fundamentally, the code in Democracy 4 is structured like a neural network. Without going into tons of details, every object in the game (policy, dilemma option, voter, voter group, situation…) is modeled as a neuron, which is basically just a named object connected by a ton of inputs and outputs to other neurons. You can run through the inputs and outputs, process the values and get a current value for a neuron at any time, which is done for every one of them, every turn.

Also… when we start a new game, I need ‘historical’ data for each value, so the game pre-processes the whole simulation about 30 times before you start, to give us meaningful background data, and to ensure the current simulation sits at a reasonable equilibrium.

Those connections to neurons should probably be called dendrites or whatever, but I call them SIM_NeuralEffect. They contain basically the names of a host and a target (resolved to actual C++ pointers to objects), and an equation explaining the connection, and some other housekeeping stuff.
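Conceptually, a neural effect is something like this (a simplified sketch; the real class has more housekeeping, and SIM_Neuron here stands in for whatever the real base object is called):

class SIM_NeuralEffect
{
public:
      std::string HostName;    // name of the neuron at one end of the link
      std::string TargetName;  // name of the neuron at the other end
      SIM_Neuron* PHost;       // names are resolved to real C++ pointers at load time
      SIM_Neuron* PTarget;
      std::string Equation;    // e.g. "0+(0.22*x)"
      // ...plus assorted housekeeping
};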

At the heart of it all is an equation processor which lets you write this:

OilPrice,0+(0.22*x)*GDP

And actually turn it into a value for that effect, given the current situation. The equation processor runs each turn, on every neural effect, and there are LOTS of them. Thus, if the equation processor is slow, it’s all slow.

I just installed a new version of the free VTune profiler from Intel. It’s not recognizing my ultra-amazing new chip, so it’s only doing user-mode sampling, but nonetheless it draws pretty flame charts like this:

Before I optimised

This is showing the code inside that 30-turn pre-game processing, called PreCalcCoreSimulation. Lots of stuff goes on, but what I immediately noticed was all this atof stuff. Omgz. That’s a low-level C runtime function, not one of mine, and it seems to be slowing down everything. This is a HUGE chunk of the whole equation processing code. How is this possible?

Now, you may think ‘dude, atof is pretty standard. No way are you going to be able to make that code faster’, to which I reply ‘dude, obviously not. But the fastest code is code that never runs’.

All those calls to atof are absolute nonsense.

Looking back at the equation above (OilPrice,0+(0.22*x)*GDP), there is obviously some stuff in there which is volatile. I do not know what the current value of x or GDP is, so I will need to grab their pointers and query them when I process the equation, but the rest of that stuff is static. That * is going to remain *, and that 0 and 0.22 will remain fixed too. This is the key to a roughly 33% speedup of the whole processing in the game.

I actually did know to look into this, and I do not do manual text parsing of the equation each time. I parse each equation on startup, and stick the various values into buckets, so I am not wasting time each turn. But one thing I had not done is store the atof() outcomes. I was still storing variables[0] as the string ‘0’ instead of just the number 0.
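The fix is to do the conversion once at parse time and cache the float, something like this sketch (the names here are illustrative, borrowing the Vals/MAX_VARIABLES style from earlier; IsNumericToken and LookupVariableValue are hypothetical helpers):

// At parse time, convert each constant token exactly once:
float CachedVals[MAX_VARIABLES];
bool  BIsConstant[MAX_VARIABLES];
for (int v = 0; v < countvariables; v++)
{
      BIsConstant[v] = IsNumericToken(Vals[v]);
      if (BIsConstant[v])
            CachedVals[v] = (float)atof(Vals[v]); // the only atof call, ever
}

// At evaluation time (which runs vast numbers of times), no atof at all:
float value = BIsConstant[v] ? CachedVals[v]
                             : LookupVariableValue(v);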

Now, you may think atof is fast. It’s fast enough for most cases, but it’s WAY slower than just accessing the value of a floating-point number that’s already in RAM, cached happily in the equation processor itself. Here is the new diagram:

Faster!

The difference, on my superfast PC, for the whole precalc simulation function: 0.85 vs 1.50 seconds. This probably makes me sound pedantic as hell, but I’m rocking a stupidly new and pricey PC, so there are likely people playing D4 on laptops a fifth of the speed. I might be knocking a whole 3 seconds off the new-game time for some players!

Also, and worth remembering: I just saved doing a ton of processing, which means a ton of CPU time, power, and heat. If you can make your game run quieter, cooler, and faster on players’ PCs, you absolutely should do it.