Game Design, Programming and running a one-man games business…

An experienced coder’s approach to fixing business mistakes

I was on the receiving end of an email today that was basically a wrong invoice. A company that has sent me an incorrect invoice in the past has now sent it another 3 times, with increasing levels of angry demands and warnings about credit ratings. This raised my heart rate and anger to distressing levels, but this is not a rant about how some companies are just awful; that would be too simple. This is more of an investigation into why stuff like that happens, and how software engineering can teach you how to not fuck up like that.

I have been coding for 44 years now, since age 11. In all that time I’ve only coded in BASIC, C, C++ and some PHP, but 90% of it has been C++, so that’s the one I’m really good at. I don’t even use all of C++, so let’s just say that when it comes to the bits I DO use, I’m very experienced. This is why I use the term software engineering, and not coding or hacking, terms which imply a sort of amateurish copy-pasting-from-the-internet style of development. I’ve both worked on large game projects and coded relatively large projects entirely from scratch, including patching, re-factoring and a lot of debugging. I coded my own neural networks from scratch before they were really a popular thing. Anyway…

Take the simple example of my experience today. An invoice is wrong, and sent to a customer anyway. This has happened three times now, despite the customer replying in detail, with sources and annotations and other people copied in, AND including replies from other people at the offending company who openly admit it’s wrong and say it will be fixed. Clearly this is a smorgasbord of incompetence, and irritating. But why do companies do this? What actually goes wrong? I think I know the answer: once a problem like this is finally flagged up (which often involves threats from someone like me to visit the CEO’s home address at 3AM to shout at them), the company ‘fixes’ the mistake, maybe apologizes, and everyone gets on with their life…

This is the mistake.

Companies want to ‘fix’ the mistake in the quickest, easiest and cheapest way possible. If an invoice was accidentally doubled, then they ‘go into the system’ (they just mean the database) and halve it. Problem solved, customer happy. Now I don’t give a damn about the sorry state of this company’s awful internal processes, but I am aware that companies like this try to fix things in this way, rather than the true software engineering way. So how SHOULD it be done?

Before I was a full-time computer programmer I worked in IT. I had a bunch of jobs, gradually going up in seniority. I saw everything from a ‘patch it and who cares’ mentality (consumer PC sales) to ‘the absolute core problem must be fixed, verified, documented and reported on with regards to how it could have happened’ (stock trading floor software). I can imagine that most managers think the first approach is faster, cheaper and better, and the second is only needed for financial systems or healthcare or weapons systems, but this is just flat-out wrong.

When a mistake happens, fixing it forever, at a fundamental level, will save way, way more time in the lifetime of the company than patching over a bug.

Spending a lot of time in IT support taught me that there are many more stages to actually finding the cause of a problem than you might think. Here is a breakdown of how I ended up thinking about this stuff:

Stage 1: Work out what has actually happened. This is where almost everyone gets it wrong. For example, in my case I got an email for a £1k bill. This was not correct. That’s the error. That’s ALL WE KNOW RIGHT NOW. You might want to leap immediately to ‘the customer was charged too much’ or ‘the customer’s invoice was doubled’, but we do not know. Right now we only know the contents of the email are wrong. Is that a problem with the email generation code? Does the email match the amount in the credit control database? Until you check, you are just guessing. You might spend hours trying to fix a database calculation error you cannot find, until finally realizing the email-writing code is borked…

Stage 2: Draw a box around the problem. This was so helpful in IT. Even when you know what you know, you need to be able to define the scope of the problem. Again, right now all we know is that the customer ‘cliff’ got a wrong invoice. Is that the problem? Or is it that every invoice we send out is wrong? Or was it only invoices on a certain date? Or for a certain product? Until we check a bunch of other emails we do not even know the scope of the problem. If it’s systemic, then fixing it just for cliff is pointless.

Stage 3: Work out where things went wrong. You need to find the last moment when everything worked. Was the correct amount applied to the product in the database? Was the correct quantity selected when entering the customer’s order? Was the amount correct right up until the invoice was triggered? Unless you know WHERE something went wrong, you cannot work out WHAT or WHY.

Stage 4: Develop a theory that fits all available data. You find a line of code that seems to explain the incorrect data. This needs to not only explain why it screwed things up for cliff, but also explain why the invoice for dave was actually ok.

Stage 5: Test a fix. You can now fix the problem. Hurrah. Change the code or the process and run it again. Is everything ok? If so, you may be preparing a victory lap, but this is laughably premature.

Stage 6: Reset back to the failed state. Now undo your fix and run it again. Seems redundant, doesn’t it? Experience has taught me this is vital. If you REALLY have the fix, then undoing your fix should restore the problem. Maybe 1 time in 100 it will not: the ‘random’ problem just fixed itself for some indeterminate period, and your fix was a red herring. A REAL fix is like a switch. Turn it on and it’s fixed; turn it off and it’s broken. Verify this.

Stage 7: Post-mortem. This is where you work out HOW this could ever have happened. This is actually by far the most important stage of the whole process. If you just fix some bad code, then you have fixed one instance in one program. The coder who made that mistake will make it again, and again, and again. The REAL fix is to make it IMPOSSIBLE to ever have that error again. This takes time, and analysis. The solution may be better training, or it may mean changing an API to make it impossible to process bad data, as sketched below. It might mean firing someone incompetent. It might mean adding a QA layer. Whatever the mistake was, if you do not go through this stage, you have failed at the task of fixing the problem.
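As a toy illustration of what ‘changing an API to make it impossible to process bad data’ can mean in code, here is a minimal C++ sketch. It is entirely my own invention, nothing to do with the offending company’s actual system:

#include <stdexcept>

// Wrap the value in a type that validates once, at the boundary.
// Raw numbers can no longer wander through the system unvalidated.
class InvoiceAmount
{
public:
	explicit InvoiceAmount(long pence)
	{
		// sanity bounds: zero, negative or absurdly large amounts die right here
		if (pence <= 0 || pence > 100000000)
			throw std::invalid_argument("implausible invoice amount");
		Pence = pence;
	}
	long AsPence() const { return Pence; }
private:
	long Pence; // validated exactly once, on construction
};

// Any function taking an InvoiceAmount cannot receive an unvalidated
// number; the compiler enforces that the check happened.
void SendInvoiceEmail(const InvoiceAmount& amount);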

Coders and organizations that work like this have fewer bugs, fewer mistakes. They need smaller QA teams, and less time devoted to fixing mistakes and implementing patches. Their complaints department is tiny, and yet still provides great communication, because there are hardly any complaints. In short, companies that run like this are awesome, popular, profitable, and great places to work. Why is everywhere not like this? Because it requires some things:

  1. A culture of giving the people fixing problems free rein to follow the problem everywhere. If only the department that sends the emails is allowed to get involved in complaints about email, then database errors or coding errors will never get fixed, because they CAN never be fixed. The phrase ‘not my department’ needs to be banned.
  2. A culture of accepting that someone is STILL working on a proper fix, even after the complaint has been handled and the customer is happy. When there is a bug, there are TWO problems to fix: the customer’s bad experience AND the company process failure that allowed it. Managers who pull people away from bug-fix post-mortems to fix the next thing right now are a curse.

Of course your mileage may vary. Be aware that I am a hypomanic workaholic who works for himself, so my ‘advice’ may not be applicable to everybody, but I hope this was interesting and thought-provoking anyway. Too many coders are just copy-pasting from Stack Overflow or asking ChatGPT to quickly patch their bad code. Working out how to fix things properly is a worthwhile pursuit.

Re-thinking painters algorithm rendering optimisation

Before I even start this post, I should point out that currently the project I am coding is for fun. I actually really ENJOY complex programming design challenges, unlike many people, so the comment ‘this is done for you in X’ or ‘there is middleware that solves this without you doing anything in Y’ is irrelevant to me, and actually depressing. Whatever happened to the joy of working on hard problems?

Take an imaginary game that has a lot of entities in it. Say 500. Each entity is a sprite. That sprite has an effect drawn on top of it (say it’s a glowy light) and a different effect below it (say it’s another glowy light). The objects have to be rendered in a sensible way, so that if one object passes over another (for example) the glowy lights ‘stack’ properly, and you don’t see aberrations where the lights from something are seen through something else. Here is a visualisation:

In this case red is the ‘underglow’, grey is the sprite, and yellow is the overglow. The bottom-right duo shows the issue where things have to be rendered in the correct order. In a 3D game with complex meshes and no antialiasing, you just use a z-buffer to draw all this stuff and forget about it, but if you are using sprites with nice fuzzy edges (not horrible jaggy pixels), you really need to use the classic ‘painter’s algorithm’ of drawing from back to front for each object. There are clear performance problems:

A naïve way to code this would be my current system, which handles each object in 3 draw calls. So we set up the blend mode for the underglow and draw that, then set a normal blend mode and draw the sprite, then a glowy blend mode to draw the top glow, then repeat for every other entity. The trouble is, with 500 entities, that’s 1,500 draw calls simply to draw the most simple part of the game…
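To make the cost concrete, here is a minimal sketch of that naïve loop. All the names here (Entity, SetBlendState, DrawSprite and so on) are illustrative stand-ins for my engine calls, not the real code:

#include <vector>

enum class Blend { Normal, Additive };

struct Entity { /* position, textures, etc. */ };

// hypothetical engine calls
void SetBlendState(Blend b);
void DrawUnderglow(const Entity& e);
void DrawSprite(const Entity& e);
void DrawOverglow(const Entity& e);

// Naive version: 3 state changes and 3 draw calls per entity.
// With 500 entities that is 1,500 draw calls per frame.
void DrawEntitiesNaive(const std::vector<Entity>& entities)
{
	for (const Entity& e : entities)
	{
		SetBlendState(Blend::Additive);
		DrawUnderglow(e);   // glow below the sprite

		SetBlendState(Blend::Normal);
		DrawSprite(e);      // the sprite itself

		SetBlendState(Blend::Additive);
		DrawOverglow(e);    // glow on top
	}
}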

Unfortunately this only scratches the surface because there may also be other layers of things that need drawing on top of each entity, and between drawing each sprite. However there is a major optimisation to be had…. <drumroll>. Actually my game really works like this:

Where there is a grid, and <generally speaking> the entities stay in one grid box at a time. WITHIN that grid box, all sorts of stuff may happen, but items in the top left grid box will not suddenly appear in the bottom right grid box. In other words I can make some clever assumptions, if only I can find a way to generalize them into an algorithm.

For example I could draw the underglow of one object in each grid box all together, in a single draw call, like this:

So this is suddenly one draw call instead of 8. I then do the same for the sprites, then the overglow, and then do the other object in each grid square as a second pass (basically a second Z layer). Assuming 8 grid squares, 2 per square and 3 passes, that’s 48 draw calls for the naïve method and 6 for the new method. Pretty awesome! However, that’s oversimplifying things. Firstly, in some cases there are 16 entities in each grid square, in others just 1. Secondly, in some cases items ‘spill’ over into the next grid square, so I need to ensure I do not get an anomaly where 2 objects overlap just slightly and thus z-fight when drawn…
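Here is a sketch of that batched version, under the assumption that the engine can gather the same effect layer from every grid box into one call (Grid, SpriteBatch and DrawBatch are hypothetical names, not a real API):

struct SpriteBatch { /* vertex data for many quads sharing one texture */ };

struct Grid
{
	// gather the given Z layer's quads from every grid box at once
	SpriteBatch CollectUnderglows(int layer) const;
	SpriteBatch CollectSprites(int layer) const;
	SpriteBatch CollectOverglows(int layer) const;
};

void SetBlendState(Blend b);        // as in the earlier sketch
void DrawBatch(const SpriteBatch&); // one draw call

// 2 layers x 3 passes = 6 draw calls, however many grid boxes exist.
void DrawGridBatched(const Grid& grid, int layersPerBox)
{
	for (int layer = 0; layer < layersPerBox; layer++)
	{
		SetBlendState(Blend::Additive);
		DrawBatch(grid.CollectUnderglows(layer)); // all boxes, one call

		SetBlendState(Blend::Normal);
		DrawBatch(grid.CollectSprites(layer));

		SetBlendState(Blend::Additive);
		DrawBatch(grid.CollectOverglows(layer));
	}
}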

Like everything in game coding, the simple code design always accelerates towards spaghetti, but I would like to do my best to avoid that if possible…

In order to decide what is drawn when, I basically need 2 pieces of metadata about each object. Item 1 is which grid square it is in, and item 2 is the Z value of that object, where I can cope with say 16 different Z ‘bands’ for the whole world, meaning a group of 16 entities in a square will definitely be drawn with 16 different draw calls, and thus be painter-algorithm-ed ok.
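One possible encoding of those two pieces of metadata is a single sort key with the Z band in the high bits, so that one sort gives all the band-1 objects (grid square by grid square), then all the band-2 objects, and so on. This is an illustrative guess at a data structure, not the actual game code:

struct RenderMeta
{
	int GridSquare; // index into the world grid
	int ZBand;      // 0..15: the painter's-algorithm band

	// Z band in the high bits, grid square in the low bits
	unsigned int SortKey() const
	{
		return ((unsigned int)ZBand << 16) | (unsigned int)GridSquare;
	}
};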

I also want to do this without having to do insane amounts of sorting and processing every frame, for obvious efficiency reasons…

So my first thoughts on this are to handle everything via Z sorting, and make the list of entities the de-facto Z sort. In other words, when I build up the initial list of entities at the start of the game, I have pre-sorted them into Z order. So instead of a single list of randomly jumbled Z orders, I get this situation, where the black number is the Z position, and the blue one references the grid square:

So my now pre-sorted object list goes like this:

A1 B1 C1 D1 E1 F1 G1 H1 A2 C2 F2 etc..

I can then do batches of draw calls, so my actual code looks like this (the one-off sort that sets this ordering up is sketched after the list):

  • DrawUnderGlowFor1()
  • DrawSpritesFor1()
  • DrawOverglowFor1()
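That one-off sort might look something like this, assuming each entity carries the RenderMeta sketched earlier (again illustrative, not the shipping code):

#include <algorithm>
#include <vector>

struct GameEntity
{
	RenderMeta Meta;
	// ...sprite, position, etc.
};

// Done once at game start. Afterwards each Z band is a contiguous run
// in the list, so each DrawXxxFor1()-style call can be a single
// batched draw over that run, with no per-frame sorting at all.
void SortEntitiesForBatching(std::vector<GameEntity>& entities)
{
	std::sort(entities.begin(), entities.end(),
		[](const GameEntity& a, const GameEntity& b)
		{ return a.Meta.SortKey() < b.Meta.SortKey(); });
}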

The nature of the game means that, generally speaking, Z order can be fixed from the start and never change, which simplifies things hugely. The bit where it becomes a real pain is the overlap of objects into adjacent grid squares. I would also like a super-generic system that can handle not just these fixed entities, but anything else in the game. What I really need is a two-layered graphics engine:

Layer 1 just sends out a ton of draw calls, probably with some sort of metadata. This is like a draw call, plus the relevant texture, plus the blend state for the object. This is done in a naïve way with no thought to anything else.

Layer 2 processes layer 1, does some analysis and works out how to collapse draw calls. It can calculate bounding boxes for everything that is drawn, see what overlaps with what and what can be batched, and then decides on a final ‘optimised’ series of calls to DirectX. This is done before any textures get set or draw calls are made. It then hands all this off, ideally to a new thread, which just streams those draw calls to DirectX while the other threads get on with composing the next frame.
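The rough shape of that two-layer idea, as I currently imagine it (every name here is my sketch of a possible design, not an existing engine API):

#include <utility>
#include <vector>

struct Texture;                     // opaque engine texture
enum class BlendState { Normal, Additive };
struct Rect { float x, y, w, h; };  // screen-space bounding box

// Layer 1 output: one naive draw request, with enough metadata
// (texture, blend state, bounds) for layer 2 to reason about it.
struct DrawCommand
{
	const Texture* Tex;
	BlendState Blend;
	Rect Bounds;
};

class DeferredRenderer
{
public:
	// Layer 1: gameplay code submits naively, in any order.
	void Submit(const DrawCommand& cmd) { Commands.push_back(cmd); }

	// Layer 2: analyse overlaps, collapse compatible calls into
	// batches, then hand the optimised list to a submission thread
	// while the game threads compose the next frame.
	void Flush()
	{
		std::vector<DrawCommand> optimised = Collapse(Commands);
		SendToSubmitThread(std::move(optimised));
		Commands.clear();
	}

private:
	std::vector<DrawCommand> Commands;
	std::vector<DrawCommand> Collapse(const std::vector<DrawCommand>& in);
	void SendToSubmitThread(std::vector<DrawCommand>&& cmds);
};

The bypass hotkey mentioned below would then just mean Flush() skipping Collapse() and streaming the raw commands straight out.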

I have attempted such a system once before, but I was a less experienced coder then, working full-time, and on a (self-imposed) deadline to ship the game, rather than obsessing over noodling with rendering engine design. So hopefully this time I will crack it, and then have something super cool that can handle whatever I throw at it :D. Also, the beauty of this design is that it would be easy to bypass stage 2 and just render everything as it comes in, so I could even hook up a hotkey to watch the frame rate difference and check I’m not wasting all my time :D.

Gratuitous Space Shooty Game released!!!

And you probably thought I wasn’t still making games right?

After the long and intense development of Democracy 4, which is a HUGE sprawling game with a LOT of code, and a ton of content, and is now in about 10 languages and has 3 expansion packs… it was nice to be able to make something small, and simple, and not at all commercial or serious. With that in mind I started messing around making a space-invaders style vertical shooter, using the art assets I have from an older game of mine: Gratuitous Space Battles.

GSB is pretty old now, but TBH the spaceship graphics for it still look incredibly good to my eyes. I generally think it’s very wasteful that the games industry hires so many people to make music, SFX and graphics, and then makes a single game with them, never to be re-used in any way. Frankly a spaceship is a spaceship, whether it’s used in an RTS, a shooter or a turn-based grand strategy game.

I know some people worry that gamers will bombard you with abuse for daring to use the same artwork in another game, because they will feel ‘cheated’. This strikes me as utter nonsense. Sensible re-use of assets just makes sense. As a general principle I hate waste, and I love efficiency. Also, not doing something because a tiny, tiny percentage of vocal gamers may complain about it is definitely a losing strategy in gamedev. There are always people who complain about any choice you make.

After working on this game for a bit, and initially thinking it was a little throwaway thing I’d probably keep to myself, I started to really enjoy its development. I have never made a vertical shooter before, but I loved Star Monkey, which is very old, and I am old enough to remember the first Space Invaders arcade cabinets as a kid, as well as Galaxian (far superior imho), and then Phoenix and the rest. I also spent a lot of time playing Astrosmash on our Intellivision console as a kid.

Gratuitous Space Shooty Game is a bit of a mashup of a lot of those shooters, with some extra ideas that occurred during development. My wife playtested it a lot, and HATED the asteroids, so I added a repulsor beam to keep them away from you. Once implemented, it became a very cool new gameplay mechanic, as it allowed you to ‘balance’ attacking ships above you to get some extra shots in before they leave the screen.

During development I experimented with a bunch of ideas, and after a lot of playtesting, I’m happy with what I chose to do. The fact that you can accidentally shoot ship bonuses gives the player an incentive to keep moving and not risk a volley destroying a bonus. Penalizing you for every ship that escapes, INCLUDING the left-right ‘saucer’ ships also adds to the challenge. Making it so that the best power-ups are only dropped by those ships was also a good move from a design POV. Adding friendly ships you have to avoid is an evil mechanic, but its still in there!

In the end I went with 25 levels, and the levels get slightly longer as you go along. I don’t do any adaptive difficulty stuff, although I considered it. I do offer 3 difficulty levels from the start, though. The top one is seriously hard. Between levels you get to spend your cash, earned from shooting aliens and collecting bonuses (plus a cool 10% bonus if nobody escapes), on upgrades for your ship.

Right now the game is only on itch, for $3, with a suggestion of $5 if you want. It will not be a big financial success :D. Because I was doing it for fun, it’s currently Windows only, with a fixed aspect ratio of 1920×1080, or scaled to fit fullscreen. The windowed option literally went in the day before release :D. It’s English only for now. I may try a Google-translate pass for the limited text at some future point if I do an update to it.

So there you go, it’s another game by me! The first non-strategy one for a long time. I’m quite proud of it. It’s a fun, short, laptop-friendly game you can play in a lunch hour or over multiple coffee breaks. If you like the look of it, get a copy!

Programming in just ONE language should be lauded.

I recently read the news that garbage collection support, which was added to C++, is now actually being removed from it. Apparently most people didn’t use it, or even know it was officially added, so it is no great loss. It always shocks me to read articles about C++ with a version number, because as far as I am concerned, C++ has no version number and never will, in the same way that a language such as English has no ‘version number’. I’m 54 and it’s pretty rare that I add a new word to my English vocabulary, and it’s even rarer for me to learn something new about C++ that I start to use in my code.

Back in the early days of modern computing, I worked in IT. My CV was basically: CNA, MCSE. That was it. That was all you needed to earn £54k a year in IT 30 years ago. There were basically 2 big computer systems, from Microsoft and Novell, and your IT dude ideally knew them both. That was a long time ago now, and the number of buzzwords and brand names the average IT admin has to put on their LinkedIn profile is probably quite ridiculous. However, I think it’s worse in the land of software engineering.

Again, go back a while and you were probably pretty employable if you could just mention C and C++. Then Java became a big deal, then a bunch of other stuff appeared. I have no idea what’s cool now, but it feels like Python and Rust are much in demand. Then you have to add all of the recent methodologies. Do you know Agile and Scrum? How familiar are you with AWS? What are your AI/ML skills like? PyTorch? Do you know the buzzword technologies that will get you hired this year? You’d better get a job quick, because the buzzword technologies change every 2 years. Did I say 2? I meant every year. No, sorry, month.

I recently found myself thinking about poetry and code. My wife writes poetry, so I am exposed to this stuff. As a writer, she spends a lot of time… a LOT of time… deciding which words to use in a sentence. It’s a big deal. Sentence-by-sentence writing is an absolute skill that takes most people their entire life to perfect. It’s worth noting that few poems are praised because they use the latest hip words. Good writing is not a matter of having a large vocabulary. Needlessly obscure word choice is rightly seen as pretentious and alienating.

We really need to take some of that perspective and apply it to code.

Take this sentence: “It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.”

That’s considered literary genius, and it is. But it’s not using arcane language. Every word is commonplace. Any idiot could have put that sentence together! But it took Jane Austen, considerable experience and huge skill to do it. We do not mock Jane Austen because she could only write in English. We do not mock her because she only wrote from a woman’s point of view. We do not mock her because all her novels were contemporary, in a similar setting, set in a single country, with a linear narrative. We accept all of those limitations, and accept that she brought incredible skill to a limited set of tools to create genius.

Imagine a modern programmer trying to get their first novel published. “English, yup, I could write it in English, French, Italian, Chinese or Korean if you like? I can do all the genres, yup, no problem. I can do first or third person if you like, and I’m familiar with fractured narrative or linear. If you want it funny I can do that, or harrowing, or in short story form too, if that’s what you are looking for.”

Madness.

For some reason, people think that ‘proficiency’ in a programming language is something as superficial as being able to say ‘hello’ or order a beer in another language. This is insane. I am able to say ‘hello’, ‘thank you’ and ‘sorry’ in Korean, but you won’t see me applying for a job writing Korean-language fiction.

If you have under ten years’ experience using a programming language, let me be blunt and tell you that you don’t REALLY know that language. 20 years is better. 30+ years is ideal. Do you really think you speak French like a native after speaking it for a few hours a day for a few years? Of course not. That’s laughable. And here is the thing: a mistake in a spoken language can cause confusion and maybe embarrassment, but unless you are a lawyer writing contracts, it’s not CRITICAL. Misusing C++ can cause rockets to crash, reactors to overload, and god knows what else.

Why do we accept a superficial understanding of a language that is safety critical, but expect mastery of a language by anyone paid to use English?

I know C++. That’s it. A little bit of PHP, but a trivial amount. I use the container classes and std::string from the STL, but that’s it. A very few macros. My C++ vocabulary, even after 28 years of using it, is tiny. The amount of std library stuff I know is very small. And yet… I can type C++ with as much confidence and speed as I type this blog post. In fact I can write C++ faster, and with fewer mistakes, than I can write English. In many ways, I am MORE fluent in C++ than English. I code almost every day, and love it. After 28 years, and a subset of C++, I feel absolutely that I know what I’m doing.

The world is full of people claiming to have that fluency in 12 languages, and they are often literally half my age (I’m 54). This is utter bollocks. None of those people should be allowed ANYWHERE near mission-critical code, or any code even tangentially involved with safety or security. I am sure that they ARE doing those jobs, every single day, because they all confidently think they are experts, and the people hiring them do not know any better. It’s a recipe for disaster, and it’s why, year after year, software gets WORSE. Windows 11 runs dramatically worse than Windows 3.11 did, and it does it on ludicrously faster hardware. Skype runs at about 0.1% of its potential efficiency, has scrollbars that do not function as well as Windows 3.11’s did, and uses easily 100 times the RAM it needs.

Your computer is an absolute trainwreck of clusterfucks crashing into a dumpster-fire of wasted resources. All the people involved in arranging the trainwreck think they are multi-skilled geniuses, but hardly any of them have any real understanding of the code they write.

It doesn’t have to be that way.

We don’t appreciate Picasso based on how many colors he used, or how many styles he knew. We don’t berate a musician for only knowing one style. In Japan, people who make the SAME SUSHI DISH their entire lives, without variation, are considered legends and experts. It’s the norm in South Korea for restaurants to serve only one dish (but do it WELL).

I beg of you: if you are involved in recruiting software engineers, for the love of god only employ people who have real, genuine experience, measured in years but preferably decades, for roles where you expect them to be able to code from day 1. No, they will not ‘pick it up quickly on the job’. Hiring interns or juniors is different, obviously.

I know I’m an old man yelling at a cloud, but sometimes old people know a lot about the cloud. I’ve been coding since I was 11, and it’s taken me this long to realize that programming languages should be treated like any other language. It might not be a popular view, but I want to put it out there. Experience really matters.

Gratuitous Space Shooty Shield tweaks

Maybe there is an easier way to code this, and I’m sure 99% of devs would just use Unity and copy-paste some asset store effect, but I like to code these things myself because I like the intellectual challenge, and it gives you total freedom and zero dependencies. So here we go:

I just finished coding a ‘shield impact effect’ for Gratuitous Space Shooty Game, and thought I’d explain how it’s done. The idea is that the ships are surrounded by energy fields with various grid patterns, and when laser bullets hit them, they generate a sort of encapsulating ‘wash’ effect over that portion of the shield. To pull this off, I need two things: it needs to be clear where the impact hit the ship, and it needs to feel like it wraps around the target in 3D.

How do you do that in a 2D engine?

The first challenge is getting a graphic that looks like a ship shield in 3D. This is easy. You just get any pattern you want, such as a hex-link pattern:

Then you use the Photoshop Spherize distortion filter to give you a nice alpha channel map of a cool space shield:

If I just blap that on top of a spaceship that’s hit by lasers, you get a nice effect, but the problem is it’s non-directional, and it’s pretty clear that it’s just one image placed on top of another. What I need to do is draw this image, but only bits of it at a time, washing over the image and revealing it over time.

The way I found best to do this was to enlarge that texture’s canvas a lot so it had a ton of empty space, and then use it at twice the size. The reason for this will become apparent, but I’m now working with this:

What I then do for my shield effect is use a sort of 2D mesh to wrap around that image over time. That wrap-around effect originates from the point where the bullet hits the shield. To do this, I get the bullet position, then get the angle from shield center to bullet center, and travel the exact shield radius along that angle from the shield center. That gets me an EXACT location on the perimeter of the shield, even if the bullet is super fast and has already moved inside the perimeter. This is important!
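Here is that perimeter calculation as a minimal standalone function, using the same screen-space convention as the code further down (angle 0 points ‘up’, Y increases downwards). The names are mine, not the game’s:

#include <cmath>

struct Vec2 { float x, y; };

// Returns the point on the shield perimeter along the line from the
// shield center through the bullet. Works even if a fast-moving
// bullet has already travelled inside the perimeter.
Vec2 ImpactOnPerimeter(Vec2 center, Vec2 bullet, float radius)
{
	// angle from shield center to bullet center, with 0 = 'up'
	float angle = atan2f(bullet.x - center.x, -(bullet.y - center.y));
	// travel exactly one radius along that angle from the center
	return { center.x + sinf(angle) * radius,
	         center.y - cosf(angle) * radius };
}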

So I now have the impact location, and I want to kind of ‘wash’ the image of the sphere into the player’s view over time, like this:

The red dot is the impact point; the yellow rectangle gets thicker over time as it washes over the sphere. To do this, I ‘virtually’ place the sphere texture on the screen, but do not render it. It’s there just as a placeholder for where the ‘full’ sphere would be if it were all revealed. I then calculate how to position that rectangle, and over time I stretch it so it completely covers the target sphere. To render the composite image, I render the yellow rectangle, but when I get the UV values of each vertex in the rectangle, I actually look up where they are in relation to my oversized virtual sphere and use that UV value, with the texture of the sphere.
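The UV lookup is then just a matter of mapping each vertex’s screen position into the 0-to-1 space of the virtual (never drawn) sphere quad. A minimal sketch, with my own hypothetical names:

struct UV { float u, v; };

// sphereLeft/sphereTop/sphereSize describe where the full virtual
// sphere texture would sit on screen if it were drawn. Each tri-strip
// vertex samples the shield texture at wherever its screen position
// lands inside that virtual quad.
UV VirtualSphereUV(float vx, float vy,
                   float sphereLeft, float sphereTop, float sphereSize)
{
	return { (vx - sphereLeft) / sphereSize,
	         (vy - sphereTop) / sphereSize };
}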

This is why I had to enlarge the canvas the sphere is on, so that those UV values make sense and are still 0 to 1 in both directions. When I do this, I get a cool effect, but it’s basically a watermelon (shown here with an early, rubbish shield texture):

To fix that, I just need to set the color values of the ‘inner’ vertices at the top of the rectangle to be fully transparent, and I then get a nice fade effect over the sphere. To be honest, that looked fine, and I was happy with it, but even though NOBODY would notice, something annoyed me…

When you imagine something wrapping around a sphere like this, from a top-down view, you would notice that the speed at which the ‘frontier’ washes over the sphere is non-linear. In other words, I’m not taking any account of the curvature of the sphere itself. To do things right, the speed at which that rectangle washes over the sphere should vary along the length of the top edge of the rectangle. To do THAT, I need the rectangle to actually be a tri-strip, so I can curve the top edge. And not only that, the speed at which the center point of that edge moves needs to follow a nice curve defined by a sine wave…

In other words I need to achieve this:

That wireframe thing is my yellow rectangle, now a tri-strip. Over time it will wash over the whole shield, right to the back. Here are two overlapping impacts:

You can see what’s going on quite easily in the wireframe version there. The game is way too fast and gratuitous for anyone to notice, and TBH I am still tweaking it.

One of the really fiddly bits was getting that curve just right. I had to start at the top left of my yellow rectangle and go across the top edge, noting my progress along it. I then interpolated the extent to which the ‘height’ of the rectangle came from a fixed point (I picked the top left, at the ‘back’ of the shield) or from my sine-wave-inspired non-linear curve for the point representing the center of the rectangle.

At the end of all that I basically have a single tri-strip. I then run some separate code to derive UVs from the ‘virtual’ sprite, and just render it. For those who care about the details, here is the code for what I’ve described:

void GUI_ShieldRipple::CalculateVerts()
{
	//place relative to the shield
	float radius = PParent->GetGlowSprite()->Width / 2;

	//we have a tristrip here that is angled at RippleAngle and whose bottom center is at
	//InitialImpactOffset from the current shield center;
	float shieldcenx = PShip->GetWorldPosition().X;
	float shieldceny = PShip->GetWorldPosition().Y;

	//derive bottom center
	float radangle = D3DXToRadian(RippleAngle);
	float cosangle = COS(radangle);
	float sinangle = SIN(radangle);
	//note this is the inverse because the angle is from offset to cen and we want the reverse
//	WorldPos.X += (sinangle * ourspeed);
//	WorldPos.Y -= (cosangle * ourspeed);

	float botcenx = shieldcenx - (sinangle * radius);
	float botceny = shieldceny + (cosangle * radius);

	//now turn 90 degrees to the left to go to the start
	float angletostart = RippleAngle - 90;
	if (angletostart < 0) angletostart += 360;
	radangle = D3DXToRadian(angletostart);
	cosangle = COS(radangle);
	sinangle = SIN(radangle);

	float botleftx = botcenx + (sinangle * radius);
	float botlefty = botceny - (cosangle * radius);

	//invert for right
	float botrightx = botcenx - (sinangle * radius);
	float botrighty = botceny + (cosangle * radius);

	//how wide is each block of the tri-strip
	int prims = (MAXVERTS - 2);

	float chunkwidth = PParent->GetGlowSprite()->Width / prims;

	//now deduce chunkheight, which is basically our travel over the shield at the center point
	//we have non linear progress here, and we deduce that from a cosine wave curve imagined from 
	//Pi to 2xPi
	float cosinput = D3DX_PI + (D3DX_PI * Progress);
	float adjusted_progress = cos(cosinput);
	//now convert to 0 to 1 instead of -1 to 1
	adjusted_progress = (adjusted_progress + 1) / 2;

	//this gives us the center value, of the middle of the sphere
	float chunkheight = radius * 2 * adjusted_progress;

	//get top of the strip above the botleft
	radangle = D3DXToRadian(RippleAngle);
	cosangle = COS(radangle);
	sinangle = SIN(radangle);

	float innerx = botleftx + (sinangle * chunkheight);
	float innery = botlefty - (cosangle * chunkheight);

	//start botleft then up, then to 1 along and down...then up
	float currx = botleftx;
	float curry = botlefty;
	
	float offsetx = (botrightx - botleftx) / (prims/2);
	float offsety = (botrighty - botlefty) / (prims/2);

	unsigned long fullcolor = RGBA_MAKE(20, 234, 221, (int)(255.0f * Intensity));
	unsigned long nocolor = RGBA_MAKE(20, 234, 221, 0);
	for (int n = 0; n < MAXVERTS; n+=2)
	{
		Verts[n].dvSX = currx;
		Verts[n].dvSY = curry;
		Verts[n].color = fullcolor;

		//to get innerx we need to adjust between the full corner value and the middle adjusted
		//value based on our progress
		float progress = n / (float)prims;
		//convert that progress to a nice curve
		progress = sin(progress * D3DX_PI);

		//that's the extent to which we use the adjusted center value rather than
		//the base value of the full height (radius * 2)
		float newheight = (radius * 2 * (1.0f - progress)) + (chunkheight * progress);
		float newinx = currx + (sinangle * newheight);
		float newiny = curry - (cosangle * newheight);

		Verts[n + 1].dvSX = newinx;
		Verts[n + 1].dvSY = newiny;
		Verts[n + 1].color = nocolor;

		currx += offsetx;
		curry += offsety;
	}
}

It’s a lot of C++ just to generate that effect, but I love coding this stuff. I might change it so that the top left and right points of that rectangle are in fact the mid point, so the curve goes flat then inverts. I think that may look better. Also I’ll fiddle with render states and lightmaps to make it fizz more. In the meantime I uploaded a video to twitter showing it with, and without, the wireframe. There is a lot going on, and there can be multiple overlapping effects at once. That’s another reason I needed to use a virtual shield texture for UVs: this way everything lines up perfectly, even with multiple impacts:

Gratuitous Space Shooty Game will be on sale on itch soon