Game Design, Programming and running a one-man games business…

Update hell. How did we get here?

I have a web server that runs my website, this blog, and the back-end stuff for some of my games (which report bugs or browse mods etc). I also, sadly, have a ‘smart’ TV, a Samsung Galaxy phone, a Kindle, and a laptop and desktop PC. As a result, I am in a state of perpetual update hell.

Long ago when I worked in IT, we were very obsessive about updating everyone’s PCs. The danger of having a virus run rampant on a corporate network was always a concern, and when I worked on city trading stuff, obviously security was the first priority. We were super paranoid about everything being updated and patched. We were the IT gods of security, who would update everyone’s PC to the latest patch for everything before they even knew they were needed. Updates were essential and good. Otherwise all hell would break loose.

These days my attitude has become: Fuck you, updates. I’ll update only if I have no choice.

How on earth have we fallen so far? I think, speaking for myself, it’s because software has just generally become so staggeringly badly made. To pick on just one example, I’ll describe the hellscape of bug-ridden crap that is Battlefield 2042. This is a game I try to play every day, and sometimes it even works. On a good day, I can launch the game, and it will show me a splash screen trying to sell me shit, which I can ‘dismiss’, except the coders are too inept to record that I skipped it, so I see it every day. Every time I launch the game this wastes my time and makes me hate the developers.

So then I can select a game and start playing. HAHAHA. Don’t be so naive. This is not 2001, I don’t get to choose which map to play, or what server to play on. That functionality was ripped out of the DICE engine so it could matchmake for me, making my user experience 100x worse. Routinely that means joining a game with 30 seconds to the end, and maybe 75% of the time it kicks either me or my squadmate back to the menu anyway, so I have to start again. Obviously I have zero idea what map we will be playing, because the developer has thought ‘fuck the player, who cares. They will play what we tell them’.

Actually I’m kidding, it’s way, way worse than this. Because when the game ends, I then have to watch a load of TikTok-inspired US bro-culture dance-move/posturing bullshit before being dumped right back at the main menu. Back in the days of Battlefield 1942 or Call of Duty, I could just stay on the same server and keep playing, but not any more. Why? Because fuck the player, that’s why.

Speaking of which, in Call of Duty 2 you could play modded maps, in their thousands, and they would auto-download as you connected to the server. The mod tools were free of course. There was no attempt to sell you stupid fucking hats. A server browser, mod support, flawless multiplayer code, and no anti-consumer F2P pay-to-win game-ruining immersion-breaking bullshit. That was the peak era for gaming, unlike now. As a gamer, I am just cannon fodder for F2P whales, or someone to be exploited, marketed at and used. Why? Because fuck the player. Nobody at the top of EA even plays games anyway; they might as well be selling fucking washing machines.

So…that’s just trying to use an existing bit of software. Imagine the joy of being FORCED to download a 6GB ‘patch’ to play that same game, with zero notice, and finding out that this 6GB is a new weapon ‘skin’ and some ‘balance changes’. Who the hell is in charge of the patch process? How the hell can any software be this badly made? I will never buy any game made by DICE or published by EA ever again. Fuck them.

But forgetting games, and returning to PCs and Smart TVs and phones…it gets even worse. I must have updated the Disney app for my TV 20 times in the last year. Why? What new functionality has been added? Anything? I cannot see any difference whatsoever, but we are expected to constantly run updates. With Skype it’s even worse. Skype’s software is so appalling now that it cannot even patch itself. Every time I reboot my PC, Skype reinstalls itself. And those Skype updates have done nothing but make it worse, and slower. Every single ‘upgrade’ is a downgrade.

I will continue to use Windows 10 on my main PC until a bunch of software police smash through my front door with guns. Windows 11 is another 50GB of bloated crap that manages to make all of the UI WORSE, but now you get very slightly rounded window corners. Amazing. Truly the work of 1,000 software engineers for a year. Meanwhile it cannot even sync video to sound on video playback on any website, and the ‘upgrade’ to Windows 11 permanently halves the volume of all speakers, with no way to ever fix it. The Windows 11 upgrade is just a way to wreck your PC. In 2025, Windows cannot play video, or control volume. Things Windows 3.11 could do just fine. It’s embarrassing.

Do I worry about malware? Yes, and I have a paid anti-malware suite running on both PCs. But over the last ten years the worst malware I have experienced is Windows Update. The second worst is probably my Samsung phone. Updates take about 20 minutes, never add ANYTHING I want, but randomly change the icons of every installed app, just to aggravate me. It’s a phone. The calculator app does not need an update. Ever.

Right now, I have a laptop that works (apart from video and sound, which are unusable) and a desktop that works. I do not want ANY updates from Microsoft on anything, ever again. I simply do not trust the people working there to be able to write code. Ditto Samsung. Just leave me the fuck alone. If it was possible to globally opt out of all updates on my TV I would do so. These are not new games getting cool feature improvements and new content. They are apps that work. My expectations of software in 2025 are now so low that simply having programs that vaguely work is the gold standard, and I will not risk any new code written by people who clearly have zero clue what they are doing.

An experienced coder’s approach to fixing business mistakes

I was the victim of an email today that was basically an invoice that was wrong. A company that sent me a wrong invoice in the past has now sent it to me another three times, with increasing levels of angry demands and warnings about credit ratings. This raised my heart rate and anger to distressing levels, but this is not a rant about how some companies are just awful; that would be too simple. This is more of an investigation into why stuff like that happens, and how software engineering can teach you how to not fuck up like that.

I have been coding for 44 years now, since age 11. In all that time I’ve only coded in BASIC, C, C++ and some PHP, but 90% of it has been C++, so that’s the one I’m really good at. I don’t even use all of C++, so let’s just say that when it comes to the bits I DO use, I’m very experienced. This is why I’m using the term software engineering, and not coding or hacking, which imply a sort of amateurish copy-pasting-from-the-internet style of development. I’ve both worked on large game projects, and coded relatively large projects entirely from scratch, including patching, re-factoring and a lot of debugging. I coded my own neural networks from scratch before they were really a popular thing. Anyway…

Take the simple example of my experience today. An invoice is wrong, and sent to a customer anyway. This has happened three times now, despite the customer replying in detail, with sources and annotations and other people copied in, AND including replies from other people at the offending company who openly admit it’s wrong and will be fixed. Clearly this is a smorgasbord of incompetence, and irritating. But why do companies do this? What actually goes wrong? I think I know the answer: once a problem like this is finally flagged up (which often involves threats from someone like me to visit the CEO’s home address at 3AM to shout at them), the company ‘fixes’ the mistake, maybe apologizes, and everyone gets on with their life…

This is the mistake.

Companies want to ‘fix’ the mistake in the quickest, easiest and cheapest way possible. If an invoice was accidentally doubled, then they ‘go into the system’ (they just mean database) and they halve it. Problem solved, customer happy. Now I don’t give a damn about the sorry state of this company’s awful internal processes, but I am aware of the fact that companies like this try to fix things in this way, rather than the true software engineering way. So how SHOULD it be done?

Before I was a full time computer programmer I worked in IT. I had a bunch of jobs, gradually going up in seniority. I saw everything from ‘patch it and who cares’ mentality (consumer PC sales) to ‘The absolute core problem must be fixed, verified, documented and reported on with regards to how it could have happened’ (Stock trading floor software). I can imagine that most managers think the first approach is faster, cheaper, better, and the second is only needed for financial systems or healthcare or weapons systems, but this is just flat out wrong.

When a mistake happens, fixing it forever, at a fundamental level, will save way, way more time in the lifetime of the company than patching over a bug.

Spending a lot of time in IT support taught me that there are many more stages to actually finding the cause of a problem than you might think. Here is a breakdown of how I ended up thinking about this stuff:

Stage 1: Work out what has actually happened. This is where almost everyone gets it wrong. For example, in my case I got an email for a £1k bill. This was not correct. That’s the error. That’s ALL WE KNOW RIGHT NOW. You might want to leap immediately to ‘the customer was charged too much’ or ‘the customer’s invoice was doubled’, but we do not know. Right now we only know the contents of the email are wrong. Is that a problem with the email generation code? Does the email match the amount in the credit control database? Until you check, you are just guessing. You might spend hours trying to fix a database calculation error you cannot find until finally realizing the email-writing code is borked…

Stage 2: Draw a box around the problem. This was so helpful in IT. Even when you know what you know, you need to be able to define the scope of the problem. Again, right now all we know is that the customer ‘Cliff’ got a wrong invoice. Is that the problem? Or is it that every invoice we send out is wrong? Or was it only invoices on a certain date? Or for a certain product? Until we check a bunch of other emails we do not even know the scope of the problem. If it’s systemic, then fixing it for Cliff is pointless.

Stage 3: Where did things go wrong? You need to find the last moment when everything worked. Was the correct amount applied to the product in the database? Was the correct quantity selected when entering the customer’s order? Was the amount correct right up until the invoice was triggered? Unless you know WHERE something went wrong, you cannot work out WHAT or WHY.

Stage 4: Develop a theory that fits all available data. You find a line of code that seems to explain the incorrect data. This needs to not only explain why it screwed things up for Cliff, but also explain why the invoice for Dave was actually fine.

Stage 5: Test a fix: You can now fix the problem. Hurrah. Change the code or the process and run it again. Is everything OK? If so, you may be preparing a victory lap, but this is laughably premature.

Stage 6: Reset back to the failed state: Now undo your fix and run it again. Seems redundant, doesn’t it? Experience has taught me this is vital. If you REALLY have the fix, then undoing your fix should restore the problem. Maybe 1 time in 100 it will not: the ‘random’ problem just fixed itself for some indeterminate period, and your fix was a red herring. A REAL fix is like a switch. Turn it on and it’s fixed, turn it off and it’s broken. Verify this.

Stage 7: Post Mortem: This is where you work out HOW this could ever have happened. This is actually by far the most important stage of the whole process. If you just fix some bad code, then you have just fixed one instance in one program. The coder who made that mistake will make it again, and again and again. The REAL fix is to make it IMPOSSIBLE to ever have that error again. This takes time, and analysis. The solution may be better training, or it may mean changing an API to make it impossible to process bad data. It might mean firing someone incompetent. It might mean adding a QA layer. Whatever the mistake was, if you do not go through this stage, you failed at the task of fixing the problem.

Coders and organizations that work like this have fewer bugs, fewer mistakes. They need smaller QA teams, and less time devoted to fixing mistakes and implementing patches. Their complaints department is tiny, and yet still provides great communication, because there are hardly any complaints. In short, companies that run like this are awesome, popular, profitable, and great places to work. Why is everywhere not like this? Because it requires some things:

  1. A culture of giving the people fixing problems free rein to follow the problem everywhere. If only the department that sends the emails is allowed to get involved in complaints about email, then database errors or coding errors will never get fixed, because they CAN never be fixed. The phrase ‘not my department’ needs to be banned.
  2. A culture of accepting that someone is STILL working on a proper fix, even after the complaint has been handled and the customer is happy. When there is a bug, there are TWO problems to fix: the customer’s bad experience AND the company process failure that allowed it. Managers who pull people away from bug-fix post-mortems to fix the next thing right now are a curse.

Of course your mileage may vary. Be aware I am a hypomanic workaholic who works for himself so my ‘advice’ may not be applicable to everybody, but I hope this was interesting and thought provoking anyway. Too many coders are just copy-pasting from stack overflow or asking chatgpt to quickly patch their bad code. Working out how to fix things properly is a worthwhile pursuit.

Re-thinking painter’s algorithm rendering optimisation

Before I even start this post, I should point out that currently the project I am coding is for fun. I actually really ENJOY complex programming design challenges, unlike many people, so the comment ‘this is done for you in X’ or ‘there is middleware that solves this without you doing anything in Y’ is irrelevant to me, and actually depressing. Whatever happened to the joy of working on hard problems?

Take an imaginary game that has a lot of entities in it. Say 500. Each entity is a sprite. That sprite has an effect drawn on top of it (say it’s a glowy light) and a different effect below it (say it’s another glowy light). The objects have to be rendered in a sensible way, so that if one object passes over another (for example) the glowy lights ‘stack’ properly, and you don’t see aberrations where the lights from something are seen through something else. Here is a visualisation:

In this case red is the ‘underglow’, grey is the sprite, and yellow is the overglow. The bottom-right duo shows the issue where things have to be rendered correctly. In a 3D game with complex meshes and no antialiasing, you just use a z-buffer to draw all this stuff, and forget about it, but if you are using sprites with nice fuzzy edges (not horrible jaggy pixels), you really need to use the classic ‘painter’s algorithm’ of drawing from back to front for each object. There are clear performance problems:

A naïve way to code this would be my current system, which handles each object in 3 draw calls. So we set up the blend mode for the underglow, and draw that, then set a normal blend mode and draw the sprite, then a glowy blend mode to draw the top glow, then repeat for every other entity. The trouble is, with 500 entities, that’s 1,500 draw calls simply to draw the most simple part of the game…
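Roughly speaking, the naïve version boils down to something like this (a minimal sketch: the function and member names are made up for illustration, not my actual engine API):

    // Naive approach: 3 draw calls (plus blend-state changes) per entity.
    // SetBlendMode() and DrawSprite() are stand-ins for whatever the engine uses.
    for (Entity* e : entities)                   // ~500 entities
    {
        SetBlendMode(BLEND_ADDITIVE);
        DrawSprite(e->UnderGlowTexture, e->Pos); // draw call 1: underglow

        SetBlendMode(BLEND_NORMAL);
        DrawSprite(e->SpriteTexture, e->Pos);    // draw call 2: the sprite itself

        SetBlendMode(BLEND_ADDITIVE);
        DrawSprite(e->OverGlowTexture, e->Pos);  // draw call 3: overglow
    }
    // 500 entities x 3 = 1,500 draw calls before anything else in the scene gets drawn.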

Unfortunately this only scratches the surface because there may also be other layers of things that need drawing on top of each entity, and between drawing each sprite. However there is a major optimisation to be had…. <drumroll>. Actually my game really works like this:

Where there is a grid, and <generally speaking> the entities stay in one grid box at a time. WITHIN that grid box, all sorts of stuff may happen, but items in the top left grid box will not suddenly appear in the bottom right grid box. In other words I can make some clever assumptions, if only I can find a way to generalize them into an algorithm.

For example I could draw the underglow in one object in each grid box all together in a single draw call like this:

So this is suddenly one draw call instead of 8. I then do the same for the sprites, then the overglow, and then do the other object in each grid square as a second pass (basically a second Z layer). Assuming 8 grid squares, 2 per square, and 3 passes, that’s 48 draw calls for the naïve method and 6 for the new method. Pretty awesome! However, that’s oversimplifying things. Firstly, in some cases there are 16 entities in each grid square, in others just 1. Secondly, in some cases items ‘spill’ over into the next grid square, so I need to ensure I do not get an anomaly where 2 objects overlap just slightly and thus z-fight when drawn…
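Before worrying about those edge cases, the basic batched underglow pass might look something like this. This is just a sketch: SpriteBatch and the helper names are invented, and it assumes the glow sprites share a texture atlas, otherwise you are back to one call per texture.

    // Batched approach (sketch): gather every underglow for this pass into one
    // vertex buffer, then submit a single draw call. Repeat for sprites and overglow.
    SpriteBatch batch;                               // hypothetical batching helper
    SetBlendMode(BLEND_ADDITIVE);
    for (GridSquare& square : grid)
    {
        Entity* e = square.GetEntityAtLayer(pass);   // at most one entity per square per pass
        if (e)
            batch.Add(e->UnderGlowTexture, e->Pos);  // just appends vertices, no draw call yet
    }
    batch.Draw();                                    // one draw call covering all the grid squares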

Like everything in game coding, the simple code design always accelerates towards spaghetti, but I would like to do my best to avoid that if possible…

In order to decide what is drawn at what time, I basically need 2 pieces of metadata about each object. Item 1 is what grid square it is in, and item 2 is the Z value of that object, where I can cope with, say, 16 different Z ‘bands’ for the whole world, meaning a group of 16 entities in a square will definitely be drawn with 16 different draw calls, and thus be painter-algorithm-ed OK.
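As a sketch, that metadata can be tiny, a couple of bytes per object (illustrative only, not my actual code):

    // Per-object render metadata (sketch). 16 Z bands fit comfortably in a byte.
    struct RenderKey
    {
        unsigned char GridSquare;   // which grid box the object currently occupies
        unsigned char ZBand;        // 0..15 coarse depth band for painter's-algorithm ordering

        // sort by Z band first, then grid square, so objects at the same depth batch together
        bool operator<(const RenderKey& other) const
        {
            if (ZBand != other.ZBand)
                return ZBand < other.ZBand;
            return GridSquare < other.GridSquare;
        }
    };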

I also want to do this without having to do insane amounts of sorting and processing every frame, for obvious efficiency reasons…

So my first thoughts on this are to handle everything via Z sorting, and make the list of entities the de-facto Z sort. In other words, when I build up the initial list of entities at the start of the game, I have pre-sorted them into Z order. So instead of a single list of randomly jumbled Z orders, I get this situation, where the black number is the Z position, and the blue number references the grid square:

So my now pre-sorted object list goes like this:

A1 B1 C1 D1 E1 F1 G1 H1 A2 C2 F2 etc..

I can then do batches of draw calls, so my actual code looks like this:

  • DrawUnderGlowFor1()
  • DrawSpritesFor1()
  • DrawOverglowFor1()
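Generalised to any number of Z layers, the outer loop ends up looking roughly like this (again, only a sketch; the function names are illustrative):

    // One batched pass per Z layer, walking the pre-sorted entity list.
    for (int layer = 1; layer <= MAX_Z_LAYERS; layer++)
    {
        if (LayerIsEmpty(layer))
            continue;                    // most squares only have one or two layers occupied
        DrawUnderGlowForLayer(layer);    // one draw call covering every grid square at this layer
        DrawSpritesForLayer(layer);
        DrawOverglowForLayer(layer);
    }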

The nature of the game means that generally speaking, Z order can be fixed from the start and never change, which simplifies things hugely. The bit where it becomes a real pain is the overlap of objects to adjacent grid squares. I would also like a super-generic system that can handle not just these fixed entities, but anything else in the game. What I really need is a two layered graphics engine:

Layer 1 just sends out a ton of draw calls, probably with some sort of metadata. This is like a draw call, plus the relevant texture, plus the blend state for the object. This is done in a naïve way with no thought to anything else.

Layer 2 processes layer 1, does some analysis and works out how to collapse draw calls. It can calculate bounding boxes for everything that is drawn, see what overlaps with what, and what can be batched, and then decides on a final ‘optimised’ series of calls to DirectX. This is done before any textures get set or draw calls are made. It then hands all this off, ideally to a new thread, which just streams those draw calls to DirectX, while the other threads get on with composing the next frame.
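A rough sketch of what the interface between those two layers might look like (every type and function name here is invented for illustration, and FlushToRenderThread is only declared, not implemented):

    #include <vector>

    // Stand-in types purely for the sake of the sketch
    struct Texture;
    enum class BlendMode { Normal, Additive };
    struct RectF { float x, y, w, h; };

    struct DrawCommand
    {
        Texture*  Tex = nullptr;
        BlendMode Blend = BlendMode::Normal;
        RectF     Bounds{};      // screen-space bounding box, used for overlap/batching tests
        int       ZBand = 0;     // coarse painter's-algorithm depth band
    };

    class RenderQueue
    {
    public:
        // Layer 1: gameplay code submits naively, with no batching logic at all
        void Submit(const DrawCommand& cmd) { Commands.push_back(cmd); }

        // Layer 2: sort, test overlaps, merge compatible commands, then hand the
        // optimised list to a render thread that streams it to DirectX
        void FlushToRenderThread();

    private:
        std::vector<DrawCommand> Commands;
    };

The nice thing about splitting it this way is that the naïve path and the clever path share the same Submit() entry point, which is what makes the ‘bypass stage 2’ hotkey idea mentioned below trivial to add.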

I have attempted such a system once before in the past, but I was a less experienced coder then, working full-time, and on a (self-imposed) deadline to ship the game, rather than obsessing over noodling with rendering engine design. So hopefully this time I will crack it, and then have something super cool that can handle whatever I throw at it :D. Also the beauty of this design is that it would be easy to bypass stage 2 and just render everything as it comes in, so I could even hook up a hotkey to watch the frame rate difference and check I’m not wasting all my time :D.

Gratuitous Space Shooty Game released!!!

And you probably thought I wasn’t still making games right?

After the long and intense development of Democracy 4, which is a HUGE sprawling game with a LOT of code, and a ton of content, and is now in about 10 languages and has 3 expansion packs… it was nice to be able to make something small, and simple, and not at all commercial or serious. With that in mind I started messing around making a space-invaders style vertical shooter, using the art assets I have from an older game of mine: Gratuitous Space Battles.

GSB is pretty old now, but TBH the spaceship graphics for it still look incredibly good to my eyes. I generally think it’s very wasteful that the games industry hires so many people to make music, SFX and graphics, and then makes a single game with them, never to be re-used in any way. Frankly a spaceship is a spaceship, whether it’s used in an RTS or a shooter or a turn-based grand strategy game.

I know some people worry that gamers will bombard you with abuse for daring to use the same artwork in another game, because they will feel ‘cheated’. This strikes me as utter nonsense. Sensible re-use of assets just makes sense. As a general principle I hate waste, and I love efficiency. Also, not doing something because a tiny, tiny percentage of vocal gamers may complain about it is definitely a losing strategy in gamedev. There are always people who complain about any choice you will make.

After working on this game for a bit, and initially thinking it was a little throwaway thing I’d probably keep to myself, I started to really enjoy its development. I have never made a vertical shooter, but I loved Star Monkey, which is very old, and I am old enough to remember the first Space Invaders arcade cabinets as a kid, as well as Galaxian (far superior imho) and then Phoenix and the rest. I also spent a lot of time playing Astrosmash on our Intellivision console as a kid.

Gratuitous Space Shooty Game is a bit of a mashup of a lot of those shooters, with some extra ideas that occurred during development. My wife playtested it a lot, and HATED the asteroids, so I added a repulsor beam to keep them away from you. Once implemented, it became a very cool new gameplay mechanic, as it allowed you to ‘balance’ attacking ships above you to get some extra shots in before they leave the screen.

During development I experimented with a bunch of ideas, and after a lot of playtesting, I’m happy with what I chose to do. The fact that you can accidentally shoot ship bonuses gives the player an incentive to keep moving and not risk a volley destroying a bonus. Penalizing you for every ship that escapes, INCLUDING the left-right ‘saucer’ ships, also adds to the challenge. Making it so that the best power-ups are only dropped by those ships was also a good move from a design POV. Adding friendly ships you have to avoid is an evil mechanic, but it’s still in there!

In the end I went with 25 levels, and the levels get slightly longer as you go along. I don’t do any adaptive difficulty stuff, although I considered it. I do offer 3 difficulty levels from the start though. The top one is seriously hard. In-between levels you get to spend your cash, earned from shooting aliens and collecting bonuses (and a cool 10% bonus if nobody escapes) on upgrades for your ship.

Right now the game is only on itch, for $3, with a suggestion of $5 if you want to. It will not be a big financial success :D. Because I was doing it for fun, it’s currently Windows-only, with a fixed aspect ratio of 1920×1080, or scaled to fit fullscreen. The windowed option literally went in the day before release :D. It’s English-only for now. I may try a Google-translate pass for the limited text at some future point if I do an update to it.

So there you go, it’s another game by me! The first non-strategy one for a long time. I’m quite proud of it. It’s a fun, short, laptop-friendly game you can play in a lunch hour or multiple coffee breaks. If you like the look of it, get a copy!

Programming in just ONE language should be lauded.

I recently read the news that garbage collection support, which was added to C++, is now actually being removed from it. Apparently most people didn’t use it, or even know it had been officially added, so it is no great loss. It always shocks me to read articles about C++ with a version number, because as far as I am concerned, C++ has no version number and never will, in the same way that a language such as English has no ‘version number’. I’m 54 and it’s pretty rare that I add a new word to my English vocabulary, and it’s even rarer for me to learn something new about C++ that I start to use in my code.

Back in the early days of modern computing, I worked in IT. My CV was basically: CNA, MCSE. That was it. That was all you needed to earn £54k a year in IT 30 years ago. There were basically 2 big computer systems, run by Microsoft and Novell, and your IT dude ideally knew them both. That was a long time ago now, and the number of buzzwords and brand names the average IT admin has to put on their LinkedIn profile is probably quite ridiculous. However, I think it’s worse in the land of software engineering.

Again, go back a while and you were probably pretty employable if you could just mention C and C++. Then Java became a big deal, then a bunch of other stuff appeared. I have no idea what’s cool now, but it feels like Python and Rust are much in demand. Then you have to add all of the recent methodologies. Do you know Agile and Scrum? How familiar are you with AWS? What are your AI/ML skills like? PyTorch? Do you know the buzzword technologies that will get you hired this year? You better get a job quick, because the buzzword technologies will change every 2 years. Did I say 2? I meant every year. No sorry, month.

I recently found myself thinking about poetry and code. My wife writes poetry, so I am exposed to this stuff. As a writer, she spends a lot of time… a LOT of time deciding what words to use in a sentence. It’s a big deal. Sentence-by-sentence writing is an absolute skill that takes most people their entire life to perfect. It’s worth noting that few poems are praised because they use the latest hip words. Good writing is not a matter of having a large vocabulary. Needlessly obscure word-choice is rightly seen as pretentious and alienating.

We really need to take some of that perspective and apply it to code

Take this sentence: “It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.”

That’s considered literary genius, and it is. But it’s not using arcane language. Every word is commonplace. An idiot could have put that sentence together! But it took Jane Austen, and considerable experience, and huge skill to do it. We do not mock Jane Austen because she could only write in English. We do not mock her because she only wrote from a woman’s point of view. We do not mock her because all her novels were contemporary, in a similar setting, set in a single country, with a linear narrative. We accept all of those limitations and accept that she brings incredible skill to use a limited set of tools to create genius.

Imagine a modern programmer trying to get their first novel published. “English, yup I could write it in English, French, Italian, Chinese or South Korean if you like? I can do all the genres, yup, no problem. I can do first or third person if you like, and I’m familiar with fractured narrative or linear. If you want it funny I can do that, or harrowing, or in short story form too if thats what you are looking for.”

Madness

For some reason, people think that ‘proficiency’ in a programming language is something as superficial as being able to say ‘hello’ or order a beer in another language. This is insane. I am able to say ‘Hello’, ‘Thank you’ and ‘Sorry’ in Korean, but you won’t see me apply for a job writing Korean-language fiction.

If you have under ten years’ experience in using a programming language, let me be blunt and tell you that you don’t REALLY know that language. 20 years is better. 30+ years is ideal. Do you really think you speak French like a native after speaking it for a few hours a day for a few years? Of course not. That’s laughable. And here is the thing: a mistake in a human language can cause confusion and maybe embarrassment, but unless you are a lawyer writing contracts, it’s not CRITICAL. Misusing C++ can cause rockets to crash, reactors to overload, and god knows what else.

Why do we accept a superficial understanding of a language that is safety critical, but expect mastery of a language by anyone paid to use English?

I know C++. That’s it. A little bit of PHP, but a trivial amount. I use container classes and std::string from the STL, but that’s it. A very few macros. My C++ vocabulary, even after 28 years using it, is tiny. The amount of std library stuff I know is very small. And yet… I can type C++ with as much confidence and speed as I type this blog post. In fact I can write C++ faster, with fewer mistakes, than I can English. In many ways, I am MORE fluent in C++ than English. I code almost every day, and love it. I feel absolutely that I know what I’m doing, after 28 years, and a subset of C++.

The world is full of people claiming to have that fluency with 12 languages, and they are often literally half my age (I’m 54). This is utter bollocks. None of those people should be allowed ANYWHERE near mission-critical code, or any code even tangentially involved with safety or security. I am sure that they ARE doing those jobs, every single day, because they all confidently think they are experts, and the people hiring them do not know any better. It’s a recipe for disaster, and it’s why, year after year, software gets WORSE. Windows 11 runs dramatically worse than Windows 3.11 did, and it does it on hardware ludicrously faster. Skype is running at about 0.1% of its potential efficiency, has scrollbars that do not function as well as Windows 3.11’s did, and uses easily 100 times the RAM it needs.

Your computer is an absolute trainwreck of clusterfucks crashing into a dumpster-fire of wasted resources. All the people involved in arranging the trainwreck think they are multi-skilled geniuses, but hardly any of them have any real understanding of the code they write.

It doesn’t have to be that way.

We don’t appreciate Picasso based on how many colors he used, or how many styles he knew. We don’t berate any musician for only knowing one style. In Japan, people who make the SAME SUSHI DISH their entire lives, without variation, are considered legends, and experts. It’s the norm in South Korea for restaurants to only serve one dish (but do it WELL).

I beg of you: if you are involved in recruiting software engineers, for the love of god only employ people who have real, genuine experience, measured in years but preferably decades, for roles where you expect them to be able to code from day 1. No, they will not ‘pick it up quickly on the job’. Hiring interns or juniors is different, obviously.

I know I’m an old man yelling at a cloud, but sometimes old people know a lot about the cloud. I’ve been coding since I was 11, and it’s taken me this long to realize that programming languages should be treated like any other language. It might not be a popular view, but I want to put it out there. Experience really matters.