Well, I feared as much when I wrote this: https://positech.co.uk/cliffsblog/2014/04/14/reading-back-from-gpu-memory-in-directx9 but it turns out that, yup, that read-back function is the slowest in the entire engine, at least until battle is joined. I really need to fix it.
Essentially what I’m doing there is maintaining a depth buffer for objects in the game. I then do some fancy processing on that buffer (all taking place in video card memory). After that, I really badly need to know the values of the depth buffer at about 100 different points, and based on the outcome, I either don’t draw an object, draw it quite small, or draw it really big. In short, I’m scaling an object based on specific values of the depth buffer.
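For reference, here’s a minimal sketch (my names, not the engine’s actual code) of the kind of read-back the post linked above describes, assuming the processed depth values live in a floating-point render target like D3DFMT_R32F. The GetRenderTargetData call is the killer: it forces the CPU to sit and wait for the GPU to finish everything queued against that surface before the copy can happen.

#include <d3d9.h>

float ReadDepthAt(IDirect3DDevice9* pDevice,
                  IDirect3DSurface9* pDepthRT,   // assumed D3DFMT_R32F
                  int x, int y)
{
    // error checks omitted for brevity
    D3DSURFACE_DESC desc;
    pDepthRT->GetDesc(&desc);

    // create a lockable system-memory surface of the same size/format
    IDirect3DSurface9* pSysMem = NULL;
    pDevice->CreateOffscreenPlainSurface(desc.Width, desc.Height,
                                         desc.Format, D3DPOOL_SYSTEMMEM,
                                         &pSysMem, NULL);

    // the expensive part: GPU -> system memory copy, full pipeline stall
    pDevice->GetRenderTargetData(pDepthRT, pSysMem);

    D3DLOCKED_RECT lr;
    pSysMem->LockRect(&lr, NULL, D3DLOCK_READONLY);
    float depth = *(const float*)((const BYTE*)lr.pBits
                                  + y * lr.Pitch + x * sizeof(float));
    pSysMem->UnlockRect();
    pSysMem->Release();
    return depth;
}

Copying a whole surface back across the bus and stalling the pipeline, just to read roughly 100 floats, is exactly the sort of thing that ends up at the top of a profile.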
Right now, my engine does not use vertex shaders at all. I just use pixel shaders, and leave the vertex shader set to NULL. I’m pretty sure the solution to my problem is to write a vertex shader that scales each of these objects accordingly when I draw them (a sketch of the idea is below). The thing is, I’m using directx9, and therefore I really do have separate vertex and pixel shaders. This is going to involve me reading up on the most undocumented stuff ever, namely how vertex shaders and pixel shaders can be used together in a 2D game under directx9.
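For what it’s worth, here’s a hedged, untested sketch (all the function and variable names are hypothetical) of how that might look: compile a vs_3_0 vertex shader with D3DX and have it sample the depth texture itself via tex2Dlod, so the scale value never leaves video memory. The big caveat is that vertex texture fetch needs shader model 3.0 hardware, and in practice a floating-point format like D3DFMT_R32F.

#include <d3d9.h>
#include <d3dx9.h>
#include <string.h>

// HLSL source for the scaling vertex shader, embedded as a string
const char* g_ScaleVS =
    "float4x4 WorldViewProj;                                          \n"
    "float2   CentreUV;   // where this object sits in the depth map  \n"
    "float2   CentrePos;  // object centre, in the same space as pos  \n"
    "sampler2D DepthTex : register(s0); // vertex texture sampler 0   \n"
    "void main(float4 pos : POSITION, float2 uv : TEXCOORD0,          \n"
    "          out float4 oPos : POSITION, out float2 oUV : TEXCOORD0)\n"
    "{                                                                \n"
    "    // vertex shaders must use tex2Dlod (explicit LOD, no mips)  \n"
    "    float depth = tex2Dlod(DepthTex, float4(CentreUV, 0, 0)).r;  \n"
    "    // hypothetical rule: scale the quad about its centre by the \n"
    "    // depth value; a depth of 0 collapses it to nothing         \n"
    "    pos.xy = CentrePos + (pos.xy - CentrePos) * depth;           \n"
    "    oPos = mul(pos, WorldViewProj);                              \n"
    "    oUV  = uv;                                                   \n"
    "}                                                                \n";

IDirect3DVertexShader9* CreateScaleShader(IDirect3DDevice9* pDevice)
{
    LPD3DXBUFFER pCode = NULL;
    LPD3DXBUFFER pErrors = NULL;
    IDirect3DVertexShader9* pVS = NULL;
    if (SUCCEEDED(D3DXCompileShader(g_ScaleVS, (UINT)strlen(g_ScaleVS),
                                    NULL, NULL, "main", "vs_3_0",
                                    0, &pCode, &pErrors, NULL)))
    {
        pDevice->CreateVertexShader(
            (const DWORD*)pCode->GetBufferPointer(), &pVS);
        pCode->Release();
    }
    if (pErrors) pErrors->Release(); // compile errors end up in here
    return pVS; // NULL on failure
}

At draw time the depth texture gets bound to a vertex sampler (D3DVERTEXTEXTURESAMPLER0, not the usual pixel sampler stages), and the per-object constants go in via SetVertexShaderConstantF or the D3DX constant table. One extra wrinkle: if, like a lot of 2D directx9 code, the engine currently feeds pretransformed (XYZRHW) vertices, those would have to become ordinary POSITION vertices with an orthographic projection matrix before a vertex shader can touch them.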
Bah.