I have a theory; help me out if you know about this stuff. Take the following image from the Visual Studio concurrency profiler for the GSB2 pre-draw code…
8156, 6792 and 8404 are my additional worker threads, which I spawn to help me process stuff.
What I do is basically build up a queue of tasks. The worker threads are always running, checking for what's next available for them to process. Meanwhile the main thread also does the same thing, ensuring it is not idle while the other threads are busy. Critical sections surround access to the queue to ensure there are no nasty bugs.
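For context, the setup is roughly this shape. This is a minimal sketch, not the actual GSB2 code: I'm using std::mutex as a stand-in for the Win32 critical sections, and Task, TryPopTask, WorkerLoop and RunFrame are all made-up names.

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Task = std::function<void()>;

std::queue<Task>  g_tasks;
std::mutex        g_tasksLock;       // the "critical section" around the queue
std::atomic<bool> g_frameDone{false};

// Try to pop one task; returns false if the queue is empty right now.
bool TryPopTask(Task& out)
{
    std::lock_guard<std::mutex> lock(g_tasksLock);
    if (g_tasks.empty())
        return false;
    out = std::move(g_tasks.front());
    g_tasks.pop();
    return true;
}

// Each worker thread just spins, grabbing whatever is next available.
void WorkerLoop()
{
    while (!g_frameDone)
    {
        Task task;
        if (TryPopTask(task))
            task();                    // do actual work
        else
            std::this_thread::yield(); // nothing available, try again
    }
}

// The main thread fills the queue, then joins in as a worker itself,
// so it isn't sat idle while the other threads are busy.
void RunFrame(std::vector<Task> frameTasks)
{
    {
        std::lock_guard<std::mutex> lock(g_tasksLock);
        for (auto& t : frameTasks)
            g_tasks.push(std::move(t));
    }
    Task task;
    while (TryPopTask(task))
        task();                        // main thread processes tasks too
}
```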
I think my problem is illustrated by the red section with the black line connecting above it: that is a thread sat there doing nothing. Here is what I think happens…
- The main thread builds up the queue of stuff to do.
- 6792 jumps in and grabs a task to do
- 8404 jumps in and grabs a task to do
- The main thread then thinks ‘right then, I’ll do this next task’
- 8156 wants to jump in now and also grab a task, but the main thread is busy doing actual work. In fact, it seems to ‘miss’ its opportunity to grab a task for ages, even though the other threads do fine getting task after task (a sketch of the kind of loop that could produce this follows below).
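For what it's worth, here is one hypothetical worker loop that would produce exactly this pattern, assuming the workers back off with a sleep whenever they find the queue empty. The mechanism is speculative on my part, and it reuses the made-up TryPopTask helper from the sketch above.

```cpp
#include <chrono>
#include <thread>

void SleepyWorkerLoop()
{
    while (!g_frameDone)
    {
        Task task;
        if (TryPopTask(task))
        {
            task();
        }
        else
        {
            // The dangerous bit: back off when the queue *looks* empty.
            // If the main thread pushes tasks a microsecond after this
            // check, this worker is parked for the full sleep interval,
            // and if the tasks are short, the other threads will have
            // grabbed task after task by the time it wakes up again.
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
}
```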
Is this just a problem with my code design, because the allocation of tasks is done by a thread that is not otherwise idle? It seems horribly wasteful to have a whole thread act as a 99%-idle ‘task allocator’. I thought CPUs were clever enough to allow one core to interrupt another in these instances?
I know I could queue up the tasks ahead of time, but each task takes a variable amount of time, and that varies each frame too. I *could* work off the last known task timings and write a clever allocator that tried to assign things in the best order, but that seems possibly like overkill, and something the CPU surely handles anyway? Or am I totally misreading this data? I checked a few frames; they all seem to have the same pattern.
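For the record, the simplest version of that ‘clever allocator’ wouldn't need to be clever at all: just sort the frame's tasks longest-first by how long they took last frame, so the big ones start early and the short ones fill the gaps at the end. Another hypothetical sketch, reusing the made-up names from the first snippet; TimedTask and lastFrameMs are invented too.

```cpp
#include <algorithm>
#include <mutex>
#include <vector>

struct TimedTask
{
    Task   work;
    double lastFrameMs; // measured duration from the previous frame
};

void PushTasksLongestFirst(std::vector<TimedTask>& tasks)
{
    // Longest-processing-time-first: big tasks go in the queue first,
    // so no thread ends up starting a huge task right at the frame's end.
    std::sort(tasks.begin(), tasks.end(),
              [](const TimedTask& a, const TimedTask& b)
              { return a.lastFrameMs > b.lastFrameMs; });

    std::lock_guard<std::mutex> lock(g_tasksLock);
    for (auto& t : tasks)
        g_tasks.push(std::move(t.work));
}
```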