Specifically, I’m wondering what `network_send` includes, as I’m seeing a big spike in that when stress-testing one of my projects (in single-player). I’m also seeing around 30% `lua_gc` (I assume garbage collection) while paused; is that reasonable?
Specifically, I’m wondering what `network_send` includes
`network_send` includes the encoding of objects to send over the network. In single player the transfer itself is just shared memory, so it takes negligible time and the vast majority of that stat is the encoding. So if you have huge nested tables which are remoted (i.e. they are in an `_sv` variable of a remoted object and don’t start with an underscore), that’ll increase it.
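To illustrate the remoting rule described above, here is a rough sketch against the Stonehearth component API (the component and field names are hypothetical, and the underscore behavior is as described in this thread, not verified against the engine source):

```lua
local MyComponent = class()

function MyComponent:initialize()
   -- Keys in _sv that don't start with an underscore are remoted to
   -- clients, so they get re-encoded whenever mark_changed() fires.
   self._sv.visible_state = { count = 0 }  -- remoted
   self._sv._server_only = {}              -- underscore-prefixed: not remoted
   self._scratch = {}                      -- plain member: not in _sv at all
end

function MyComponent:increment()
   self._sv.visible_state.count = self._sv.visible_state.count + 1
   self.__saved_variables:mark_changed()   -- queues a re-encode of _sv
end

return MyComponent
```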
30% GC while paused is not normal. Something may be creating and destroying objects rapidly.
I think the GC increased a bit when we fixed some of the hitches recently. Apparently some tasks were piling up at the same ticks, so the game ended up hitching and slowing down. Now it doesn’t (at least, not around combat scenarios).
Can you elaborate on this a little more? (Or is there somewhere it’s already been written about?)
I suspected that frequent adjustments I was making to some `_sv` tables could be contributing to my problem, but I refactored so that there’s no regular client tracing of those saved variables, and I also minimized the frequently modified tables being stored in and requested from other server services and components. I do still have a couple very large tables that I’m storing, but they’re mostly just large because they contain entities which contain a lot of information, and I would think that would ultimately just act like a reference.
Does `mark_changed()` immediately re-encode the saved variables even if there aren’t any explicit traces set up on them? If so, is there a way to only do that immediately before the game is saved? Or is this a fundamental design that I should really be coding to work with in a particular way?
(Or is there somewhere it’s already been written about?)
Not that I know of, but @Relyss might know if it is.
I do still have a couple very large tables that I’m storing, but they’re mostly just large because they contain entities which contain a lot of information, and I would think that would ultimately just act like a reference.
Yes, entity references are not super expensive, but they’re still more expensive than basic data types (you have to look up their ID in the store and encode it as a URL). It may be a little cheaper to store entity IDs and look them up on demand, but it shouldn’t be a huge difference. The worst is nested tables with cross-references. One thing to keep in mind is that if you have a reference to a regular Lua table, even if that table hasn’t changed but the table referencing it has, the system has to re-encode the whole thing from scratch.
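The ID-versus-reference trade-off might look something like this (a sketch with hypothetical field names; I believe `radiant.entities.get_entity` is the standard lookup helper, but treat that as an assumption):

```lua
-- Option A: store entity references directly in _sv.
-- Each re-encode has to look up the entity's ID and encode it as a URL.
self._sv.members = { entity_a, entity_b }

-- Option B: store plain integer IDs (cheap to encode) and resolve on demand.
self._sv.member_ids = { entity_a:get_id(), entity_b:get_id() }

function MyService:get_member(i)
   -- resolve the ID back to an entity only when actually needed
   return radiant.entities.get_entity(self._sv.member_ids[i])
end
```

As noted above, the difference is likely small either way; the real cost driver is deeply nested, cross-referencing tables that force full re-encodes.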
Does `mark_changed()` immediately re-encode the saved variables even if there aren’t any explicit traces set up on them?
It will re-encode them at the end of the frame, IIRC, so multiple `mark_changed()` calls don’t cause duplicate re-encodings. It’s been a while since I looked at that code, so I don’t remember for sure whether this happens even if there are no traces set, but that seems possible.
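If the end-of-frame batching works as described, repeated calls within a single frame should collapse into one re-encode (a sketch of the implication, not verified against the engine source; `counts` is a hypothetical `_sv` table):

```lua
-- Many mutations within the same frame...
for i = 1, 100 do
   self._sv.counts[i] = (self._sv.counts[i] or 0) + 1
   self.__saved_variables:mark_changed()  -- flags the datastore dirty; no immediate encode
end
-- ...should be coalesced into a single re-encode at the end of the frame,
-- so the cost scales with frames-with-changes, not with call count.
```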
If so, is there a way to only do that immediately before the game is saved?
I can’t recall any standard way of doing that. You’ll probably need to do something custom, like remoting a minimal amount of data and recreating the full form from it on both the client and server.
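One way the “minimal remoted data” idea could look (a sketch with hypothetical names; the compact edge list lives in `_sv` while the expensive expanded form stays out of it):

```lua
function GraphComponent:initialize()
   -- Small, cheap-to-encode form: this is all that gets remoted/saved.
   self._sv.edge_list = { {1, 2}, {2, 3} }
   -- Full adjacency form: rebuilt on demand, never encoded.
   self._graph = nil
end

function GraphComponent:_rebuild_graph()
   self._graph = {}
   for _, edge in ipairs(self._sv.edge_list) do
      local a, b = edge[1], edge[2]
      self._graph[a] = self._graph[a] or {}
      table.insert(self._graph[a], b)
   end
end
```

The same rebuild runs on whichever side (client or server) needs the expanded form, so only the compact representation ever pays the encoding cost.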
So I did something that’s kind of a hack, but it was simpler than refactoring, and it’s a performance increase over the “proper” implementation with one caveat (and a huge performance increase over my previous implementation):
I made a PreSaveCallHandler that reads in a JSON file listing function calls that should be made before a save happens. Then I overrode the `save_game.js` / `_saveGame(...)` function to call my pre-save function instead of directly calling `radiant:client:save_game`. My function makes a series of deferred calls to all listed pre-save functions, calling the radiant `save_game` function at the end of that.
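Loosely, the Lua side of such a dispatcher might look like this (a sketch: the JSON path, call names, and handler name are hypothetical, and the actual deferred-call chaining and the `_saveGame(...)` override described above are omitted):

```lua
local PreSaveCallHandler = class()

-- pre_save_calls.json might look like:
--   { "calls": [ "my_mod:service_a:pre_save", "my_mod:service_b:pre_save" ] }
function PreSaveCallHandler:run_pre_save_functions(session, response)
   local json = radiant.resources.load_json('my_mod/data/pre_save_calls.json')
   for _, call_name in ipairs(json.calls) do
      _radiant.call(call_name)  -- fire each listed pre-save function
   end
   -- the overridden _saveGame(...) invokes radiant:client:save_game afterwards
   response:resolve({})
end

return PreSaveCallHandler
```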
What’s the upside? I was able to get rid of all the `__saved_variables:mark_changed()` calls in my connection-based services, so frequent/large changes to graphs cause virtually no lag. The `__saved_variables:mark_changed()` calls all happen only immediately before saving the game. This allows me to maintain my design of centralized services that deal with the logic of connected graphs in a more organized and efficient way than trying to distribute that logic across the entities.
What’s the downside? Anywhere I implement that, I effectively can’t trace those datastores to listen for changes, because they’ll only trigger right before saving.