Thinking about the Multiplayer Performance Problem

Multiplayer performance issues have been bugging me lately, as I really don’t want to see multiplayer have to restrict any aspects of gameplay solely because of performance. That’s why talk of limiting hearthling count scares me. Anyway, I had an idea:

Randomized distributed processing

The idea is to delegate some of the processing tasks (AI, pathfinding, hydrology, etc.) that the server must do to the client machines.
However, to prevent cheating, the server would randomly select two clients for each task, so the results can be verified against each other - the two results should match. Having two clients work each task would also make it much more likely for the processing to actually finish.
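A minimal sketch of what that verification scheme could look like (all names here are hypothetical, and the "clients" are just placeholder functions standing in for remote machines): the server picks two distinct clients at random, runs the same task on both, and only accepts the answer if the results agree.

```python
import random

def dispatch_task(task, clients, rng=random):
    """Pick two distinct clients at random and run the task on both.

    Accept the result only if both answers match; otherwise one of the
    clients is cheating (or badly desynced) and the result is rejected.
    """
    worker_a, worker_b = rng.sample(clients, 2)
    result_a = worker_a(task)
    result_b = worker_b(task)
    if result_a == result_b:
        return result_a  # results agree: accept
    raise ValueError("results disagree; possible cheat or desync")

# Toy demo: each "client" is a function doing a stand-in computation.
honest = lambda task: task["from"] + task["to"]
clients = [honest, honest, honest]
print(dispatch_task({"from": 3, "to": 4}, clients))  # 7
```

Because the pairing is random, a client can't know in advance whether its partner on a given task is honest, which is what makes lying risky.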

Is this initial idea feasible, or am I just crazy-talkin’ ?


Essentially my goal is to merge distributed processing, similar to TensorFlow, with the kind of verification you’d see in the Tor network.


This reminds me of the way Bitcoin mining is set up to handle the logistics of allocating resources to different GPUs. I had the same idea, but I really don’t know how viable a solution it is for this kind of application, which requires things to be done with low latency.

Glad someone else brought it up as well, though; I'd like to see what the dev response is to something like this.

Generally, server-client systems use an authoritative server. Tasks like AI should always be processed locally; for whatever you want to do, you just send commands to the server, basically asking permission.
That way all the processing could be done on the clients, with the server just deciding, and it shouldn’t be an issue to do this for a decent number of clients.
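A minimal sketch of that authoritative-server pattern (names and data layout here are made up for illustration): the client computes whatever it wants locally, but only a command the server has validated against its own state actually changes the game.

```python
def server_handle(command, state):
    """Server-side handler: validate the command against the server's
    own authoritative state before applying it."""
    if command["type"] == "move" and command["to"] in state["walkable"]:
        state["positions"][command["unit"]] = command["to"]
        return True   # permission granted, state updated
    return False      # rejected; the client must roll back its prediction

state = {"walkable": {(0, 0), (0, 1)}, "positions": {"h1": (0, 0)}}
print(server_handle({"type": "move", "unit": "h1", "to": (0, 1)}, state))  # True
print(server_handle({"type": "move", "unit": "h1", "to": (9, 9)}, state))  # False
```

The point is that a cheating client can ask for anything, but the server only ever applies commands that pass its own checks.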

With a good approach, I don’t see any problems. However, the performance cost of increasing hearthling counts and other such things is a CPU issue, and I don’t see a solution for that yet.

Yes, that’s why I figure a distributed system would need randomization: clients can’t be sure they’d be able to cheat even if they wanted to, since they don’t know in advance which tasks they’ll be processing.


The idea of sending jobs to a job pool that could be processed by any client isn’t crazy - the clients have access to a lot of the data they would need for something like pathfinding. State synchronization might be an issue, since the clients won’t always have the most recent data - so a client could return a bad path. That’s not a huge problem, though, since the server can just verify whether a path is valid.
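That last point is the key asymmetry: verifying a path is much cheaper than searching for one. A rough sketch of what server-side validation could look like, on a hypothetical 2D grid with made-up names:

```python
def is_valid_path(path, walkable, start, goal):
    """Accept a client-computed path only if it starts and ends in the
    right places and every step moves one tile at a time through
    walkable cells. This is a linear scan, far cheaper than pathfinding."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    if any(cell not in walkable for cell in path):
        return False
    for a, b in zip(path, path[1:]):
        # each step must be between adjacent tiles (Manhattan distance 1)
        if abs(a[0] - b[0]) + abs(a[1] - b[1]) != 1:
            return False
    return True

walkable = {(0, 0), (1, 0), (2, 0), (2, 1)}
path = [(0, 0), (1, 0), (2, 0), (2, 1)]
print(is_valid_path(path, walkable, (0, 0), (2, 1)))  # True
```

A path computed against slightly stale state would simply fail this check and get recomputed, rather than corrupting anything.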

That said, I want to see where we can get with performance optimization first, since coordinating processing jobs across N nodes would be a little complicated, which means it could be harder to debug and maintain.

Also, a note on the randomization/cheating stuff: the jobs would be something like ‘find a path from (0,0,0) to (2,4,5)’, and the way the AI works, it asks for a lot of paths - so cheating with that kind of information would be hard. One thing you could do is report that things are unpathable when the start coordinates don’t match any of your hearthlings. It wouldn’t work that well, though, since the AI also checks paths from where your hearthlings will be in the future.