Words and Pictures by 5p0rt5BEArD; Architecture by Farms
Playerchain is a game client architecture that incorporates network code for forming peer-to-peer multiplayer sessions coordinated by consensus rather than by servers. Session consensus is built on immutable histories maintained by each player. These personal blockchains connect in a DAG structure and live on from session to session, enabling player-centric, persistent and interconnected game worlds.
The core elements that make up a playerchain client.
Game code (the green blocks in the diagram) follows a pattern familiar from existing peer-to-peer multiplayer games: deterministic simulations running on each client, with player inputs shared across the network.
Agreeing on the inputs is the job of consensus (the blue blocks in the diagram). As long as all players agree on the order and timing of each other's inputs, the game is replicated perfectly for everyone.
The consensus implementation is game specific, as long as the core elements are present: single-player blockchains joining together to form a blocklace that yields totally ordered inputs. This article focuses on a fast-as-possible-finality approach, but other consensus approaches are possible, each with its own tradeoffs.
Our Playerchain Demo [8] demands fast consensus. Play and view source on GitHub.
Game genres like the space shooter require fast-as-possible finality because players must react quickly to each other's moves, and corrections to those moves have a large effect on each player's view of the world. We chose a space shooter for our demo application so we could show a playerchain working with the most demanding game type.
Permissioned Group Formation
Anyone can create a playerchain but access to the playerchain is controlled by the group.
Before a playerchain session begins, peers must find each other and share a key that keeps their group private. Fast consensus relies on a known, fixed group rather than the time- or money-based security that slows down consensus on a permissionless chain.
Single Player Blockchains
Each input is included in its own block structure that references the previous block by its hash, thus forming an immutable blockchain of this player’s inputs. For our fast finality approach, a block is created every tick, so blocks must also indicate if no input was made. This allows other players to know the difference between waiting for an input and there being no input to wait for.
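As a minimal sketch (in TypeScript, assuming SHA-256 hashing; the field names are illustrative rather than the demo's actual schema), a player block might look like this:

```typescript
import { createHash } from "node:crypto";

// One block per tick, chained to the previous block by its hash.
interface PlayerBlock {
  player: string;        // producing player's public key
  tick: number;          // the game tick this block was produced for
  input: string | null;  // null encodes an explicit "no input this tick"
  prev: string | null;   // hash of this player's previous block (null at genesis)
}

function hashBlock(b: PlayerBlock): string {
  return createHash("sha256").update(JSON.stringify(b)).digest("hex");
}

// A block is created every tick, even without an input, so remote players
// can tell "still waiting for an input" apart from "no input was made".
function nextBlock(prev: PlayerBlock | null, player: string,
                   tick: number, input: string | null): PlayerBlock {
  return { player, tick, input, prev: prev && hashBlock(prev) };
}
```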
Forming the Blocklace
The DAG structure formed in a playerchain is a blocklace [1]: a type of CRDT that is suitable for applying multiple types of consensus. The Cordial Miners [2] protocol presents various algorithms for consensus on a blocklace.
Each player shares their own blockchain by sending signed messages containing their latest block to all the peers in the current playerchain group.
Players combine their own chain with those shared by remote players.
As players share their own blockchains they also share references to each other's blocks. Following the dissemination algorithm laid out in the Cordial Miners [2] paper, a blocklace structure is formed. Blocks are grouped in rounds, where a new block cannot be started until blocks from two-thirds of peers have been received for the current round.
Game-specific blocklace formation matches DAG rounds to game ticks.
For a fast-as-possible-finality playerchain game, each round can be considered to match a simulation tick of the game. Therefore each round contains either the input or an explicit “no input” for each player at a specific game frame.
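As a hedged sketch of that rule (the names are illustrative), a peer only starts its block for the next tick once a two-thirds quorum of the current tick's blocks has been seen:

```typescript
// Two-thirds quorum over the fixed group agreed at formation.
function quorum(groupSize: number): number {
  return Math.ceil((2 * groupSize) / 3);
}

// `seen` maps a tick to the set of players whose block for that tick has arrived.
function canStartNextTick(tick: number,
                          seen: Map<number, Set<string>>,
                          groupSize: number): boolean {
  return (seen.get(tick)?.size ?? 0) >= quorum(groupSize);
}
```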
Input Ordering
Playerchains can pre-determine total order to speed things up.
By matching blocklace rounds to ticks, clients can add a tick number to their blocks. Combined with a deterministic player order, each block effectively has a predetermined total order. This is not typical in distributed consensus, where further inputs are needed before a block's position in the total order can be algorithmically finalised. For the blocklace this could be done with the Tau ordering algorithm from Cordial Miners [2], which is optimised to reduce the rounds required for finality but will always add time compared to pre-ordering.
Pre-ordering assumes the typical case of honest clients and allows a game to show inputs as soon as they are received, rather than waiting for further inputs to algorithmically confirm the order. Dishonest attacks by equivocating (telling two peers two different inputs) or declaring an input for a much older tick will result in re-simulations, but the game will remain consistent.
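A minimal sketch of the pre-determined order (assuming each client knows the player index assigned at group formation): sort by tick, then by player index, and every client derives the same sequence with no extra consensus rounds.

```typescript
interface OrderedBlock { tick: number; playerIndex: number; }

// Order by tick first, then by the deterministic player index.
function compareBlocks(a: OrderedBlock, b: OrderedBlock): number {
  return a.tick - b.tick || a.playerIndex - b.playerIndex;
}

// e.g. blocks.sort(compareBlocks) yields the same total order everywhere.
```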
Prediction and Rollback
There are established techniques [3][4][5][6], like prediction and rollback, for ensuring a good playing experience in input-based multiplayer games. Predicting remote players' inputs before they arrive allows games to continue without waiting out the network latency. The simulation runs on, mixing the local player's actual inputs with the predicted ones for remote players. When remote player inputs arrive that differ from the predictions, the simulation is rolled back to a state before those inputs and replayed with the correct inputs. The time into the past for which predicted inputs are allowed to be corrected is the “rollback window”.
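A sketch of the rollback step, assuming a deterministic step function and a snapshot of state taken every tick (all names here are illustrative):

```typescript
type Input = string | null;

interface Sim<State> {
  step(state: State, inputs: Input[]): State; // one deterministic tick
  clone(state: State): State;                 // copy a state for snapshots
}

// Rewind to the state before the earliest mispredicted input, then
// re-simulate up to the current tick with the corrected inputs.
function rollbackAndReplay<State>(
  sim: Sim<State>,
  snapshots: State[],  // state at the start of each tick, indexed by tick
  inputs: Input[][],   // per-tick inputs, with corrections applied
  fromTick: number,    // earliest tick whose prediction was wrong
  toTick: number,      // the current tick
): State {
  let state = sim.clone(snapshots[fromTick]);
  for (let t = fromTick; t < toTick; t++) {
    state = sim.step(state, inputs[t]);
    snapshots[t + 1] = sim.clone(state);
  }
  return state;
}
```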
The netcode article on Fightin’ Words [4] is an excellent walkthrough of prediction and rollback as used in a beat-em-up.
Re-simulations after a rollback happen in a single render frame. If the new inputs do not affect the local player, they may not notice. If the new inputs cause state changes that do affect the local player, they may see game entities suddenly move, disappear or appear.
These side effects are more onerous in games where:
Players are likely to be interacting with each other;
Predictions are hard to make;
Small input changes have a dramatic effect on future state.
(All of these are characteristics of a space shooter).
Re-simulations can be reduced by limiting the rollback window. But if the end of the rollback window is reached and not all inputs have arrived, a game must either pause, or continue and ignore any late-arriving inputs.
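One plausible way to express that choice, borrowing the one-third threshold from the tuning discussion below (the exact policy is game specific; this is only a sketch):

```typescript
// At the edge of the rollback window: pause if so many peers are behind
// that a two-thirds round cannot form; otherwise continue and drop any
// inputs that arrive too late.
function atWindowEdge(peersLagging: number,
                      groupSize: number): "pause" | "drop-late" {
  return peersLagging > groupSize / 3 ? "pause" : "drop-late";
}
```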
Prediction and Rollback in a Playerchain
Prediction and rollback may be handled by a relay in a server-supported network, but in a playerchain they must be handled by consensus. The size of the rollback window, the decision to pause, and which inputs to drop must all be decided predictably. Different algorithms could be used for different games. Below we step through the algorithm for the fast-as-possible-finality consensus developed for the space shooter game:
Key for the round diagrams below
Round 0. Group Formation
All clients are deterministically assigned a player index on group formation and can therefore predict the total order of each of their future blocks.
Round 1. Wait for consensus
We, the local player, have made an input but not received any others. No further blocks can be produced until at least one other player input is received. (See Tuning the Rollback Window below for how the likelihood of this pause is reduced.)
Round 2. Proceed with prediction
We have received the round 1 input for “remote 1”. This achieves the two-thirds majority needed to proceed and process our round 2 block. Note how any unreceived blocks must be predicted (dotted lines) and in this case are predicted to be “no input”.
Round 3. Rollback
We have received the round 2 input for “remote 1” and it is an input we didn’t predict. The game is rewound and re-simulated from round 2 before our next rendered frame.
Round 4. Finality
Locally we have progressed to round 4 because we have two-thirds of the blocks for round 3. Now round 1 has been finalised because we have enough round 1 inputs acknowledged by our last round. Any future blocks received for round 1 will be ignored.
Round 5. Dropped Inputs
We have finally received some inputs from remote 2. The inputs for round 1 and 2 are ignored because they missed the rollback window. This will be the case on all clients including the remote 2 client itself, which will drop its own inputs and re-simulate to remain in sync.
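A sketch of the finality and drop rules from rounds 4 and 5 (the demo's actual bookkeeping may differ):

```typescript
function quorum(groupSize: number): number {
  return Math.ceil((2 * groupSize) / 3);
}

// Round 4: a round is finalised once enough of its inputs have been
// acknowledged by our latest round.
function isFinal(acksForRound: number, groupSize: number): boolean {
  return acksForRound >= quorum(groupSize);
}

// Round 5: blocks arriving for an already-finalised round are ignored on
// every client, including the sender's own, which re-simulates without
// its dropped inputs to stay in sync.
function acceptBlock(blockRound: number, finalisedUpTo: number): boolean {
  return blockRound > finalisedUpTo;
}
```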
Each peer tries to send a new block at the target game tick rate. The effect of following the steps above is that, as long as all peers receive all remote inputs within two ticks of each other, the game runs at the target frame rate for everyone. That is a tight constraint on the network, so some game-specific tuning of a few elements is needed, in particular choosing the best-sized rollback window.
Tuning the Rollback Window
Balancing the rollback window is a crucial part of tuning the user experience with prediction and rollback. If the rollback window is too short and less than a third of players are lagging behind, the laggers will get their inputs dropped and have a bad experience. If more than a third are lagging behind, everyone else's game will pause while they wait for at least two-thirds to catch up. If the window is too large, remote inputs that differ from predictions have more chance of causing a jerky view for everyone. Balancing the window must be done with many other factors in mind and is not unique to a playerchain. However, increasing the rollback window in the blocklace is a new problem, which we solve with interlaced sets of rounds:
Introducing sets to blocklace formation to increase the rollback window.
The blocklace dissemination algorithm requires rounds to progress evenly by requiring each peer to wait for two-thirds of the previous round to be seen before making a new block. This equates to a small, two-frame rollback window.
The rollback window can be increased by letting each local client run more rounds ahead, but the algorithm requires blocks to wait on references from the previous round. We enable processing further ahead by introducing interlaced sets of rounds: while one set is waiting, the next set can continue, until we run out of sets (see the diagram above).
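One plausible reading of the diagram, sketched in code: with k interlaced sets, round r belongs to set r mod k and only waits on round r - k, the previous round of its own set (with k = 1 this reduces to the plain dissemination rule):

```typescript
function quorum(groupSize: number): number {
  return Math.ceil((2 * groupSize) / 3);
}

// A block for `round` waits on two-thirds of the previous round of its own
// interlaced set; the other sets keep progressing in the meantime, widening
// the rollback window from two frames to roughly 2 * k frames.
function canProduce(round: number,
                    k: number,                            // number of sets
                    seenCount: (round: number) => number, // blocks seen per round
                    groupSize: number): boolean {
  const waitOn = round - k; // previous round of the same set
  return waitOn < 0 || seenCount(waitOn) >= quorum(groupSize);
}
```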
Ticks Faster than Latency
The blocklace formation, using Cordial Miners dissemination, does not require waiting for message acknowledgements, so peers can send blocks unbounded by latency and games can run with tick intervals shorter than the network round trip. Rather than waiting for an acknowledgement of the last message, each new input is combined with references to the latest known inputs of other peers, and those references act as acknowledgements.
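A sketch of how those references double as acknowledgements (field names are illustrative):

```typescript
// Each new block carries, alongside its own chain link, a reference to the
// newest block seen from every other peer; receiving a block that
// references yours acts as its acknowledgement.
interface BlockRefs {
  prev: string | null;            // hash of our own previous block
  latest: Record<string, string>; // peerId -> hash of their newest known block
}

function buildRefs(ownPrev: string | null,
                   heads: Map<string, string>): BlockRefs {
  return { prev: ownPrev, latest: Object.fromEntries(heads) };
}
```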
Matching message rounds with game ticks and targeting a fixed frame rate allows any tick rate to be targeted, including 30 or even 60 ticks per second. At these rates, poor network conditions cause the rollback window to be breached frequently, and therefore a poor user experience as some player inputs are dropped or the game is paused. So for the playerchain demo game, which is designed to run in ad-hoc network environments, we set a tick rate of 10 ticks per second (note that rendering will typically be 60 fps).
The Global Blocklace
The Playerchain Demo project focuses on demonstrating fast-as-possible consensus for a responsive multiplayer session. The Tashi consensus network [9] implements a similar architecture (DAG consensus over realtime game inputs) as an ephemeral side-chain. As a side-chain, it can connect verifiable sessions to public blockchains, which is also possible for a playerchain session. The vision for playerchains is that they can also persist and connect in their own right, which is a reason for choosing the blocklace structure.
In the playerchain demo [8], each session creates a new key pair for each player that is discarded at the end, along with any record of the blockchain. By introducing a player identity that persists between games, the blockchains of each player and their last session interactions can also be persisted. As players join new groups and play new games, a global blocklace is formed with each player having a partial view that tracks their progress.
The coloured patches represent playerchains: partial views of a global blocklace that groups have agreed to progress together while playing a game.
Practically, a global blocklace of all player histories across all games runs into storage issues and time limits for restoring state from an ever-growing history. These issues are well tackled in the world of blockchain, with techniques like provable state checkpoints and content-addressable storage, but there is work to do to make the global blocklace a practical reality.
With a global blocklace, trust is built up between players and groups as they share more interactions. An example would be two playerchain worlds from the same game agreeing to trade resources because they can verify each other's partial view of the blocklace. Unlike the public blockchain version of state, no monolithic public view exists. The global state emerges from the many partial, player-centric, views.
Why use Playerchain?
The architecture presented allows multiplayer games to run without third party infrastructure. That could be useful from a cost or ease of maintenance point of view. However, the main reason Playmint have developed playerchains is for making decentralised games that are for playing, not speculating. You can read more about our motivations here [7].
References
[3] Photon Engine’s Quantum; Exit Games
[4] Netcode: Explaining how fighting games use delay-based and rollback netcode; Infil, Fightin’ Words
[5] 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond; Paul Bettner, Mark Terrano
[6] GGPO: Rollback Network SDK; pond3r
[7] Why We Need Playerchains; 5p0rt5BEArD