Why the REST don’t use WebSockets

Those of you who know and love GameSparks will be aware of our distinctive communication model – we opted for WebSockets rather than a more traditional RESTful HTTP approach. This was a bold decision that required significant engineering effort, but it has given GameSparks a position of leadership – and real benefits for our customers, as this post describes.

RESTful Services for Games

If you are integrating with a BaaS provider who uses RESTful HTTP (and if you are not using GameSparks, you probably are), the following sequence of events occurs for each API call you make.

  1. Client creates a new TCP connection to the server
  2. Client and server perform the SSL handshake
  3. Client sends the request data, along with any headers
  4. Server sends response data to the client, along with any headers
  5. Client closes the connection

Each API call you make does exactly the same thing in exactly the same order.
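
To make the pattern concrete, here is a minimal TypeScript sketch of a typical REST-based call from the client side. The endpoint, event name and auth header are hypothetical placeholders rather than GameSparks code; the point is simply that every call pays for the full cycle.

    // Hypothetical REST call: each invocation repeats steps 1-5 above.
    async function logEvent(eventKey: string, requestId: string): Promise<unknown> {
      // Steps 1-3: the HTTP client opens a TCP connection, performs the SSL
      // handshake, then sends the request line, headers and JSON body.
      const response = await fetch("https://api.example.com/rs/LogEventRequest", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Authorization": "00000000-0000-0000-0000-000000000000", // placeholder GUID
        },
        body: JSON.stringify({ "@class": ".LogEventRequest", eventKey, requestId }),
      });

      // Step 4: the server sends back its status line, headers and JSON body.
      const data = await response.json();

      // Step 5: with "Connection: close" semantics the socket is torn down here,
      // so the next call pays the connection and handshake cost all over again.
      return data;
    }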

It’s probably fairly obvious that this is not an optimal approach, especially in a scenario where low latency is required.

  • Opening a TCP connection is not free. Each time you do this it uses resources on both the client and the server. For a chatty API this overhead is very real. The SSL handshake on its own requires at least three additional messages between client and server before any application data can be sent.
  • HTTP requests generally have some header data sent with the payload. We have seen services where the header data can be up to 10x the size of the actual data you are transmitting, and this is obviously going to have an impact on performance.
  • There is no way for the server to initiate a send to the client; the workaround is for the client to continually poll the server to see if there is anything new (steps 1-5, ALL THE TIME, even when nothing is new). A rough sketch of this polling loop follows the list.
  • As each HTTP request follows this strict sequence, a single request can never be sending and receiving at the same time.
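
Here is what that polling workaround tends to look like in practice, again with a hypothetical endpoint and message handler; most iterations do the full round trip just to learn that nothing has changed.

    // Hypothetical polling loop: the client repeats the full request cycle
    // on a timer, even when the server has nothing new to deliver.
    async function pollForMessages(sessionId: string): Promise<void> {
      while (true) {
        const response = await fetch(
          `https://api.example.com/messages?session=${sessionId}` // hypothetical endpoint
        );
        const messages: unknown[] = await response.json();
        messages.forEach(handleMessage);

        // Each iteration pays for a new connection, an SSL handshake and a full
        // set of HTTP headers, whether or not anything came back.
        await new Promise((resolve) => setTimeout(resolve, 2000)); // poll every 2 seconds
      }
    }

    // Assumed to exist elsewhere in the client.
    declare function handleMessage(message: unknown): void;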

HTTP Keep Alive

This is an optimization that can (and should) be applied when using RESTful HTTP. HTTP persistent connections allow the client to open a connection and send multiple requests over it. A short sketch of how a client opts into keep-alive follows the list of limitations below.

There are still limitations, however:

  • the server responses must always come back in the same order that the requests were sent. This is not ideal if you are sending multiple requests that take different amounts of time to process on the server, as a slow earlier response can hold up the delivery of later ones.
  • this technique only reduces the number of TCP connections – the overhead of the HTTP headers remains the same.
  • the connection will typically only stay open for around 15 seconds of idle time, at which point it is dropped. This means it works well from a browser, where requesting multiple resources from the same server in quick succession can be sped up. From an API perspective, however, where the client session will be active for an extended period, it’s generally not that useful.
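
For completeness, here is a small TypeScript sketch of opting into keep-alive from a Node.js client, with a hypothetical endpoint; the comments mirror the limitations above.

    import https from "node:https";

    // Opting into HTTP keep-alive from a Node.js client (hypothetical endpoint).
    // The agent reuses TCP connections between requests, but every request still
    // carries its full headers, responses come back in request order, and the
    // server will usually drop an idle connection after a short timeout anyway.
    const agent = new https.Agent({ keepAlive: true });

    https.get({ host: "api.example.com", path: "/status", agent }, (res) => {
      res.resume(); // drain the response so the socket can be reused
    });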

WebSockets – Low Latency and Full Duplex with a Single Connection

WebSockets solve pretty much all of the problems faced by RESTful HTTP:

  • each game session establishes a single TCP connection to the server. Once established, the connection is kept open. Think one TCP connection per session rather than hundreds (or thousands) with REST.
  • the initial connection does have some additional headers that are required to connect, but this is sent once per connection, rather than once per request. This drastically cuts down the amount of chat and wasted bandwidth.
  • we can send requests and receive responses with minimal overhead. Each payload sent over the socket is framed with as little as 2 bytes. It’s impossible to achieve such a small overhead with HTTP.
  • the connection is full duplex, so the server can be sending and receiving over the same connection at the same time.
  • as the server is also aware of this connection, it allows us to send data without the client having first made a request (GameSparks developers will know these as Messages).
  • with our own protocol built on top of WebSockets, GameSparks is able to perform bi-directional, asynchronous communication between client and server without having to worry about message ordering. This single connection gives us the ability to execute multiple concurrent tasks on the server without worrying about sequencing them to prevent locks. A minimal sketch of this model follows the list.
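
The sketch below uses the plain browser WebSocket API with a hypothetical URL (it is not the GameSparks SDK): one socket per session, responses matched by requestId rather than arrival order, and server-initiated messages handled on the same connection.

    // One WebSocket per game session (hypothetical URL, not the GameSparks SDK).
    const socket = new WebSocket("wss://api.example.com/ws");

    // Requests in flight, keyed by requestId so responses can arrive in any order.
    const pending = new Map<string, (response: unknown) => void>();

    socket.onmessage = (event) => {
      const data = JSON.parse(event.data as string);

      if (data.requestId && pending.has(data.requestId)) {
        // A response to one of our requests; ordering doesn't matter because we
        // correlate on requestId rather than on arrival sequence.
        pending.get(data.requestId)!(data);
        pending.delete(data.requestId);
      } else {
        // A server-initiated message, pushed without the client asking first.
        handleServerMessage(data);
      }
    };

    // Send a request over the existing socket; resolves when its response arrives.
    function sendRequest(request: Record<string, unknown>): Promise<unknown> {
      const requestId = crypto.randomUUID();
      return new Promise((resolve) => {
        pending.set(requestId, resolve);
        socket.send(JSON.stringify({ ...request, requestId }));
      });
    }

    // Assumed to exist elsewhere in the client.
    declare function handleServerMessage(message: unknown): void;

    // Usage once the socket is open, e.g.:
    // await sendRequest({ "@class": ".LogEventRequest", eventKey: "EVT1" });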

A Payload Comparison

To highlight the payload differences, we’re going to make a direct comparison of the two approaches. Rather than picking a real service, we’ll imagine the REST endpoint is a well-tuned GameSparks interface with most of the header overhead removed.

We’re going to send and receive the following JSON with both WebSockets and REST, and look at the difference between the protocols.

request :  {"@class":".LogEventRequest","eventKey":"EVT1", "requestId":"12345767890"}

response :  {"@class":".LogEventResponse","requestId":"12345767890"}

We will disable SSL and compression for the tests to ensure we can read the data and we’ll also assume the client has already authenticated against GameSparks. For REST, we will include an additional header of “X-Authorization” and set a GUID as the value. For WebSockets, the socket will already be authenticated so there is no need to re-transmit authentication information with every request – another performance benefit!

RESTful Request – 270 bytes

RESTful Response – 183 bytes

WebSocket Request – 75 + 6 (header) bytes 

WebSocket Response – 57 + 6 (header) bytes

As you can see, the amount of data sent over the WebSocket is considerably smaller. With a single-byte character set, the REST request is 233% larger (270 bytes vs 81) and the REST response is 190% larger (183 bytes vs 63).
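
As a quick sanity check on those percentages, using only the byte counts quoted above:

    // Byte counts quoted above: payload plus framing for WebSockets,
    // full request/response for REST.
    const wsRequest = 75 + 6;    // 81 bytes
    const restRequest = 270;
    const wsResponse = 57 + 6;   // 63 bytes
    const restResponse = 183;

    const largerBy = (rest: number, ws: number) =>
      Math.round(((rest - ws) / ws) * 100);

    console.log(`REST request is ${largerBy(restRequest, wsRequest)}% larger`);    // 233%
    console.log(`REST response is ${largerBy(restResponse, wsResponse)}% larger`); // 190%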

Does this make a difference? Well, here at GameSparks we really think so. If you are pushing hundreds of messages a minute, which is not uncommon, all of this data has to be processed by your application. We think you’d prefer your CPU cycles to be doing more important stuff, like physics or rendering.

What We’ve Learnt Along the Way

The GameSparks platform offers all the upsides mentioned above, which makes our service pretty compelling for our clients. Creating it, however, required a major engineering effort by the GameSparks team.

WebSockets were an emerging standard when we started building GameSparks. We had to engineer client SDKs and server platforms when support was not readily available. Although this created a significant amount of work, it means our team understands our transport protocols intimately, and we can confidently continue to improve and refine them.

It is easier to build a team to develop RESTful technologies, precisely because you don’t need to find this same level of talent and experience. Every server platform out there has support for HTTP, and you can spin up a server pretty quickly; you can have an HTTP REST server up and running without even thinking about it. But you end up with a significantly less performant platform and a less intimate understanding of its inner workings.

So, why don’t the REST use Sockets?

We suspect in time they will. A RESTful API is obviously a poor fit for making your game perform as well as it should. API servers for games need to be low latency, low overhead and as performant as they can be. As more native support becomes available for WebSockets in popular platforms, it will become easier to implement and the level of engineering required will drop. Higher level protocols for WebSockets are becoming available, such as WAMP, which will lower the barrier to entry even further.

But using GameSparks allows you to get ahead of the game.
