How the RLS works

The Rust Language Server (RLS) is an important part of the plan to provide IDE support for Rust developers. In this blog post I'll try to explain how the RLS works.

The language server architecture for IDEs is a client/server one: it separates the concerns of editing and understanding a program. By providing a language server, we provide Rust knowledge over a 'standardised' protocol (the LSP) to editors, thus allowing many editors to support Rust at the cost (theoretically) of supporting a single editor. The LSP does not mandate any particular method of compilation, nor are you restricted to the protocol (it is extensible, although the more you rely on extensions, the more per-client work there is to do). So an IDE based on the LSP can be just as powerful as a monolithic IDE, but it also lets you get up and running quickly across multiple platforms.

The RLS provides the data required for code completion, goto definition, type/docs on hover, refactoring, etc. See my earlier blog post, 'what the RLS can do', for more details.

The best way to try out the RLS is in Visual Studio Code. You can install the extension by opening the command palette and typing 'ext install rust'.

Language Server Protocol

The Language Server Protocol (LSP) is an open, JSON protocol. It specifies how the language server (e.g., the RLS) and client communicate. There is a spec, if you want more details (it's essential reading for working on the RLS).

The LSP is based on JSON-RPC; in practice that means sending JSON messages over stdio. The JSON part is pretty nice; the stdio bit has pros and cons. It is nice and simple, but one stray println in the server somewhere (or in a dependent crate) and everything blows up. The whole thing is pretty high level, and you never have to worry about missing messages, out-of-order messages, etc.

A useful feature is that you can print to the client's console from the server by writing to stderr.

The protocol messages are either requests or notifications - the former have an id and must get a response; the latter have no id and cannot be responded to. Both kinds of messages can be sent in both directions, but the majority are sent from the client to the server.

The messages in the LSP are pretty high level and leave a lot up to the server. You can expect a notification for any significant event on the client (such as opening or changing a file), and a request for most user actions which require language knowledge (e.g., 'goto definition' triggers a request, and it is the same request as triggered by 'peek definition').

Let's look at an example - the hover request. This is triggered when the user hovers their cursor over a token in the editor. The request looks like (in real life there would be less whitespace):

    {
        "jsonrpc": "2.0",
        "id": 3,
        "method": "textDocument/hover",
        "params": {
            "textDocument": {
                "uri": "file:///Users/nick/version-controlled/chefi/src/"
            },
            "position": {
                "line": 110,
                "character": 25
            }
        }
    }
Since this is a request, there is an id. The method field describes the type of the message; in this case, textDocument/hover indicates that the user is hovering the cursor. The params give the file and position where the user is hovering.

(There is actually a little more boilerplate: each message is preceded by a header giving its size in bytes, e.g., Content-Length: 123, followed by a blank line.)
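This framing can be sketched in a few lines of Rust. This is an illustrative helper (frame_message is not the RLS's actual code), assuming the standard Content-Length framing used by the LSP base protocol:

```rust
// Sketch of LSP base-protocol framing: each JSON-RPC body is preceded
// by a Content-Length header (in bytes) and a blank line.
// `frame_message` is an illustrative name, not part of the RLS.
fn frame_message(body: &str) -> String {
    format!("Content-Length: {}\r\n\r\n{}", body.len(), body)
}

fn main() {
    let body = r#"{"jsonrpc":"2.0","id":3,"method":"textDocument/hover"}"#;
    // The header tells the reader how many bytes of JSON follow.
    println!("{}", frame_message(body));
}
```

The reader on the other end parses the header first, then reads exactly that many bytes of JSON.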

The response in this case is:

    {
        "jsonrpc": "2.0",
        "id": 3,
        "result": {
            "contents": [
                {
                    "language": "rust",
                    "value": "slog::Logger"
                }
            ],
            "range": null
        }
    }
The id matches the request's id. The interesting bit here is the value field, which is what actually gets displayed in the tooltip when the user hovers. The contents field is an array so that we can provide information from multiple sources.

RLS requirements

From the client's perspective, the most fundamental requirement for the RLS is that it must handle LSP messages, and it must do so quickly. For actions like hover or code completion, the user expects very little latency.

In order to fulfill this requirement, the RLS needs to process Rust programs to get knowledge of those programs, then manage that knowledge and make it available with low latency.

Due to the complexity of the Rust language and resource constraints, the RLS uses the Rust compiler to understand Rust programs. 'Processing Rust programs' then comes down to managing builds.

For other functions, the RLS depends on other tools - Rustfmt for formatting requests and Racer for code completion requests. A key factor which makes this interesting is that the project may be edited in the client without being saved to disk, so the RLS must also manage changes to files which have not been written to disk and ensure that the compiler, Rustfmt, and Racer see the current state of the project.
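The core idea of that file management is an in-memory overlay over the disk. The sketch below is illustrative only, assuming a simple map of unsaved buffers; it is not the real vfs crate's API:

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

// Minimal sketch of the virtual-file-system idea: unsaved editor
// changes shadow the on-disk contents of a file.
struct Vfs {
    changed: HashMap<PathBuf, String>, // files edited but not yet saved
}

impl Vfs {
    fn new() -> Vfs {
        Vfs { changed: HashMap::new() }
    }

    // Called on textDocument/didChange: record the client's text.
    fn on_change(&mut self, path: &Path, text: String) {
        self.changed.insert(path.to_path_buf(), text);
    }

    // The compiler, Rustfmt, and Racer read through a method like
    // this, so they see the editor's current state, falling back to
    // disk for unmodified files.
    fn load_file(&self, path: &Path) -> std::io::Result<String> {
        match self.changed.get(path) {
            Some(text) => Ok(text.clone()),
            None => std::fs::read_to_string(path),
        }
    }
}

fn main() {
    let mut vfs = Vfs::new();
    // This path is purely illustrative.
    vfs.on_change(Path::new("src/lib.rs"), "fn hello() {}".to_string());
    assert_eq!(vfs.load_file(Path::new("src/lib.rs")).unwrap(), "fn hello() {}");
}
```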

RLS architecture

The architectural components of the RLS are roughly:

  • server module, which handles the mechanics of being an LSP server (including setup and tear-down messages),
  • actions module, which handles user actions,
  • build module, which manages Cargo and rustc,
  • analysis crate, which manages data about a program, as supplied by the compiler,
  • virtual file system (vfs) crate, which manages the state of the project files.

There are also some supporting crates:

  • languageserver-types, a statically typed model of the LSP in Rust,
  • rls-data, the schema of the data about a program sent from the compiler to the RLS (and other clients, such as the new Rustdoc),
  • rls-span, which represents locations and ranges in the source code (this is surprisingly complicated and can give rise to a lot of subtle and annoying bugs).

To see how the above all fit together, let's go through a user scenario: the user is typing, then moves the cursor over an identifier to find the type. On every keystroke, the editor will send a textDocument/didChange notification to the RLS; since these are notifications, there are no replies. When the user puts the cursor over the identifier, the editor will send a textDocument/hover request, which the RLS will reply to (see above). The messages will be received in order by the RLS via stdin.

The server module listens to stdin, processes a message and its params, and forwards the message to the actions module for handling. This all happens sequentially; the advantage is that we don't have to deal with out-of-order messages. The disadvantage is that an action can block the server indefinitely. Therefore, anything even remotely long-running should be done in its own thread.
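The dispatch policy can be sketched as follows. This is illustrative, not the RLS's actual code; is_long_running and the message list are assumptions for the sake of the example:

```rust
use std::thread;

// Illustrative sketch of the server module's dispatch policy:
// messages are handled strictly in order, but anything that can take
// a while runs on its own thread, so the main loop is immediately
// free for the next message.
fn is_long_running(method: &str) -> bool {
    // Changes trigger builds, which are far too slow to run inline.
    matches!(method, "textDocument/didChange" | "textDocument/didOpen")
}

fn main() {
    // In the real server these arrive one at a time over stdin.
    let messages = ["textDocument/didChange", "textDocument/hover"];
    let mut workers = Vec::new();
    for method in messages {
        if is_long_running(method) {
            workers.push(thread::spawn(|| {
                // record the change in the VFS, request a build, ...
            }));
        } else {
            // quick actions (hover, goto definition) are answered inline
        }
    }
    for worker in workers {
        worker.join().unwrap();
    }
}
```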

The actions module is a collection of action handlers, plus some utility functions. The textDocument/didChange notifications are handled by the DidChange handler. The change is recorded in the VFS (virtual file system), and then the handler requests a build. Any build or long-running VFS action will spawn a new thread, so at this point the RLS is ready to process the next message on the main thread.

If the user is typing fast, then we'll end up with multiple build requests in a very short space of time. It does not make sense to start a build for each of these requests. The build module decides whether to start a build or not, and if it does start a build, it coordinates the building process. As we add support for workspaces, this is getting much more complex, but I'll stick to the single-crate story here.

A goal of the RLS is to finish a build as soon as possible so the user can get up to date information with low latency. However, we don't want to waste CPU time on builds that won't be useful, or burn through too much of the user's battery. Most of the time, information which is a few keystrokes out of date is fine. Therefore, we never do more than one build at a time. We also have no way to cancel a build. So, for most build requests, we wait a short time to see if we get a more up to date request before building. Thus, in this example we will only start a single build, once the user has finished typing.
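The 'wait and see' behaviour amounts to debouncing. Here is a minimal sketch of that idea using a channel timeout; it is not the real build module, and latest_request and the version numbers are illustrative:

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;
use std::time::Duration;

// Debouncing sketch: keep waiting while requests arrive within the
// quiet period; return only the newest one, which supersedes the rest.
fn latest_request(rx: &Receiver<u64>, quiet: Duration) -> Option<u64> {
    let mut pending = None;
    loop {
        match rx.recv_timeout(quiet) {
            Ok(version) => pending = Some(version), // a newer request wins
            Err(_) => return pending, // quiet period elapsed (or channel closed)
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    // Simulate fast typing: five build requests in quick succession.
    thread::spawn(move || {
        for version in 0..5u64 {
            tx.send(version).unwrap();
            thread::sleep(Duration::from_millis(10));
        }
    });
    // Only one build starts, for the most recent request.
    if let Some(version) = latest_request(&rx, Duration::from_millis(100)) {
        println!("building version {}", version);
    }
}
```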

On startup or if the user changes settings, the RLS does a 'cargo' build, otherwise (such as in the user scenario) the RLS does a 'rustc' build. For a 'cargo' build, the RLS runs Cargo in process, doing the equivalent of cargo check. Rather than calling rustc, we force Cargo to call our own compiler shim, which uses the compiler as a library but sets some options which would otherwise be unstable (the shim is actually the RLS with an env var set). For the primary crate, we intercept the call to rustc, record the arguments and env vars, and schedule a 'rustc' build.

A 'rustc' build compiles a crate by using rustc as a library compiled into the RLS. This lets us pass changed files from the VFS to rustc and to return data from rustc to the RLS, both without writing to disk.

For every crate, the RLS collects data about the compiler's analysis of the crate. For dependent crates we write and read this to/from disk (to avoid rebuilding where possible). For the primary crate, the data is passed in memory. The format of the data is specified by the rls-data and rls-span crates. The RLS passes the raw data to the rls-analysis crate which does some processing (e.g., cross-referencing definitions and references) and saves the data in memory (in a collection of hash tables).
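The cross-referenced, in-memory tables can be pictured roughly like this. The types and fields below are an illustrative sketch, not rls-analysis's real data structures (real spans carry file and byte information, for a start):

```rust
use std::collections::HashMap;

// Illustrative sketch of the kind of cross-referenced hash tables
// rls-analysis keeps in memory.
type Span = (u32, u32); // (line, column) - real spans are much richer
type DefId = u32;

struct Analysis {
    def_at: HashMap<Span, DefId>,   // identifier span -> its definition
    def_span: HashMap<DefId, Span>, // definition -> its location
    refs: HashMap<DefId, Vec<Span>>, // definition -> all references to it
}

impl Analysis {
    // With the data cross-referenced up front, 'goto definition' is
    // just a couple of hash-table lookups - hence the low latency.
    fn goto_def(&self, span: Span) -> Option<Span> {
        let id = self.def_at.get(&span)?;
        self.def_span.get(id).copied()
    }
}

fn main() {
    let mut analysis = Analysis {
        def_at: HashMap::new(),
        def_span: HashMap::new(),
        refs: HashMap::new(),
    };
    // A reference at (110, 25) resolves to a definition at (12, 4).
    analysis.def_at.insert((110, 25), 0);
    analysis.def_span.insert(0, (12, 4));
    analysis.refs.insert(0, vec![(110, 25)]);
    assert_eq!(analysis.goto_def((110, 25)), Some((12, 4)));
}
```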

When the RLS receives the textDocument/hover request, it does not wait for the build to complete. If necessary, it will use the existing data from a previous build. The Hover action must use the VFS to compute the correct position in the document; it can then look up information about the token in rls-analysis. This is usually very quick. The action handler produces a response, and the server module serialises the response and sends it to the editor via stdout.

Helping out

Let us know if you encounter any problems by filing issues on the RLS repo.

If you'd like to help by writing code, tests, or docs, then have a look at the repos for the RLS, our VSCode extension, or rustw. Or come ask questions on IRC in #rust-dev-tools or on Gitter.

Useful links: