<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[featherweight musings]]></title><description><![CDATA[Thoughts on Rust and stuff]]></description><link>https://www.ncameron.org/blog/</link><image><url>https://www.ncameron.org/blog/favicon.png</url><title>featherweight musings</title><link>https://www.ncameron.org/blog/</link></image><generator>Ghost 3.18</generator><lastBuildDate>Tue, 21 Apr 2026 00:45:26 GMT</lastBuildDate><atom:link href="https://www.ncameron.org/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[KCL part 2: program memory]]></title><description><![CDATA[<p>In this post I'll cover a fun and interesting problem (and two solutions) in the implementation of KCL. This post is part of a series of blog posts on KCL. Previously: <a href="https://www.ncameron.org/blog/kcl-part-0/">part 0: intro</a> and <a href="https://www.ncameron.org/blog/kcl-part-1-units/">part 1: units</a>.</p><p>KCL is an interpreted language - it is interpreted directly rather than being</p>]]></description><link>https://www.ncameron.org/blog/kcl-part-2-program-memory/</link><guid isPermaLink="false">6980e17146ff65349b7bf6fb</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Mon, 02 Feb 2026 17:41:58 GMT</pubDate><content:encoded><![CDATA[<p>In this post I'll cover a fun and interesting problem (and two solutions) in the implementation of KCL. This post is part of a series of blog posts on KCL. Previously: <a href="https://www.ncameron.org/blog/kcl-part-0/">part 0: intro</a> and <a href="https://www.ncameron.org/blog/kcl-part-1-units/">part 1: units</a>.</p><p>KCL is an interpreted language - it is interpreted directly rather than being compiled to a binary and executed. 
When running, the current state of the program (i.e., the runtime values of its variables) is stored in the program memory. In KCL, scoping and name resolution are implemented as part of program memory, so rather than being a flat address space, it is structured according to the dynamic scoping rules of the language. Within a scope, it is basically a map from variable names to values.</p><p>Previously, the KCL interpreter made a copy of all of the program memory whenever a function was defined and used that when the function was called (all functions capture their environment, i.e., they're closures, and when executed the environment must reflect values at the time of function definition, not function call). Obviously, that is not very efficient! In programs with lots of functions and significant memory use (KCL also does not have much in the way of garbage collection), this caused dramatic slow-down of execution. I fixed this in two ways inspired by work from databases, concurrency, etc. The first solution used an efficient snapshotting/copy-on-write mechanism, the second used an epoch counter and something similar to MVCC (multi-version concurrency control). These fixed the performance issues with specific problem cases and caused a general performance improvement. The second solution was a simplification of the first and also a step towards another potential performance and engineering improvement (references in program memory).</p><p>KCL is mostly immutable - once a variable is created, it cannot be modified. However, the values can change due to a feature called tags. Tags are a way to refer to parts of objects. Consider a simple cube which we might construct by drawing a square using four lines and then extruding the square into a cube. Tags allow referring to each edge of the square. After the square is extruded, the same tags can be used to refer to the surfaces of the cube extruded from each line. 
In a sense, tags are names, and these names are updated as the program runs (I won't go into the details here, but for the purpose of this blog post, we just need to know that some properties of values can change). (Tags predate my involvement in KCL and are one of the things which makes KCL challenging to work with. However, they seem to be pretty intuitive for CAD users to use, and are a powerful and essential feature. I think they could be done differently and in a way which would make the rest of KCL nicer, but they're so widely used that the change would be infeasibly huge).</p><p>Mutable program memory is not an issue - it's how a real computer's address space works. However, KCL also has closures (functions which capture their environment), and for various reasons it is important that they always see the state of program memory when the closure is created, not the state when it is executed. This is an intrinsic problem with closures, and there are a few solutions:</p><ul><li>program memory is immutable (the solution used in many functional languages),</li><li>closures always see the latest version of data (acceptable in some languages),</li><li>use a borrow checker or similar to statically avoid the issue (used in Rust),</li><li>make a copy of the memory which the closure might use (kind of common, but requires some static or dynamic analysis of the program state).</li></ul><p>KCL used the last one, but because it didn't have any static analysis, it copied the whole program memory. That might seem really bad, but because programs are small, it's not too much of a problem. However, effectively all functions are closures, so potentially this can happen a lot and in some edge cases this can lead to a blow-up in memory usage and terrible performance. 
It turned out that this was happening often enough to need addressing.</p><h2 id="background-how-kcl-program-memory-works">Background - how KCL program memory works</h2><p>As a reminder, KCL is an interpreted language with no pre-execution analysis or transformation. Program memory is used to store and lookup variables (of course), and it is also responsible for handling scoping, e.g., in</p><pre><code>a = 42

fn foo(a) {
    return a + 10
}

b = foo(a = 0)
</code></pre><p>Program memory stores values for <code>a</code> and is responsible for making sure that the right <code>a</code> is accessed at any given point in execution (<code>a</code> is not renamed or encoded before execution). Scoping in KCL is non-trivial: in the above example, if <code>a</code> were not overridden in <code>foo</code>, then the outer <code>a</code> would be accessible inside <code>foo</code> (put another way, all functions are closures which capture all of their environment).</p><p>The way this is implemented is that program memory consists of environments (aka 'envs'), which are lexical scopes; each keeps a reference to its enclosing environment. The top-level environments are modules. These environments are distinct from the call stack, though they might overlap (as in the above example, where the function <code>foo</code> is called from the same scope where it is declared).</p><p>This scheme of environments handles scoping and name resolution, and if program memory were immutable, it would be all that is needed. Unfortunately, although KCL does not support explicit mutation (e.g., reassignment), values can be mutated as a side-effect of some operations (as described above). So when declaring a function, we need some way to capture its environment at the point in time of declaration. The old way of doing things was that when declaring a function, the interpreter would make a copy of all of the program memory. When calling a function, a new environment is created which references this copy. Copying all of the program's memory is obviously inefficient.</p><p>I've only mentioned creating and reading values in memory. Values should be deleted when no longer needed. This is complicated by the fact that declaring a function implicitly references values it could access. KCL doesn't have a value-level garbage collector. It's not as critical as it would be in a general purpose language. 
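</p>
<p>Before moving on, the environment scheme described above can be sketched concretely. The following is a toy Python model (names like <code>Env</code> and <code>lookup</code> are invented for illustration, not the real KCL implementation): each environment is a map from variable names to values, plus a reference to its enclosing environment.</p>

```python
# Toy model of program memory environments: a map from names to values
# plus a reference to the enclosing environment.
class Env:
    def __init__(self, parent=None):
        self.vars = {}
        self.parent = parent

    def define(self, name, value):
        self.vars[name] = value

    def lookup(self, name):
        # Walk up the chain of enclosing environments until found.
        env = self
        while env is not None:
            if name in env.vars:
                return env.vars[name]
            env = env.parent
        raise NameError(name)

# The earlier KCL example: `a = 42` at module scope, then `foo(a = 0)`.
module = Env()
module.define('a', 42)
call_env = Env(parent=module)  # created when foo is called
call_env.define('a', 0)        # the parameter shadows the outer `a`
assert call_env.lookup('a') == 0
assert Env(parent=module).lookup('a') == 42  # no parameter: falls through
```

<p>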
The work described below added some environment-level GC, but I won't describe it in this post.</p><h2 id="first-iteration-snapshots">First iteration - snapshots</h2><p>My first iteration of improvement was to use copy-on-write (CoW, nothing to do with the Rust library type) snapshots - <a href="https://github.com/KittyCAD/modeling-app/pull/5273">KittyCAD/modeling-app#5273</a>. As well as the core change to CoW snapshots, that PR includes a bunch of engineering improvements to program memory (having a single memory for the whole program rather than one per module, better encapsulation, removing some special cases, separating out most of the values in memory (i.e., memory just stores opaque values), adding a little caching, etc.).</p><p>The key ideas of environments and the call stack (as well as the fundamental idea of using program memory for scoping, etc.) are preserved. What changes is what happens when a function is declared. Rather than copying all of program memory, KCL makes a <em>snapshot</em>. Making a snapshot has the same observational semantics as making a copy (specifically, changes to program memory are not observed when reading from a snapshot), but the implementation is optimised. Creating a snapshot is essentially free (precisely, it is O(1), whereas copying memory is O(n) in the size of the memory) and memory use is much, much smaller.</p><p>To show how snapshots work, I'll go through a simple example which isn't KCL. Later I'll show how this works with environments, stacks, and functions. Consider the following pseudocode example:</p><pre><code>a = 1
b = 2
c = 3
snapshot(x)
d = 4
b = 5
snapshot(y)
delete(d)
c = 6
</code></pre><p>Here is what reading each variable from each snapshot looks like:</p><table><thead><tr><th>snapshot</th><th>a</th><th>b</th><th>c</th><th>d</th></tr></thead><tbody><tr><td>x</td><td>1</td><td>2</td><td>3</td><td>-</td></tr><tr><td>y</td><td>1</td><td>5</td><td>3</td><td>4</td></tr><tr><td>current</td><td>1</td><td>5</td><td>6</td><td>-</td></tr></tbody></table><p>The current state of program memory can always be read directly without considering snapshots. Modifying or creating a new variable writes the new value into the current program memory and must also touch the most recent snapshot (if one exists) to make a copy of the old value (this is the copy-on-write element of the scheme). Since reads are more common than writes, this is a good trade-off for most programs.</p><p>So, in the example, the initial assignments into <code>a</code>, <code>b</code>, and <code>c</code> simply write into the current env of program memory. The assignment into <code>d</code> writes <code>4</code> into the current env and a tombstone into snapshot <code>x</code>. The second assignment to <code>b</code> copies the old value (<code>2</code>) into <code>x</code> and writes <code>5</code> into the current env. Deleting <code>d</code> copies <code>4</code> into snapshot <code>y</code> and removes it from the current env. The last assignment to <code>c</code> writes <code>6</code> to the current env and <code>3</code> to <code>y</code>.</p><p>Snapshots are read-only; mutations only ever affect the current state of program memory (we'll see how exactly later). Reading from a snapshot means looking in that snapshot for a variable, then every newer snapshot until the current state is reached or the variable is found.</p><p>In the example, to read from <code>y</code>, <code>c</code> and <code>d</code> are found in <code>y</code>, and <code>a</code> and <code>b</code> are found in the current memory. 
To read from <code>x</code>, <code>b</code> (and the tombstone for <code>d</code>) is found in <code>x</code>, <code>c</code> is found in <code>y</code>, and <code>a</code> is found in the current memory.</p><p>OK, so that's reading and writing with snapshots in a hypothetical simple memory system. In real KCL, we also have to deal with scoping in the form of environments. Environments are in a tree and since data anywhere in that tree can be modified, when we have a snapshot, we also have to treat parent environments as snapshots. So, snapshots are organised in a tree where the parent of a snapshot is another snapshot, which is the snapshot of the parent environment. When accessing a variable (read or write) we walk up the tree of environments in the same way as without snapshots, but this time we're using a snapshot for each environment. Note that this does not mean we have to combine traversing a list of snapshots for each environment <em>and</em> a list of snapshots representing enclosing environments. For writes, we always write into the current environment (and update the most recent snapshot of it); for reads, we only look at the most recent snapshots to find a variable.</p><p>You can imagine that this would lead to a lot of snapshots, since every function declaration leads to a snapshot being created, which triggers snapshot creation in all ancestor environments. However, in practice the tree of environments is pretty shallow (cf. the call stack). Furthermore, we can make an important optimisation that if we need a snapshot for an environment (either the current one or an ancestor) and one exists already, then it can be reused if it is empty. You can think of this as merging two snapshots if there is no difference in the data they capture. This is the common case for parent snapshots.</p><p>That's reading and writing; now how are snapshots actually used? Every time a function is declared, a snapshot is created. 
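</p>
<p>The snapshot scheme just described can be sketched in a few lines. This is a toy Python model over a flat variable map (class and method names are invented for illustration; the real implementation works per-environment and is written in Rust):</p>

```python
# Toy model of copy-on-write snapshots over a flat variable map.
TOMBSTONE = object()  # marks 'variable did not exist yet'

class Memory:
    def __init__(self):
        self.current = {}    # live state: variable -> value
        self.snapshots = []  # oldest..newest: {variable: old value}

    def snapshot(self):
        # Reuse the newest snapshot if it is still empty (the merging
        # optimisation described above); otherwise start a fresh one.
        if not self.snapshots or self.snapshots[-1]:
            self.snapshots.append({})
        return len(self.snapshots) - 1

    def _preserve(self, var):
        # Copy-on-write: before changing `var`, save its old value (or
        # a tombstone) into the most recent snapshot.
        if self.snapshots and var not in self.snapshots[-1]:
            self.snapshots[-1][var] = self.current.get(var, TOMBSTONE)

    def write(self, var, value):
        self._preserve(var)
        self.current[var] = value

    def delete(self, var):
        self._preserve(var)
        del self.current[var]

    def read(self, var, snap=None):
        # Look in the requested snapshot, then each newer snapshot,
        # then fall back to the current state.
        if snap is not None:
            for s in self.snapshots[snap:]:
                if var in s:
                    return None if s[var] is TOMBSTONE else s[var]
        return self.current.get(var)
```

<p>Replaying the earlier pseudocode example with this model reproduces the table above: for instance, reading <code>b</code> from snapshot <code>x</code> gives <code>2</code>, reading <code>d</code> from <code>y</code> gives <code>4</code>, and reading <code>d</code> from <code>x</code> finds its tombstone.</p>
<p>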
When a function is called, a new environment is created and pushed onto the call stack (no existing environment is changed). This new environment has the function's snapshot as its parent, which means that when reading variables during the function's execution, the state of the variables at the time of the declaration is observed. So a few final observations on this setup:</p><ul><li>The current environment is always 'just an environment', not a snapshot; snapshots can only be ancestors of an environment.</li><li>The parent of a regular environment can be either another regular environment or a snapshot; the parent of a snapshot is always another snapshot.</li><li>Since we only write into the current environment (never a parent environment), we never need to write into a snapshot of memory and snapshots are observably immutable (though note that the implementation of snapshots means that the snapshot Rust object is not immutable).</li></ul><h2 id="second-iteration-epochs">Second iteration - epochs</h2><p>The above works nicely, but it is kinda complex. My <a href="https://github.com/KittyCAD/modeling-app/pull/5764">next iteration</a> tried to simplify the above, be more efficient when writing, and be a step towards implementing references in KCL program memory (referencing a value currently makes a copy, which is inefficient).</p><p>A key observation is that even though we need to do all this stuff because KCL values are mutable, mutation is rare and restricted to one class of data within one class of values. Therefore, handling mutation just where needed, rather than as a global property, is more efficient and means less code has to take mutability into account.</p><p>This approach removed snapshots (leaving environments and call stacks). Program memory maintains a global counter which is incremented whenever a function is declared, and that counter is saved with the function in memory (rather than a reference to a snapshot). 
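</p>
<p>Here is a toy sketch of this counter scheme in Python (the names are invented, and exactly when the counter is incremented relative to the value a function saves is an assumption made for this sketch):</p>

```python
# Toy model of the epoch scheme: values are stamped with the epoch in
# which they were created, and a closure saves an epoch instead of a
# snapshot of memory.
class EpochMemory:
    def __init__(self):
        self.epoch = 0
        self.versions = {}  # name -> list of (creation epoch, value)

    def declare_function(self):
        # Declaring a function bumps the global counter; the function
        # remembers the epoch it was declared in.
        declared_at = self.epoch
        self.epoch += 1
        return declared_at

    def write(self, name, value):
        # Mutable values keep multiple versions rather than overwriting.
        self.versions.setdefault(name, []).append((self.epoch, value))

    def read(self, name, at_epoch=None):
        at = self.epoch if at_epoch is None else at_epoch
        # Ignore versions created more recently than the epoch we are
        # 'looking in'.
        visible = [v for (e, v) in self.versions.get(name, []) if e <= at]
        return visible[-1] if visible else None

m = EpochMemory()
m.write('tag', 'edge1')       # created at epoch 0
e = m.declare_function()      # the closure remembers epoch 0
m.write('tag', 'face1')       # mutated at epoch 1, after the declaration
assert m.read('tag', at_epoch=e) == 'edge1'  # the closure's view
assert m.read('tag') == 'face1'              # the current view
```

<p>A closure declared at epoch <code>e</code> reads memory 'at' epoch <code>e</code>, so mutations made after the declaration are invisible to it, without copying anything.</p>
<p>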
This simple scheme wouldn't be practical in a general purpose language due to concurrency and counter overflow, but in KCL it's fine. The counter is called an epoch counter, where an epoch is the most fine-grained period of time we need to distinguish between memory states.</p><p>Whenever a value is created in memory, we save the current value of the epoch counter with it. Then, looking up a variable means finding it in memory, but ignoring it if it was created more recently than the epoch we are 'looking in'. Values which can be modified must handle multiple versions internally (rather than this being handled universally by program memory). Again, since the common case is immutability, this is a good trade-off!</p><p>Mutation in KCL is limited to tagging of objects. Essentially rather than just keeping a list of tags, an object stores the epoch of writing for each tag and may have multiple different tags for different epochs. Finding the tags for an object at an epoch means searching the epochs and tags for the most recent tag which existed at the specified epoch.</p><hr><!--kg-card-begin: html--><p style="font-size: small;">
  I will have availability for Rust coaching or adoption from March 2026; from a single call to ongoing 2 days/week. I can help your team get things done, adopt Rust and use it more effectively, or accurately evaluate Rust as a new technology.
</p>
<p style="font-size: small;">
  If you're adopting Rust, I can help make that a success with advice, 1:1 or group mentoring, design and code review, or online support. <a href="https://www.ncameron.org/coaching">Coaching</a>.
</p><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[KCL part 1: units]]></title><description><![CDATA[<p>This blog post is about <a href="https://zoo.dev/docs/kcl-lang/numeric">numeric units</a>. Numbers in KCL are not just a number like <code>42</code>; they always include units, e.g., <code>42mm</code>. This is not unheard of: <a href="https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/units-of-measure">F#</a> has a famous and well-designed system, and more recently Swift added units. It is a bit different (and interesting) in KCL</p>]]></description><link>https://www.ncameron.org/blog/kcl-part-1-units/</link><guid isPermaLink="false">6903b58b46ff65349b7bf6d2</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Thu, 30 Oct 2025 19:00:32 GMT</pubDate><content:encoded><![CDATA[<p>This blog post is about <a href="https://zoo.dev/docs/kcl-lang/numeric">numeric units</a>. Numbers in KCL are not just a number like <code>42</code>; they always include units, e.g., <code>42mm</code>. This is not unheard of: <a href="https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/units-of-measure">F#</a> has a famous and well-designed system, and more recently Swift added units. It is a bit different (and interesting) in KCL because KCL doesn't have static typing, and ease of use is a high priority.</p><p>Before we dig into things though, we need to quickly go over types in KCL. KCL is not a statically typed language. It has a dynamic type system and optional type annotations in various places (most obviously on function parameters). These types are primarily used for documentation, both for generating the actual docs and for IDE features like hovering the cursor and signature help. Types are checked dynamically where they are present (in the future, it would be good to check types statically too); dynamic checking can include coercion, which can change runtime values. 
The type system is fairly simple: there is limited subtyping (which is mostly structural), no generics, etc.</p><h2 id="motivation">Motivation</h2><p>KCL is a language for CAD and units in CAD are important! When a design is manufactured, that is done in the real world and designs without units make no sense.</p><p>We need multiple units because different users want to use different units. This is not just because the USA (and Liberia and Myanmar) are stuck in a weirdly complex parallel world of measurement, but also because different projects work at different scales and having a <code>0.000</code> prefix or <code>0000</code> suffix on all your numbers is annoying. Reusing components is an important motivator for KCL, so interoperation of components written using different units is important.</p><p>Making units first-class in KCL is motivated by eliminating errors due to unit mismatches (again, important because of the CAD domain), eliminating precision errors (because ZMS is a programming system, not just a CAD tool, precision errors can grow due to arithmetic, etc.), and by user experience (if a user writes a number using one unit, then we want to make sure the system always uses that unit when showing that number (and numbers derived from it) to the user, whether that is in output or tooling).</p><p>As well as the requirements derived from the above motivation, there are multiple requirements for a units system, many of which are in conflict with each other and so we have trade-offs.</p><p>Using units can be boilerplate-heavy, especially in KCL programs which often have many more numeric types and literals than programs in other domains. Furthermore, although knowing the units for a design is important, for most of their work, programmers don't want to think about units. 
I found that, more than with many other features, the trade-off between correctness, expressiveness, and ergonomics was especially apparent with units.</p><p>One issue which I find interesting with units, more so than for most kinds of types, is that there is some intrinsic imprecision in the way that programmers use units. For example, programmers might use a single number as both a length and an angle, they might use coordinates, vectors, and normalised vectors (i.e., a direction without a magnitude) without precise conversion, or they might do conversion between units 'manually' using arithmetic (which brings us to the tricky issue of how to handle π, more on that later).</p><p>On top of that, there were some considerations specific to KCL at the time. Historically KCL had units per file with some functions to convert units, but no checking of units. Lots of code existed without unit annotations in the code and we wanted to be as backwards compatible as possible. This also set expectations for the number of annotations and other syntax to be very low. Furthermore, implementation and conversion of existing code was time-constrained by the release date for ZMS 1.0.</p><h2 id="design">Design</h2><p>Numbers in KCL are either lengths (most commonly) or angles or unit-less. Lengths can be either metric (mm, cm, m) or imperial (inches, feet, yards); angles are either degrees or radians. Unit-less numbers are used for ratios or for counting (e.g., the length of an array or the current index when iterating an array). We don't often have areas, volumes, etc., so higher-powered units are not supported. Being perfectly expressive is a non-goal - there are occasions where a programmer might want to express a weight or some complex unit (e.g., a velocity) and these are also not supported. Furthermore, even when using the supported units, perfectly tracking units through arithmetic is a non-goal. 
In part, that's just because it is hard and there are quickly diminishing returns in benefits, but also because it seems like when the theoretical units get complicated is often where programmers don't want precise unit tracking (consider matrix and vector math, for example).</p><p>Like types, units annotations are optional (however, unlike types, sometimes units annotations are required to avoid errors). Numeric literals can take a unit as a suffix, e.g., <code>42mm</code> or <code>90deg</code> (<code>_</code> is used as a suffix for unit-less numbers). If a suffix is not used (e.g., <code>42</code>), the numeric value has 'default' units. Each file has a default length and angle unit (which can optionally be set with a file-scoped settings annotation, e.g., <code>@settings(defaultLengthUnit = in)</code>). A number with default units could be either a length, angle, or unit-less and KCL tracks the default unit for each possibility.</p><p>Numeric types can include a unit, e.g., <code>number(mm)</code>. Where there is no unit (<code>number</code>), the type has 'default' units (inherited from the file using the setting if present). More recently, we allowed use of units without the <code>number</code> part of the type, e.g., <code>mm</code>. These numeric types can also be used with type ascription (e.g., <code>(4 * x): mm</code>) which asserts that the value of the sub-expression has numeric type and sets the units of the value to those specified. 
We also have 'partial' units: <code>number(Length)</code> and <code>number(Angle)</code> which are used when a function can accept any length or angle (exactly how these work in some edge cases is currently under-specified, but my preference is that they should be syntactic sugar for universal quantification using a single variable at function scope, so for example, a function <code>fn foo(x: number(Length)): number(Length)</code> would always return the same units as its argument).</p><p>The units of values are tracked as the program is evaluated. As well as the fully concrete units (derived from literal suffixes), the partial units, and the default units, the interpreter has concepts of unknown units (where the interpreter cannot compute a type for a value, e.g., <code>4mm * 2mm</code> has unknown units since the type system has no concept of <code>mm^2</code>) and 'any' units (for a value which could take any units, only used temporarily to implement type ascription).</p><p>Where possible, units are implicitly converted, taking defaults into account. E.g., <code>4mm + 2in</code> will evaluate to <code>54.8mm</code>, <code>4mm + 2</code> would give <code>6mm</code> if the current default is <code>mm</code> or <code>54.8mm</code> if the current default is <code>in</code>. If we have <code>4 + 2</code> where the two values have the same defaults, then the result is <code>6</code> with those defaults, but if the defaults are different, then the units are unknown. Similar rules are applied when passing arguments, etc.</p><p>Using a number with unknown units is an error (for some definition of 'using' - it would be a bit obnoxious to give a separate error for every sub-expression with unknown units, etc.). Such errors require the user to specify the type using ascription. 
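</p>
<p>To make the conversion rules above concrete, here is a rough Python model of addition over lengths (everything here is invented for illustration: angle units, per-operand defaults, and the 'unknown units' cases are all omitted):</p>

```python
# Toy model of implicit unit conversion for adding two lengths.
MM_PER = {'mm': 1.0, 'cm': 10.0, 'm': 1000.0, 'in': 25.4, 'ft': 304.8}

def add(a, b, default='mm'):
    """a and b are (value, unit) pairs; unit None means 'default units'."""
    if a[1] is None and b[1] is None:
        return (a[0] + b[0], None)  # result keeps the default units
    ua = a[1] or default            # fill in missing units from the default
    ub = b[1] or default
    # Convert the right operand into the left operand's units.
    return (a[0] + b[0] * MM_PER[ub] / MM_PER[ua], ua)

assert add((4, 'mm'), (2, 'in')) == (54.8, 'mm')              # 2in = 50.8mm
assert add((4, 'mm'), (2, None), default='mm') == (6.0, 'mm')
assert add((4, 'mm'), (2, None), default='in') == (54.8, 'mm')
assert add((4, None), (2, None)) == (6, None)                 # stays default
```

<p>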
Type ascription does not do any conversion, so <code>2in: mm</code> is <code>2mm</code>, not <code>50.8mm</code>.</p><p>There are also functions in the standard library for explicit conversion between units.</p><h2 id="evaluation">Evaluation</h2><p>I think the overall system works pretty well. The syntactic burden is pretty low, most of the time it feels like it just works without too much friction, and it is mostly correct, catching many kinds of units-related bugs in programs. It is not perfect: there is a bit of an ergonomic cliff - when you do need to specify the units because of the limits of the system, it feels a bit clunky; it would be nice if the system were more expressive and could track units more often; explaining and teaching the system is a bit more complicated than I would like; and because of some of the fuzziness around defaults, it is still possible to write some units bugs. However, I think it has ended up at a good point in the trade-off between correctness and ergonomics.</p><p>The syntactic overhead is kept low by the use of default units (and per-file defaults) and the relatively lightweight syntax (e.g., suffixes for literals, units as types). Default units also help a lot with backwards compatibility. The system is also ergonomic due to implicit conversion of units in most circumstances - programmers rarely have to think about units beyond choosing units for a file or (sometimes) values; the system Just Works. This facilitates easy reuse of KCL components. The system is expressive enough for most practical uses, including nearly all of the standard library, due to having fully specified and partial unit types (with the universal quantification extension, it satisfies all expressiveness requirements of the standard library, but that is not totally trivial to support). 
The system was also implemented in time for the 1.0 release and can probably be extended without significant breaking changes due to the handling of unknown units (which also allows for an 'escape hatch' when the system is not expressive enough).</p><p>I think it is worth examining why there is sometimes friction with the system. The context is important: KCL is by design a very low-friction language: there are few type errors, and it is designed so that the right thing is usually either obvious or the only thing which can be done. Units are a pretty novel concept, so if users have experience from other languages, they are used to following rules around types, but not units (I think this is somewhat analogous to the borrow checker in Rust - it can be frustrating for new programmers because they are used to doing whatever they want around ownership rather than following rules). Unit errors are also rare; most of the time units work without the programmer thinking about them. So, when a unit error does occur it is surprising and therefore frustrating. The manner of errors is also frustrating - KCL will tell you that the units are unknown and that this is an error, but not why they are unknown, why that is a problem, or how to fix it (I believe this could be improved, but it is not trivial). Furthermore, often when these errors occur, the code is actually fine rather than buggy, but the system is not good enough to know that, and to the programmer the code can seem <em>obviously</em> correct. The fix (usually adding type ascription or sometimes a units suffix) feels bureaucratic rather than truly improving the quality of the code.</p><p>One corner of the system which doesn't feel right is the <code>PI</code> constant. In an ideal world, <code>PI</code> would be a <code>number(_)</code>, i.e., a unit-less/ratio number. 
However, in KCL (especially before the introduction of the units system) the majority of uses of <code>PI</code> were to convert between degrees and radians. If <code>PI</code> is typed as a ratio, then this introduces a silent error: after the conversion a number in radians is treated as a number in degrees or vice versa. This was the major source of false positives in the system. The solution was to make <code>PI</code> always have unknown units. This is a bad solution because it meant making unknown units expressible in the surface syntax and it doesn't fit with the mathematical concept of pi. It also means that any use of <code>PI</code> requires type annotation. This solution did avoid the correctness issues around manual conversion, but it is uncomfortable. I'm not sure what a better solution is - perhaps special handling for <code>PI</code> and <code>TAU</code> - but I don't know exactly how that should work.</p><p>In terms of future work or changes I would make in retrospect, I think the following areas could be improved:</p><ul><li>Support areas and volumes in a limited way. Supporting units which are a power of a single unit seems a fairly straightforward extension (e.g., <code>mm^2</code>, <code>m^3</code>, <code>deg^-1</code>, etc.), much more so than arbitrary combinations of units.</li><li>Disallow default angles, i.e., a number can only be a length or unit-less by default; angle units must always be explicit. This would simplify the system for users and its implementation, and have an acceptable cost in syntactic overhead; it may even improve the readability of code. I implemented this as a warning recently, and I hope it can become the way KCL always works in the future.</li><li>No implicit conversion of default units. I.e., units are only converted if they are known with certainty. I think this makes the system more reliable; however, currently it is impractical. 
I think that over time, type annotations (including units) will become more common in KCL (especially if KCL gets static typing) and if that happens to a large enough degree then default units will be rare enough that this idea becomes practical (we don't need to eliminate the use of defaults entirely; if we have known units for function parameters (but not values), I think that is enough).</li><li>Better error messages and better handling of <code>PI</code> and <code>TAU</code> as described above.</li></ul><p>All in all, I think that the units system in KCL is good. It is a correct and mostly ergonomic solution to the significant problem of mixing components with different units. It also brings other correctness benefits, and has fairly low overhead/cost. The implementation is non-trivial, but not complex by type system standards.</p><hr><!--kg-card-begin: html--><p style="font-size: small;">
  I currently have availability for Rust coaching, adoption, or development; from a single call to ongoing 3 days/week. I can help your team get things done, adopt Rust and use it more effectively, or to accurately evaluate Rust as a new technology.
</p>
<p style="font-size: small;">
  If you're adopting Rust, I can help make that a success with advice, 1:1 or group mentoring, design and code review, or online support. <a href="https://www.ncameron.org/coaching">Coaching</a>.
</p>
<p style="font-size: small;">
  If you're building with Rust and need a short or medium-term boost, I can join your team, quickly get up to speed, and deliver value. I have expertise with async and unsafe code, database implementation, distributed systems, dev tools, and language implementation. <a href="https://www.ncameron.org/consulting">Consulting</a>.
</p><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Recent Rust Changes]]></title><description><![CDATA[<p>In May last year I wrote a <a href="https://www.ncameron.org/blog/rust-through-the-ages/">blog post</a> on how Rust had evolved from the 1.0 release to 1.78. I found it really interesting to group all the changes together by topic, rather than seeing the language evolve one release at a time. We're now at 1.</p>]]></description><link>https://www.ncameron.org/blog/recent-rust-changes/</link><guid isPermaLink="false">68ffd1ad46ff65349b7bf6c1</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Mon, 27 Oct 2025 20:11:32 GMT</pubDate><content:encoded><![CDATA[<p>In May last year I wrote a <a href="https://www.ncameron.org/blog/rust-through-the-ages/">blog post</a> on how Rust had evolved from the 1.0 release to 1.78. I found it really interesting to group all the changes together by topic, rather than seeing the language evolve one release at a time. We're now at 1.90, so I thought it was time for another update. It's only been a year and a half, but quite a lot has happened (much more than I realised before writing this post). As well as a bunch of new features (listed below), there was the 2024 edition (released in February 2025), Rust had it's 10th birthday in May, and Rust now has an official language specification in the form of the <a href="https://blog.rust-lang.org/2025/03/26/adopting-the-fls/">FLS</a>.</p><p>I've picked the most exciting stuff (IMO) and summarised it below. Of course it's only a fraction of everything which has happened; in particular, there's been more std library stabilisations and <code>const</code>-ifications than I could possibly list. Check the release announcements on the <a href="https://blog.rust-lang.org/">Rust blog</a> for full summaries of each version and for future updates.</p><h2 id="language">Language</h2><ul><li>Async closures <code>async |...| { ... 
}</code> (1.85).</li><li><code>let</code> chains, i.e., using <code>let</code> with any <code>&amp;&amp;</code> clause in an <code>if</code> or <code>while</code> expression (not just the first, as in <code>if let</code>) (1.88 and 2024).</li><li><a href="https://blog.rust-lang.org/2025/07/03/stabilizing-naked-functions/">Naked functions</a> (1.88).</li><li>Changes to rules around lifetimes with <code>impl Trait</code> in return position (2024), and <code>use&lt;..&gt;</code> syntax for lifetime bounds (see <a href="https://doc.rust-lang.org/stable/edition-guide/rust-2024/rpit-lifetime-capture.html">the chapter in the edition guide</a> or the <a href="https://github.com/rust-lang/rfcs/blob/master/text/3498-lifetime-capture-rules-2024.md">RFC</a> for details) (1.82, 1.87 for trait functions).</li><li><code>&amp;raw ...</code> operator to create a raw pointer (and to take a reference to a mutable or extern static without an unsafe block) (1.82, using non-raw <code>&amp;</code> is an error in 2024).</li><li>Trait object upcasting (implicit coercion) (1.86).</li><li><code>unsafe extern</code> blocks and <code>safe</code> keyword (1.82).</li><li><code>unsafe</code> in attributes (1.82, error to use unsafe attributes without <code>unsafe</code> in 2024).</li><li>Unsafe code inside an unsafe function requires an unsafe block (2024).</li><li><code>_</code> to infer const generic arguments (1.89).</li><li>Inline const blocks, e.g., <code>const { std::mem::size_of::&lt;T&gt;() + 1 }</code> (1.79).</li><li>Inline assembly: <code>const</code> immediates (1.82) and jumps to Rust code (1.87).</li><li>Exclusive ranges in patterns, e.g., <code>match i { 0..10 =&gt; println!("9 or under"), ... 
}</code> (1.80).</li><li><code>expect</code> lint level, e.g., <code>#[expect(unused)]</code> (as a better alternative to <code>#[allow(unused)]</code>) and <code>reason</code> in lint attributes (1.81).</li><li>Bounds on associated types in bounds, e.g., <code>trait CopyIterator: Iterator&lt;Item: Copy&gt; {}</code> (1.79).</li><li><code>#[diagnostic::do_not_recommend]</code> attribute (1.85).</li><li>Unwinding inside an <code>extern "C"</code> function now aborts (1.81).</li><li>Dereferencing or reborrowing a null raw pointer will panic rather than being undefined behaviour in debug builds (1.86).</li><li>Lifetime extension from inside <code>if</code> and <code>match</code> blocks, e.g., <code>let x = if ... { &amp;new() } ... ;</code> is legal because the lifetime of the reference is extended to the enclosing scope, rather than being restricted to the block which is part of the <code>if</code> expression (1.79). Changes to rules around temporary lifetimes for <code>if let</code> and the last expression in a block (2024).</li><li>No need for pattern matching branches for impossible variants (e.g., using <code>!</code> or <code>Infallible</code>) (1.82).</li><li>Restrictions on using explicit referencing in patterns with match ergonomics (2024).</li></ul><h2 id="standard-library">Standard library</h2><ul><li><code>LazyCell</code> and <code>LazyLock</code>, alternatives to the lazy_static and once_cell crates (1.80).</li><li><code>io::pipe</code> and associated types (1.87).</li><li><code>File::lock</code>, etc. 
(1.89).</li><li><code>Error</code> trait moved from std to core (1.81).</li><li>Most of the API for <code>NonNull</code> (1.80, 1.84).</li><li><code>get_disjoint_mut</code> on slices and hashmaps (1.86).</li><li><code>Option::take_if</code> (1.80), <code>Option::is_none_or</code> (1.82), <code>Option::get_or_insert_default</code> (1.83).</li><li><code>collect</code> iterators into tuples (1.85).</li><li><code>Vec::into_flattened</code> and <code>as_flattened</code> on arrays (1.80).</li><li><code>Vec::pop_if</code> (1.86) and <code>Vec::extract_if</code> (1.87, for hashmaps in 1.88).</li><li><code>is_sorted</code> on various types (1.82).</li><li><code>Cell::update</code> (1.88).</li><li><code>str::from_utf8</code> (1.87).</li><li>API for uninitialised memory in <code>Box</code>, <code>Rc</code>, and <code>Arc</code> (1.82).</li><li><code>next_down</code> and <code>next_up</code> on floats (1.86).</li><li>More <code>ControlFlow</code> API (1.83).</li><li><code>Entry::insert_entry</code> (1.83).</li><li><code>hint::assert_unchecked</code> (1.81).</li><li><code>hint::select_unpredictable</code> (1.88).</li><li><code>wait</code> on <code>Once</code> and <code>OnceLock</code> (1.86).</li><li>Rename <code>PanicInfo</code> to <code>PanicHookInfo</code> and some panic message changes (1.81).</li><li>API for deconstructing <code>Waker</code>s (1.83).</li><li>More <code>io::ErrorKind</code> variants (1.83 and 1.85).</li><li>More <code>proc_macro::Span</code> API (1.88).</li><li>Strict <a href="https://doc.rust-lang.org/std/ptr/index.html#provenance">provenance</a> for pointers (1.84).</li><li>Improvements to sort algorithms (1.81).</li><li><code>Future</code> and <code>IntoFuture</code> added to the prelude (2024).</li></ul><h2 id="tooling">Tooling</h2><ul><li><code>cargo info</code> to display information about a crate (1.82).</li><li>Cargo MSRV-aware resolver (1.84, default in 2024).</li><li>Cargo automatically cleans up its cache (1.88).</li><li><code>cargo publish 
--workspace</code> (1.90).</li><li>Checking of <code>cfg</code> names and values (1.80).</li><li>Apple ARM is a tier 1 target (1.82) and Apple Intel demoted to tier 2 (1.90).</li><li>Standard library is built with frame pointers (great for profiling!) (1.79).</li><li>Delete crates from crates.io.</li><li>'Report crate' on crates.io.</li><li><a href="https://crates.io/docs/trusted-publishing">Trusted publishing</a> and alerts in README.md on crates.io.</li><li>rustc on Linux <a href="https://blog.rust-lang.org/2025/09/01/rust-lld-on-1.90.0-stable/">uses lld by default</a> (1.90).</li></ul><hr><!--kg-card-begin: html--><p style="font-size: small;">
  I currently have availability for Rust coaching, adoption, or development; from a single call to ongoing 3 days/week. I can help your team get things done, adopt Rust and use it more effectively, or to accurately evaluate Rust as a new technology.
</p>
<p style="font-size: small;">
  If you're adopting Rust, I can help make that a success with advice, 1:1 or group mentoring, design and code review, or online support. <a href="https://www.ncameron.org/coaching">Coaching</a>.
</p>
<p style="font-size: small;">
  If you're building with Rust and need a short or medium-term boost, I can join your team, quickly get up to speed, and deliver value. I have expertise with async and unsafe code, database implementation, distributed systems, dev tools, and language implementation. <a href="https://www.ncameron.org/consulting">Consulting</a>.
</p><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[KCL part 0]]></title><description><![CDATA[<p>For roughly the past year, I've been working with the good folk at <a href="https://zoo.dev/design-studio">Zoo</a> on their CAD product, the Zoo Modelling Studio (aka ZMS, aka KittyCAD). I've mostly been working on the design and implementation of KCL, a domain-specific language for CAD. Since that work has just wrapped up, and</p>]]></description><link>https://www.ncameron.org/blog/kcl-part-0/</link><guid isPermaLink="false">68f7d5b046ff65349b7bf6a0</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Tue, 21 Oct 2025 18:49:49 GMT</pubDate><content:encoded><![CDATA[<p>For roughly the past year, I've been working with the good folk at <a href="https://zoo.dev/design-studio">Zoo</a> on their CAD product, the Zoo Modelling Studio (aka ZMS, aka KittyCAD). I've mostly been working on the design and implementation of KCL, a domain-specific language for CAD. Since that work has just wrapped up, and it was super-interesting work that I thought others might be interested in, I plan to write a few blog posts about KCL and some of its features that I worked on. In this first post, I'm going to mostly give some context about KCL.</p><p>KCL is a programming language for describing CAD documents. It is part of ZMS, so to understand the constraints on KCL, you need to understand a little about ZMS. ZMS is a new (1.0 was released in April 2025) CAD tool. It is unique for several reasons: it has a client-server model (the client is an Electron app with a TypeScript frontend and Rust/WASM core, the server runs in the cloud and does the core CAD stuff including rendering), it is code-first (the KCL code fully describes the content and is the source of truth), and it can do AI/ML stuff (which is facilitated by being code-first). 
The client app (including all the KCL stuff) is open source: <a href="https://github.com/KittyCAD/modeling-app">GitHub repo</a>.</p><p>It's worth digging into the 'code-first' aspect since this gives rise to a lot of the uniqueness of KCL. ZMS has a graphical UI which allows all the standard CAD editing operations. These operations edit the KCL source code and the code is interpreted which results in calls to the server API. The server computes the scene and sends video back to the client. Some things (mostly viewing-related, such as the camera position and orientation) are not part of the code and are sent to the server independently. The code can also be edited directly and ZMS functions like an IDE in that mode.</p><p>The above context shapes the 'architecture' of KCL. It is an interpreted language (the interpreter runs client-side and is implemented in Rust compiled to WASM). It can be edited in a regular text editor (and there is a CLI to run it from the command line outside of ZMS, though that is not a primary use case) or via the IDE built into ZMS. It can also be edited indirectly by using the ZMS GUI.</p><p>KCL is very domain-specific; it is not a general purpose programming language. The primary audience is mechanical engineers and other CAD users, not software engineers (more on this below because it is important). KCL is describing a static scene (there is no interactivity or IO, though editing can be interactive), so KCL is technically just data description. However, the domain really benefits from reuse, abstraction, iteration, etc. For example, we often want to use the same component (like a screw) multiple times in different places or to parametrise components (e.g., to make different sizes of screw).</p><p>The performance characteristics of KCL are interesting. We are not using it for anything real-time or interactive, nor do scenes get large enough for performance to be straightforwardly an issue. 
However, because editing is interactive, performance is important (for example, the user might want to drag a control point and the scene should smoothly transform). Because of the distributed nature of the application, there is latency between the interpreter and the CAD server, so the kind of performance considerations common in distributed systems are relevant (e.g., batching, caching, distributing work).</p><p>The overall ethos of the language is to be high-level, easy to understand for people without much programming background, have a gentle learning curve, and to do one job (CAD) well (as opposed to being generally useful). Although much of the design philosophy is similar to other programming languages (around things like composability, ergonomics, and expressiveness), the over-arching goal is to be effective as part of a CAD tool for an audience of CAD users.</p><p>From a programming languages perspective, KCL is dynamically typed (though not a very dynamic language, e.g., there is no <code>eval</code>, and little polymorphism). It is 'functional', in the sense that first-class and higher-order functions exist, though they have limited utility. KCL is mostly monotonic, though is not strictly referentially transparent. It is fully deterministic. It is more declarative than imperative, though I don't think it has the overall feel of a functional programming language. Evaluation is strict, not lazy.</p><p>So, back to the audience. I think the biggest thing which makes language design for KCL unique is that most users are mechanical engineers, not software engineers. This is both very different to general purpose programming languages, and also much less defined - the level and kinds of programming experience among mechanical engineers is likely much more varied than among software engineers. Some have extensive experience, maybe due to hobbies and interests, or a previous career, or because of some specialisation. 
Many have experience with tools like Matlab or have done a decent amount of programming at university or for some kind of scripting use case (Python seems to be the most likely language here). But many have done no programming at all. They are likely to have a strong engineering mindset, and experience with software and tools which are programming adjacent. There is a temptation to treat these users like beginner software engineers (since as a software engineer myself, I have been a beginner software engineer, but I have not been a mechanical engineer), or worse as just 'bad' programmers. However, that would be a wildly incorrect model. Our audience have deep experience (and the ingrained patterns of working which go with that), different perspectives, and different expectations.</p><p>As I mentioned earlier, there is also the AI/ML use case. Since KCL is a new language there is not much code to train AI on. So models have to already 'be able to program'. That means they often expect to write KCL which is more like existing languages. So even though the human users might not have experience with many languages, the AI 'users' do. The AI seems to expect some things based on naming (and whatever else) which I wouldn't expect from most users. I don't think I have much insight into language design for AI, so I probably won't touch on this in these blog posts, but I think it is an interesting point which came up in practice.</p><p>For more details about KCL, see <a href="https://zoo.dev/docs/kcl-book/intro.html">the book</a> or <a href="https://zoo.dev/docs/kcl-lang">the language reference</a>.</p><h2 id="an-example">An example</h2><p>Here's a very simple example, from <a href="https://zoo.dev/docs/kcl-samples/pipe">https://zoo.dev/docs/kcl-samples/pipe</a>, a simple pipe:</p><pre><code class="language-kcl">// Define parameters
pipeInnerDiameter = 2.0
pipeOuterDiameter = 2.375
pipeLength = 6

// Create the pipe base
pipeBase = startSketchOn(XZ)
  |&gt; circle(center = [0, 0], radius = pipeOuterDiameter / 2)
  |&gt; extrude(length = pipeLength)

// Extrude a hole through the length of the pipe
pipe = startSketchOn(pipeBase, face = END)
  |&gt; circle(center = [0, 0], radius = pipeInnerDiameter / 2)
  |&gt; extrude(length = -pipeLength)
  |&gt; appearance(color = "#a24ed0")
</code></pre><p>Some things to note:</p><ul><li>All the basics you'd expect from a programming language are there: numbers (there's only one kind of number in KCL, no distinction between floats and integers or different precision), strings (used for a colour in the example), variables (which are immutable and don't require a keyword to declare), arithmetic (there are logical operators too), arrays (used in the example for 2D points), and function calls (note that 'all' arguments are named).</li><li>The most obvious innovation is the pipeline syntax, <code>... |&gt; ...</code>. This is pretty handy for creating objects: the result of a function is passed as input to the next function in the pipeline (which is why 'all' is in quotes above when describing named arguments: the input argument is not named; it can also be used outside of a pipeline, but more on that in another post).</li><li>There's a pretty extensive <a href="https://zoo.dev/docs/kcl-std">standard library</a> which covers basic programming stuff like finding the length of an array, but also a lot of CAD-specific stuff (e.g., <code>circle</code> and <code>extrude</code>, and constants like the plane <code>XZ</code>).</li></ul><p>There's lots of sample code online which you can <a href="https://zoo.dev/docs/kcl-samples">browse</a> to see the code and the 3D objects it produces.</p><h2 id="my-contributions">My contributions</h2><p>KCL had been around for a while before I started working on it and the general feel and character of the language and its implementation were well-established. I spent almost a year working on both the design and implementation, adding some features and refining others, and making the implementation more performant and (hopefully) easier to work with and extend. 
I'll cover some of the more interesting things in following blog posts, but here's a summary of some of the smaller or less blog-able things (of course, I did none of this by myself; all of the below and the things I'll blog about were done in collaboration with the rest of the team at Zoo, in particular Adam Chalmers and Jon Tran, who worked (and continue to work) on KCL):</p><ul><li>modules/assemblies - this was a key feature for the 1.0 launch. Assemblies are how components are assembled into complete CAD designs, and in KCL the module system follows from the concepts of components and assemblies. I designed and implemented a system which was ergonomic for programming as well as fitting into an ergonomic flow for editing via the UI. I also designed an extension for shared libraries of components (which hasn't been implemented yet).</li><li>types - KCL is dynamically typed and mostly uses types for documentation and IDE functionality. I refined and evolved the types and implemented dynamic checking in the interpreter. I implemented a new documentation system which is driven by declared types (an improvement on the previous system which relied on the Rust implementation of the language and standard library).</li><li>Engineering and usability improvements. I worked on numerous smaller issues which together made KCL a better user experience and easier for implementers. 
I improved errors and warnings, added a system for experimental features, improved performance, refactored the implementation (in particular to be more type-driven, rather than relying on implementation details), started an API design to more cleanly separate the frontend from the interpreter, improved the interface between Rust and KCL (which allowed completely describing the standard library functions using KCL signatures and implementing some standard library functions using KCL), plus fixed bugs, and polished the user experience with numerous minor improvements to IDE features, error messages, documentation, and so forth.</li></ul><p>It's been hugely fun to work on KCL, in part because it is so different from Rust. It's been a great challenge to think of the audience and the specifics of the domain, as well as to consider the very different performance requirements. I think the product and language have great potential and I hope to see them both take off.</p><hr><!--kg-card-begin: html--><p style="font-size: small;">
  I currently have availability for Rust coaching, adoption, or development; from a single call to ongoing 3 days/week. I can help your team get things done, adopt Rust and use it more effectively, or to accurately evaluate Rust as a new technology.
</p>
<p style="font-size: small;">
  If you're adopting Rust, I can help make that a success with advice, 1:1 or group mentoring, design and code review, or online support. <a href="https://www.ncameron.org/coaching">Coaching</a>.
</p>
<p style="font-size: small;">
  If you're building with Rust and need a short or medium-term boost, I can join your team, quickly get up to speed, and deliver value. I have expertise with async and unsafe code, database implementation, distributed systems, dev tools, and language implementation. <a href="https://www.ncameron.org/consulting">Consulting</a>.
</p><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[To panic or not to panic]]></title><description><![CDATA[<p>From a user's perspective an uncaught panic in a Rust program is a crash. A panic will terminate the thread and unless the developers have taken some care, that leads to the program terminating. This is not an exploitable crash and Rust usually ensures that destructors are called, but the</p>]]></description><link>https://www.ncameron.org/blog/to-panic-or-not-to-panic/</link><guid isPermaLink="false">68eeb73da78c662b773dd1b9</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Tue, 14 Oct 2025 20:49:38 GMT</pubDate><content:encoded><![CDATA[<p>From a user's perspective an uncaught panic in a Rust program is a crash. A panic will terminate the thread and unless the developers have taken some care, that leads to the program terminating. This is not an exploitable crash and Rust usually ensures that destructors are called, but the program still crashes.</p><p>This might seem fine or really bad, depending on your perspective. But I think we can all agree that an uncaught panic is never a good user experience. As Rust developers, how should we think about panics? How do we write nice code and give our users a nice experience?</p><p>Before getting into the options, it's worth noting that whatever approach you take to panicking, you need a robust error handling strategy. Panicking should never be your primary mechanism for handling errors.</p><h2 id="code-which-never-panics">Code which never panics</h2><p>Writing code that never panics seems like the right thing to do. However, it is very difficult, verging on impossible. 
Rust has no language features to help do it (such as an effect system to show whether calling a function can cause a panic), and the language, standard library, and many crates are designed around an assumption that panicking is OK (neither of which may have been the best decision in retrospect, but we've got what we've got).</p><p>To get specific, the language itself can panic on integer overflows (only in debug builds) and on out-of-bounds indexing. The standard library can panic in a bunch of places; I couldn't find an exhaustive list, but my best effort is:</p><ul><li>explicit panicking macros such as <code>panic</code>, <code>unimplemented</code>, etc.</li><li>assertion macros such as <code>assert_eq</code></li><li><code>unwrap</code>, <code>expect</code>, and similar methods on types like <code>Option</code> and <code>Result</code> which panic in the presence of an unexpected variant (it's worth calling out the <code>lock().unwrap()</code> idiom for handling mutex poisoning which is a frequent source of potential panics in many programs),</li><li>methods on <code>RefCell</code>, <code>Cell</code>, etc. such as <code>RefCell::borrow</code> which panic on violations of their borrowing invariants,</li><li><code>push</code> and similar methods on collections when their capacity overflows,</li><li><code>Iterator::step_by(0)</code></li><li>any function which can allocate where allocation fails if the allocator panics (which is usually only in <code>no_std</code> builds; I'm not sure of the exact rules).</li></ul><p>And any dependent crate could panic in any function, potentially (and arguably it's not a semver breaking change for this behaviour to change with new releases).</p><p>It is possible to write code that avoids all of the above, but it's not very much fun, and the result is unlikely to be idiomatic Rust for anything non-trivial.</p><p>Unfortunately there is not much to help eliminate, reduce, or contain panics. 
Rust doesn't have effect checking for panics; there are Clippy lints, but they only do a shallow check: they don't cover panics in nested function calls (nor do they cover all sources of panics). There are some tricks with the linker which can be used to ensure a program doesn't panic (see <a href="https://blog.aheymans.xyz/post/don_t_panic_rust/">https://blog.aheymans.xyz/post/don_t_panic_rust/</a>), but they're a pain to use.</p><p>If you are writing small programs with high-priority requirements for not panicking, it is possible to write non-panicking code, and if the cost is justified then this is a feasible approach. However, the costs are high and for most programs this is not worthwhile. <strong>Pretending you are writing panic-free code by avoiding explicit panics is just wishful thinking.</strong></p><h2 id="code-which-only-panics-on-bugs">Code which only panics on bugs</h2><p>The <a href="https://doc.rust-lang.org/std/macro.panic.html#when-to-use-panic-vs-result">official advice</a> from the Rust project is that panics should never occur unless there is a bug. Unfortunately, bugs are not impossible, so following this advice will inevitably lead to production code panicking, which is (without any further mitigation) a bad experience for your users.</p><p>To put it another way, it is very difficult to know (even more difficult to <em>prove</em>) that a potential panic in your code will never be triggered. At the very least, this requires high-quality programming and extensive testing. That is still only going to improve things, not solve them completely.</p><p>A useful distinction is between relying on local vs non-local invariants. 
Using panics which rely on local invariants to demonstrate their impossibility is acceptable, but relying on non-local invariants is probably too risky.</p><p>For example, this kind of thing is OK (from the perspective of not panicking; it is unlikely to be idiomatic Rust code in general):</p><pre><code class="language-rust">if i &lt; arr.len() {
  // arr[i] could panic, but the check above ensures that it won't.
  println!("{}", arr[i]);
}
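
// A non-panicking alternative, as a sketch (assuming the same `arr` and
// `i` as above): `get` returns an Option, so the out-of-bounds case is
// handled explicitly and no panic is possible.
if let Some(x) = arr.get(i) {
  println!("{}", x);
}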
</code></pre><p>But I would try to avoid this sort of thing (although at least it's documented):</p><pre><code class="language-rust">/// Caller must ensure `i &lt; arr.len()` (otherwise will panic)
pub fn foo&lt;T&gt;(arr: &amp;[T], i: usize) -&gt; &amp;T {
  &amp;arr[i]
}
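
// If a function with a documented panic like `foo` must be used, callers
// can contain the damage at a boundary. A sketch (not part of the original
// example): `std::panic::catch_unwind` converts a panic into an Err value
// which the caller can recover from, instead of the thread crashing.
fn call_and_recover() {
  let result = std::panic::catch_unwind(|| {
    // code which might panic goes here
    42
  });
  assert!(result.is_ok());
}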
</code></pre><p>I think that striving for only impossible panics is a good start, but it still requires handling the 'impossible' panics when they do happen.</p><h2 id="handling-panics">Handling panics</h2><p>An alternative to not panicking is to assume your program might panic and ensure that those panics are handled in such a way that they don't end up as a bad user experience. Panics can be handled in various places - at thread boundaries, at process boundaries (i.e., in your <code>main</code> function or similar), or outside the process (by running your program in some kind of supervised environment). You can then either panic liberally or follow the above advice to only panic on bugs. I would strongly recommend against using panics as a general exception mechanism, though.</p><p>The big drawback is that your program has to be able to recover from panics. Since panicking runs destructors, you should (in theory) be able to keep program state consistent, but often that doesn't work, which makes recovery harder, possibly impossible. A particular hazard is that your program recovers but panics for the same reason as before and you get into an infinite loop of panicking and recovering.</p><p>A few other potential issues are that you have to be aware of panics at FFI boundaries (you must use the <code>-unwind</code> flavours of ABI, and the interaction between panicking and other languages' exception mechanisms is undefined), you must be aware that double panics (panicking while unwinding a panic) will cause the process to abort, and that if you build your program with <code>panic=abort</code> then its behaviour will change.</p><h2 id="conclusion">Conclusion</h2><p>There is no perfect answer. Making code strictly panic-free is possible, but hard work and only feasible in certain situations. For most code, minimising panics in the code and handling panics is a good solution, but is a bit more work than people usually expect. 
Letting the program panic is OK in some situations, but make sure that it really is OK and you're not just telling yourself that (and also that you have realistic expectations about how often panics will happen, i.e., not 'never').</p><p>There is no solution where you can forget about panicking and just think about the happy path. Although panics are 'safe', you do still need to think about panics when programming with Rust. All serious projects should have a strategy for panics as part of their high-level design. Panicking is an implicit edge case which you should always keep in mind when writing or reviewing Rust code.</p><hr><!--kg-card-begin: html--><p style="font-size: small;">
  I currently have availability for Rust coaching, adoption, or development; from a single call to ongoing 3 days/week. I can help your team get things done, adopt Rust and use it more effectively, or to accurately evaluate Rust as a new technology.
</p>
<p style="font-size: small;">
  If you're adopting Rust, I can help make that a success with advice, 1:1 or group mentoring, design and code review, or online support. <a href="https://www.ncameron.org/coaching">Coaching</a>.
</p>
<p style="font-size: small;">
  If you're building with Rust and need a short or medium-term boost, I can join your team, quickly get up to speed, and deliver value. I have expertise with async and unsafe code, database implementation, distributed systems, dev tools, and language implementation. <a href="https://www.ncameron.org/consulting">Consulting</a>.
</p><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Back to work]]></title><description><![CDATA[<p>Hi ho! Hi ho! It's back to work I go!</p><p>After taking a bit of a <a href="https://www.ncameron.org/blog/status-update/">break</a> from work, I am officially back into it (I've actually been back into it since July, but been too busy to do a proper blog post). I've decided to try out being an</p>]]></description><link>https://www.ncameron.org/blog/back-to-work/</link><guid isPermaLink="false">66d7ef43a78c662b773dd16d</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Wed, 04 Sep 2024 05:25:59 GMT</pubDate><content:encoded><![CDATA[<p>Hi ho! Hi ho! It's back to work I go!</p><p>After taking a bit of a <a href="https://www.ncameron.org/blog/status-update/">break</a> from work, I am officially back into it (I've actually been back into it since July, but been too busy to do a proper blog post). I've decided to try out being an independent contractor/consultant. I'm offering <a href="https://www.ncameron.org/consulting">software engineering services/team augmentation</a>, advice and assistance on how to adopt Rust or use Rust more effectively, and <a href="https://www.ncameron.org/training">training</a> (courses and coaching) on Rust-related topics. I have immediate availability for coaching and advisory consulting, and availability for larger projects from early October.</p><p>If you're interested in any of my services or courses, or would like to hire me, get in touch via <a href="mailto:nrc@ncameron.org">email</a>. If you would like to get updates on new courses, availability, etc. 
you can join my (very low volume) <a href="https://www.ncameron.org/contact/#mailing-list">mailing list</a>.</p><p>I'm not actively looking for a permanent role at the moment, but if you're happy to work with a flexible schedule, are doing interesting work in the databases or dev tools domains (using mostly Rust), and have a role you think might be a good fit for me, I'd love to talk to you about it!</p><h2 id="consulting">Consulting</h2><p>I have a lot of <a href="https://www.ncameron.org/about/">Rust experience</a> and I'd love to share it (I've been heavily involved with language design, dev tools, the compiler, the libraries, and governance since before 1.0, I've worked on large real-world code bases, and I've helped Rust adoption at companies from tiny startups to tech giants). I can help organisations to adopt Rust and to use Rust more effectively, and I can do that in the way which best suits the org: <a href="https://www.ncameron.org/consulting/">advice/Q&amp;A</a>, <a href="https://www.ncameron.org/consulting/">design and code review</a>, <a href="https://www.ncameron.org/training/">training</a>, <a href="https://www.ncameron.org/coaching/">coaching</a>, <a href="https://www.ncameron.org/coaching/">mentoring</a>, <a href="https://www.ncameron.org/consulting/">team augmentation</a>, or some combination of those.</p><p>I'm available for more straightforward <a href="https://www.ncameron.org/consulting/">software engineering</a> too. I work well with remote and hybrid teams, and can get up to speed quickly with large, complex code bases. I have most experience with database implementation, distributed systems, and developer tools. I'm also happy to work on more general development projects, particularly core components and libraries where API design, multi-context performance, and reliability are requirements. 
I can also add value to your team by maintaining or contributing to upstream open source projects.</p><h2 id="courses">Courses</h2><p>I'm developing <a href="https://www.ncameron.org/training/">two courses</a>, hopefully more in the future. I'm offering these as open-to-anyone (paid), remote courses, and as private courses for a team or organisation. In the private case, I'm happy to customise in any reasonable way (and probably a few unreasonable ways, to be honest).</p><p>The first course I'll be offering (hopefully in early October, more news very soon) is <a href="https://www.ncameron.org/training/#performance">an introduction to performance engineering using Rust</a>. This is for anyone who knows some Rust and wants to write fast Rust code or to make Rust programs faster. It is particularly aimed at engineers with experience in higher-level languages who want to learn about performance in the context of systems programming. It will cover the fundamentals of performance engineering and the specifics of writing performant Rust code, including profiling, concurrency, memory allocation, data layouts, common performance pitfalls, etc.</p><p>The second course will be a <a href="https://www.ncameron.org/training/#beginners">beginners' Rust</a> course. However, rather than being a short, intensive course of active instruction, this will be a kind of guided learning. There will be a curriculum, pointers to resources, and some short talks, but the onus will be on the student to teach themselves. The primary benefit of the course will be the support available - small-group office hour sessions (via Zoom or similar), online support (via Slack), assessed exercises, and a cohort of motivated students.</p><h2 id="rust">Rust</h2><p>I'm still figuring out how to contribute to the Rust project. I love Rust. I think it is an important project for computer science/software engineering and humanity in general (urgh, that sounds pretentious but I honestly think it is true). 
There are lots of interesting problems to solve and great people to work with and learn from. BUT, I've found it hard to work on without getting too emotionally attached to the work and burning out. It's also hard for me to figure out how to contribute in a way which is useful for the project, interesting for me, and balances making use of my experience with learning new things. I've been away from things (or semi-away) for long enough that I feel a bit rusty (ha!), and getting from rusty to useful is hard work. To be honest, there are aspects of the project's culture and governance which I find dysfunctional and I don't want to be part of (nor do I have the energy or influence to change). I think I have a different vision for the evolution of the language to many people in the project, and swimming against that current is not something I want to do (I don't think I'm definitely right and they're definitely wrong, but I do believe in my opinions).</p><p>Anyway, I want to figure out some way to be involved and useful. I don't think I want working on Rust to be my full-time job or my primary focus. My hope is that I can work in such a way that I have the time, energy, and financial stability to work on Rust in my own time and on my own terms. But that is pretty hopeful, and at the least will take some time.</p>]]></content:encoded></item><item><title><![CDATA[New website]]></title><description><![CDATA[<p>Over the last few months I've been putting together a new website. It is now all online at <a href="https://www.ncameron.org/">https://www.ncameron.org/</a>. 
The website is mostly about work - I'm available now for <a href="https://www.ncameron.org/consulting">consulting</a> and <a href="https://www.ncameron.org/training">training</a>, and will have some <a href="https://www.ncameron.org/training/#open">courses</a> coming soon; more detail coming in another blog post.</p>]]></description><link>https://www.ncameron.org/blog/new-website/</link><guid isPermaLink="false">667c8dcda78c662b773dd14c</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Wed, 26 Jun 2024 21:53:29 GMT</pubDate><content:encoded><![CDATA[<p>Over the last few months I've been putting together a new website. It is now all online at <a href="https://www.ncameron.org/">https://www.ncameron.org/</a>. The website is mostly about work - I'm available now for <a href="https://www.ncameron.org/consulting">consulting</a> and <a href="https://www.ncameron.org/training">training</a>, and will have some <a href="https://www.ncameron.org/training/#open">courses</a> coming soon; more detail coming in another blog post.</p><p>I've tried to go for a minimalist look for the website, using a small colour palette, few design elements, lots of whitespace, and deliberate use of images. It may have ended up too simple, it's certainly a bit more low-tech than is fashionable, but I'm quite happy with the look and general practicality. It should render well on any device and in most browsers, with or without Javascript (though JS will make it a little prettier).</p><p>Primarily the site advertises my services. I've written about <a href="https://www.ncameron.org/consulting">consulting</a>, <a href="https://www.ncameron.org/training">training</a>, and <a href="https://www.ncameron.org/coaching/">coaching/mentoring</a> that I offer. 
The training section includes a bit of detail on courses I will teach (one on <a href="https://www.ncameron.org/training/#performance">performance with Rust</a> and one <a href="https://www.ncameron.org/training/#beginners">beginners' Rust</a> course). I expect there will be much more in that section in the future. There's also a page about <a href="https://www.ncameron.org/freediving/">freediving</a> which is mostly a hobby, but which I offer fairly informal coaching in.</p><p>The website also has a bunch of stuff about me. Not just for my vanity, but because I expect most folk will want to know who I am before they throw money at me (you are going to throw money at me, right?). As well as <a href="https://www.ncameron.org/about">a bio page</a>, there are some of my <a href="https://www.ncameron.org/about/talks.html">talks</a> and <a href="https://www.ncameron.org/papers/index.html">publications</a>, my <a href="https://www.ncameron.org/about/#resume">resume</a>, and this <a href="https://www.ncameron.org/blog">blog</a> (which hasn't changed recently).</p><p>I hope you like the new site! Please let me know via <a href="https://www.ncameron.org/contact/">email</a> or the socials if you find any bugs or have any feedback.</p>]]></content:encoded></item><item><title><![CDATA[Eternal Sunshine of the Rustfmt'ed Mind]]></title><description><![CDATA[<p>I will be giving a talk at Rustconf this year about Rustfmt and code formatters. The abstract is:</p><blockquote>How does Rustfmt work? How could it work better? (Demonstrated by a working prototype). Or worse? 
How did we persuade the Rust community to stop arguing about tabs vs spaces (and other</blockquote>]]></description><link>https://www.ncameron.org/blog/eternal-sunshine-of-the-rustfmted-mind/</link><guid isPermaLink="false">664678b3cbff2f0ecb2dd58f</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Thu, 16 May 2024 21:21:40 GMT</pubDate><content:encoded><![CDATA[<p>I will be giving a talk at Rustconf this year about Rustfmt and code formatters. The abstract is:</p><blockquote>How does Rustfmt work? How could it work better? (Demonstrated by a working prototype). Or worse? How did we persuade the Rust community to stop arguing about tabs vs spaces (and other more contentious topics) and start using a consistent code style across nearly every crate in the ecosystem?</blockquote><p>I'm really excited about giving this talk because code formatters are kind of cool and their design is surprisingly interesting. Since I stopped working on Rustfmt, I've been playing around with different designs and I'm looking forward to sharing some of my thoughts and lessons learned.</p><p>Code formatters are useful for individual developers, but I think the most important benefit is having a consistently enforced style across the ecosystem. Therefore, the social issues around code formatting are also really interesting. I'm excited to talk a bit about these issues (and their intersection with the technical issues) and the work of the style team in making Rustfmt a success.</p><p>I'm really looking forward to attending Rustconf in person (it will be my first in-person conference or Rust meetup since before Covid), hopefully I'll catch up with lots of old friends from the community and meet lots of new and cool people. Please find me and say 'hi' if you'll also be there!</p><h2 id="appendix-talk-title">Appendix: talk title</h2><p>I had no idea what to title this talk. I like the title I came up with, but I also liked 'Formatted Away' and 'My Neighbour Rustfmt'. 
I prefer those films, but I think the 'Eternal Sunshine' title is more appropriate for the talk.</p>]]></content:encoded></item><item><title><![CDATA[Rust through the ages]]></title><description><![CDATA[<p>How has Rust changed over the years? It's been nine years since 1.0 was released (well, next week, technically). In that time, there have been 78 major releases and two editions, with a third due later this year. Quite a lot has changed! Those changes have been fairly incremental,</p>]]></description><link>https://www.ncameron.org/blog/rust-through-the-ages/</link><guid isPermaLink="false">663bd609cbff2f0ecb2dd586</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Wed, 08 May 2024 19:45:08 GMT</pubDate><content:encoded><![CDATA[<p>How has Rust changed over the years? It's been nine years since 1.0 was released (well, next week, technically). In that time, there have been 78 major releases and two editions, with a third due later this year. Quite a lot has changed! Those changes have been fairly incremental, so if you've been using Rust all that time, it probably doesn't feel like a lot. But comparing the 1.0 flavour of Rust with today's is pretty startling!</p><p>I wondered what it would look like to write down all the big changes (just the big ones, none of the small ones, though some of them are not so small). It turned into a long blog post! I've only covered changes that made it to the stable channel and I've missed a lot (so many performance enhancements, so many bug fixes, so many small additions, especially to the standard libraries which feel so much more complete now, but it's hard to pick out headline changes). I've organised the changes by edition, but within each edition I've grouped them by theme. I've had to be very concise in my descriptions, sorry. 
But you can have a fun time searching for all the features you missed!</p><p>Let me know what I've missed!</p><h1 id="1-0-to-2018-edition">1.0 to 2018 edition</h1><p>The first part of this period felt like finishing the job of the 1.0 release. There were lots of really fundamental language changes which I can't imagine not being part of Rust, lots of library stabilisations, lots of performance improvements. Later, in the run up to the 2018 edition release, came some big changes which changed the feel of Rust in small but significant ways. There were also a lot of tooling improvements which made Rust feel much more production-ready and less like a new, risky language.</p><h2 id="language">Language</h2><p><code>?</code> operator (1.13). Works with <code>Option</code> (1.22). Return values from loops using <code>break value</code> (1.19). Inclusive ranges, e.g., <code>0..=10</code> (1.26). Return <code>Result</code> from <code>main</code> (1.26) and tests (1.28).</p><p><code>pub(path)</code> including <code>pub(crate)</code> and <code>pub(super)</code> (1.18). Nested groups of imports, e.g., <code>use std::{fs::File, io::Read, path::{Path, PathBuf}};</code> (1.25). Renaming in imports, e.g., <code>use foo::bar as baz;</code> (1.4). <code>crate</code> in paths (not just imports) (1.30). Don't require prefix <code>::</code> to import from an external crate (1.30). Import and re-export macros from external crates using <code>use</code> (1.30). No more need for <code>extern crate</code> (2018). Unification of paths in imports and other paths (2018).</p><p><code>#[derive]</code> procedural macros (1.15), other procedural macros (1.30 and 1.45). Macros in type position (1.13).</p><p><code>const</code> functions (1.31). 128-bit integers (1.26). Raw identifiers, e.g., <code>r#type</code> (1.30). Overriding <code>+=</code>, <code>-=</code>, etc. (1.8). Smart pointers can contain dynamically sized types, allowing types like <code>Rc&lt;[T]&gt;</code> (1.2).</p><p>Unions (1.19). 
Numeric fields can be used to name fields of tuple structs, e.g., <code>Foo("hello", 42)</code> and <code>Foo { 0: "hello", 1: 42 }</code> are equivalent (1.19). Field initialiser shorthand, e.g., <code>Foo { foo }</code> instead of requiring <code>Foo { foo: foo }</code> (1.17). Empty structs with braces, e.g., <code>struct Foo {}</code> (1.8). Allow empty tuple structs, e.g., <code>struct Foo();</code> (1.15). Use <code>Self</code> in struct initialisers, e.g., <code>Self { foo: 42 }</code> (1.16).</p><p><code>impl Trait</code> as return type and argument type (1.26). <code>dyn Trait</code> as an alternative to just <code>Trait</code> for trait objects (available in 1.27, mandatory in 2021). Associated constants (1.20). <code>?Sized</code> in where clauses (1.15). Remove anonymous arguments in trait definitions, e.g., <code>fn foo(&amp;self, u8);</code> (2018).</p><p><code>#![no_std]</code> (1.6).</p><p>Attributes on statements (1.13); attributes on generics (1.27). Non-string arguments in attributes (1.30).<br><code>#[deprecated]</code> attribute (1.9). <code>#[repr(transparent)]</code> (1.28).</p><p>Automatic dereferencing in patterns - this is not very visible but a massive ergonomic win (1.26).</p><p><code>'_</code> (1.26). Lifetime elision in <code>impl</code> headers (1.31). Default lifetimes for static and const values (<code>'static</code>) (1.17). Non-lexical lifetimes (2018).</p><h2 id="libraries">Libraries</h2><p>SIMD intrinsics (1.27). <code>Duration</code> (1.3). <code>Instant</code> and <code>SystemTime</code> (1.8). <code>thread::sleep</code> (1.4). <code>ManuallyDrop</code> (1.30). <code>std::panic</code>, <code>catch_unwind</code> etc. (1.9 and 1.10). <code>ptr::NonNull</code> (1.25). <code>Box::leak</code> (1.26).</p><p><code>{:#?}</code> pretty printing version of the debug formatter (announced in 1.2, but apparently was there before 1.0, though I could swear this came later).</p><p><code>eprint</code> and <code>eprintln</code> macros (1.19). 
<code>println!()</code> rather than requiring <code>println!("")</code> (1.14).</p><p>Specifying a global allocator and the <code>std::alloc</code> module (1.28).</p><h2 id="tooling">Tooling</h2><p>Rustup 1.0 (1.14). Clippy and Rustfmt 1.0 (1.31). RLS initial release (first-generation IDE support) (1.31).</p><p>New error format (1.12); short error format (<code>--error-format=short</code>) (1.28).</p><p><code>cargo install</code> (1.6), <code>cargo check</code> (1.16), <code>cargo fix</code> (1.29), <code>cargo init</code> (1.8).</p><p>Attributes for tools, e.g., <code>#[rustfmt::skip]</code> (1.30); lints for tools, e.g., <code>#[allow(clippy::filter_map)]</code> (1.31).</p><p><code>--explain</code> flag added to the compiler to give more detail on error messages (1.1). MIR added to rustc (1.12, yeah it's an implementation detail, but it's a pretty important one).</p><p>MSVC toolchain for Windows (1.2). Non-official Windows XP support (1.3).</p><p>crates.io no longer supports wildcard versions (1.6).</p><p>rustc builds with Cargo (1.8) and Rustbuild (no makefiles) (1.15).</p><p><code>rust-gdb</code> and <code>rust-lldb</code> scripts (1.10).</p><h1 id="2018-edition-to-2021-edition">2018 edition to 2021 edition</h1><p>This period inevitably felt a lot slower than 2015-2018. In retrospect, it looks like a real maturation phase for Rust (one aspect of this is the increase in compiler targets in this time). There were a few big new features and a bunch of everyday-life improvements to the language. One of those big features was const generics, and over this period (and continuing after it), a big trend has been converting functions into <code>const</code> functions. 
Similarly on the language side of things, there has been a steady stream of work making more and more features work in <code>const</code> context.</p><h2 id="language-1">Language</h2><p><code>async</code> functions and blocks, <code>.await</code> (1.39).</p><p>const generics using integers, <code>char</code>, or <code>bool</code> (1.51); unsafe const functions (1.33). Use <code>Self</code> in type <code>where</code> clauses, constructors for tuple structs, etc. (1.32). Some types (<code>Rc</code>, <code>Arc</code>, <code>Pin</code>) can be used as the method receiver type (1.33).</p><p><code>?</code> in macro definitions to indicate zero or one repetitions (1.32). Literal specifier in macro definitions (1.32). Use macros in type position (1.40).</p><p>Multiple patterns using <code>|</code> in <code>if let</code> and <code>while let</code> (1.33). Able to use <code>|</code> in nested fashion in patterns, e.g., <code>Some(1 | 2)</code> (1.53). Removed <code>...</code> range syntax in favour of <code>..=</code> (warning in 1.37, error in 2021).</p><p><code>#[non_exhaustive]</code> (1.40), <code>#[track_caller]</code> (1.46). Multiple attribute arguments in <code>cfg_attr</code> (1.33).</p><p>Import traits anonymously, e.g., <code>use std::io::Read as _;</code> (1.33).</p><p>Unicode identifiers (1.53).</p><p>Cast uninhabited enums to integers (1.49).</p><h2 id="libraries-1">Libraries</h2><p><code>dbg</code> macro (1.32), <code>todo</code> macro (1.40), <code>matches</code> macros (1.42), <code>try</code> macro is deprecated (1.39).</p><p><code>Pin</code>, <code>Unpin</code>, etc. (1.33). Sized atomic integers, e.g., <code>AtomicU16</code> (1.34). <code>mem::MaybeUninit</code> (1.36). <code>ops::ControlFlow</code> (1.55).</p><p>Factored out the <code>alloc</code> crate from <code>std</code> (1.36).</p><h2 id="tooling-1">Tooling</h2><p>New feature resolver in Cargo (2021). Alternate Cargo registries (1.34). 
<code>--offline</code> for Cargo (1.36); <code>--all</code> changed to <code>--workspace</code> (1.39). <code>cargo tree</code> (1.44). Minimum supported Rust version in Cargo.toml (1.56).</p><p>Default allocator changed from jemalloc to the system allocator (1.32).</p><p>Profile-guided optimisation (1.37).</p><h1 id="2021-edition-to-today">2021 edition to today</h1><p>It's been an interesting few years, and the next few months leading up to the 2024 edition will undoubtedly bring some more big changes too. There have been a few pretty big or very visible additions, but as you might expect for a nine-year-old language, there are a lot of pretty deep and subtle changes, and compared to previous years, the pace of big changes has slowed down. However, one thing I noticed is that the big changes are getting bigger; consequently, a lot of smaller, incremental changes land first, and then later the big feature gets released. I think this speaks to the increasing experience and confidence of the Rust project in managing and implementing these big, multi-year changes.</p><p>There's a bunch of trends I noticed which don't have any headline features to highlight below. As in previous years, there are a lot of new methods on library types. The integer types seem especially blessed recently. There's also a load more compilation targets available, and the range of hardware which Rust supports now is huge. It would be mind-blowing to consider this list at the 1.0 release. Another thing happening is that Clippy lints are moving into the compiler proper. Not sure how visible that is (you all run Clippy, right?), but it's great to see.</p><h2 id="language-2">Language</h2><p>Generic associated types (1.65).</p><p><code>let else</code> (1.65). 
Break from a labelled block returning a value (1.65).</p><p>Async functions in traits and trait methods returning <code>impl Trait</code> (1.75).</p><p>Use variable names directly in format strings, e.g., <code>"hello {name}!"</code> (1.58). C string literals, e.g., <code>c"hello, world"</code> (1.77).</p><p>Inline assembly (<code>asm</code> and <code>global_asm</code> macros) (1.59).</p><p>Destructuring assignment lets the left-hand side of an assignment be a pattern just like in <code>let</code> statements (1.59).</p><p>Default arguments for const parameters (1.59).</p><p><code>#[derive(Default)]</code> for enums using <code>#[default]</code> (1.62).</p><h2 id="libraries-2">Libraries</h2><p>Scoped threads (1.63).</p><p><code>OnceCell</code> and <code>OnceLock</code> (1.70).</p><p><code>Saturating</code> type (1.74).</p><p><code>try_reserve</code> on many collections (1.57 and 1.63).</p><p><code>new_cyclic</code> for <code>Rc</code> and <code>Arc</code> (1.60).</p><p><code>std::process::{ExitCode, Termination}</code>, facilitating custom exit codes (1.61).</p><p><code>Backtrace</code> (1.65).</p><h2 id="tooling-2">Tooling</h2><p><code>cargo add</code> (1.62), <code>cargo remove</code> (1.66), <code>cargo logout</code> (1.70). <code>[lints]</code> section in Cargo.toml (1.74). Custom profiles in Cargo (1.57). Inherit package settings from the workspace (1.64).</p><p><code>#[diagnostic]</code> attributes for letting libraries influence compiler error messages (1.78).</p><p>Rust Analyzer distributed by rustup (1.64), RLS removed (1.65).</p>]]></content:encoded></item><item><title><![CDATA[Work news!]]></title><description><![CDATA[<p>(This blog post is an edited version of posts on social media, posted here for completeness/further reach).</p><p>I'm preparing a training course to teach performance engineering to Rust programmers. It's aimed at engineers with a background in high-level languages (Go, Python, Java, ...) 
moving to Rust to</p>]]></description><link>https://www.ncameron.org/blog/work-news/</link><guid isPermaLink="false">662ef84acbff2f0ecb2dd57b</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Mon, 29 Apr 2024 01:31:23 GMT</pubDate><content:encoded><![CDATA[<p>(This blog post is an edited version of posts on social media, posted here for completeness/further reach).</p><p>I'm preparing a training course to teach performance engineering to Rust programmers. It's aimed at engineers with a background in high-level languages (Go, Python, Java, ...) moving to Rust to write high-performance, systems-level code. It will quickly get you up to speed with profiling and optimisation, give you an understanding of how system architecture and Rust's design affect performance, and get you writing blazing-fast code. First dates should be in August. More details soon...</p><p>In the meantime, I have some availability for 1:1 or small-group Rust coaching or advisory consulting. Level up your team's Rust expertise or learn more about adopting Rust in your org. I bring over ten years of Rust experience, deep understanding from being part of multiple Rust teams, and experience of fostering adoption of Rust at Microsoft.</p><p>Finally, I'll be looking for software development/team augmentation/library maintenance contracts from June-ish. Ideally using Rust for database implementation, distributed systems, or developer tools. Short or medium term.</p><p>If you're interested in any of these, please reach out via <a>nrc@ncameron.org</a>.</p>]]></content:encoded></item><item><title><![CDATA[Notes on personal productivity]]></title><description><![CDATA[<p>Over the years I've found it pretty important to have a good system for self-organisation. 
This has evolved over time, with a major influence being the book <a href="https://en.wikipedia.org/wiki/Getting_Things_Done">Getting Things Done</a> by David Allen (which iirc was recommended to me by Aaron Turon and/or Dave Herman).</p><p>I thought I'd write</p>]]></description><link>https://www.ncameron.org/blog/notes-on-personal-productivity/</link><guid isPermaLink="false">66219efacbff2f0ecb2dd555</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Thu, 18 Apr 2024 22:34:03 GMT</pubDate><content:encoded><![CDATA[<p>Over the years I've found it pretty important to have a good system for self-organisation. This has evolved over time, with a major influence being the book <a href="https://en.wikipedia.org/wiki/Getting_Things_Done">Getting Things Done</a> by David Allen (which iirc was recommended to me by Aaron Turon and/or Dave Herman).</p><p>I thought I'd write down what I do in case it's useful for others. I'm pretty happy with my system; it works well for me. However, I think this kind of thing is very personal so it might not work for you. Still, hopefully some of it will be useful or inspire you to think about what would work for you.</p><h2 id="principles">Principles</h2><p>I subscribe to the hypothesis that having 'stuff' floating around in my head causes stress and makes it difficult to get work done. I.e., there is cognitive overhead in keeping things in my head which are not directly related to my current task, even if I don't think that I'm thinking about them. So, my number one rule is to get stuff out of my head and on to a piece of paper. Like, everything. I often think I've done that, then think on it and realise there are a bunch more 'todo's that I didn't think of as todos that I need to write down too. E.g., I used to write down the work stuff I needed to do on a given day, but then have other stuff that I wouldn't (e.g., call the plumber). 
Then I realised there was a bunch of stuff, like 'I would like to get out of the house for a coffee' which I didn't think of as a todo, but it is! The more I looked for things, the more I found, and the more I wrote them down, the better I felt.</p><p>I like to be organised. I've always been pretty organised, but I found that becoming even more organised was worthwhile. I have found that you can get more and more organised to a really surprising extent before you get diminishing returns.</p><p>I've experimented with things like the Pomodoro technique, and I think there is a kernel of truth that time-boxing is good, and short chunks are easier to deal with. But I've found that a one-size-fits-all time chunk does not work for me. So I try to give things the time they need. That might be 30 minutes for dealing with email, two hours for focused work, 15 minutes for a tea break, whatever. I've tried scheduling, but found that doesn't work for me. Instead I just try to be conscious of a time allowance per task, and stop when I hit it (unless I have an appointment or meeting or something, I'm not super-strict about this, but I find it useful to have a 'goal time' to work towards so that I don't need to think about when to stop, or spend too long on a task which isn't so important).</p><p>Finish small things, leave large things very unfinished. For small tasks, I always aim to get them done, rather than have half a task hanging around in my subconscious. Large tasks (i.e., any which require more than one time chunk) are more interesting. I find that if I stop at a nice, tidy place then it is harder to get back into it. Instead, I try and stop in the middle of a task where it is obvious how to carry on. E.g., say I have to write a bunch of tests, more than I can write in one go. Then I prefer to break in the middle of writing a test, rather than at the end of one test/start of another. 
Then I feel motivated to start the work (I have an itch to finish what I started) and it is easy to do so, because the next step is obvious. Doing this requires leaving detailed notes on where I got up to and what needs to be done next.</p><h2 id="practical-things">Practical things</h2><p>Todo lists are a big thing for me and I'll give them their own section, next.</p><p>Buy a filing cabinet. It felt kind of weird buying an office filing cabinet for my home, and it is pretty ugly (but it's under my desk, so I don't really notice it). But it is incredibly useful. So much stuff goes in it. I no longer have miscellaneous piles of paper around the office and I always know where to find things. I keep everything in it - important documents, random work notes, receipts, manuals, odd bits of Lego, cute conference stickers, etc., etc. Once you have one, organise it really well. It's important you can find anything you need quickly and can file things away quickly; don't let either of these simple tasks become a big enough thing that you have to think about it.</p><p>I have an in-tray on my desk. It is useful for keeping things tidy, but I feel that using it is a bit of a balance - it's good to have a place to put things which need doing, but easy to let it become another todo list. So I try and avoid using it for objects representing tasks. I try to keep it mostly empty and move things out of it quickly (which might be onto a todo list or into a 'revisit' folder in my filing cabinet).</p><p>I use a calendar app and put things in it obsessively. Even tiny little things, or things which don't feel like they should be in the calendar (e.g., date night). Anything which should happen on a given date or at a specific time goes in the calendar (<em>not</em> on a todo list). As with the todo list, the goal is to have minimal cognitive overhead and not to miss anything.</p><p>The problem I've found with calendars is they multiply. 
I usually have a personal one, a work one, and a family one. The last is on paper, the others online but often in distinct places. I've not found a good way to reliably sync across calendars (yes, I know calendars can be shared across apps, but I've not managed to make this work well). My solution is to make a daily plan from all calendars at the start of each day (part of today's todo list), and rely on notifications working from all calendars. This works but is pretty unsatisfactory.</p><p>Email, sigh. My number one goal is to avoid each inbox becoming an implicit todo list. This inevitably happens to some extent. I try to follow an inbox zero philosophy, though not strictly. I try to triage email quickly and delete it, store it, or add a todo list item and move it to my 'revisit' folder. In practice, I often just mark an email as unread if I want to get back to it on the same day. I have a lot of folders to avoid my inbox (or a single 'saved' folder) filling up with saved email. Filters are great, and you should have loads. I generally have filters for mailing lists, notifications, etc. I don't use a filter if I ever have to double-check that it did the right thing - that just means having an extra recurring task, I'd rather triage the emails manually as they come in (obviously that doesn't work if you have a huge number of true positives and a small number of false positives, but I've managed to avoid that situation).</p><p>Relatedly, notifications are bad for productivity. They interrupt my thinking and distract me from the task in hand, which means that both the task and the notification take longer to deal with. I like to make my notifications as passive as possible. Anything non-essential or which can be postponed gets turned off; I prefer to pull updates rather than have them pushed to me. That covers social media, email, most software, etc. Some notifications I can't justify turning off, e.g., text messages and phone calls. 
I make these silent so I can ignore them until it is convenient for me to check, whether that is the end of a sentence or a coffee break, depending on urgency (a quick glance is usually enough to determine urgency without disrupting my work).</p><p>I try to deal with stuff which turns up as soon as I'm aware of it, without interrupting my current task. For very small things (less than three minutes) I'll try and do the task immediately. For anything longer I'll try and do a very quick mental triage and put it on a todo list, or make a conscious decision that I'm not going to do anything. I try to never, ever 'think about it later' - it'll either stress me out without me noticing or I'll forget it.</p><p>I have start and end of the week check-ins with myself on Monday and Friday mornings. I find Friday evenings don't work since I often miss them. On Monday I make a todo list for the week (starting by revisiting anything left over from last week). I scan my calendars for anything important, tidy up my email inboxes and physical in-tray, and then deal with anything which happened over the weekend (due to timezones and working on open source projects, this happens a lot). On Friday I re-organise and tidy my todo lists and reflect on the week (in particular anything that could have gone better if I'd had less cognitive overhead or some better system of organisation), and my systems (I've got to admit that I don't actually do this anywhere near every week, but this is my goal and I manage some weeks).</p><h2 id="todo-lists">Todo lists</h2><p>I'm a big believer in well-organised todo lists. 
There are a few points which make for a good system of todo lists, IMO:</p><ul><li>it must be complete in both breadth and depth; no todo items should be outside of your list (e.g., in your head or your email inbox),</li><li>it must be very easy to know precisely what to do next,</li><li>it must be realistic - it is demoralising and unproductive if the list is full of stuff you have to skip over because you're not really going to do it (at least in the time frame of the list).</li></ul><p>To give a bit more detail on the above, it is very easy to have 'implicit todo' items around. Keeping things in your head is obvious, but a few other places they might be hiding are:</p><ul><li>email inbox,</li><li>pending notifications,</li><li>scraps of paper which are not tracked in your todo list system,</li><li>icons on your desktop,</li><li>files in a download folder,</li><li>a physical in tray,</li><li>comments in code,</li></ul><p>etc.</p><p>Keeping these tasks hidden means they are easy to lose track of. Subconsciously knowing these things exist, and therefore that your todo list system is incomplete, leads to stress.</p><p>Part of the point of the system is to avoid procrastination due to not knowing what the next task is. I find that if I have a vague idea of a task, but need to figure out the exact thing I need to do, then that requirement for thought blocks me from starting the task. Therefore, the todo list system needs to have a trivially easy way to find out what to do next. I need to be able to glance at a list and immediately start the task, without having to think about what the task actually is.</p><p>A system like this will never be perfect, and we shouldn't let 'perfect be the enemy of good'. Sometimes tasks stick around for longer than they should or aren't as atomic as I thought when I wrote them down, and that's fine. But I need to be able to trust the system so that I can rely on it. 
That means that most of the time it is an honest reflection of the tasks I'll get done in a given time period. Having a daily list which turns into an ongoing list every day is bad.</p><p>A system which satisfies those requirements is probably going to work quite well. The way I organise mine is:</p><p>I have a list of 'projects' and postponed tasks, which I keep in a notes app on my phone.</p><p>A 'project' is any non-trivial task which should be broken down into sub-tasks, e.g., painting my gate is a project (which I just finished, it looks nice, thanks for asking!); the tasks were: clean the gate, sand the gate, buy paint, put on masking tape, undercoat the gate, paint the gate, take off the masking tape.</p><p>Projects can be a bit hierarchical. I find this is mostly only useful for work stuff. When I do this, I usually make the project/task lists outside of the notes app and use something project-specific. However, there is a reference in the notes app so I don't lose projects.</p><p>A postponed task is one which I need to do, but I don't need to do urgently or at a known time. E.g., I want to buy some new climbing shoes, but the old ones are just about OK and I'm not climbing much, so I can't really justify the cost right now. I could just forget about this, but I find it reassuring to have this kind of thing written down so I can glance at the list and see what kind of things are coming up in the future.</p><p>A task with a specific date or time goes in my calendar, not in the todo list (unless the specific date is today).</p><p>I also use the notes app for recording tasks temporarily while I'm away from my desk.</p><p>I keep task lists for today, tomorrow, and this week. Today and tomorrow are on paper; the tomorrow list turns into the today list. Having them on paper is flexible and convenient. 
The 'this week' list lives in my notes app on my phone; I move things from it to the today or tomorrow list when I think I'll have time that day.</p><p>If I think there will be time left in a day, then I'll scan the postponed tasks list for things I could add. Otherwise, I'll scan that list once per week and move some things to the weekly list.</p><p>I try and keep the today and tomorrow tasks very small and very do-able. There might be projects on the 'this week' list, but not on the daily lists. If I haven't previously identified the tasks in a project, then I do so when I'm scheduling them by adding them to my today list.</p><p>I cross things off the various lists as soon as I know I won't do them, and I try hard not to feel bad about it.</p><p>My lists are each in rough priority order, but I don't worry too much about this.</p><p>This sounds like a lot when you write it down in a blog post, but I find it is all pretty easy and lightweight in real life. I also bend the rules a lot; it doesn't matter too much as long as the system is working.</p>]]></content:encoded></item><item><title><![CDATA[Status update]]></title><description><![CDATA[<p>I left Microsoft at the beginning of July (after two years) and also stopped working on Rust at the same time. I intended to finish up some work and hand stuff over etc. on the Rust side, but I found that I simply did not have the energy. I've been</p>]]></description><link>https://www.ncameron.org/blog/status-update/</link><guid isPermaLink="false">656e8c67cbff2f0ecb2dd524</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Tue, 05 Dec 2023 02:53:35 GMT</pubDate><content:encoded><![CDATA[<p>I left Microsoft at the beginning of July (after two years) and also stopped working on Rust at the same time. I intended to finish up some work and hand stuff over etc. on the Rust side, but I found that I simply did not have the energy. I've been resting since then and it has been fantastic. 
I don't really have much to report on, since nothing I have been doing is of much interest from a tech perspective, and this blog is mostly about that. But I feel like I should make some kind of announcement that I've made a change, rather than just disappearing, so here it is.</p><h2 id="leaving-rust-and-microsoft">Leaving Rust and Microsoft</h2><p>Short story: I was burnt-out on Rust development and work/life more generally.</p><p>Longer story, when I left Rust and Mozilla for PingCAP a few years ago, I thought I just wanted a change and to do some more technical work. After I left, I realised I was pretty burnt-out on the whole situation. Working at PingCAP was great - it was everything I wanted in terms of being a big change, interesting technical work, and good people. However, being remote and dealing with the pandemic was rough (lack of international travel and thus face-to-face time with colleagues, primarily). At the time that was happening I was feeling very motivated to work directly on Rust again. I love Rust. Working on it has been a privilege, and a huge part of my career and development as an engineer. Having an opportunity to use Rust in a large, real-life application was great, but got me thinking again about how to make Rust better. An opportunity at Microsoft came up to do exactly that, as well as help Microsoft adopt Rust internally; that sounded like a dream job.</p><p>Unfortunately, I was not as recovered from my burn out as I thought I was. Being so passionate about the project (and therefore tying my identity to my work more than I should) has made me more susceptible to some of the causes of burnout than I would like. It was also a difficult couple of years in my personal life. As well as the general pandemic conditions, I caught Covid, had a second child and thus a baby/toddler to care for, moved literally across the globe, then did it again, and supported my wife through a challenging and time-intense period of her career. 
Navigating the duality of the work role (its relation to the Rust project and to Microsoft), as well as a step-up in responsibility and my first time working at a mega-scale corporate, added to the stress. I found it very difficult to navigate my return to the Rust community and how I wanted to fit into it. I found myself in disagreement with more people, more often than I had in the past. Due (I think) to its increased size, the multiple new interests and employers, and my own decreased social energy, I found it hard to deal with the various people/culture/political issues. It was all <em>a lot</em>. I decided I needed a long break and probably another big change.</p><p>After I stopped working, I realised just how burnt-out I had become. Very luckily, I am able to take a decent break. I'm feeling much better now (five months in)! I can read books again! I'm feeling motivated about tech! I'm also really enjoying having the extra time to do non-work stuff, and catch up on all the various chores and general maintenance stuff that I've been putting off for years. I still feel far off my 100%; I hope that more time and a gradual reintroduction of tech stuff (focussing on learning interesting things and the things which bring me joy) will help with that.</p><p>It's kind of a shame. I still feel motivated to work on and improve Rust; I still feel the passion for a project which feels so well-motivated and well-timed, that is technically interesting in so many ways, and has such a great community and culture. Working at Microsoft was great. I was learning a lot and was working with a fantastic team (I miss you all!). Despite being a huge corporate and very business-focussed, I liked the culture. It was great to work at a place where things were turning around and getting better, rare for an established and mature organisation. I was impressed by the competency and communication of my leaders. In short, it was a good place to work and I would recommend it. 
But the timing for me was terrible :-(</p><h2 id="vague-plans-for-the-future">Vague plans for the future</h2><p>I plan to take more time off work, probably another six months. I'm starting to read some of the books and papers I've been meaning to read, and I plan to work on some of the side projects that I haven't had time for. Mostly these will be private, learning things, but one or two might be of broader interest. If so, I'll write about them here because I would also like to write (and perhaps talk) more.</p><p>I would love to work on Rust again, but before I do that, I really have to find a way to do it without getting burnt-out again. That is probably more of an emotional balance issue than a work-load issue for me.</p><p>I'm not thinking too much about returning to work right now, and I'm definitely not looking for anything. But I will probably be after employment or contracting opportunities in about six months, preferably in the realm of distributed systems and databases, preferably using Rust; if you have any interesting leads, please get in touch! I'm also thinking that I would like to share more of my experience with Rust. I love teaching (one of the few things I miss from academia) and mentoring. Perhaps I will look at offering some kind of training and/or consultancy work; if that sounds interesting, again, please get in touch!</p>]]></content:encoded></item><item><title><![CDATA[A response to 'A decade of developing a programming language']]></title><description><![CDATA[<p>I recently read the blog post <a href="https://yorickpeterse.com/articles/a-decade-of-developing-a-programming-language/">A decade of developing a programming language</a> by Yorick Peterse (found via Steve Klabnik). 
I thought it was an interesting blog post which got me thinking, and I have opinions on programming language design from Rust (it is almost exactly a decade since I</p>]]></description><link>https://www.ncameron.org/blog/a-response-to-a-decade-of-developing-a-programming-language/</link><guid isPermaLink="false">656915eccbff2f0ecb2dd51b</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Thu, 30 Nov 2023 23:08:55 GMT</pubDate><content:encoded><![CDATA[<p>I recently read the blog post <a href="https://yorickpeterse.com/articles/a-decade-of-developing-a-programming-language/">A decade of developing a programming language</a> by Yorick Peterse (found via Steve Klabnik). I thought it was an interesting blog post which got me thinking, and I have opinions on programming language design from Rust (it is almost exactly a decade since I got involved with Rust too), so I have written a response of sorts. This is all unsubstantiated opinion, so don't hold me to all this too hard, it just felt like a fun thing to write.</p><h2 id="avoid-gradual-typing">Avoid gradual typing</h2><p>Yes. Back in my PhD days when gradual typing was an emerging thing in academia, I really bought into the hype. But I think Yorick is spot-on with his observation that by using gradual typing, you lose the benefits of static typing and you don't really get the benefits of dynamic typing.</p><p>The motivation for gradual typing is that you can use a language to prototype your project (or to sketch initial architectures, etc.), then evolve that using the same language into high-quality code with static types. But this is flawed in three ways:</p><ul><li>there is huge benefit in throwing away the prototype, rather than evolving it into production code.</li><li>When writing 'quick and dirty' code, it's mostly not the types which slow you down (assuming the programmer is experienced, and the language has good inference and tooling). 
You move fast by leaving out edge cases, error handling, UI design, integrations, etc., etc. In fact, types can speed you up by facilitating better tooling and type-based design.</li><li>Static typing is not just bureaucratic error checking; types help define the 'culture' and flavour of the language. If you're writing code in a language minus its type system, either you follow the flavour of the language (where you're effectively constrained by its type system, you just don't get the automatic checking) or you ignore it and you're effectively writing in a different language and adding static types is going to require rewriting your code.</li></ul><p>I also think that using different languages for different tasks is not a bad thing, and nobody should expect to use just one language any more. However, there are benefits to using fewer languages, so this isn't a strong counter-argument.</p><p>I think there <em>is</em> a kernel of truth to the vision of gradual typing, which is that writing and thinking about types can be a pain (either because they are not expressive enough or because they are too complex). I think 80% of the solution is good type inference (and other ways to elide types), and type inference should be table stakes for any new language. The other 20% comes down to good language design: balancing simplicity and expressivity as design goals of your type system.</p><p>The other element of truth from gradual typing is that different users of a language will have different constraints and requirements, and thus will use your language in different <a href="https://without.boats/blog/the-registers-of-rust/">registers</a>. E.g., in a Rust-like language, some users will want the level of detail we currently have, others will want <code>clone</code> to be implicit for <code>Rc</code> (and similar cases), still others will want more precision, e.g., guarantees that code doesn't panic or about memory usage. 
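</p><p>As a concrete illustration of the register point (my example, not from the original post): in today's Rust, every additional <code>Rc</code> handle requires an explicit clone, even though it is only a reference-count bump, and a 'lower-detail' register of the language might let that happen implicitly.</p>

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    // Today every new handle needs an explicit clone; a lower-detail
    // register of Rust might insert this reference-count bump implicitly.
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(*b, vec![1, 2, 3]);
}
```

<p>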
There are a few possible solutions:</p><ul><li>you try to find a solution which makes everyone happy,</li><li>you accept that your language has a small niche and do what makes sense in that niche,</li><li>you support different registers/dialects either explicitly or by using some kind of gradual typing,</li><li>you fork the language into several dialects.</li></ul><p>I don't think any of these are good solutions. The first is nice if it works, but it is limited. The others suck. I don't have an answer. I think it is one of the hard challenges in language design.</p><p>The killer app for gradual typing is something like TypeScript - adding static types to a language which is dynamically typed. This is great, but I don't think this means gradual typing is good for a <em>new</em> language (Yorick also makes this point).</p><h2 id="avoid-self-hosting-your-compiler">Avoid self-hosting your compiler</h2><p>Yes! From a technical perspective, writing your compiler in your target language is just plain bad (Yorick covers this). The real selling point of self-hosting is social: people who want to work on your new language will want to write that language and they will get frustrated if they have to write the compiler in an older, worse language. It is also a bit of a milestone in the PL community when you can self-host, but this strikes me as unjustified nerdery.</p><p>The big downside I see with self-hosting is that it incentivises you to design a language optimised for writing compilers. Most code isn't compilers, so this incentive is probably not well-aligned with your goals.</p><h2 id="avoid-writing-your-own-code-generator-linker-etc">Avoid writing your own code generator, linker, etc</h2><p>I think this is mostly true. But I think that the bigger point is to know your vision and goals for your language and prioritise those. 
Perhaps a new linking model is one of the core goals and the primary selling point of your language; if so, then you should definitely write your own linker. But for most languages, that won't be the case. Focus on your core features, and use existing tools for peripheral stuff, basically.</p><h2 id="avoid-bike-shedding-about-syntax">Avoid bike shedding about syntax</h2><p>Disagree. This is a common sentiment among language designers, but I think it is wrong. It's tempting because the semantics are the really deep, interesting things, and the syntax is easy to argue about because it is fairly subjective and there is a lower barrier to entry in terms of required knowledge. But this doesn't mean syntax isn't important, it just means it's hard to have a good discussion about it. Syntax is the interface between the user and your language, and like all user interfaces, good design is super, super important.</p><p>As a language designer, you need to figure out the syntax. It's hard work which is very different from designing the semantics; it's more like product design than compiler hacking. As a community leader, you need to figure out how to have good discussions about syntax, which is probably even harder!</p><h2 id="cross-platform-support-is-a-challenge">Cross-platform support is a challenge</h2><p>Yeeeeessss, but! This one is very true: cross-platform is hard and often frustrating. However, my experience from Rust is that supporting multiple platforms often helps you make good design decisions (it helps you to determine what is essential vs what is accidental about a concept), and that adding platforms later is much, much harder than supporting them from the start.</p><p>For example, if we'd only supported 64-bit platforms we might have used <code>u64</code> instead of <code>usize</code> for things like array length. But making that separation was a good design decision from the perspective of types as documentation. 
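</p><p>To make that concrete (my sketch, not from the original post): slice lengths and indices in Rust are <code>usize</code>, so index values arriving as other integer types must be cast explicitly, which documents exactly where a width assumption is being made.</p>

```rust
fn main() {
    let xs = [10u8, 20, 30];
    // len() returns usize, the pointer-sized type, on every platform.
    let n: usize = xs.len();
    // An index arriving as u64 must be cast before use; on a 32-bit
    // target this cast is the one place truncation could occur, and
    // the explicit `as usize` documents that assumption.
    let i: u64 = 2;
    let last = xs[i as usize];
    assert_eq!(n, 3);
    assert_eq!(last, 30);
}
```

<p>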
On the other hand, <code>usize</code> conflates several concepts: addresses, data size, etc. This is more apparent now when considering platforms like CHERI.</p><h2 id="compiler-books-aren-t-worth-the-money">Compiler books aren't worth the money</h2><p>Mostly true. They certainly do seem to focus too much on parsing and not enough on the important stuff, especially difficult engineering concepts like error handling/recovery. My recommendation is 'Engineering a Compiler' by Cooper and Torczon; it covers the usual parsing stuff, but also type checking, intermediate representations, optimisation, etc. (not much on error handling, though).</p><h2 id="growing-a-language-is-hard">Growing a language is hard</h2><p>Yeah, really hard! Nothing to add to this one.</p><h2 id="the-best-test-suite-is-a-real-application">The best test suite is a real application</h2><p>Sort of. One of the best things about working on the Rust compiler is its excellent test suite. I can't imagine writing a compiler without such a thing. BUT, that is a test suite for the <em>compiler</em>, not the <em>language</em>. The unit tests do help add some clarity in the details, but in terms of driving the language design, a large application is more useful. However, you do need to be careful about over-fitting to a single application. Ideally you want several large applications and a huge test suite, but that is a lot to ask for a language which is in active development.</p><h2 id="don-t-prioritize-performance-over-functionality">Don't prioritize performance over functionality</h2><p>This is good advice, but I think only as far as the usual 'avoid premature optimisation' goes. Performance is a feature; having a fast compiler is nice and having a slow one sucks. Making a slow compiler fast is really, really hard. 
So it is a trade-off rather than something black and white.</p><h2 id="building-a-language-takes-time">Building a language takes time</h2><p>Yep, so much time!</p><h2 id="some-of-my-own-lessons">Some of my own lessons</h2><p>I think it would be fun and interesting to write down some of my own lessons learnt from Rust. That deserves a fair bit of thought and its own post. But just off the top of my head, here are a few presented without detail:</p><ul><li>community is important and difficult.</li><li>Be very clear about your audience (potential users) and goals; use these to drive a strong vision (including saying 'no' a lot).</li><li>BUT these <em>will</em> change over time and that is ok.</li><li>A language is a dynamic project, not a journey toward a static goal. Your language and tooling must have mechanisms to evolve and you must design with backwards and forwards compatibility in mind.</li><li>The entropy of languages is towards complexity. You must dedicate effort to minimising complexity as your language evolves.</li><li>There's a lot more existing code in existing languages than new code in your new language: it's important to consider how the two can interact (FFI, sharing VMs, etc.).</li><li>The culture of your language is important and must be built earlier than you expect (e.g., things like writing docs or tests, or how much to focus on performance). Doubly so for the culture of your community (being welcoming, valuing diversity, etc., but this comes back to the first point: community is hard).</li><li>Libraries and tooling matter just as much as the design of the language.</li><li>Don't try and do too much. Limit the number of areas where you innovate and rely mostly on ideas proven in existing languages (but that includes research/academic languages, not just widely used ones).</li><li>There's a lot of research out there and it can be really useful. 
You often have to do a lot of work to apply research ideas to your context though.</li></ul><p>I could come up with more of these, and probably the above aren't the most useful or important, but they're what came to mind this evening.</p>]]></content:encoded></item><item><title><![CDATA[Social media update]]></title><description><![CDATA[<p>I've been a bit quiet on social media the past few months (more on what I've been up to in an upcoming post). I would like to write a bit more in the near future, mostly here on this blog, but also on various social media platforms.</p><p>Twitter used to</p>]]></description><link>https://www.ncameron.org/blog/social-media-update/</link><guid isPermaLink="false">655a89eacbff2f0ecb2dd4c5</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Sun, 19 Nov 2023 22:39:58 GMT</pubDate><content:encoded><![CDATA[<p>I've been a bit quiet on social media the past few months (more on what I've been up to in an upcoming post). I would like to write a bit more in the near future, mostly here on this blog, but also on various social media platforms.</p><p>Twitter used to be my go-to spot for work-related stuff, but since the acquisition, I want to use it less and find it a less useful and fun place to be. Since there has been a bit of a diversification of social media, I thought I'd enumerate the places where I might post:</p><ul><li>Twitter (X): <a href="https://twitter.com/nick_r_cameron">nick_r_cameron</a>. I'll continue to read occasionally and post announcements or stuff I feel needs reach.</li><li>Mastodon: <a href="https://hachyderm.io/@nrc">nrc@hachyderm.io</a></li><li>Blue Sky: <a href="https://bsky.app/profile/ncameron.org">ncameron.org</a></li><li>LinkedIn: <a href="https://www.linkedin.com/in/nicholas-cameron-197212159/">Nicholas Cameron</a>. I don't really use LinkedIn, though I maintain my network there. 
I might start using it in the future when I'm looking for work again.</li><li>GitHub: <a href="https://github.com/nrc">nrc</a>. I don't believe GitHub is social media, but it seems to be adding more and more social features, so for the sake of completeness, it's here too.</li></ul>]]></content:encoded></item><item><title><![CDATA[Some thoughts on open collaboration]]></title><description><![CDATA[<p>Rust is an open source project. More than just the source code for the compiler being available, that means the project works in the open, inclusive of, and collaborating with, the wider community. That is sometimes difficult, especially when working on things which evoke strong feelings from a large group</p>]]></description><link>https://www.ncameron.org/blog/some-thoughts-on-open-collaboration/</link><guid isPermaLink="false">64514f980417db04da726379</guid><dc:creator><![CDATA[Nick Cameron]]></dc:creator><pubDate>Tue, 02 May 2023 18:01:52 GMT</pubDate><content:encoded><![CDATA[<p>Rust is an open source project. More than just the source code for the compiler being available, that means the project works in the open, inclusive of, and collaborating with, the wider community. That is sometimes difficult, especially when working on things which evoke strong feelings from a large group of people or where the work falls outside most people's scope of experience. Policy and governance work is both of those things, and doing open work on policy and governance is super-hard! This kind of work is difficult to begin with (especially when the folk doing it have a primarily technical background); doing it in an open way is really, really hard (I know, I've tried many times, and mostly not done well at it).</p><p>Recently, I think that the work on project governance and trademark policy has fallen short of the standards of open collaboration we expect from the Rust project. 
It's been frustrating for me to try and explain why exactly, while also participating in these processes (and I've fallen short of my own standards for kindness and empathy a few times along the way). So, I wanted to try to step back and write up how I think these processes should be done, and some ideas for how to make them successful. Talking the talk here is much easier than walking the walk, but even knowing what to aim for is difficult. I hope this post helps there. I think I've failed more than I've succeeded when I've tried to do this stuff, but hopefully you can learn from my mistakes.</p><h2 id="before-we-start">Before we start</h2><p>Doing the right thing here is hard! Developing policy in the open takes a lot of effort. More effort than if you were doing it privately. But you have to do it! If you don't, you will get sub-optimal results, you will piss people off, and the community will be worse off. It is part of the work; it is non-optional. Yes, it will take longer and it will be emotionally more difficult, but it will be better in the long run. Just like writing tests or docs, or brushing your teeth every morning, you have to do it!</p><p>And there are no shortcuts. If you work in private and just share more, you are doing it wrong. You have to communicate intentionally and you have to communicate well. In fact, you have to do more than communicate, you have to <em>collaborate</em>.</p><p>I often see people framing these processes as a dichotomy between 'working totally in the open', i.e., working in a public repo and the whole world is your working group, and working in private with occasional announcements or requests for limited feedback. The former does not work at scale and it is reasonable to avoid. It leads to chaos and is overwhelming. But the latter approach is also bad: it is not in the spirit of open work - it is the policy equivalent of coding in private, throwing it over the wall, and calling it 'open source'. 
The community cannot effectively contribute to the work and so it is not open.</p><p>I believe the right approach is to find the right amount of private to do the work, and involve the right amount of the community in a structured and intentional way at <em>every stage of the work</em>.</p><p>To start with, I'll cover a couple of useful concepts for thinking about this stuff and some principles which I believe are axiomatic.</p><h2 id="wwic">WWIC?</h2><p>"Why wasn't I consulted?"</p><p>Apparently, <a href="https://www.ftrain.com/wwic">Paul Ford</a> came up with this as the 'fundamental question of the web'. <a href="https://youtu.be/rdmpOktHLmM?t=405">Aaron Turon</a> (the linked talk is really good, you should watch it all) popularised it in the Rust project as a way of thinking about community collaboration, and I think it is a really important lens through which to consider this kind of work.</p><p>People want to feel empowered and that they are working together on development. They don't want to feel that they are being ignored in favour of a select few. They don't want to be thinking 'why wasn't I consulted?'.</p><p>The philosophy behind this slogan gave rise to the RFC process and much of its early evolution, and more generally to the way in which Rust is developed in the open. It's not that everyone gets an equal vote, but that everyone feels that we're working together.</p><h2 id="raci">RACI</h2><p>RACI is an acronym for responsible, accountable, consulted, informed. It is a framework for identifying and thinking about the groups who should be involved in a process. Like many helpful concepts, it can be overdone and become dogmatic and cringey. But when interpreted sensibly, I think it is really useful. (I'll describe it here, but I'm giving my interpretation which might not be exactly the same as they teach in business school or whatever).
There are a bunch of variations too, if you're interested in this sort of thing.</p><p>The groups are:</p><ul><li>responsible - the people doing the work and responsible for delivering it</li><li>accountable - the people (often a manager or exec in the corporate world) who make sure the work gets done and that it is the right work to do. Sometimes this is the leader of the R group or a subset of the R group, but I think it is better if they are not. The Rs and the As will work closely together, but the As should still have an 'outsider' perspective on the work.</li><li>consulted - people who are stakeholders in the work and can give useful input and feedback. This group should be relatively small so that they can be effectively consulted. Consultation means being significantly involved in the direction and details. I think it is an anti-pattern to make the Cs your end users or customers; you might want input from these people but you don't want to <em>collaborate</em> with them.</li><li>informed - people who should be kept informed about the process. Note that this is not just informed about the result of the process, but about the process itself. In the corporate setting this might be peer teams or execs or the legal department, etc. (not the general public or even the whole company, who might only be informed at product launch or whatever). You don't explicitly seek feedback from this group, but if feedback happens, you should probably listen.
You might occasionally seek a +1/-1 vote of confidence to check you're on the right track.</li></ul><p>Some common mistakes are:</p><ul><li>including too many people (or everyone) in the Rs, meaning that work doesn't get done due to the overhead of involving everyone.</li><li>not separating the Cs and Is, meaning that you get too much noise in your feedback and/or you can't properly involve the people who should be Cs.</li><li>making the Is too big, conflating people who want to be well-informed about a project (who should be Is) with anyone who is vaguely interested (who should not, but should still get an occasional announcement or something).</li></ul><h2 id="some-other-principles">Some other principles</h2><p>Different people care about different things. People will say that they want to be "involved in the work" or that they should be "consulted" or "informed". But those words can mean a lot of different things.</p><p>Some people care about the details and process, not just the outcome. Don't assume that people don't want to know the boring details, or that only the output of the process matters. People care that things are done in the right way.</p><p>Accountability matters and trust is earned. To be accountable in an open source project means that the people involved are known (pseudonymous is fine) and outsiders can know who argued for what, rather than just giving the opinion of the group once a decision is made. It means mistakes are acknowledged. It means being clear about who has authority and why.</p><p>Often, there will be people who want more than you should give. People will want to be more involved than they should be, or want to be consulted more often, or want to have a veto when they are just one voice of many. It is ok to push back and say 'no'. Open collaboration does not mean everyone gets to be as involved as they want to be.
Be firm and explicit, and explain why (efficacy can be a fine reason, as long as it is not overused). People will default to wanting to be included more rather than less; this can be (somewhat) countered by building trust that they can be involved less and still have the input they want.</p><h2 id="ok-so-what-should-we-actually-do">OK, so what should we actually do?</h2><p>Communication is key! Do lots of intentional communication, both one-way and two-way. Some formats for communication in an open project:</p><ul><li>announcements (in a blog post, forum post, mailing list, press release, etc.), good for getting the widest possible circulation of news and therefore ensuring work is widely known;</li><li>newsletters and release notes, good for formal high-level summaries for those who want to keep tabs on overall progress;</li><li>blog posts, good for giving extra context and for keeping people informed of possible solutions/random ideas/folks' thought processes;</li><li>meeting notes/minutes/recordings<sup><a>[1]</a></sup>, good for those who want to keep close track of a project and/or need detailed context in order to contribute, useful for demonstrating transparency and building trust;</li><li>working in public.</li></ul><p>The last item is doing the work in public. For code, this is the default for open source work. For policy and similar, this can often still be done. If you can do this, you still need to do other communication too (because not everyone wants to follow a repo, and even for those who do, they will not get an optimal picture of the work just by following commits and discussion). If you can't do this, then you need to do much more of the rest.</p><p>Early in the project you need to work out, precisely, what needs to be private vs what can be public.
Work which concerns private information or secrets needs to be private (this might include a lot of financial and legal stuff if it overlaps with corporate policies, but not necessarily if it is financial or legal stuff which only concerns 'open' entities). Some sensitive work (moderation, mediation, some mentoring, etc.) should be done privately. Much other work might feel like it ought to be done mostly in private (e.g., governance work, legal policies), but it rarely actually needs to be. While your lawyers won't want to make PRs, you can still do a lot of work in public. The friction from doing this is usually much less than you'd expect, especially if you are very clear and explicit about the interactions you want, communicate intentionally so that people feel safe to abstain, and are happy to moderate discussion. IMO, it is better to work in public and make liberal use of moderation tools than to work in private and have to make up for that somehow. It is more emotional labour in the short term, but less stress in the long term and leads to better results.</p><p>Whether you're working in public or private, you still need to reach out to the wider community to consult on the work. This must happen at all stages. The biggest mistake I see people make is to only get input and then present a draft output for feedback. I.e., the development process is a black box with input and output. This is not open collaboration! To be open, the community must be able to help set the direction and goals of the work, influence the design, and contribute to the work, not just polish the output.</p><p>An allegory: you and a group of friends are going on a road trip. The group agrees that you will organise the trip and do the driving.
Here's a good way for that to work:</p><ul><li>you (on your own) look up a few destinations within a sensible driving distance and narrow that down to a shortlist based on the attractions at each,</li><li>you send a text message to the group letting them know the above,</li><li>you all meet up for a coffee, you tell them the short list of destinations, the cost for petrol, and the kind of things you can do at each, you have a bit of a chat and agree on a destination,</li><li>you fuel up the car and give a shopping list to Alice to pick up some essentials,</li><li>you send a text message to let everyone know you're ready to go,</li><li>you all meet up and start driving,</li><li>at some point you get a bit lost and ask Bob to check the map,</li><li>a bit later you're tired, so you ask Charlie to drive for a bit,</li><li>when you get to the destination, the camp site is closed, you find some alternatives and the group has a discussion about where to stay.</li></ul><p>And here are some ways this process could be less good:</p><ul><li>the group tell you the kind of things they want, you choose a destination but don't tell anyone where you're going, everyone buys the things they think they'll need and you set off. One mile from the destination you tell everyone where you're going and ask people if they'd like to ski or snowboard. Some people have brought beach gear to a ski mountain. You have brought along a meat feast barbecue even though half the group are vegetarian because you asked about activities but not diets.</li><li>You start off by having a meeting to choose a destination. You sit around with a world atlas and wikipedia, the whole group suggest destinations with no idea how to get there or how much it might cost. 
Strangers keep walking past your open window and yelling about destinations which are too far away or which nobody in the group wants to go to, but which are apparently better.</li><li>You each take your own car, spend a bunch more money, and end up at different destinations.</li><li>You make all the decisions yourself. You send beautifully crafted text messages to the group and spend the road trip giving well-thought out justifications for the choice of destination. On arrival, you hand out a schedule to everyone which plans for their time down to the minute, who will hang out with who, and when they're allowed toilet breaks. Every meal is planned, the dietary choices are correct and the food is delicious, but nobody gets to choose anything for the whole trip. You hand-picked wine to match each meal; when one of the group says they'd rather have a beer that evening you break down in tears and kick them off the trip.</li></ul><p>OK, sorry, that was a bit of a digression. Back to concrete stuff you should do.</p><h3 id="a-timeline">A timeline</h3><h4 id="pre-start">Pre-start</h4><p>At the start of the project, announce what you want to do and why you want to do it. State your ideas around goals and scope. Depending on your position within the community and the kind of work you want to do, such an announcement might be official or unofficial. It is likely that the announcement should be public but directed to established contributors and maintainers, not the whole world. Ask for feedback to ensure that the work is worth doing, and that the goals and general direction are correct.</p><p>Ask for volunteers. Who does the work and how you recruit these people is critically important. If you only invite people rather than ask for contribution, you will perpetuate nepotism and sacrifice diversity. Your work will be less open and it will feel less open to the community. 
Unless it is obvious, you should also ask for volunteers to be in the C and I groups (though often these groups are obvious, the I group in particular is often the whole community).</p><p>Just because you are asking for volunteers does not mean that everyone who is asked or everyone who replies is suitable for the work. You should make your ask explicit about the requirements for volunteers and about the work. In particular, what the work will entail (the kind of work, the expected time commitment, etc.), the number of people you want to work with, the skills and experience required, and if volunteers must have some level of trust within the project. Be clear that you might say 'no thanks' to volunteers. Be clear that being part of the R group is not the only way to be involved, and explain how the C group will be consulted (to avoid fear of missing out).</p><h4 id="the-start">The start</h4><p>Once you have a group of people ready to do the work and have got the required official blessings (which will involve identifying the A group), you should announce that the work is starting. This announcement is informational (you're not asking for feedback here) and is an anchor to link back to when you need to talk about the project. You should state the goals and scope of the project. Either these are settled after previous discussion, or they might need some iteration as the project starts up. Be clear which; if the latter, let folk know how they can contribute to the ongoing discussion.</p><p>You should announce the RACI groups (especially the R group who are tasked to do the work), not necessarily using the RACI terminology. You should be clear about how the I group (which may well be open) can follow along and when (and how) the C group will be consulted. It is important to be explicit about when folk will be consulted or have other opportunities for feedback.</p><p>You should describe how the R group will work.
In particular, which aspects of the work will be done publicly and privately (and why).</p><h4 id="ongoing">Ongoing</h4><p>When you're heads-down working with few natural milestones, it can be hard to pause and communicate. But this is essential for open collaboration. The work is a process, not just a result, so if you want real openness you must share the process, not just the result. Even if you don't think there is anything interesting happening, share it anyway! It might be interesting to somebody, and the demonstrated absence of interesting things can be interesting in itself.</p><p>Your group should set milestones. These should be small and regular to make your work as iterative as possible. These should be real milestones for the work, not just artificial dates for communication (to keep work and communication congruent). Like software releases, these can be either time-based or feature-based, but not both. At each milestone you should give an update on work and request feedback as appropriate. Include any changes to the timeline or personnel, state when and what the next milestone will be, announce any changes in goals or scope, and announce any opportunities for participation. The primary goal here is keeping people informed; it's also a good time to check in that you are consulting the right people in an ongoing way.</p><p>You should communicate outside of milestones. Informal blog posts by group members giving their current thinking and what they're working on are great for this. The Rust project has been great at these for technical topics, but not so much for non-technical ones. It is ok to over-communicate and have overlap between communications. You might end up saying the same thing in a personal blog post, an official blog post, a newsletter, and meeting minutes. That's fine! Different people like things in different formats and with different levels of detail.</p><p>You should actively share drafts of output documents as they're ready.
Share them when they're rougher than you'd like and be clear how much iteration you expect to come after. Make sure you're asking the right people (your C group) for feedback. That means asking in the right way in the right places. Every time you share work, you should re-state the goals and scope of the work, <em>why</em> it is being done, and why (and how) this shared artefact contributes to the eventual goal. It might also be worth stating what other artefacts will come later so people don't expect the current one to do everything. It can feel a bit obvious, but that's because you're doing the work. People might be seeing this for the first time (even if the group you're talking to is not new, individuals might have newly joined the group). People are (on aggregate) very lazy, they won't even click a link for context a lot of the time, so you have to keep re-stating this stuff.</p><p>Be clear about the kind of feedback you want: sometimes a +1/-1 is enough, sometimes you want detail. If people are frustrated by this (i.e., they want to give a different kind of feedback) then you might not have a shared understanding of which group they are in (e.g., C vs I) or of the context of the work, or they might just be wrong.</p><p>Before sharing stuff, it is worth having someone outside the R group look over the 'package' being shared to make sure it reads well to an outsider. An A person might be good for this, or just ask a friend or colleague who you trust. Note that if you keep using the same person for this, then they will stop having an outsider's perspective, so you need to ask new people.</p><h4 id="delivery">Delivery</h4><p>Ideally, getting towards the end of the process is not a big deal. You've been working iteratively and openly and so there is no big step in terms of more people being involved in the process. When delivering the final outputs, the only people who haven't seen the work are outside the community or legitimately only have a passing interest. 
Enough eyes have been on the process that the end result is good and everyone feels consulted.</p><p>You cannot have a release of work in an open collaboration process (that is a release to collaborators; obviously you can release to customers/users, but make sure you have a shared understanding with your community on exactly who is a collaborator and who is a customer. If a person thinks they are a collaborator but you think they're just a user, that is a recipe for disaster). You cannot develop a 'draft' privately and hope to be so collaborative in gathering feedback that people feel consulted and involved. That is doomed to failure.</p><p>If you find that you are working towards a release and holding off till then to get feedback, stop! You must iterate and collaborate instead.</p><p>If the end of the process is a decision, then you need a small group of trusted people to make that decision (and it might need to be done in private). Importantly, the development process must be complete <em>before</em> a decision is made. You can't combine development and decision making in an open process. This is the 'no new rationale' principle of Rust's RFC process. If decision making <em>can</em> be made open and iterative, then you should do that.
Otherwise, the group of people who are properly consulted is too small, but the effort of consulting is still large.</p><p>One more way to make delivery easier/more effective is to be clear about how things will evolve after delivery/be maintained/get to v2.0. By advertising that open collaboration will continue, there is less pressure to reach perfection, and more room to continue iterating. This works best if you've built trust in your openness during the previous phases of development.</p><h2 id="conclusion">Conclusion</h2><p>That's a lot. And again, this is hard to do; much harder to do than it is to talk about. But hopefully some of it was helpful.</p><p>To be specific to the Rust project, I think it is important to move towards the ethos of the RFC process for non-technical work (not the procedural details necessarily, but the spirit of open collaboration). Policy and governance work is not so different from technical work. Much of the time when we want to be less open, it is not for necessary legal or privacy reasons, but because it feels easier; and that is often just because we are less familiar and confident with this kind of work.</p><p>To try and summarise the important points from the post, I think that open collaboration works best when:</p><ul><li>the community is considered as a partner to work with, not a customer to work for,</li><li>work, communication, and feedback are iterative,</li><li>feedback is sought from appropriate people from the very start of the work, in particular on the goals and scope of the work,</li><li>communication happens a lot and via many different mediums,</li><li>groups to consult and inform are explicitly identified,</li><li>the group doing the work is transparent: rather than being a black box which takes community input and produces results, the community has visibility into the process of doing the work. The community can see the 'messy bits' of the work being done as well as the polished output.
Individual members of the group are accountable because their opinions are known and they communicate as individuals, rather than only as a group.</li></ul><hr><p>Some people resist publishing meeting recordings or transcripts because they feel it means they can't talk freely or they don't like sharing their image or voice online. I find these poor arguments. Being open in this way is excellent accountability and transparency. You should have confidence in your own words and faith in your audience that your words won't be misinterpreted. Part of being a leader (and if you're in meetings where decisions are made, then you are being a leader) is embracing discomfort for the sake of better results and a better process. A few people have genuine safety concerns or severe anxiety around voice dysphoria, but for many, resistance to meeting recordings is just nerd security theatre. <a>↩︎</a></p>]]></content:encoded></item></channel></rss>