Elementary Elixir

In a previous post I talked about pattern matching in Elixir, a new language created by Rails core team member José Valim. Elixir runs on the Erlang VM, and v1.0 came out just a few months ago. Elixir has many perks, including speed, concurrency, and fault tolerance, to name a few.

In this blog post I’ll highlight some of the core capabilities of Elixir.


Functional Programming

With Elixir, processes don’t share memory. Data is immutable, so functions can’t mutate your variables, and garbage collection happens per process instead of stopping the world. Being a functional language makes concurrency a whole lot easier because you don’t have to worry about shared memory and shared state. It’s a different way of programming, for sure, but it’s worth the initial hurdle in order to reap the benefits of what Elixir has to offer.
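As a tiny sketch of what immutability looks like in practice (the variable names here are my own):

```elixir
# Data in Elixir is immutable: "transforming" a list returns a new
# list, and the original is untouched.
list = [1, 2, 3]
doubled = Enum.map(list, fn x -> x * 2 end)

IO.inspect(list)    # [1, 2, 3] -- unchanged
IO.inspect(doubled) # [2, 4, 6]
```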


Processes

Elixir spawns lots of tiny, lightweight processes. Each has its own memory, and they communicate by passing messages to one another, which lets your program take advantage of more cores and more machines.

These are not heavy operating-system processes. These are Elixir processes, and they are extremely lightweight. In one demonstration, we spawned 1 million processes in only about 8.3 seconds. These processes use all cores of your machine too, which makes concurrent operations really efficient in the context of your application. No more passing work off to third-party background workers. In Elixir, your background workers are built in!
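A minimal sketch of spawning a process and passing it a message (the message shapes here are my own invention):

```elixir
# Spawn a lightweight process that squares a number sent to it.
parent = self()

pid = spawn(fn ->
  receive do
    {:square, n} -> send(parent, {:result, n * n})
  end
end)

send(pid, {:square, 7})

receive do
  {:result, value} -> IO.puts("7 squared is #{value}")
end
```

Each spawned process has its own mailbox; `send` is asynchronous, and `receive` blocks until a matching message arrives.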

A Demonstration

Chris McCord, creator of Phoenix (a web framework for Elixir), tested the effectiveness of Elixir by seeing how many concurrent connections his team could get on a single server. To test this, they created a chat room and measured how quickly they could distribute a message to all of the people in the room. Granted, it was a big server (40 cores), but Phoenix and Elixir were able to utilize all cores of the machine and showed great performance.

2 million virtual users connected to the chat room, and Phoenix sent a message out to all of them. It took only a second or two for every user to receive the message, all 2 million of them. That’s pretty remarkable, and a great demonstration of what this framework is capable of.


Supervisors

You can monitor processes with supervisors. This part of Elixir (really from Erlang’s OTP) lets you specify strategies for restarting crashed processes. For instance, if one of your processes crashes, a supervisor can simply restart it. This makes your applications more fault-tolerant and gives them more uptime.
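A minimal supervision tree might look something like this (`Worker` is a placeholder module, and `:one_for_one` is just one of several restart strategies):

```elixir
defmodule Worker do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, :ok, opts)

  @impl true
  def init(:ok), do: {:ok, %{}}
end

# :one_for_one means: if a child crashes, restart only that child.
children = [
  {Worker, [name: MyWorker]}
]

{:ok, _supervisor} = Supervisor.start_link(children, strategy: :one_for_one)
```

If the `Worker` process dies, the supervisor notices and starts a fresh one in its place, so the rest of the system keeps running.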

The Language

Pattern Matching

Elixir works by matching patterns instead of assigning variables. This is a subtle difference in thinking, but it enables some pretty powerful styles of programming. If you’re not familiar with Elixir’s pattern matching, I suggest reading my blog post specifically about pattern matching: Pattern Matching In Elixir.
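As a one-line refresher, `=` in Elixir is a match operator rather than assignment:

```elixir
# The left side is a pattern; a successful match binds its variables.
{:ok, value} = {:ok, 42}
IO.inspect(value) # 42

# A match that can't succeed raises a MatchError:
# {:ok, value} = {:error, :nope}
```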

Pipe Operator

The pipe operator can help clean up your code and make its intent more explicit. Similar to the Unix pipe, |> is a way of saying “take the return value of the expression on the left and pass it as the first argument to the function on the right”. Instead of writing nested code like this:

process(parse_args(args))

You can write code like this:

args |> parse_args |> process

This difference seems subtle, but becomes more apparent when working with larger operations.
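For example, compare the nested and piped versions of a slightly longer operation (this particular chain of Enum calls is my own illustration):

```elixir
# Without pipes, the steps read inside-out:
nested =
  Enum.sum(Enum.map(Enum.filter(1..10, fn x -> rem(x, 2) == 0 end), fn x -> x * x end))

# With pipes, the same steps read top to bottom:
piped =
  1..10
  |> Enum.filter(fn x -> rem(x, 2) == 0 end)
  |> Enum.map(fn x -> x * x end)
  |> Enum.sum()

IO.inspect({nested, piped}) # {220, 220}
```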



Lists

Let’s start with lists. Every list in Elixir is a linked list and can be split into its first element (the head) and the rest of the list (the tail). With pattern matching you can assign these to variables very easily:

iex> [head | tail] = [1, 2, 3, 4]
[1, 2, 3, 4]
iex> head
1
iex> tail
[2, 3, 4]

Also, every list in Elixir ends with the empty list as the tail of its final element. Let’s see that in action:

iex> [head | tail] = [1]
[1]
iex> head
1
iex> tail
[]

Recursive Functions

Because of pattern matching in function definitions, we can specify a recursive function as multiple clauses. The first clause handles the base case, and the following clause handles everything else:

defmodule MyList do
  def square([]), do: []
  def square([head | tail]) do
    [head * head | square(tail)]
  end
end

So when we call MyList.square([]), we match the first clause and get [] as a result. When we call MyList.square([1]), the first clause does not match, but the second one does: head is bound to 1 and tail to []. We then build a new list containing 1 squared and the result of calling square on tail, i.e. [1 * 1 | MyList.square([])]. That inner call matches the first clause and returns [], so the original call returns [1 | []], which is just [1].

Similarly, when we call MyList.square([1, 2, 3, 4]), we get back [1 | [4 | [9 | [16 | []]]]], which is just [1, 4, 9, 16].

This sort of technique forms the basis of recursion in Elixir.

Enum Library

Elixir’s Enum library works using recursion under the hood, and it is fantastic. It means you don’t really have to write your own recursive functions for most of your basic list operations:

iex> Enum.sum([1, 2, 3, 4])
10
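A small sampling of what Enum gives you out of the box (examples mine):

```elixir
Enum.map([1, 2, 3, 4], fn x -> x * x end)
# [1, 4, 9, 16]

Enum.filter([1, 2, 3, 4], fn x -> rem(x, 2) == 0 end)
# [2, 4]

Enum.reduce([1, 2, 3, 4], 0, fn x, acc -> x + acc end)
# 10

Enum.member?([1, 2, 3, 4], 3)
# true
```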

Built-In Microservices (Applications)

Microservices are all the rage right now. They let you deploy and scale parts of your app independently and keep the code bases separate. With Elixir, we have a concept of “applications”. Applications let you package code to be started, stopped, and supervised as a unit under a supervision tree. In other words, they are basically microservices. You can split applications into separate repositories or keep them all in one. You can specify how to start, stop, and monitor each application. Best of all, these applications run concurrently and don’t share any state or memory, just like microservices. So you get all the benefits of a microservice architecture, but you can manage all of your applications from the same place. Pretty nifty.
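The entry point of an application is a callback module along these lines (MyApp is a placeholder name):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # The children of this application's supervision tree go here.
    children = []

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

In a Mix project, mix.exs points at this module via `mod: {MyApp.Application, []}`, and the runtime calls `start/2` when the application boots.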


Distribution

Because of the magic of OTP, Elixir ships with an easy way to distribute your code. Need to scale horizontally? Cool: fire up some more servers with your code, connect them, and then they can magically talk to each other. In practice it’s a little more complicated than that, but that’s the gist of it. I show a great example of distribution in this Engineering Lunch. During the presentation I connect to a co-worker’s laptop via WiFi and execute code on her machine. It’s deliberately slow code, and we can actually watch her processors max out at 100% trying to do the work. I would expect something that complicated to take tons of time to set up, but we get up and running in about 5 minutes.
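The node-to-node plumbing looks roughly like this (the node names are made up, and in a real setup both machines must be started as named nodes sharing an Erlang cookie):

```elixir
# On a node started with: iex --sname alice
# Connect to another node on the network:
Node.connect(:"bob@other-machine")

# Spawn a process that runs its function on the remote node:
Node.spawn(:"bob@other-machine", fn ->
  IO.puts("Hello from #{Node.self()}")
end)
```

Once nodes are connected, message passing works across machines exactly as it does between local processes.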