Rails Performance Tips

Low-hanging fruit to make your application faster

We know Ruby has improved a lot in terms of performance since version 1.9, and it has proved to be a scalable option for web applications (even after all the Twitter fail whales).

However, it’s still not that fast, and that won’t change any time soon.

So, in this post, I will cover the most frequent performance issues I’ve seen in web applications and how to solve them, from the very basics to the less obvious solutions.

Database Access

Due to all the database abstractions we have nowadays, it may be difficult to see at first sight how your database operations are actually being performed.

So let’s see some examples and tips.

Accessing associations: avoid the N + 1 query problem.

Consider the models below.

class Player < ApplicationRecord
  has_many :achievements
end

class Achievement < ApplicationRecord
  belongs_to :player
end

So we want to iterate over a list of players and then iterate over each of those players’ achievements.

Player.all.each do |player|
  player.achievements.each do |achievement|
    # use achievement
  end
end

Doing this, we end up making a new query for each player to fetch its achievements. That’s the N + 1 query problem.

Fortunately, ActiveRecord provides a very useful method called #includes, which lets you specify associations to be eagerly loaded with the result set.

Active Record lets you specify in advance all the associations that are going to be loaded. This is possible by specifying the includes method of the Model.find call. With includes, Active Record ensures that all of the specified associations are loaded using the minimum possible number of queries.

Changing the code to this:

Player.all.includes(:achievements).each do |player|
  player.achievements.each do |achievement|
    # use achievement
  end
end

In my benchmark with 5,000 players and more than 14,000 achievements, the total time decreased from 5 seconds to 1.2 seconds.

Use aggregate functions in the database.

Although it’s nice and easy to manipulate Arrays and Hashes with “functional programming” style methods (like #map, #min, #max, #reject, etc., which take a block as a parameter), databases are much faster at running aggregate functions than Ruby or many other languages.

So, filter your data as much as you can directly in the database, and use aggregate functions there instead of working with overly large Enumerables.
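As a sketch, assuming a hypothetical score column on Player: the two lines below compute the same average, but only the second one lets the database do the work.

```ruby
# Slow: loads every Player into memory and aggregates in Ruby.
average = Player.all.map(&:score).sum / Player.count.to_f

# Fast: runs a single SELECT AVG("score") inside the database.
average = Player.average(:score)
```

ActiveRecord also provides #minimum, #maximum, #sum and #count, which all delegate to the corresponding SQL aggregate.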

Pay attention to transactions.

Another thing hidden under the abstractions is transaction control.

When you save an ActiveRecord model, Rails automatically opens a transaction and COMMITs after the INSERT.

So, if you do something like this:

(1..5000).each do
  Player.create(name: 'Lorem Ipsum', email: 'lorem@ipsum.br')
end

This opens and commits 5,000 transactions. In my benchmark, it took ~14 seconds to finish.

Depending on your needs, it may be important to keep one transaction per iteration, but in other cases you can put all operations inside a single transaction, like this:

ActiveRecord::Base.transaction do
  (1..5000).each do
    Player.create(name: 'Homer', email: 'lorem@ipsum.br')
  end
end

That little change reduced the total time to ~3.5 seconds.
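If you can skip validations and callbacks entirely, Rails 6+ also provides insert_all, which writes the whole batch in a single INSERT statement. A sketch with the same data:

```ruby
# No Player objects are instantiated; validations and callbacks do not run.
rows = (1..5000).map { { name: 'Lorem Ipsum', email: 'lorem@ipsum.br' } }
Player.insert_all(rows)
```

Because it bypasses the model layer entirely, make sure your data is already valid before reaching for it.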

What Makes Ruby Slow?

As Alexander Dymo shows in his book Ruby Performance Optimization: Why Ruby Is Slow, and How to Fix It, memory consumption and the garbage collector are the major reasons why Ruby is slow.

For example:

require 'benchmark'

data = Array.new(1024) { Array.new(512) { 'x' * 2048 } }

Benchmark.realtime do
  data.map do |row|
    row.map { |col| col.upcase }
  end
end

It takes ~3.33 seconds to change all these words to uppercase.

However, if we disable the garbage collector (calling GC.disable before the Benchmark), the total time reduces to ~2.68 seconds.

So almost 20% of the total time in this example is the garbage collector working, and it gets worse the more memory we use (which is tough, because everything in Ruby is an object).

Of course, we can’t keep the GC disabled for obvious reasons, so the plan to make our Ruby programs faster becomes: use less memory.
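One way to see this memory pressure directly is to count object allocations. The allocated_objects helper below is hypothetical (not part of the standard library), but GC.stat(:total_allocated_objects) is:

```ruby
# Hypothetical helper: counts how many Ruby objects a block allocates,
# a rough proxy for the garbage the block creates.
def allocated_objects
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

words = Array.new(1_000) { +'hello' }

copying  = allocated_objects { words.map { |s| s.upcase } }   # new String per element
in_place = allocated_objects { words.each { |s| s.upcase! } } # mutates in place
```

The copying version allocates at least one new String per element, while the in-place version allocates almost nothing.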

In this example, instead of calling the map and upcase methods, which build a new Array and String respectively, we can use map! and upcase!. They mutate the receiver instead of creating new objects, reducing the average time of this program to ~2.21 seconds.

data.map! do |row|
  # Note: upcase! returns nil when the string has no lowercase characters,
  # so this pattern is only safe when the strings are known to change.
  row.map! { |col| col.upcase! }
end

Save Memory from ActiveRecord

For example, if you retrieve a list of 15,000 records that have 15 string attributes but use only two of them for a report, the simplest way (fetching all attributes) takes ~224 ms:

Thing.all.each { |thing| thing }

Selecting only the fields you need takes ~138 ms:

Thing.select(:field_1, :field_2).each { |thing| thing }

And using #pluck, which returns an Array of values instead of ActiveRecord models, takes ~43 ms:

Thing.pluck(:field_1, :field_2).each { |thing| thing }

So, the last option is 80% faster than the first.

It’s also good to remember that you can run queries without instantiating any model at all, by calling ActiveRecord::Base.connection.execute.
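As a sketch (reusing the things table from the example above), a raw query that instantiates no models:

```ruby
# Returns raw rows; the exact result type depends on the database adapter
# (e.g. PG::Result for PostgreSQL). No ActiveRecord models are built.
rows = ActiveRecord::Base.connection.execute(
  'SELECT field_1, field_2 FROM things'
)
```

Note that this bypasses ActiveRecord’s type casting, so it’s best kept for simple, trusted queries.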

Making Your View Rendering Faster

index.html.erb:

<% @things.each do |thing| %>
  <%= render 'thing', thing: thing %>
<% end %>

_thing.html.erb:

<%= thing.field_1 %>

This simple view takes a long time to complete for a collection of one thousand records:

Completed 200 OK in 627ms (Views: 624.3ms | ActiveRecord: 0.8ms)

In this scenario, if we call render passing the collection, like this:

<%= render partial: 'thing', collection: @things, as: :thing %>

The total time falls to ~33 ms. That’s a performance improvement of more than 90%.

Completed 200 OK in 33ms (Views: 30.7ms | ActiveRecord: 0.7ms)

As Alexander Dymo explains in his book, the reason is:

The reason rendering a collection is faster is that it initializes the template only once. Then it reuses the same template to render all objects from the collection. Rendering 10,000 partials in a loop will have to repeat the initialization 10,000 times.

Again, it comes down to better memory usage.


Sometimes you will have to reduce the complexity of your algorithms, use caching, or change the architecture of your solution (like moving work to background processes or microservices) in order to make things faster.

But make some effort before simply adding more application instances or increasing your servers’ memory.

There’s no silver bullet. You have to measure, identify bottlenecks, make changes, and repeat until you reach an acceptable time.
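The standard library’s Benchmark module is enough for that measuring loop; the string-building block below is just a stand-in for your real hot spot:

```ruby
require 'benchmark'

# Benchmark.realtime returns the elapsed wall-clock time in seconds as a Float.
elapsed = Benchmark.realtime do
  10_000.times.map { |i| i.to_s }.join(',')
end

puts format('%.4f seconds', elapsed)
```

Run it before and after each change; if the number doesn’t move, revert and look elsewhere.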