CanCan (CanCanCan)

CanCan was written by Ryan Bates to make role-based security simple and fast for Ruby on Rails applications. It was first published in late 2009 and quickly became the go-to authorization gem for Rails: it was fast, easy to set up, and worked well. Support for the original CanCan gem ended in mid 2013 when Ryan Bates decided to take a break from development altogether, so the gem was forked as CanCanCan by Bryan Rite, who maintains it today. The fork keeps the gem working with the latest versions of Rails as they are released. As one of the developers using CanCanCan, I can say it's just as stable as it's ever been. It's a mature gem at this point, so you won't see many breaking changes, if any.

Features

One thing that I really love about CanCan is the Ability class, which lets you define all of your roles and what they can do in one place. You can define as many roles as you want and have them share operations with little effort. You can also assign multiple roles to a user, which gives you the flexibility to keep each role focused on one purpose: instead of mixing permissions together into some custom hybrid role, you just give the user two roles. Another thing I found really useful is that you can pass a set of criteria into the role check. For example, say a user creates a new record that belongs to them and should only be viewable or editable by them. With CanCan you simply pass conditions like can :edit, ModelName, user_id: user.id, and the check only passes for the user who created the record. This is really powerful because it all lives in one place. If you want to set up multiple roles per user, simply follow this guide (Role-Based-Authorization). Here is how I implemented it based on that how-to.


class User < ActiveRecord::Base
  # All available roles; order matters because each role's position
  # becomes its bit in the roles_mask integer column.
  ROLES = %i[role_1 role_2 role_3]

  def ability
    @ability ||= Ability.new(self)
  end
  delegate :can?, :cannot?, to: :ability

  # Store the given roles as a bitmask in roles_mask.
  def roles=(roles)
    roles = [*roles].map { |r| r.to_sym }
    self.roles_mask = (roles & ROLES).map { |r| 2**ROLES.index(r) }.inject(0, :+)
  end

  # Read the roles back out of the bitmask.
  def roles
    ROLES.reject do |r|
      (roles_mask.to_i & 2**ROLES.index(r)).zero?
    end
  end

  def has_role?(role)
    roles.include?(role)
  end
end
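
For completeness, here is a rough sketch of what the matching Ability class could look like with this multi-role setup. The role names and the Vehicle model are placeholders for this example; the pattern of checking has_role? and passing hash conditions to can is the part that matters. Thanks to the delegate line above you can then check permissions anywhere with user.can?(:edit, record).

# app/models/ability.rb (hypothetical example)
class Ability
  include CanCan::Ability

  def initialize(user)
    user ||= User.new # guest (not logged in)

    # Everyone can view vehicles
    can :read, Vehicle

    if user.has_role?(:role_1)
      # role_1 can manage only the vehicles it owns
      can :manage, Vehicle, user_id: user.id
    end

    if user.has_role?(:role_2)
      # role_2 gets full access to everything
      can :manage, :all
    end
  end
end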

Lessons Learned

One thing I learned while working on my project with CanCan is that you need to set up conditions on your can statements. When I originally developed this project I did not add these conditions, which gave any user the ability to edit any record of a type they had access to. For example, a user was able to view and edit a record that they did not create. This would have caused major issues if it hadn't been caught before going live. Here is an example of how you can scope a can statement.


can :show, Vehicle, id: user.vehicles.map { |vehicle| vehicle.id }

This allows only a user whose user_id is on the vehicle record to view it. If a user decides to change the id in the URL, they will get the access denied error message instead. You can do this with any ActiveRecord call as long as it ties back to a user.
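
To tie this into a controller, CanCanCan's load_and_authorize_resource will load the record and run these checks before every action, and rescuing CanCan::AccessDenied is where that access denied message comes from. Here is a minimal sketch; the redirect target and flash handling are just placeholders.

class VehiclesController < ApplicationController
  # Loads @vehicle (or @vehicles for index) and checks the current
  # user's ability before each action.
  load_and_authorize_resource

  # Triggered when a user changes the id in the URL to a record they cannot access.
  rescue_from CanCan::AccessDenied do |exception|
    redirect_to root_path, alert: exception.message
  end

  def show
    # @vehicle is already loaded and authorized here
  end
end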

Alternatives

When originally choosing CanCan I took the time to look into a couple of alternatives. One of the major players in Rails today is Pundit. When Ryan decided to take a leave from the development world, a lot of people started looking for an alternative to CanCan since it was no longer going to be maintained, and many of them chose Pundit. I have never set up or used Pundit fully, but while doing my research I couldn't figure out a good way to assign multiple roles to one user. Leave a comment if this is actually simple and I just missed it in the Pundit documentation. I'm sure it's possible, but I didn't want to invest the time digging through the code to find out, while CanCan had good documentation on how to do it. At the end of the day I encourage everybody to do their own research and determine the solution that best fits their application. I've seen CanCan used in multiple applications and so far it's been able to handle everything thrown at it.

Wallproductions on Rails

Wallproductions is a portal-like site that will contain many applications. Currently it contains two: Gas Tracker and Budget Tracker. Both of these projects are live and in production use. Over the years the site has gone through a couple of transformations. When it was first created it was a side project, something to work on during the weekends to see what was possible outside of what I was doing at work. The project started because I couldn't find an existing application that did what I wanted, so I set out to create my own.

It was originally written in PHP, without an off-the-shelf framework. This was a pretty good solution for a while, until the project started to get larger. The larger it got, the more I thought a framework would be needed, unless I wanted to spend all my weekends doing tedious things that the various frameworks had already solved. As the only developer working on the project it was important to be able to complete tasks quickly, so instead of building everything myself I could use shared code from a framework to handle the standard things like ActiveRecord and routing. I looked around and found a couple of frameworks, and at the time the Yii framework seemed the most appropriate for my situation. I chose Yii over the others simply because I was more familiar with how its ActiveRecord implementation worked, and it had a pretty good extensions library so I could reuse code shared by the community. So I went with that and built the second version of Wallproductions on it. This worked well, and the site stayed on the Yii framework for about two years.

During this period I changed direction in my career: instead of doing PHP development I started doing Ruby development, specifically with Ruby on Rails. I did not know much about Ruby or Rails when I started, but after working with it for a short amount of time I was more productive than ever before. Over the course of the next year Wallproductions stayed on the Yii framework while I added features and fixed existing bugs. Then the Yii project introduced its new framework, Yii 2, which was a complete rewrite. So I had a decision to make: stay on the first version of Yii for a long time, rewrite the majority of my PHP code, or just move to Ruby on Rails. Seeing that I work with Ruby on Rails on a daily basis, it was a real option. After carefully considering it I decided to go with Ruby on Rails, so I set off on the path of rewriting my entire site with it.

The first step was to gather all of the old requirements for the system and make sure I met them. This was not very hard since I was the only developer on the project, and since I wrote the entire application myself I didn't need much documentation to understand it. I also needed to set milestones for myself so I knew I was progressing. I didn't want to get stuck and have to go back, but I also didn't want to keep spinning my wheels if it wasn't going to work out. As I am sure you realize, we all have side projects that we start and never finish. I decided early on that if I was going to put a ton of effort into converting Wallproductions, I would finish.

The conversion itself actually went faster than I expected. I knew Ruby was very powerful and that Rails added even more power to it; what I did not realize was how much. I was able to convert something I had worked on for over three years in under three months. This was not a typical conversion either, since it crossed languages, and I was not working on it full time; as always it was weekends and nights after regular working hours. Now that I am on Ruby on Rails I am able to complete new tasks much faster. At the end of the day I am glad that it's on Ruby on Rails. I am a huge fan of the framework and the language, and the community around it is really good.

Why use Phusion Passenger for your Rails server

Phusion Passenger is one of the three big players in the Rails server game; the other two are Unicorn and Puma. No matter which one you choose, if you can configure it correctly and get through the setup process, all of them work. All three are considered viable and able to handle the job. In my opinion, though, Passenger is the best of the three.

Support

The first reason I really like Passenger is that it has really great documentation. If you are unable to resolve your issue after reading through the documentation, you didn't look closely enough. For example, this is Passenger's documentation for their nginx module. It has a ton of configuration options that you can just stick into your nginx configuration file and go. If you are wondering why something is not working, you will be able to find the answer easily, either by going through the documentation yourself or by going to Google. If you Google for an issue with Unicorn you will probably find an answer eventually, but it could take you a while. When I consider choosing a technology, support is the first place I look: am I able to get support, and is the tool so easy to use that I most likely won't need it? If the answer is yes to both, that is a good place to start. If the answer is yes to documentation but not to support, that is still an okay starting point. Sometimes you will not be able to find both.

Popularity

This one is a little controversial; as the saying goes, just because it's popular doesn't mean it's great. That holds true in some cases, but when it comes to technology popularity is usually a good sign. I have noticed that people in technology usually don't stick with something for long if it's not satisfying their needs. Passenger has great support, including from the Rails core team. It is also used by some major companies, including Basecamp, The New York Times, Airbnb, and Apple, to name a few. If you would like to see a bigger list, check out builtwith. Now, the argument could be made that all of these companies have no clue what they are doing by using Passenger, but they all handle very high levels of traffic and are well known as reputable companies. There is a reason for that, and Passenger is one of the tools that helps them get there.

Ease of Use

To be totally honest, I only have experience running Passenger and Unicorn. I have never set up Puma, so I will only be comparing Unicorn and Passenger here, although I have looked at Puma's documentation and it doesn't look bad. Unicorn seems to be pretty easy to install on a single-application Rails server; you can get your Rails application up and running in a short amount of time, and there are a lot of Unicorn scripts out there to get you started. The problem with Unicorn is when you have to do anything beyond running one application on the server. Dealing with multiple environments on one machine is possible (I think), but it seems like it wasn't made for that. It also feels like Unicorn isn't fully polished yet; small things kept coming up, like shutting down a Unicorn process without using the kill command. It just feels like commands that are supposed to work don't. Now, to be honest, I am no server administrator, so the problem may be obvious to someone else, and an experienced Unicorn professional could easily debunk these arguments. The problem is that most Rails developers are not experienced server administrators, so that argument really doesn't matter much. A lot of companies also have their developers acting as the server administrators, so the easier the tool is, the better.

Installing Passenger is also a very simple process. The reason it's so simple is that it ships with an installer that installs both Passenger and nginx (you can also install Passenger with Apache if that is your goal; in my case it was nginx). How cool is it that you get both nginx and Passenger with one install? If you have already installed nginx, it's recommended that you remove it and then run the Passenger installer. Once it's installed and started, Passenger handles everything else. This holds true even when you have multiple environments running on the same server, and you don't have to do any extra configuration to set that up: you basically create another nginx file, point it at the application directory, and go (a sketch of such a file appears a little further down). Of course, if you are adding another application you will have to restart nginx, but not Passenger. Passenger also comes with a couple of pretty cool tools for monitoring memory and performance. You simply type in the following command to get the memory usage.


rvmsudo passenger-memory-stats

Yes, it's that easy to monitor your Rails processes. With Unicorn you have to manually run ps -ef | grep 'unicorn' to see the processes. I suppose there are tools like this for Unicorn, but they don't seem as obvious to use. This tool is built into Passenger, so you can run it from anywhere you have an application running.
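
As for the multiple environments point above, here is roughly what one of those per-application nginx files can look like. The host name, paths, and environment name below are made up for illustration; the Passenger-specific parts are the passenger_enabled and passenger_app_env lines.

# /etc/nginx/sites-enabled/myapp_staging.conf (hypothetical example)
server {
  listen 80;
  server_name staging.example.com;

  # Point nginx at the application's public directory; Passenger does the rest.
  root /var/www/myapp_staging/current/public;
  passenger_enabled on;
  passenger_app_env staging;
}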

At the end of the day you have to use the tool that works best for your situation. Is Passenger that tool? Probably, but if you are an experienced server administrator, tools like Unicorn may be better. We all have our reasons for choosing our toolset, but you should have justification for those reasons. In the end, do whatever makes your life easier; in my case, Passenger made my life easier as a non server administrator.

Here are some resources I used while putting together this post:

  • https://github.com/phusion/passenger/wiki/Unicorn-vs-Phusion-Passenger
  • https://www.engineyard.com/articles/rails-server
  • https://www.digitalocean.com/community/articles/a-comparison-of-rack-web-servers-for-ruby-web-applications

Test Driven Development

In theory you should always write tests while you are developing your code; in the software industry this is called test driven development. This practice usually generates good thoughts about how you can make the code more robust and more reusable. It's very difficult to change architecture after you have coded yourself into a hole, and it's a whole lot easier to do it right the first time than to go back later and refactor what you did. When you write your tests after the code, it tends to cause more rework than writing the tests before you actually develop the code. Sometimes it's hard to write tests as you go because requirements are changing all the time, and that can be frustrating because you feel your tests are wasted time. A common excuse I hear is that requirements are always changing, so I have to go back there anyway. But if you're developing the code, the requirements are usually pretty set at that point, at least the overall idea: the guts of the code may change, but the public interface should remain the same, or at least close to it. I had an experience on a project I am working on that made this point pretty clear. My task within the project was to write some tests around existing code, basically adding coverage to an existing model. That seems simple enough, until I started looking into the model and discovered a lot of very tightly coupled code: a lot of tightly coupled public interface code.

Starting this process I was able to test a lot of the public interface, the parts that were not tightly coupled; maybe about half of the public methods were in a unit testable state. Then I ran into a major road block: some public interface methods required a chain of other methods to be called first in order to test them. If TDD had been used during development this would have been avoided, because the developer would have realized that the code was too coupled to be tested. So instead of saving that time, I spent even more time refactoring and trying to decouple the methods. The end result was a lot better, but it required a lot more effort than just writing the tests before the code, or at least right after writing it.
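
To make that concrete, here is a made-up sketch of the kind of method I ran into. The names are hypothetical; the point is that the first version can only be exercised after a chain of other calls has populated instance state, while the second takes its inputs directly and can be unit tested with plain values.

# Hypothetical "before": this only gives the right answer after load_account
# and apply_discounts have been called, so a unit test has to run that whole
# chain (and usually hit the database) first.
def total_due
  @account.balance - @discounts.map(&:amount).inject(0, :+)
end

# Hypothetical "after": the same calculation takes its inputs as arguments,
# so a test can call it directly with plain values and no setup chain.
def total_due(balance, discount_amounts)
  balance - discount_amounts.inject(0, :+)
end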

It's a good practice to get into, even though at first it may be difficult to form the habit, and it's even harder to keep up when things are moving fast. I believe that once you get good at it, though, it becomes natural and really part of your process. Also, the more familiar you are with architecture in general and the architecture within the project, the easier it becomes: you can see your vision ahead of time and predict what the code should look like. To sum it up, always write your tests as you go. If you need to write them after your code, that's still better than leaving them behind.

Tests using Git Concurrency Issue

I was recently working on a gem I created called Toolshed. The purpose of this tool is to turn everyday tasks into automated ones, at least to a certain degree, which leaves you more time for the heavy lifting. For example, some of the functionality this tool provides is creating a GitHub pull request, updating Pivotal Tracker ticket status, and many more Git-related tasks. All of this would normally require you to go into each site's interface, click around, and enter the data. As we all know, that takes time, and time is valuable. So, down to the part that gave me trouble when I was creating tests for this tool.

I first started by creating all of the tests locally and verifying that they worked there. I had no trouble with this and everything appeared to be working; I was not getting any test failures. After creating a couple of them I wanted to see if I could get them running on Travis CI. Travis works well with a Ruby project, and since the gem is open source it's free. So I set up Travis and everything seemed to be working right, and the first couple of runs passed without issue. Over the next few days, though, I noticed that Travis was failing randomly. I would push up a small change and it would fail; I would run it again with the same code base and it would pass. It seemed random, and a CI that fails randomly is useless. The theory is that if errors happen randomly there, they will also happen within your application. So I needed to investigate further. Since I was unable to recreate the issue on my local machine, I decided to try it out on a different machine. I created a fresh Ubuntu environment, and once there I was able to reproduce the problems I was having on Travis pretty easily. That let me debug in real time, which helps a lot compared to pushing new commits up to Travis.

What I noticed was that my Git commands were not finishing successfully before the next one started. It would be doing a git remote update, and the next command would run before that had completed, which caused the failure because the next command depended on the remote update finishing first. So I had to create a solution for this, and here it is.

# system returns true once the command exits successfully, so this keeps
# retrying (pausing a second between attempts) until git remote update succeeds.
until system("git remote update")
  sleep 1
end

This will sleep and retry until the command succeeds, so each command now runs in its proper order. I have since tested it out several times on Travis and am no longer having this issue.
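
One refinement I would consider (just a sketch, not something shipped in the gem) is wrapping that loop in a small helper with a retry limit, so a command that is genuinely broken does not spin forever:

# Hypothetical helper: retry a shell command until it exits successfully,
# giving up after a fixed number of attempts.
def run_until_success(command, attempts: 10, wait: 1)
  attempts.times do
    return true if system(command) # true when the command exits with status 0
    sleep wait
  end
  false
end

run_until_success("git remote update") or raise "git remote update keeps failing"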