Latest Posts from csaunders

Phoenix: Cross-Platform Theme Utilities for Shopify built with Go

When it comes to programming, my go-to project has been Shopify theme synchronization. As the active maintainer of the shopify_theme gem, I’ve been dealing with some interesting and exhausting issues, some of which are easier to solve than others.

The design of the theme gem leaves a lot to be desired, and adding new features has become harder and harder. Another problem I’ve noticed with many of the people trying to use it is how difficult it is to install across various platforms. Believe it or not, there’s a decent number of designers on the Windows platform, and getting Ruby and shopify_theme working on their systems is a journey.

In my spare time over the last year I’ve been learning the Go programming language, building some toys and trying my hand at a crappy rogue-like. Along the way I made lots of mistakes and the designs were terrible, but it was a learning experience.

With the lessons I’ve learned from those projects and from reading some good articles on structuring Go projects, I dove into building what I hope will become the theme development tool. Currently the project doesn’t have feature parity with the Ruby version, though I do feel it has enough features to be useful for someone looking to build a theme. Those features are:

If you are a Go programmer and are interested in sharing your insights, I’d love to hear from you. I’m very green and would love some feedback on how to improve the project.

If you are a theme designer, please let me know that you are interested in the project. I’ll work on getting builds out for the various systems as soon as possible!

Check out the project on GitHub!

Asynchronous Programming with Goroutines in Ruby


I've been programming on and off in Go for a little while now, and I came across a gem that makes it possible to use channels and threaded functions for sequential-style processing. The gem is called "agent" and was originally written by Ilya Grigorik. It provides a number of extensions to Kernel, such as go! and channel!.


What is a channel?

Channels aren't really a new idea; they came out of a research paper called Communicating Sequential Processes. They became really popular when the Go programming language came out, where channels are a core construct of the language and really the way most of your work gets done.

Channels are able to hold units of data, called messages, which other interested parties can listen for. A channel has two "ports": one for pushing data into and one for pulling data out of. Channels are also atomic, so there is no need to worry about a read or write resulting in something inconsistent, such as two readers getting the same message.

In agent a channel can optionally be buffered. Using a buffered channel allows you to write a message to it without having to worry about immediately blocking. Otherwise a channel has a buffer size of zero, which means that writing to the channel blocks until someone decides to read from it. Knowing how your channels work is extremely important, which will be covered in "Ugh Deadlock".

require 'agent'

# Create an unbuffered channel
unbuff = channel!(String)

# Create a buffered channel
buff = channel!(String, 1)

# Write to the channels
buff << "Hello Unblocking World"
# Execution blocks until someone reads from it
unbuff << "Hello Blocking World"

What does a goroutine do?

In the Go programming language a goroutine is "the execution of a function call as an independent concurrent thread of control, [...], within the same address space". You can think of them as threads but don't need to worry too much about the details. Also if the function provided is a lambda, you get the benefits of closures.

In Ruby the benefits are very much the same, we are passing in a block of code to the go! method call. This method takes care of all the gross bits of threading for us, though if you grab onto the result of the go! call you'll see that we aren't working with any magic language construct, just a thread.

goroutine = go! { sleep(5); }
puts goroutine.inspect
# #<Thread:0x007fbf66ed5508 sleep>

In practice though, you don't need to worry about this. You'll invoke your goroutine and carry on your merry way.

How can the two be used together?

Goroutines and channels become very powerful because they allow us to communicate asynchronously. There are lots of ways to handle work that might be slow, such as jobs that create other jobs in a system like Sidekiq. While this approach works, it results in systems that are difficult to understand because of the separation.

Instead we can use channels as a tool to let other aspects of our pipeline know when they can start the next stage of processing. Since the channel becomes the method of communication, we can write our code in a way that makes it look sequential even though when we actually execute it, it will be running asynchronously. This takes a huge load off of our brains, which get really fuzzy when dealing with asynchronicity.
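That sequential-looking shape can be sketched with nothing but Ruby's standard library, using Thread and Queue as rough stand-ins for go! and channel! (the analogy is loose — Queue is buffered and unbounded, unlike an unbuffered agent channel):

```ruby
# A two-stage pipeline: one worker squares numbers, the main
# thread consumes the results in order.
jobs    = Queue.new
results = Queue.new

worker = Thread.new do
  # keep pulling work until the :done sentinel arrives
  while (n = jobs.pop) != :done
    results << n * n
  end
end

[1, 2, 3].each { |n| jobs << n }
jobs << :done
worker.join

squares = []
squares << results.pop until results.empty?
squares # => [1, 4, 9]
```

The main thread reads top to bottom as if it were synchronous; the queue is what lets the worker run alongside it.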

Consuming a channel

Let's say we are building a system that makes strawberry rhubarb pies. If we were to make a pie on our own it would look something like this:

  1. Cut together flour and shortening to make the dough
  2. Clean strawberries and rhubarb removing all dirt
  3. Quarter strawberries and chop rhubarb into cubes
  4. Roll out dough and place enough to cover the bottom of your pie tin. Trim the edges
  5. Measure out 2 cups of strawberries, 2 cups of rhubarb and 1 cup of white sugar
  6. Mix measured strawberries, rhubarb and sugar together until even
  7. Pour mixture on top of dough
  8. Cover pie with other piece of dough and make some cuts to let steam out
  9. Place in oven

For a family friendly baking day, doing this synchronously is no big deal. You have a good time, make some pie and all is fabulous. But if we are looking to crank out pies as quickly as possible, this just won't do. We need to split this up into a bunch of separate pieces. Looking at the process we can make a few optimizations:

  1. A person makes the dough, and prepares the tins as well as the pie cover
  2. A person cleans, cuts and mixes the fruits into the required portions. Let's say they place them in bowls that someone can just grab.
  3. A person that prepares the pies and places them in the ovens. For simplicity we are going to assume that we have unlimited space in our ovens.
PreparedTin = Struct.new(:bottom, :top)
MixedFruits = Struct.new(:mixture)
RawPie      = Struct.new(:bottom, :mixture, :top)

class Pie
  def initialize(slices=6)
    @slices = slices
  end

  def take_slice
    if slices?
      @slices -= 1
      "slice o pie"
    end
  end

  def slices?
    @slices > 0
  end
end

class Oven
  def initialize(chan)
    @completed = chan
  end

  def bake(pie)
    go! do
      # the raw pie goes into the oven; out pops a finished pie
      @completed << Pie.new
    end
  end
end

prepared_tins  = channel!(PreparedTin)
mixed_fruits   = channel!(MixedFruits)
completed_pies = channel!(Pie)
oven           = Oven.new(completed_pies)

# Prepare our Pie Tins
go! do
  loop do
    dough = mix_flour_and_shortening
    dough = roll_out_dough(dough)
    top, bottom = cut_dough(dough)
    prepared_tins << PreparedTin.new(bottom, top)
  end
end

# Prepare our Mixed Fruits
go! do
  loop do
    strawberries   = clean(pick_strawberries)
    rhubarb        = clean(pick_rhubarb)
    sliced_berries = quarter(strawberries)
    cubed_rhubarb  = cube(rhubarb)
    sugar          = measure(:sugar)
    mixture        = mix(sliced_berries, cubed_rhubarb, sugar)
    mixed_fruits << MixedFruits.new(mixture)
  end
end

# Bake our pies
go! do
  loop do
    # Receiving from an agent channel returns a pair,
    # so if we were to receive and not call first
    # we'd get something like [#<PreparedTin>, true]
    tin = prepared_tins.receive.first
    fruits = mixed_fruits.receive.first
    raw_pie = RawPie.new(tin.bottom, fruits.mixture, tin.top)
    oven.bake(raw_pie)
  end
end

# Eat our pies
go! do
  pie = nil
  loop do
    pie ||= completed_pies.receive.first
    if pie.slices?
      puts "Om nom nom a #{pie.take_slice}"
    else
      # Throw out the tin, this pie is dead to us
      pie = nil
    end
  end
end

So here we've gone and created 4 workers: one prepares the pie tins, another prepares our pies' contents, another puts the pies together and bakes them, and finally someone consumes all those pies. The code can be read from top to bottom, and we really don't need to worry about concurrency all that much.

Ugh Deadlock

We need to be very careful when working with channels because we run the risk of deadlocking our system. Luckily the library has built in protections that will raise exceptions when a deadlock is detected.

f = channel!(String)
go! { f << "Hello World" }
f.receive # ["Hello World", true]
f.receive # Raises Exception
# #<Class:0x007fefc38dc768>: No live threads left. Deadlock?

Sometimes these can catch us off guard, but do not fret, there are plenty of ways to get around this. We can use select!, which is based on a similar construct in the Go language. select! acts very much like a case statement; the big difference is that its branches are communication operations, which lets us conditionally act on messages sent to different channels.

In agent we are also able to use a timeout which allows our select! to stop blocking on channels. For example, we can use select! with a timeout that will ensure we don't deadlock.

select! do |s|
  s.case(f, :receive) { |result| puts result }
  s.timeout(0.1) { puts "Exceeded 100ms timeout!" }
end


In the Go programming language you can put whatever you want on a channel; because the system is statically typed, it knows how big each message is going to be. In agent this isn't the case, and there appears to be some marshalling going on under the covers. I came across this problem while trying to send objects that wrapped Nokogiri documents over some channels.

require 'agent'
require 'nokogiri'
require 'rest-client'

pages = ["http://example.com", "http://example.org"] # placeholder pages

class Result
  def initialize(data)
    @doc = Nokogiri::HTML(data)
  end

  def title
    @doc.title
  end
end

results = channel!(Result, 3)

pages.each do |page|
  response = RestClient.get(page)
  results << Result.new(response.body) # Raises error
  # agent/push.rb:8:in `dump': no _dump_data is defined for class Nokogiri::HTML::Document (TypeError)
end

processing = true
while processing do
  select! do |s|
    s.case(results, :receive) { |r| puts r }
    s.timeout(5) { processing = false }
  end
end
puts "done"

After looking around a bit I figured that it was simply that the objects cannot be marshalled. I was caught off guard by the issue, but I was able to come up with a workaround by simply using a Hash instead. I haven't tried doing anything with value objects yet, but I don't see why those would cause any problems either.
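The workaround looks roughly like this: pull the plain values you care about out of the document before anything touches the channel, since a Hash of ordinary values marshals without complaint (the field names here are just for illustration):

```ruby
# Instead of pushing a Nokogiri-backed object onto the channel,
# extract the plain values first; a Hash of strings and symbols
# can be marshalled, unlike a Nokogiri::HTML::Document.
def to_message(title, url)
  { title: title, url: url }
end

msg = to_message("Hello World", "http://example.com")

# Marshal round-trips cleanly, which is what agent needs to do
# under the covers when it pushes the message onto a channel.
copy = Marshal.load(Marshal.dump(msg))
copy # => {:title=>"Hello World", :url=>"http://example.com"}
```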

In Conclusion

The agent library was an excellent way to approach the asynchronous programming problem in Ruby. It's got some rough edges that might cut you, but with the right understanding of its limitations you can easily work around them. Other than that, it's a fabulous way to learn about this alternative approach to programming. I'd even go as far as to say it could be a gateway into the Go programming language, since it helps you establish the basics of using goroutines and channels to communicate.

Building a Test Suite in Scheme

Although I don’t get to write in them a lot, I love LISPs. The language is amazingly simple: effectively a blank canvas on which you can build anything.

This weekend I hopped back into reading The Structure and Interpretation of Computer Programs (SICP) and started implementing some of the problems. I was constantly copying and pasting (or worse, retyping) my test code. This was getting a little annoying so I figured I could implement a simple test suite to verify my algorithms are working.

Building a test suite in a language isn’t hard; heck, the Ruby testing library Minitest is a good example of that simplicity. There are lots of features you could build out, but for academic purposes building a test suite shouldn’t be too tricky.

For my Scheme test suite the only thing I needed to do was ensure that each test returned true or false. If it’s false, then I know the test failed and I could act appropriately. The simplest way we can define our suite of tests would be a list of function names that we evaluate.

;; I am using the Racket Scheme Implementation
(define (suite cases)
    (if (eq? cases null)
        "Test Suite passed Successfully!"
        (let ([testcase (car cases)])
            (if (testcase)
                (suite (cdr cases))
                (failure testcase)))))
(suite (cons test1 (cons test2 (cons test3 null))))

(define *tests* null)
(define (test code)
    (set! *tests* (cons code *tests*)))

While the above implementation works, it is pretty nasty to look at. It would be cool if we had a nicer way to define our tests. My first stab at the implementation simply used a global tests variable that I would append my test functions to. I had two major problems with this implementation: unnecessary test noise and tricky reporting. The noisy tests meant I needed to do two things in order for a test to be registered:

(define (pascal row col) -1)
(define (pascals-triangle-0-0) (= 1 (pascal 0 0)))
(test pascals-triangle-0-0) ;; ಠ_ಠ

I could’ve tried getting around it by simply passing in lambdas, but my suite didn’t have a good way of reporting which test had failed. Running a failing suite with a lambda in it would result in the following:

(suite (cons (lambda () #f) null))
;; "Test failed #<procedure>"

Well that’s not very handy so what if we were to take a different approach to building out these tests. Instead of our test simply taking a function, let’s pass in the name of the test and the code to run:

(define (test name code)
        (set! *tests* (cons (cons name code) *tests*)))

(define (suite cases)
    (if (eq? cases null)
        "Test Suite passed Successfully!"
        (let ([testcase (car cases)])
            (if ((cdr testcase))
                (suite (cdr cases))
                (fail (car testcase))))))
(test
    "that 1+1 is 2"
    (lambda () (= 2 (+ 1 1))))
(suite *tests*)
;; "Test Suite passed Successfully!"

(test
    "that 2+2 is 5"
    (lambda () (= 5 (+ 2 2))))
(suite *tests*)
;; "Test failed that 2+2 is 5"

And with that, a super simple test suite. It probably wouldn’t stand up to more complicated tests, but for the kinds of functional programming I’ve been doing along with SICP it should work out alright.

Mental Health and Toxic Communities

From Model View Culture, about toxic open source communities.

We need more stories of people leaving toxic communities so that people know it’s ok to prioritize their mental health over their community. We need stories of successfully switching jobs, of abandoning positions of power to protect our mental health or get a better work-life balance.

Curious Cat


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Canada License.
Permissions beyond the scope of this license may be available at


Landing in a Storm


I wouldn’t say this is the best photography I’ve done and it was super hard to get the focus right on the spot, but I wanted to share a shot from a pretty cool light show I got to see that made me decide to stay up a little bit longer.

I need to spend some more time using my equipment so I can get some better in the moment shots like this in the future.


Learning to Deploy Servers with Ansible


I’ve been wanting to learn how to use a configuration management tool for a long time. In the past I’ve attempted to learn Chef though I found myself getting lost amidst the domain language. Trying to use these tools made me feel extremely stupid simply because I was having a hard enough time grasping the concepts of the system.

A couple of weeks ago during our Dev Show and Tells, Nate Smith showed off Ansible. Ansible is a configuration management tool whose goal is to be easier to use and understand. It’s written in Python and uses YAML files to define your provisioning tasks, called Playbooks in Ansible.

Before we can start using a Playbook we need to do a tiny bit of setup first. You’ll need to define your Server Inventory which is required before Ansible can go out and provision your servers for you.

# hosts (example inventory; substitute your own servers)
[webservers]
web1.example.com
web2.example.com

Now let’s verify that our host configuration is correct:

/Users/csaunders/development/ansible [csaunders@outerhaven] [23:09]
> ansible all -i hosts -m ping
web1.example.com | success >> {
    "changed": false,
    "ping": "pong"
}
web2.example.com | success >> {
    "changed": false,
    "ping": "pong"
}

Now that we know that we are able to connect to our servers, let’s build a Playbook. A Playbook is simply a definition as to what kinds of tasks you’ll be running to setup the server. The simplest playbook you could have would be one where you install apache2 on a bunch of servers:

# Here we define the hosts we are going to run them on
- hosts: all
  user: root
  tasks:
    - name: Install Apache with PHP
      apt: pkg=apache2 state=present
    - name: Install mod PHP on Apache
      apt: pkg=libapache2-mod-php5 state=present
    - name: Install PHP
      apt: pkg=php5 state=present
    - name: Install the PHP MySQL Bindings
      apt: pkg=php5-mysql state=present
    - name: Restart Apache2
      service: name=apache2 state=restarted

What I really like about this is that it’s very easy to understand what is going on. We have a little description of what each task does and the command that will be run. The name of the task is also echoed to the terminal when you run the playbook, so you have an idea of which task to look at if something goes wrong. And while you might be thinking the above is pretty verbose, Ansible also provides tools to run the same command over a list of items, so you can shorten the example above into a single task.

- hosts: all
  user: root
  tasks:
    - name: Install Apache2 and the necessary PHP modules
      apt: pkg={{ item }} state=present
      with_items:
        - apache2
        - libapache2-mod-php5
        - php5
        - php5-mysql
    - name: Restart Apache2
      service: name=apache2 state=restarted

Often we want to be able to re-use a bunch of the code we write because many different kinds of servers use many of the same things. This is where we can define Roles which we can assign to various hosts or groups of hosts. A Role consists of a number of things though the major components are the tasks to run, templates to use and handlers that fire after a task.
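By convention each role lives in its own directory under roles/, with its components split into well-known subdirectories that Ansible picks up automatically. A minimal apache2 role might be laid out like this (the template filename is just an example):

```
roles/
  apache2/
    tasks/
      main.yml       # the tasks this role runs
    handlers/
      main.yml       # handlers triggered via notify
    templates/
      vhost.conf.j2  # Jinja2 templates rendered by tasks
```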

Now with a few roles defined you can create Playbooks that leverage them:

- hosts: webservers
  user: ansible
  sudo: yes
  roles:
    - common
    - mysql
    - apache2

All the tasks contained in each role will be executed, and if all goes well you will have a server that’s ready to serve up some MySQL powered sites!

So far I have to say that I’m quite happy with Ansible. I’ve been meaning to migrate my systems around because they were somewhat out of date and I wanted to bring some costs down, but the idea of doing it all manually was a massive deterrent simply because of all the work involved. While it might have taken me more time to get what I needed done in Ansible, I know that if I ever need to do something like upgrade WordPress, it just requires changing a configuration value and running a command. It also makes it a lot easier for me to do things like change providers if I ever want to.

Learning to use this tool has also laid a base I can build upon: the process for creating a configuration that prepares a server for a Go or Ruby app should be far simpler. If you are looking to level up your dev ops knowledge, then I’d definitely consider looking into Ansible as a way to do that.

The Problem with shopify_app’s Session usage

An issue was recently opened on the Shopify App project because of a change in what Rails allowed you to set inside a session. This caused the project to break for people looking to play with the API simply because they couldn’t set their session up correctly.

I’ve rarely used the gem simply because its generators are very noisy, but after talking to a friend about this I became extremely aware of what the problem was. The following is the offending code:

  sess = ShopifyAPI::Session.new(shop_url, token)
  session[:shopify] = sess
  flash[:notice] = "Logged in"
  redirect_to return_address

Now why is this a problem? By default, Rails uses cookies for storing sessions and cookies are sent off to your client. Thankfully the cookies are run through an HMAC so they cannot be tampered with, but the data is still raw and available for anyone to read.

Reading the Session

This can easily be shown by grabbing the contents of our cookie from Chrome:

[Image: broken cookies yo]

There are a few things that need to be massaged before we can read the data, primarily the trailing --ae63169593d0e2e68a72977324f5791d6d31b8b8, which I believe is the HMAC. The %3D also needs to be converted back into an = for us to work with the data correctly. You can use a utility like Rack::Utils#unescape to take care of this if you don’t want to do it by hand. The contents of the cookie are simply Base64 encoded, so let’s decode the message and see what’s there:

require 'base64'

# msg holds the raw cookie value copied out of the browser
raw_contents = Base64.strict_decode64(msg)

[Image: raw contents]

Woah… so that’s looking pretty hairy, and just by scanning it visually we can see some sensitive data in there. But let’s make things even easier on ourselves: we can use the Marshal module to convert that data into objects that are super easy to read and use.

# require the libraries so we can unmarshal the data
require 'action_dispatch'
require 'shopify_api'

session = Marshal.load(raw_contents)
#> {"session_id"=>..., "shopify"=>#<ShopifyAPI::Session ...>,
#>  "flash"=>#<ActionDispatch::Flash::FlashHash @flashes={:notice=>"Logged in"}>}
shopify = session['shopify']
puts "Domain: #{shopify.url}, Access Token: #{shopify.token}"
#> Domain: some-shop.myshopify.com, Access Token: b8bc16422e281a83708a7ce6b1c4aefb

Now it’s plain as day! We were storing raw Shopify session objects in the cookies we send back to our clients, and we were able to decode them. Depending on your configuration this could mean that malicious third parties could hijack your access token and do some really bad things to a merchant’s shop.

How do I avoid this?

The Rails security guidelines suggest against storing any kind of sensitive data in the session, and this style of vulnerability is #2 in the OWASP Top 10 vulnerabilities of 2013. The default way it was done in shopify_app was definitely something you’d want to avoid in a production environment. Unfortunately, nothing was put in place to provide the education needed to ensure people moved away from this practice.

When building an application you want to ensure that sensitive information never goes anywhere you cannot trust. While it is convenient to store the data in the session, the client is about as untrustworthy as you can get. What should be stored in the session is something that you can use to identify the data you need.

How to Fix it

In a patch that has been merged into master, you instead configure a ShopifySessionRepository so that it knows how to store and retrieve sessions from your server. For example, you could simply have your Shop model respond to those two methods and you’d be set. When a session is stored you return the ID for the model, so that you can retrieve a ShopifyAPI::Session when you need it at a later date.
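The storage contract can be sketched with a toy in-memory repository. The class and method names below are illustrative, not the gem’s actual API, but they show the shape of the fix: the session lives on the server, and only an opaque id goes into the cookie.

```ruby
# Illustrative stand-in for a server-side session store; a real
# app would back this with something like a Shop model.
class InMemorySessionRepository
  def initialize
    @sessions = {}
    @next_id = 0
  end

  # Store the full session server-side, returning only an id
  # that is safe to place in the cookie.
  def store(session)
    id = (@next_id += 1)
    @sessions[id] = session
    id
  end

  def retrieve(id)
    @sessions[id]
  end
end

repo = InMemorySessionRepository.new
id = repo.store({ url: "example.myshopify.com", token: "secret" })
# only `id` goes into session[:shopify]; the token never leaves the server
repo.retrieve(id) # => {:url=>"example.myshopify.com", :token=>"secret"}
```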

A new version of the shopify_app gem was just released, so you can easily upgrade your application by running gem update shopify_app, which will upgrade you to version 5.0.0. If you are starting a new app you’ll want to ensure that your Gemfile is using the newest version, so that you get the safer session implementation.

gem 'shopify_app', ">= 5.0.0"

If you are using the shopify_app in your project, take a look at your sessions controller to verify that your show action doesn’t have the offending code as mentioned above. If it does, you should remove that code and replace it with a safer way to access the credentials associated with a shop. Consider looking into what was added to shopify_app for one approach to this problem.

Building Games for Fun, Feedback and F…. Learning

Last week I did a presentation at the Ottawa Ruby group. Luckily we were running a live stream of it and I was able to grab a copy of the raw video. You can check it out below:

Using Bresenham’s Line Algorithm for Raycasting

Last week I wrote about my first stab at implementing a proper field of vision for my rogue-like. The implementation left me somewhat disappointed because it didn't really work for what I wanted.

After sitting down with the code and studying it a bunch, I realized why it wasn't working and what would need to be done to improve it. The problem was that it simply iterated over the octants while being completely ignorant of the surrounding state: the outer loop had no context as to when we should stop incrementing. The result was being able to see through eastern and western doors.

Another problem I had with this implementation is that it didn't give me that really nice viewing region that I've grown to love in rogue-likes. The algorithm was too aggressive and would allow me to see through thin walls and so forth. It had to be rewritten. One thing I liked about using this (for my purposes) suboptimal algorithm was that it gave me a better understanding of what I was doing and how to solve it.

I went back to the Wikipedia article and started implementing various versions of Bresenham's Line Algorithm.

The first version I implemented was the "Optimization" of the algorithm. I went with this one first because I found the code easier to understand, and it would help ease me into the simpler/faster ones. After implementing it, one of its flaws became somewhat apparent.

[Image: hmm something isn't quite right]

The first optimization doesn't do anything to establish the direction the rays are coming from. The result was that rays would be cast inwards toward the point instead of outwards. I needed them to be cast outwards, since that would give me the wonderful fields of view we get when entering rooms or corridors in our rogue-likes.

The second optimization does exactly what I wanted: it determines how to move outward from the starting point towards the end of the line. During the line-building process, if I encountered an opaque object, I'd kill the line drawing then and there.
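My rogue-like is written in Go, but the idea is easy to sketch in a few lines of Ruby: a hypothetical cast_ray walks Bresenham's line outward from the origin and asks a block whether each cell is transparent, bailing out at the first opaque one.

```ruby
# Bresenham's line from (x0, y0) toward (x1, y1); the optional
# block is asked whether each visited cell is transparent, and
# the ray stops as soon as it hits an opaque one.
def cast_ray(x0, y0, x1, y1)
  points = []
  dx = (x1 - x0).abs
  dy = -(y1 - y0).abs
  sx = x0 < x1 ? 1 : -1
  sy = y0 < y1 ? 1 : -1
  err = dx + dy

  loop do
    points << [x0, y0]
    break if x0 == x1 && y0 == y1
    break if block_given? && !yield(x0, y0) # hit something opaque
    e2 = 2 * err
    if e2 >= dy
      err += dy
      x0 += sx
    end
    if e2 <= dx
      err += dx
      y0 += sy
    end
  end
  points
end

cast_ray(0, 0, 3, 3) # => [[0, 0], [1, 1], [2, 2], [3, 3]]
cast_ray(0, 0, 3, 3) { |x, _y| x < 2 } # stops once it enters x == 2
```

Since the line starts at the viewer, killing it at the first opaque cell is all the "can I see this?" logic a field of view needs per ray.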

The one thing with the line algorithm is that it just draws a single line; we are still responsible for telling it where all those lines should be cast. I saw a couple of guides on how to do it, but they felt really awkward: they involved using macros to do things a bunch of times, which just felt wrong to me. Since we are casting rays from a point in a circle, instead of using octants or whatever, why don't we just use polar coordinates?

The nice thing about polar coordinates is they give us a perfect circle around our character, and the calculations from polar to cartesian are extremely easy to understand.
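As a sketch (again in Ruby, with a hypothetical ray_targets helper): step an angle θ around the full circle, convert each (r, θ) pair to cartesian with x + r·cos θ and y + r·sin θ, and round onto the grid. Each resulting point is the far end of one ray to hand to the line algorithm.

```ruby
# Endpoints of rays cast in a circle around (cx, cy); each endpoint
# becomes the far end of one Bresenham line.
def ray_targets(cx, cy, radius, rays: 360)
  (0...rays).map do |i|
    theta = i * 2.0 * Math::PI / rays
    [cx + (radius * Math.cos(theta)).round,
     cy + (radius * Math.sin(theta)).round]
  end.uniq
end

ray_targets(0, 0, 1, rays: 4) # => [[1, 0], [0, 1], [-1, 0], [0, -1]]
```

With enough rays (360 is plenty for small sight radii), every cell on the rim of the circle gets hit by at least one ray.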

[Image: oh yeah I need to add the original point to my calculations]

After a tiny bit of head scratching because I forgot to add my starting point to the calculations I had my rays!

[Image: eugor in action]