Funding Gates

Automatically Generate Ember Models From Rails Serializers

by Matt Rogish (@MattRogish)

When working with EmberJS in a Rails context, we noticed that keeping our Rails and Ember models in sync was a time-consuming and error-prone process.

As Jo mentioned:

In smaller projects you can repeat your database columns in the Ember-side
model definitions. For our more complex app, we found that this doesn’t scale.
We ended up going with code generation for the ember-data model definitions,
generating a schema.js file using a rake task.

I’ve gotten some requests to expand on that and include some code. Note: this is for pre-1.0 Ember, so I wouldn’t go copying and pasting this into your project. It’s not likely to work, and you’ll get strange errors. Still, since it’s not much code, you could start with this and tweak it for the latest version of Ember.

Consider the following Rails model, the proverbial “User”:

app/models/user.rb
class User < ActiveRecord::Base
  attr_accessible :email, :first_name, :last_name, :phone, :birthdate

  belongs_to :organization

  def full_name
    "#{first_name} #{last_name}"
  end
end

And the associated ActiveModelSerializer:

app/serializers/user_serializer.rb
class UserSerializer < ApplicationSerializer
  attributes :organization_id
  attributes :email, :first_name, :last_name, :phone

  has_one :organization

  # Computed attributes
  attributes :full_name, :ember_birthdate

  def ember_birthdate
    object.birthdate.strftime("%m/%d/%Y")
  end
end

In order to consume this data, you’ll need an Ember model that looks something like this (CoffeeScript for brevity):

app/assets/javascripts/ember/models/user.js.coffee
App.User = DS.Model.extend
  email: DS.attr('string')
  first_name: DS.attr('string')
  last_name: DS.attr('string')
  phone: DS.attr('string')
  full_name: DS.attr('string')
  ember_birthdate: DS.attr('string')
  organization: DS.belongsTo('App.Organization')

It’s tedious to write all of that out, you have to remember to keep it up to date whenever you add, change, or delete something on the serializer side, and it all feels very un-DRY. Why should we have to hand-edit multiple files when we make a change? That’s why we have convention over configuration!

We created a simple rake task that uses the schema capability of ActiveModelSerializers to dump the serializer schemas to JSON; that JSON is then processed on the Ember side to generate the model definitions.

Note: This may not work with the latest version of AMS.

The rake task is simple (you can shim it onto rake db:migrate and elsewhere if you want; a sketch of that follows the task below):

lib/tasks/ember_schema.rake
namespace :db do
  namespace :schema do
    desc 'Regenerate the Ember schema.js based on the serializers'
    task :ember => :environment do
      schema_hash = {}
      Rails.application.eager_load! # populate descendants
      ApplicationSerializer.descendants.sort_by(&:name).each do |serializer_class|
        schema = serializer_class.schema
        schema_hash[serializer_class.model_class.name] = schema
      end

      schema_json = JSON.pretty_generate(schema_hash)
      File.open 'app/assets/javascripts/ember/models/schema.js', 'w' do |f|
        f << "// Model schema, auto-generated from serializers.\n"
        f << "// This file should be checked in like db/schema.rb.\n"
        f << "// Check lib/tasks/ember_schema.rake for documentation.\n"
        f << "window.serializerSchema = #{schema_json}\n"
      end
    end
  end
end
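
If you do want it shimmed onto rake db:migrate, a minimal sketch (whether you regenerate the schema on every migration is up to you) is to enhance the migrate task at the bottom of the same file:

# Appended to lib/tasks/ember_schema.rake: regenerate the Ember schema
# whenever migrations run.
Rake::Task['db:migrate'].enhance do
  Rake::Task['db:schema:ember'].invoke
end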

Note that AMS does not include, nor care about, the Rails model validators, so you’ll need to handle that on your own. We wrote a small helper to output a few basic validations, but since Ember lacks built-in validators, you’d have to write your own validator library.

ember-validations looks like a great library that supports all current (Rails 3) validations. You would just need to export the validations as JSON and then write an appropriate converter.
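
We never open-sourced our helper, but a rough sketch of the export side could look like the task below. This is an illustration only: it sticks to a few JSON-friendly validator kinds, and the validations.js path and window.serializerValidations name are made up for the example.

lib/tasks/ember_validations.rake
namespace :db do
  namespace :schema do
    desc 'Dump basic model validations as JSON for the client side'
    task :ember_validations => :environment do
      Rails.application.eager_load! # populate descendants
      basic_kinds = [:presence, :length, :numericality] # skip regex/proc-based options
      validations = {}

      ActiveRecord::Base.descendants.each do |model|
        rules = model.validators.grep(ActiveModel::EachValidator).select { |v| basic_kinds.include?(v.kind) }
        next if rules.empty?
        validations[model.name] = rules.map do |v|
          { kind: v.kind, attributes: v.attributes, options: v.options }
        end
      end

      File.open 'app/assets/javascripts/ember/models/validations.js', 'w' do |f|
        f << "// Model validations, auto-generated from ActiveModel validators.\n"
        f << "window.serializerValidations = #{JSON.pretty_generate(validations)}\n"
      end
    end
  end
end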

So, great! We now have the JSON definition for the user:

app/assets/javascripts/ember/models/schema.js
// Model schema, auto-generated from serializers.
// This file should be checked in like db/schema.rb.
// Check lib/tasks/ember_schema.rake for documentation.
window.serializerSchema = {
  "User": {
    "attributes": {
      "id": "integer",
      "organization_id": "integer",
      "email": "string",
      "first_name": "string",
      "last_name": "string",
      "full_name": "string",
      "ember_birthdate": "string",
      "phone": "string"
    },
    "associations": {
      "organizations": {
        "belongs_to": "organization"
      }
    }
  }
}

How do we get it into Ember?

app/assets/javascripts/ember/models/schema_parser.js.coffee
#= require ember/models/schema

# Check lib/tasks/ember_schema.rake for documentation about the schema.

dsTypes =
  string: 'string'
  text: 'string'
  decimal: 'number'
  integer: 'number'
  boolean: 'boolean'
  date: 'date'
  # There is no time type in ember-data yet

# Define base classes like App.UserBase with attribute and
# association definitions based on schema data.
App.defineModelBaseClassesFromSchema = ->
  for className, schema of serializerSchema
    properties = {}

    for underscoredAttr, type of schema.attributes
      attr = underscoredAttr.camelize()
      if dsTypes[type]?
        if attr.match(/Id$/) and dsTypes[type] == 'number'
          # On the serializer side, we serialize belongs_to relationships as
          # integer _id fields, since AMS doesn't support belongs_to yet, and
          # has_one sideloads the association, causing infinite recursion.
          # Because of that, we infer a belongsTo relationship when we see _id
          # attributes in the schema.
          assoc = attr.replace(/Id$/, '')
          properties[assoc] = DS.belongsTo('App.' + assoc.capitalize())
        else
          properties[attr] = DS.attr(dsTypes[type])
      else
        # Ember.required doesn't quite do what we want it to yet, but maybe it
        # will be fixed. https://github.com/emberjs/ember.js/issues/1299
        properties[attr] = Ember.required()

    for assoc, info of schema.associations
      assoc = assoc.camelize()
      if tableName = info?.belongs_to
        properties[assoc] = DS.belongsTo('App.' + tableName.classify().capitalize())
      else if tableName = info?.has_many
        properties[assoc] = DS.hasMany('App.' + tableName.classify().capitalize())
      else if tableName = info?.has_one
        properties[assoc] = DS.belongsTo('App.' + tableName.classify().capitalize())

    # Do validator stuff here, if you so desire

    App["#{className}Base"] = App.Model.extend properties

This will create the Ember model definition as above, except with a “Base” suffix (UserBase). You can then extend it with Ember-only attributes:

app/assets/javascripts/ember/models/definitions.js.coffee
#= require ember/models/schema_parser

App.Model = DS.Model.extend()

# Define base classes like App.UserBase based on the schema, which in
# turn is generated from the serializers. Below, we only add server-side
# associations whose types come through in the schema as `null`, plus
# client-side computed properties.
#
# Check lib/tasks/ember_schema.rake for more documentation about the schema.
App.defineModelBaseClassesFromSchema()

App.User = App.UserBase.extend
  syncing: DS.attr('boolean')
  hasOrganizationBinding: 'organization.length'

That’s it. Now your models will be autogenerated, and you only have to worry about the things that aren’t included in the schema!

Getting OpenID Working on Heroku

by Matt Rogish (@MattRogish)

I just spent the last few days wrestling with OpenID intermittently failing in production, but not in test, development, or staging.

It took me a bit of time to fix, so I thought I’d enumerate the steps.

  1. Use Unicorn
  2. Use MemCachier
  3. Use Dalli
  4. Use ruby-openid (a Gemfile sketch covering steps 1-4 follows the configuration below)
  5. Configure OpenID to use Dalli:

(set :expires_in to taste)

    ::OpenID::Consumer.new(session,
        OpenID::Store::Memcache.new(Dalli::Client.new(ENV['MEMCACHIER_SERVERS'],
                               username: ENV['MEMCACHIER_USERNAME'],
                               password: ENV['MEMCACHIER_PASSWORD'],
                               expires_in: 300)))
  6. If you are using rack-openid:

(set :expires_in to taste)

    config.middleware.use "Rack::OpenID",
      OpenID::Store::Memcache.new(Dalli::Client.new(ENV['MEMCACHIER_SERVERS'],
                             username: ENV['MEMCACHIER_USERNAME'],
                             password: ENV['MEMCACHIER_PASSWORD'],
                             expires_in: 300))
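
For reference, the gems behind steps 1-4 are ordinary Gemfile entries (MemCachier itself is provisioned as a Heroku add-on rather than a gem; versions omitted, so adjust to whatever you run):

Gemfile
gem 'unicorn'
gem 'dalli'
gem 'ruby-openid'
gem 'rack-openid' # only needed for the middleware route in step 6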

That’s it!

P.S.

The reason it worked in development and test: we only had a single Unicorn running, so memory storage (the default) worked fine. Staging runs more than one dyno, but since the load was so small, requests hit the same dyno more often than not, causing it to appear to work when it really wasn’t.

P.P.S.

You may see guides on the internet that are a few years old suggesting to use filesystem storage:

OpenID::Store::Filesystem.new('./tmp')

This would only work if you use a single dyno, as the filesystem is not shared amongst dynos. Stick with memcached!

Deploying at Funding Gates

by Matt Rogish (@MattRogish)

Deploying, it seems, is hard. Folks have written very complex mechanisms to maintain different releases and branches.

Wow! That seems like a lot of work. You need to keep track of releases and hotfixes, and make sure that any last-minute work is propagated to all the relevant branches.

This feels very un-agile and not very lean. Moreover, it requires a gatekeeper (or more than one!) who decides if and when a release goes out, appropriate documentation (“here are the version 1.x release notes”), and individual developers scrambling to get feature X or bug Y into the current release.

That’s a lot of headaches! Luckily, we’re working on web application software so we don’t have to worry about cutting releases and delivering them to end-users. Press a button and everyone gets the new feature!

As we’re optimizing for cycle time (how long it takes for a feature to hit production after work starts), any hand-offs or waiting introduce costly delays. Code that has been written but not deployed to users is wasteful, undelivered inventory.

Process

  1. Developer works on feature
  2. Developer commits and pushes to master
  3. The system automatically deploys to production

There is no next step. That’s too simple, right? Right. There are some prerequisites that make this possible:

  • We require high unit and functional test coverage to ensure nothing gets broken.
  • At every push to a remote branch, a Continuous Integration server runs and rejects the build if a test fails, notifying the entire team of the failure.
  • Instead of working on long-running feature branches, we prefer to work with short-running local branches or directly on master (up to each developer to decide).
  • All features that aren’t “ready” (either not finished yet or not ready for “marketing” purposes) are disabled via feature flags on staging/production (a minimal sketch of a flag check follows this list).
  • Features we’re currently working on are enabled in development and QA/test.
  • Any server errors are immediately delivered to the team chat/email, and we have other monitoring systems in place that would identify “something bad” happening in production.
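
For illustration, a minimal sketch of the kind of flag check we mean might look like this (the Feature module and FEATURE_FLAGS variable are hypothetical names, not our production code):

config/initializers/feature_flags.rb
# Hypothetical sketch: flags come from an environment variable,
# e.g. FEATURE_FLAGS=new_dashboard,csv_export
module Feature
  def self.enabled?(name)
    # Everything is on in development and test, per the bullet above.
    return true if Rails.env.development? || Rails.env.test?
    ENV.fetch('FEATURE_FLAGS', '').split(',').map(&:strip).include?(name.to_s)
  end
end

# In a view or controller:
#   render 'dashboard/new' if Feature.enabled?(:new_dashboard)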

Benefits

This gives us the ability to work on master, have master always deployable, and be confident that code that isn’t ready for public consumption won’t appear. Someone identifies a bug? I can add a test to catch it, fix it, and have it deployed to production within minutes - all without needing to get permission from anyone or perform git gymnastics.

Deployment is now a non-issue. No one needs to be “taught” how to do it. There isn’t even a button that needs to be pressed.

Continuous deployment isn’t crazy, and it absolutely scales with larger teams. Plenty of big companies like GitHub, Etsy, and the poster child IMVU are probably delivering as you read this.

You can add Funding Gates to that list!

Thoughts on Moving Ember.js Forward

by Jo Liss (@jo_liss)

In this post, I’ll talk about the technical challenges we’ve encountered as we’ve used Ember for a medium-sized project, as compared to the smaller apps I’d written before.

The target audience is other developers that are interested in moving the Ember.js project forward. The post is mostly intended as a conversation starter. My hope is that through discussion and code, we will be in a better place a few months from now.

Note 1: If you are just trying to decide on a framework for your app, then after our experiences, I can wholeheartedly recommend Ember. I’ll blog about framework choices some other time. In the meantime, check out Peter’s talk.

Note 2: I believe that the problems I describe tend to arise with other frameworks, like Angular or Backbone, as well. (Backbone in particular doesn’t even try to address many of these things.) If you have seen or come up with solutions for other frameworks, please share them!

1. Debugging and Inspectability

This is a surprising issue, so I’ll start with this one.

Ember has great top-down documentation. However, as my team-mates dove into our Ember app, one complaint was this: The documentation gives simple, self-contained examples. Dropping from president.get('fullName') into a fully-featured Ember app is pretty brutal. It’s very hard to know what’s actually going on.

Unfortunately, Ember doesn’t make it very easy to inspect app state (check out this gist for a demonstration).

I think we need several pieces to solve this puzzle:

  • Integration with Web Inspector and Firebug: When showing Ember objects, Firebug exposes the type (through toString), but not the properties. Chrome doesn’t even show the type – just a generic ▶ Class.

    I’m guessing that we’ll need a “list all properties” function. Ember should already have all the necessary infrastructure for this (Ember.meta, etc.). Secondly, I wonder if we should link up with the Web Inspector and Firebug folks to see if there’s a way we might be able to get custom UI for inspecting Ember objects.

  • I wish it was possible to click into the page to see the view hierarchy. I’m not sure how to implement this, but my pie-in-the-sky dream is something like Firebug’s DOM tree, except for nested Ember views instead of DOM nodes. This might be a couple of notches too ambitious, but maybe a “light” version of this will go a long way.

    Apparently the Illuminations addon does something like this, just not for Ember (yet?). Thanks to @wagenet for the pointer!

2. Testing

In my smaller Ember apps, I was able to get by with Selenium, but this is too slow for more complex apps.

Client-side testing in JavaScript is the only viable option for a fast and reliable test suite, in my opinion. QUnit is old and sturdy, but I personally like Konacha (Mocha for Rails). In addition to unit tests, it allows you to run your Ember app in an iframe, and then use jQuery to click on links and inspect the DOM. This essentially gives you a very fast synchronous client-side integration test.

Ember already brings some helpful things for testing, like the FixtureAdapter in ember-data. Still, the whole setup doesn’t feel very mature to me yet. There is no pre-made testing environment set up for you, like with Rails, so at the moment every project ends up with its own test setup.
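
For what it’s worth, the Rails-side part of a Konacha setup is small. A sketch (the spec_dir and driver here are assumptions about your layout, not requirements):

Gemfile
group :development, :test do
  gem 'konacha'
end

config/initializers/konacha.rb
if defined?(Konacha)
  Konacha.configure do |config|
    config.spec_dir = 'spec/javascripts'
    config.driver   = :selenium
  end
end

Then rake konacha:serve gives you an in-browser runner, and rake konacha:run executes the suite from the command line.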

So where do we go from here? It seems that there are two big issues:

  • How do we reliably reset an app between tests? I’m suggesting a solution in #1318 (Application#reset), but it seems fraught with issues. I suspect that we may need to come up with a better way. Much of the complexity of this problem comes from global state, like the App.router object.

  • How do we give people a pre-made working JavaScript test setup, like Rails does for the server side? I believe that this would belong in packages like ember-rails. For instance, perhaps ember-rails could in the future generate a Konacha setup.

Test Fixtures

There is a subtle but painful issue with fixtures for JavaScript tests: As long as you are passing down raw database columns (as tends to be the case in smaller apps), you can easily define fixtures or factories on the client side.

But if many of the JSON attributes are cooked or computed, you’ll want your fixtures guaranteed to be in sync with what the server generates. We ended up defining our fixtures in Ruby, and dumping them out as JSON into a fixtures.js file, using a generator rake task. This works OK, but it doesn’t feel very clean.
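
To make the idea concrete, here is a stripped-down sketch of such a generator task. The factory call, output path, and window.testFixtures name are invented for the example; the important part is that the JSON is produced by the same serializers the app uses.

lib/tasks/ember_fixtures.rake
namespace :db do
  desc 'Dump serializer-generated fixtures as JSON for the JavaScript tests'
  task :ember_fixtures => :environment do
    fixtures = {}

    # Build records the same way the app does, then run them through the
    # serializers so cooked/computed attributes match what the server sends.
    user = FactoryGirl.build_stubbed(:user)
    fixtures['user'] = UserSerializer.new(user).as_json

    File.open 'spec/javascripts/support/fixtures.js', 'w' do |f|
      f << "// Test fixtures, auto-generated from Ruby. Do not edit by hand.\n"
      f << "window.testFixtures = #{JSON.pretty_generate(fixtures)}\n"
    end
  end
end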

I think we’re caught between a rock and a hard place here:

On the one hand, doing all computation on the client side doesn’t seem practical yet. Even if the performance is good enough, at the moment JavaScript is still too awful a platform to implement major chunks of logic. Compare for instance this Ruby method and the equivalent Ember/CoffeeScript code.

But on the other hand, if you perform computation on the server, it becomes harder to test the JavaScript in isolation. Accordingly, you end up resorting to workarounds like generated fixture definitions.

I’d love to hear how other people approach this issue.

3. DRY Model Definitions

In smaller projects you can repeat your database columns in the Ember-side model definitions. For our more complex app, we found that this doesn’t scale. We ended up going with code generation for the ember-data model definitions, generating a schema.js file using a rake task.

I really wish there was a nicer way to get DRY model definitions.

The information for the model definitions needs to come from the server, so this cannot be solved in Ember core.

In Rails’s case, the authoritative source might be the serializer classes used by active_model_serializers. Getting that information into the client is surprisingly non-trivial: You can’t dump complete model definitions at precompilation time, along the lines of App.Blog = <%= BlogSerializer.ember_definition %>;. This is because the type of each attribute is stored in the database, and during precompilation, you don’t generally have a database.

I’m not sure how this will be solved. I do think however that we need a real, DRY solution, not just more generators in ember-rails.


These are my thoughts so far. As I said, this post was intended as a conversation starter. Once we figure out how to approach some of these things, you’ll hopefully see me writing code and not just complaining idly on our blog. ;-)

Leave a comment, open an issue, or find me on the #emberjs channel.

Capybara 2.0 Upgrade Guide

by Jo Liss (@jo_liss)

The Capybara 2.0.0 beta is out. I’ll walk you through the most important changes, and show you how to upgrade.

The bad news: If you upgrade to Capybara 2.0.0, you may have to make some changes to your test suite to get it passing.

The good news: Once you’re compatible with Capybara 2.0.0, you can probably go back and forth between 1.1.2 and 2.0.0 without any changes, should you decide that 2.0.0 is not for you (yet).

Compatibility Notes

Third-party drivers like WebKit or Poltergeist are not yet compatible with Capybara 2.0. Use the default :selenium driver in the meantime.

Also, Capybara 2.0 will likely drop Ruby 1.8.7 compatibility.

How to Upgrade

The latest 2.0.0 beta release is two months old. I recommend you use Capybara master, since it has some fixes, and is generally in better shape than the beta:

Gemfile
group :test do
  gem 'capybara', git: 'https://github.com/jnicklas/capybara', ref: '7fa75e55420e'
end

Update: Capybara master has some changes that still need to be synchronized with rspec-rails (#809). If you are using RSpec, specify the ref: as above in the meantime.

There is one major change that will likely cause breakage in your test suite, and that is how Capybara handles ambiguous matches:

Ambiguous Matches

The find method, as well as most actions like click_on, fill_in, etc., now raises an error if more than one element is found. While Capybara 1.1.2 would simply select the first matching element, the matches now have to be unambiguous.

Here is a common way this can break your test suite:

fill_in 'Password', with: 'secret'
fill_in 'Password confirmation', with: 'secret'

The first fill_in will now fail, because searching for “Password” matches both the “Password” label and the “Password confirmation” label (as a substring), so the match is ambiguous.

The best way to fix this is to match against the name or id attribute – such as fill_in 'password', with: 'secret' – or, when there’s no good name or id, add auxiliary .js-password and .js-password-confirmation classes. (The js- prefix is for behavioral classes as recommended in the GitHub styleguide.)

find('.js-password').set 'secret'
find('.js-password-confirmation').set 'secret'

I find that using .js- classes instead of matching against English text is actually a good practice in general to keep your tests from getting brittle.

Should you absolutely need to get the old behavior, you can use the first method:

click_on 'ambiguous' # old
first(:link, 'ambiguous').click # new

Minor changes

You can assume that these don’t affect you unless something breaks:

  • The RackTest driver – that’s the fast default driver, when you’re not using js: true – no longer respects Rails’s data-method attribute unless you tell it to. Update: The behavior matches Capybara 1.1.2 again (#793), so long as you have require 'capybara/rails' (like you should in any case).

  • The find(:my_id) symbol syntax is no longer supported (#783). Write find('#my_id') instead, as recommended in the documentation.

  • has_content? checks for substrings in text, rather than using XPath contains(...) expressions. This means improved whitespace normalization, and suppression of invisible elements, like head, script, etc.

  • select and unselect don’t allow for substring matches anymore.

  • Capybara.server_boot_timeout and Capybara.prefer_visible_elements are no longer needed and have been removed.

  • Capybara.timeout and wait_until have been removed, as well as the Selenium driver’s :resynchronize option. In general, if you have to wait for Ajax requests to come back, you should, as before, try using page.should have_content or page.should have_css to search for some change on the page that indicates the request has completed. The check essentially acts as a gate for the Ajax request, as it polls repeatedly until the condition is true. If that doesn’t work for you, you could implement your own simple wait_for helper method (see e.g. this gist, or the sketch after this list). See also this thread about wait_until going away.
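
If you do go that route, the helper really can be this small. A sketch (not a Capybara API; the jQuery.active usage assumes a JavaScript-capable driver and jQuery on the page):

spec/support/wait_for.rb
module WaitForHelper
  # Poll the block until it returns something truthy, or give up after `seconds`.
  def wait_for(seconds = Capybara.default_wait_time)
    deadline = Time.now + seconds
    loop do
      result = yield
      return result if result
      raise "wait_for timed out after #{seconds} seconds" if Time.now > deadline
      sleep 0.05
    end
  end
end

RSpec.configure { |config| config.include WaitForHelper }

# Example: wait for all jQuery Ajax requests to finish.
#   wait_for { page.evaluate_script('jQuery.active') == 0 }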

Goodies

These won’t break your code when you upgrade, but they’re sweet new additions:

  • Lots of new selectors, like find(:field, '...'), etc. These can come in handy if you find yourself doing intricate node finding. Check the add_selector calls in lib/capybara/selector.rb for a list.

  • has_content? accepts regexes (example below).
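
For example, a pattern now works where only a literal string did before:

page.should have_content(/\d+ items in your cart/)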

Problems?

Any speed bumps I forgot to mention? Leave a comment.

If you need help with problems, ask away on the mailing list! To report reproducible bugs or suggest changes in Capybara, open an issue in the issue tracker. Jonas and I are monitoring both.

Even better, send a pull request! We’ll love you for it.