Map harder, map faster.

Hackathon #4

For our fourth Hackathon, once again, we mapped hard and we mapped fast. And, once again, one of the projects has already made its way to production: we now have about 2,200 rest areas on our map. (As a parent who road trips with three young children, this is a Pretty Big Deal.) We also focused on polishing up a couple of things we worked on last week, and hope to get them out to production this week.

Polishing POI Spread

During last week’s Hackathon, we were able to greatly improve the spread of POIs along a given route. This week, we made the following tweaks to improve it even further:

  • We increased the number of segments we break the route into from five to ten, giving us the ability to surface more POIs along the route. It’s a simple, but very effective change.
  • We’re now using our internal POI ranking system to ensure we surface the best possible POIs.
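As a rough illustration of the segmenting step (not our actual implementation), splitting a route's polyline into N chunks can be as simple as:

```javascript
// Hypothetical sketch: split a route polyline (an array of [lng, lat]
// points) into N roughly equal segments, one POI query per segment.
function splitRoute(points, segmentCount) {
  var perSegment = Math.ceil(points.length / segmentCount);
  var segments = [];
  for (var i = 0; i < points.length; i += perSegment) {
    // overlap by one point so adjacent segments connect
    segments.push(points.slice(i, i + perSegment + 1));
  }
  return segments;
}
```

Bumping `segmentCount` from five to ten is then a one-character change, which is part of why the tweak was so cheap to make.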

We’re pretty happy with the way this turned out and can’t wait to release it into the wild.

Polishing I.P. Geolocation

We had to address some performance issues to get the large dataset into a workable format, which came down to taking better advantage of the location indices available in PostGIS. The import takes about an hour to add more than 1.6 million I.P. locations. There was also an issue of duplicates: we wanted to map network prefixes to individual cities, but our dataset included multiple IP addresses in the same network prefix. We might be able to dig into this further and make better use of the dupes, but for now we toss them to the curb.

For production use, we’ll use this data to more quickly locate the user. Users who engage our geolocation map control will be “zoomed” to the predicted location while the browser navigator requests permission and tries to get a better location. We also extended our map state URLs so we can share map states that zoom to the user’s location. These URLs use the geo I.P. data on the server to load a centered map before trying the browser navigator to improve accuracy.
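A minimal sketch of that fast-path flow, assuming a server-provided `ipLocation` guess and a Leaflet-style `map.setView`; the function and names here are illustrative, not our actual code:

```javascript
// Hypothetical sketch: center the map on the server's IP-based guess
// immediately, then refine with the (slower) browser geolocation API.
// `ipLocation`, `map.setView`, and the zoom levels are assumptions.
function centerOnUser(map, ipLocation, nav) {
  if (ipLocation) {
    // instant, approximate: no permission prompt required
    map.setView([ipLocation.lat, ipLocation.lng], 10);
  }
  if (nav && nav.geolocation) {
    nav.geolocation.getCurrentPosition(function(pos) {
      // user granted permission: replace the guess with a precise fix
      map.setView([pos.coords.latitude, pos.coords.longitude], 13);
    });
  }
}
```

The user sees a roughly correct map right away, and the view quietly tightens up if they grant permission.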

We’re also hoping to ship these changes this week.

Behind the Scenes

We also dug into a couple of Big Things we’ve been wanting to dig into for a while: upgrading to Rails 4 and moving to server-side React rendering for place cards in our discovery layers. Both of these undertakings deserve—demand?—their own blog posts, so keep an eye on this space!

Map hard, map fast.

For our third Hackathon, we focused on our map and place data. One of the projects has already made its way to production: standardizing place name casing and address formatting. We think the rest will find their way shortly with a bit more polish.

Neighborhoods and Region Highlighting

Regions have always been a big part of our map, but they have traditionally been political—countries and states. We’ve recently added support for non-political regions—e.g. “Cape Cod” or “Florida Keys” or “Route 66”—and decided to take a look at how easily we could add neighborhoods. We were able to find Cincinnati neighborhood shape data at CAGIS and put together a rake task that pulls that data down, imports it, and makes it a searchable region.

I’ve also always wanted to be able to highlight regions on the map, so that searching for a region would present the region with a visual boundary. It turns out this plays very nicely with neighborhood regions. Once a region is activated via our map search, we drop a multipolygon representing the neighborhood on the map and apply a filter to the active search to only return results inside that multipolygon.
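The filtering step can be sketched as building a geo_shape filter from the region's GeoJSON MultiPolygon; the `location` field name is an assumption, not our actual mapping:

```javascript
// Hypothetical sketch: given a GeoJSON MultiPolygon for the active
// region, build an Elasticsearch geo_shape filter so the search only
// returns POIs inside it.
function regionFilter(multipolygon) {
  return {
    geo_shape: {
      location: {
        shape: {
          type: 'multipolygon',
          coordinates: multipolygon.coordinates
        },
        relation: 'within'
      }
    }
  };
}
```

Because the same multipolygon drives both the drawn boundary and the search filter, what the user sees highlighted and what the search returns stay in sync.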

You can see highlighting of states, parks, and tourism regions in action in the below screencast.

And here’s an example with experimental neighborhood data.

Spreading POIs

One of the primary factors we consider when returning results in our map search and POI discovery is engagement—how often our users visit or rate places. In general this has worked very well for us, as it ensures we’re surfacing high-interest POIs. However, it doesn’t always work well when we surface POIs along a trip route whose waypoints are high-engagement cities, e.g. San Francisco to Los Angeles. We wanted to find a way to reduce the “barbell effect,” in which the POIs returned along the route are clustered in those high-engagement cities, so that we could surface places along the whole route. So we found a way.

Our first cut at improving POI spread breaks up the polyline that represents the trip route into five segments. We then generate a polygon for each of those segments by calculating the general slope of the segment, determining the rough angle for the segment, and extrapolating the points that would appear at the edge of the polygon were we to buffer out the entire polyline. We then take advantage of Elasticsearch’s msearch to perform all the queries at once. You can see this in action in the below screencast. The browser on the left is our production site; the browser on the right is a staging environment with the “spread” fix.
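The msearch step boils down to building one newline-delimited header/query pair per segment polygon. A hedged sketch (the index name, field name, and result size are made up for illustration):

```javascript
// Hypothetical sketch: turn one query per route-segment polygon into a
// single Elasticsearch msearch body (newline-delimited header/query
// pairs), so all segments are queried in one round trip.
function msearchBody(segmentPolygons) {
  return segmentPolygons.map(function(polygon) {
    var header = JSON.stringify({ index: 'places' });
    var query = JSON.stringify({
      size: 10,
      query: { bool: { filter: { geo_shape: { location: { shape: polygon } } } } }
    });
    return header + '\n' + query;
  }).join('\n') + '\n'; // msearch bodies must end with a newline
}
```

One request instead of five keeps the latency of the spread fix close to the original single-query approach.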

I.P. Geolocation

We have long supported the geolocation API so users can find themselves on the map, start a trip from wherever they are, etc. Unfortunately, using the geolocation API isn’t always the best experience—it’s a three-step process: the user clicks a geolocation button, we ask for permission, then update the UI if the user grants permission.

In order to get this down to a no-step process with zero interaction from the user, we took a swing at using an IP geolocation DB to infer where a user is located. We got the IP address’ geolocation, then did a quick query against our DB to find the nearest large population center. We were quickly able to put this to use, pre-filling our Welcome Launcher and map state URLs. We’ve got more than a few other ideas on how we can utilize this.

R.I.P. Pin Sprites

We currently use two image sprites for our map pins: one for our “large” category and waypoint pins and another for our small pins. Our UI developer, Chris, has had his eyes on Mapbox’s divIcon for a while, so we let him loose. By the end of the day, he had most of our pins rendered with markup, an SVG, and some CSS. With a bit more refactoring and polish, this will free us up to do things with our map pins like badging, dynamic iconography, and a million other things that have made us say “Wouldn’t it be nice…” Our Long Internal Nightmare of being restricted by an image sprite will soon be over.
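A sketch of what sprite-free pins could look like; the class names, badge option, and SVG symbol references are hypothetical, not the markup Chris actually wrote:

```javascript
// Hypothetical sketch: build the HTML for a pin so it can be handed to
// Leaflet's L.divIcon instead of being clipped from an image sprite.
// With markup we can badge, swap icons, and restyle pins freely.
function pinMarkup(category, badgeCount) {
  var badge = badgeCount
    ? '<span class="pin-badge">' + badgeCount + '</span>'
    : '';
  return '<div class="pin pin-' + category + '">' +
         '<svg class="pin-icon"><use xlink:href="#icon-' + category + '"></use></svg>' +
         badge +
         '</div>';
}

// In the browser, this would plug into Leaflet roughly like:
// var icon = L.divIcon({ className: '', html: pinMarkup('food', 3) });
// L.marker([39.1, -84.5], { icon: icon }).addTo(map);
```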


94% of the world’s population uses Slack—it’s :science:.

Roadtrippers is part of that 94%. To say it’s an integral tool is an understatement; it is a :hero:. The engineering team in particular leans so heavily on Slack that we’ve developed a language of sorts based entirely on emoji. We’ll get to that later. First, let’s meet some of the team.


Slacking hard.

Now that introductions are out of the way, we wanted to share a few ways we use emoji in #dev.

We have emoji for our stage environments—:absinthe:, :bourbon:, :coffee:—and production environment—:samuel:—and a handful of Git branches—:master:, :canary:, :develop:.

Let’s say :mitch: wants to stage a branch for QA. Rather than typing an actual sentence made with actual words, he can just:

@channel: :canary: to :absinthe: for :brandon:?

Or maybe :chris: has a high-priority bug that needs immediate review:

@channel: Claiming :bourbon: for :ebf:!

When our QA guy, :brandon:, has finished reviewing an epic on a staging environment:

@channel: :coffee: is :green: and ready for :samuel:!

It may seem silly at first glance, but thanks to the quick visual cues of the emoji and emoji autocompletion in Slack, this has actually become a fairly effective way for us to quickly communicate with one another. And as we add more staging environments and Git branches, it will make parsing Slack’s wall of text that much easier.

:highfives:, Slack!

Where Are You?

Our first hack day was by all accounts a success. So much so that we’ve decided to make them a regular thing. We had our second hack day this past Friday. We mostly focused on pet peeves, refactors, and general performance improvements, but we also tackled a few really cool projects.

“Here I am!”

One idea that’s been rattling around the company is giving roadtrippers a way to show where exactly they are on their route, so we decided to try to put together a proof-of-concept.

Our iPhone app currently asks for permission to get a user’s location. We periodically check their location data and store it in our database, so it was just a simple matter of passing their most recent data to our web client and displaying their avatar along the trip route.

We’re thinking this can be a great way for touring bands, sports teams, and other brands that spend time on the road to let fans know where they are at any given time.

Polishing Pictures

We have a lot of places in our database. Every place has a photo gallery. This means we have a lot of place photos. Many of these images are provided by our users, which, so far—/me knocks on wood—has gone pretty well. We’ve had very few problems with offensive images or trolling, but the occasional “bad” photo does sneak in. What do I mean by “bad”? For the task at hand, we decided on:

  1. Images generated in Photoshop or some other tool–these are often promotional in one way or another
  2. Blurry or washed-out photos

Keeping in mind that this is A Big Problem we won’t solve in one day, we dove in with four tactics in mind:

  1. Using OCR to identify images with excessive text, which are often promotional in nature
  2. Facial recognition in order to try to identify promotional images via large face-to-image-size ratios
  3. EXIF data in order to identify generated images
  4. Blob detection, which could help us identify both types of “bad” photos
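To make tactic 4 concrete, here's a toy version of the blob-detection idea (not the tool we actually evaluated): flood-fill a grid of quantized pixel colors and measure the largest monochromatic patch.

```javascript
// Hypothetical sketch: report the largest same-color patch in a grid of
// quantized pixel colors as a fraction of the image. A large fraction
// suggests a generated or promotional image rather than a photo.
function largestPatchFraction(grid) {
  var rows = grid.length, cols = grid[0].length;
  var seen = [], r, c;
  for (r = 0; r < rows; r++) seen.push(new Array(cols).fill(false));
  var best = 0;
  for (r = 0; r < rows; r++) {
    for (c = 0; c < cols; c++) {
      if (seen[r][c]) continue;
      var color = grid[r][c], size = 0, stack = [[r, c]];
      while (stack.length) {
        var cell = stack.pop(), y = cell[0], x = cell[1];
        if (y < 0 || y >= rows || x < 0 || x >= cols) continue;
        if (seen[y][x] || grid[y][x] !== color) continue;
        seen[y][x] = true;
        size++;
        stack.push([y + 1, x], [y - 1, x], [y, x + 1], [y, x - 1]);
      }
      if (size > best) best = size;
    }
  }
  return best / (rows * cols);
}
```

A real pipeline would quantize colors with some tolerance first, but even this crude version flags the big flat panels typical of Photoshop-generated promos.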

We’ll be digging more into our results in a later blog post, but the TL;DR is that blob detection looks to be the most reliable tool of the four we considered, as it consistently identified large monochromatic patches in photos. In the photos we tested against, these were almost always bad images. Stay tuned!

Polishing Places

In the “Why Didn’t We Do This Sooner” category, we also whipped up a quick way for our team members to flag POIs for a handful of reasons, e.g. offensive content, bad classification, etc. It’s a dead simple two-click process that will help us quickly flag bad place data so we can review it internally. If we find it works well, we may expose it to our users to get their help in making sure our place data is as good as it can be.

Faster Page Loads With Modular JavaScript

Any web developer who has written a dynamic web app and then tried to make it fast has encountered the dreaded "Render blocking JavaScript" from Google's PageSpeed Insights. There are a number of techniques to prevent this, but no matter how you shake it, every line of JavaScript your web app uses needs to be downloaded, parsed, compiled, and (probably) executed. All of that takes time.

Show Me The Money

What users want is to see and interact with a web app as soon as possible. Achieving the "see" portion is a matter of getting the app's markup ahead of the JavaScript (see Server-generated HTML In a Single-page Web Application). Making apps interactive sooner requires some more thought.

Applications on Stilts

A web app becomes responsive only as soon as all of the necessary JavaScript has been downloaded, parsed, compiled, and has either executed or is ready to execute. What we found at Roadtrippers was that the best way to achieve this was to reduce the amount of JavaScript we load with the initial page load. With a smaller footprint the browser has less to download and less to process, allowing our application to get to its business.

The first step is to determine exactly how much JavaScript is needed to load and run the app's front page. This alone carved over 20 seconds off the Roadtrippers page load time in many cases.

20 Seconds?!?!

As web developers it is easy for us to excuse slow load times. We simply assume that the problem is one of the following:

  • "I'm running in debug mode."
  • "They just have a really slow internet connection."
  • "Their computer is just really old."
  • "That phone is so old."
  • "It works on my machine."

Really, none of that matters if a user comes to your app and thinks "This is so slow. This sucks." At Roadtrippers, we used a service called "Peek" by UserTesting to get a video of an actual person trying Roadtrippers for the first time. It was utterly painful to watch this user wait nearly 30 seconds for the app to load, so we decided to do something about it.

Dancing With The One You Came With

When thinking about how to enhance Meridian (Roadtrippers' front-end framework) to support modular JavaScript loading, we needed a solution that would fit well with our Ruby on Rails server. Rails' asset pipeline provides tools that will take a bundle of JavaScript files, minify them into a single file, and provide them with a digested file name. Meridian takes advantage of this functionality.

Deciding what code belonged in which package was difficult. At the time, all of the JavaScript for Roadtrippers was packaged into two minified JavaScript files: application.js and map.js. Over time these files had grown very large and took the browser an exceptionally long time to download, parse and compile. We decided to break the JavaScript into modules that matched the discrete sections of Roadtrippers.

This meant we built individual packages with the code for:

  • The map and trip planner
  • Place pages
  • Blog pages
  • About pages

Getting More JavaScript

Having these individual packages is lovely, until it comes time to load them on demand. Before we can execute JavaScript there are a few steps that need to take place:

  • The script must be loaded into the DOM
  • The JavaScript needs to be parsed and compiled
  • We need to know when the JavaScript is ready to be executed

We built a module into Meridian to handle these steps.

Once More Into the DOM

Before we can execute any JavaScript, we must enlist the browser's assistance in fetching the script and loading it. The first step to this is to create a script element.

var script = document.createElement('script');
script.type = 'text/javascript';
script.src = 'path/to/javascript';

Once Meridian appends the script to the DOM, the browser will fetch the JavaScript file, parse it, and compile it. Once it is done, we know we can start utilizing that code.

Call(back) Me, Maybe?

Before we append that script we should add an event listener so that our app can take action on the newly-loaded JavaScript.

var script = document.createElement('script');
script.type = 'text/javascript';
script.src = 'path/to/javascript';

script.addEventListener('load', function() {
  /* ready code goes here */
});

Of course, if something goes wrong when loading that JavaScript it would be nice to respond appropriately:

var script = document.createElement('script');
script.type = 'text/javascript';
script.src = 'path/to/javascript';

script.addEventListener('load', function() {
  /* ready code goes here */
});

script.addEventListener('error', function() {
  /* error code goes here */
});

Within Meridian we utilize promises, specifically when.js from CujoJS. When loading packages we separate the load and error behaviors from the actual package loading:

_when.promise(function(resolve, reject) {
  var script = document.createElement('script'),
      onLoad, onError, unbind;

  script.type = 'text/javascript';
  script.src = 'path/to/javascript';

  unbind = function() {
    script.removeEventListener('load', onLoad);
    script.removeEventListener('error', onError);
  };

  onLoad = function() {
    unbind();
    resolve(script);
  };

  onError = function() {
    unbind();
    reject(new Error('Failed to load ' + script.src));
  };

  script.addEventListener('load', onLoad);
  script.addEventListener('error', onError);

  document.head.appendChild(script);
});


Notice that once the script’s promise has been resolved (or rejected), we clean up our event listeners. As they say, "Your Mom Doesn't Work Here."

Kickstart My Package

In Meridian, we established a convention that each package would have an initialize method to handle its own setup. This method gets called as part of the sequence of actions encapsulated within the resolve call above, before any application-defined post-load code.

This convention allows us to do the following sort of thing:

rt.loader.ensureLoaded('my_package').then(function() {
  /* start using code from my_package */
});

Because Meridian's package loader has initialized everything, we can trust that it is safe to use that code. This sounds obvious, but is a massive improvement over relying on capricious events to execute application code.

Playing Nice Together

I mentioned Rails' asset pipeline earlier. In order to keep our application versions consistent, we rely on digested assets. When a JavaScript package is updated, it is assigned a new digest. Meridian gets a list of the digested package names when it loads up, so that it retrieves consistent versions of the packages. This keeps the user from getting weird combinations of old and new code.

Meridian relies on the Rails server to provide it with the list of packages at initialization. The Rails server can do this because it knows the digested names. This allows us to build a mapping of the application-defined names to the actual JavaScript file names. Meridian takes this mapping and keeps track of which packages have been loaded. This way Meridian only fetches each package once, regardless of how many times ensureLoaded is called.
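The bookkeeping described above can be sketched like this; `makeLoader` and `fetchScript` are illustrative names, not Meridian's actual API:

```javascript
// Hypothetical sketch: map friendly package names to digested file
// names and cache the load promise so each package is fetched only
// once, no matter how many times ensureLoaded is called. `fetchScript`
// is injected (in the browser it would append a <script> tag).
function makeLoader(digestMap, fetchScript) {
  var loaded = {};
  return {
    ensureLoaded: function(name) {
      if (!loaded[name]) {
        loaded[name] = fetchScript(digestMap[name]);
      }
      return loaded[name];
    }
  };
}
```

Caching the promise (rather than a "done" flag) means callers that ask for a package while it is still in flight simply share the same in-progress load.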

Today when Roadtrippers loads, it fetches only the JavaScript it needs to do the following:

  • Load the landing page interactions
  • Load the next package requested in response to user actions
  • Manage application state

Should the user land on an endpoint other than the main welcome page, Meridian will load the JavaScript necessary for the interactions on that page as well. This allows our users to start planning their trips rather than waiting for code they are never going to execute.