Where Are You?
Our first hack day was by all accounts a success. So much so that we’ve decided to make them a regular thing. We had our second hack day this past Friday. We mostly focused on pet peeves, refactors, and general performance improvements, but we also tackled a few really cool projects.
“Here I am!”
One idea that’s been rattling around the company is giving roadtrippers a way to show exactly where they are on their route, so we decided to put together a proof of concept.
Our iPhone app already asks for permission to access a user’s location. We periodically check their location and store it in our database, so it was a simple matter of passing their most recent coordinates to our web client and displaying their avatar along the trip route.
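Conceptually, the server-side piece is tiny: pick the newest stored ping for a user and hand its coordinates to the client. Here's a minimal sketch in Python (the record shape and field names are illustrative, not our actual schema):

```python
from datetime import datetime, timezone

def latest_location(records):
    """Return the most recently recorded (lat, lon) from a user's
    stored location pings, or None if there are none yet.
    Each record is a dict with illustrative keys: lat, lon, recorded_at."""
    if not records:
        return None
    newest = max(records, key=lambda r: r["recorded_at"])
    return (newest["lat"], newest["lon"])

# Two sample pings; the second is newer, so it wins.
pings = [
    {"lat": 39.74, "lon": -104.99,
     "recorded_at": datetime(2014, 5, 2, 9, 0, tzinfo=timezone.utc)},
    {"lat": 39.07, "lon": -108.55,
     "recorded_at": datetime(2014, 5, 2, 15, 30, tzinfo=timezone.utc)},
]
print(latest_location(pings))  # → (39.07, -108.55)
```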
We’re thinking this can be a great way for touring bands, sports teams, and other brands that spend time on the road to let fans know where they are at any given time.
We have a lot of places in our database. Every place has a photo gallery. This means we have a lot of place photos. Many of these images are provided by our users, which, so far (/me knocks on wood), has gone pretty well. We’ve had very few problems with offensive images or trolling, but the occasional “bad” photo does sneak in. What do I mean by “bad”? For the task at hand, we decided on:
- Images generated in Photoshop or some other tool; these are often promotional in one way or another
- Blurry or washed-out photos
Keeping in mind that this is A Big Problem we won’t solve in one day, we dove in with four tactics in mind:
- Using OCR to identify images with excessive text, which are often promotional in nature
- Facial recognition in order to try to identify promotional images via large face-to-image-size ratios
- EXIF data in order to identify generated images
- Blob detection, which could help us identify both types of “bad” photos
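For the EXIF tactic, the intuition is that photos straight off a camera almost always carry Make/Model tags, while editor exports often drop them or stamp the Software tag with the tool’s name. A rough sketch of that heuristic (in practice you’d read the tags with a library like Pillow; the rules here are illustrative, not our production checks):

```python
def looks_generated(exif):
    """Flag an image whose EXIF tags suggest it came out of an editing
    tool rather than a camera. `exif` maps standard EXIF tag names to
    values; the specific rules below are illustrative."""
    software = str(exif.get("Software", "")).lower()
    if "photoshop" in software:
        return True
    # Camera originals almost always record the device that shot them.
    return "Make" not in exif and "Model" not in exif

camera_shot = {"Make": "Apple", "Model": "iPhone 5s",
               "DateTimeOriginal": "2014:05:02 09:00:00"}
editor_export = {"Software": "Adobe Photoshop CS6", "ColorSpace": 1}
print(looks_generated(camera_shot), looks_generated(editor_export))  # → False True
```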
We’ll dig into the results in a later blog post, but the TL;DR is that blob detection looks to be the most reliable of the four tactics: it consistently identified large monochromatic patches, and in the photos we tested against, those patches almost always marked a bad image. Stay tuned for a more in-depth entry on the experiment!
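To give a flavor of what the blob-detection pass is looking for, here’s a much-simplified stand-in: measure how much of an image is a single flat color. A real pipeline would quantize colors and use a proper blob detector (OpenCV’s, for instance), but large monochromatic patches show up even in this naive version (the threshold is illustrative):

```python
from collections import Counter

def monochrome_fraction(pixels):
    """Fraction of pixels that share the single most common color.
    `pixels` is a flat list of (r, g, b) tuples."""
    (_, count), = Counter(pixels).most_common(1)
    return count / len(pixels)

def is_suspect(pixels, threshold=0.6):
    # Flag images dominated by one color: likely a generated graphic,
    # a washed-out shot, or a big flat background rather than a real
    # place photo. The 0.6 cutoff is an illustrative guess.
    return monochrome_fraction(pixels) >= threshold

flat_graphic = [(255, 255, 255)] * 90 + [(0, 0, 0)] * 10  # mostly white canvas
real_photo = [(r, 80, 40) for r in range(100)]            # varied colors
print(is_suspect(flat_graphic), is_suspect(real_photo))  # → True False
```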
In the “Why Didn’t We Do This Sooner” category, we also whipped up a quick way for our team members to flag POIs for a handful of reasons, such as offensive content or bad classification. It’s a dead-simple two-click process that will help us quickly flag bad place data so we can review it internally. If it works well, we may expose it to our users to get their help in making our place data as good as it can be.