Introducing Autocomplete to Improve Petition Targeting

Following our recent launch of the Decision Makers feature, decision makers are better equipped than ever to respond to petitions and to work with petition creators to improve their circumstances. That makes helping people target their petitions at the appropriate decision makers more important than ever.

Unfortunately, targeting decision makers is difficult because many leaders, politicians, and other decision makers share names with other people. A 2009 RapLeaf study found that there were over 10,000 people on Facebook named Juan Carlos, including the current King of Spain, two Argentinian politicians, a Pennsylvania restaurant owner, and a former Colombian minister. We need a system to identify which Juan Carlos we intend to petition if we want a response.

Our solution:  Autocomplete.  Based on what our user has typed, we display a list of the most likely decision makers they might want to petition.

Showing a list of relevant decision makers, whose emails we have confirmed in our database, accomplishes two goals:

  1. A better user experience.  The user of our platform sees visual reassurance and feels more confident about their petition reaching the decision maker.

  2. The petition is delivered to the correct decision maker, who will receive notifications about petition activity and is more likely to respond. The petition has a better chance of achieving victory.

Our autocomplete solution:

  • We created a searchable index of our decision makers table using a full-text search engine, Sphinx, and its Rails gem, Thinking Sphinx.  We tell Sphinx which data fields to index, which other fields to include in the results, and how to weight matches.  For example, publicly visible decision makers appear earlier in the results than non-public ones.  In the future, we plan to prioritize local decision makers based on the user’s locale.

  • The user starts typing in the “Whom do you want to petition?” field, which triggers an AJAX request to the Node.js server for autocomplete suggestions.  In the future, we plan to cache the results of previous queries in the browser to reduce the load on our servers.

  • The Node.js server then requests search results from Sphinx (using the Node sphinxapi module) and sends the JSON list back to the web browser (see the sketch after this list).

  • The browser displays the appropriate choices, along with hidden fields that precisely identify that petition target in our database.

  • We repeat this with every keystroke the user types to adjust their search.

  • If the user wants to petition somebody who is not in the list, the user can still add her or him as a new decision maker, whom we will store in our database.
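
To make the middle steps concrete, here is a minimal sketch of what the Node.js suggestion endpoint could look like. It assumes an Express app and the npm sphinxapi module; the route, index contents, and field names are illustrative rather than lifted from our production code.

var express = require('express')
var SphinxClient = require('sphinxapi')

var app = express()
var sphinx = new SphinxClient()
sphinx.SetServer('localhost', 9312) // searchd host/port are deployment-specific

app.get('/autocomplete', function (req, res, next) {
  var prefix = (req.query.q || '').trim()
  if (!prefix) { return res.json([]) }

  // Ask Sphinx for the best matches from the decision makers index
  sphinx.Query(prefix, function (err, result) {
    if (err) { return next(err) }

    // Return only what the browser needs: enough to render the suggestion
    // and to fill the hidden fields identifying the chosen decision maker
    var suggestions = (result.matches || []).map(function (match) {
      return { id: match.id, name: match.attrs.name }
    })
    res.json(suggestions)
  })
})

app.listen(3000)

The browser simply issues a GET to this route on each keystroke and renders the returned JSON as the suggestion list.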


We expect the autocomplete feature to increase the percentage of petitions that reach their intended decision makers, to increase the number of decision maker responses to petitions, and ultimately to help drive more victories.

Michael “Marick” Arick and Letitia “Tish” Lew
Web Engineers


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology, check out our jobs page or email us directly: marick at change dot org or tish at change dot org

Three years of Rails at Change

I’m a bit sentimental. Perhaps that’s not the best way to be in an industry where just about everything is ethereal and temporary. Over the years I figure that most of what I’ve created has been edited, refactored, deprecated, or outright binned when something different or better came along.

Even so, the process of creation is what I most enjoy, and looking back on my work is more about the evolution of a codebase rather than being able to look at HEAD and stake an “I built that” claim. So when that sentimental mood struck me recently, I created a Gource visualization of our main Rails application, spanning my time at Change.org (a little over 3 years).

Gource generates some pretty beautiful stuff and it’s fun to watch. As a participant, though, it also dredges up all sorts of memories. I see our team grow from 5 to 25, our jumps from Rails 2.3 all the way through 3.2, the pivot away from being a blog platform, the brutal initial i18n effort, and a surprising amount of effort from “YOUR NAME HERE” (which I have to assume is the software equivalent of underpants gnomes).

But mostly I see that evolutionary development process happening. The super-cool thing I wrote one day became the annoying legacy code later. I still take pride in every curly brace and variable name today, but I also find it useful to remember that, viewed longitudinally like this, it probably won’t matter. What matters is that I’m participating, solving problems, and pressing forward. Having a big ego about that piece of code today is a little silly because, in a functioning engineering organization, it won’t be “your” code for long at all.

Kyle VanderBeek
Principal Engineer

How to conduct user interviews

You probably already recognize the value of user research—testing your product with users can confirm suspicions, illuminate difficult interactions, and even spur new product ideas. (Not convinced? Check out Erika Hall’s new book, Just Enough Research, to catch up.) Even so, the process of conducting user interviews can seem dauntingly official, expensive, or time-consuming. It doesn’t need to be any of these. Follow these six basic steps to get started.

Identify subjects.
First, consider your primary audience. Are you targeting a particular age group, geographic location, or level of education? Focus your search as necessary. Then:

  • Ask people you know. Your housemate, neighbor, or coworker can offer surprisingly insightful feedback and you can’t beat the convenience. At Change.org, we occasionally offer unfilled interview slots to internal employees on other teams. They gain insight into our product decisions, and we learn about their needs.
  • Head to coffee shops. If your interview is casual, ask people sitting alone at coffee shops. Most folks are eager to share their opinions, especially when offered a gift certificate (see step 2). At Change.org, we use this tactic frequently since it requires little planning but still gauges response.
  • Reach out to existing users. If you already have a user base, contact those people directly. Explain the value of their reactions, invite them to your office, and provide flexible time slots. When we want feedback from a particular group, like petition creators, we email our users directly.

Incentivize.
While it’s not required, we like offering a simple incentive, like a gift card. Take a friend to lunch, support the coffee shop in which you’re interviewing, or consider a product-related discount. Our San Francisco office is near a Whole Foods, so we treat visiting users to gift certificates there.

Set up and record.
We limit the group size to three Change.org employees plus the interviewee. Two-on-one is a better balance, but it’s often helpful to have engineers or product managers attend. Plan your questions in advance, but be willing to meander from the script. Designate a primary note-taker so at least one person can make eye contact. A video and screencast recording of the interview can be invaluable to share with your wider team. We like Screenflow and Silverback for this purpose.


Our design team captured this screencast of Morgane, an internal marketing employee, walking through a prototype.

Warm up.
To start, introduce everyone in the interview. Offer your guest a beverage. Remind your interviewee that all feedback, especially critical, is helpful. Encourage him/her to verbalize every thought. If you’re concerned about confidentiality, sign a nondisclosure agreement. (Don’t get stuck on NDAs. Use the Shake app to create one instantly.) Begin your interview with general questions: “What do you do for a living?” “What kind of mobile device do you have?” “How many hours per day do you spend online?”

Get to the meat of it.
Whether you’re asking general questions, showing mock-ups, or walking through a feature, resist the temptation to talk about yourself or your product. This is not a conversation—you should not explain how your product works or offer solutions if your user gets stuck. Simply acknowledge that you’re listening. To wrap up the interview, circle back and answer any questions you deflected earlier. Probe for any additional feedback. Thank them for their time.

Dissect your results.
Supplement your notes from the interview while it’s fresh, highlighting the takeaways. If you recorded the session, bookmark key moments for easy reference. Share your notes and recording with your team. While one session is helpful, user research is most valuable when trends emerge. Continue collecting interviews and referencing notes. Adjust your product as necessary; rinse and repeat.

Lauren P. Adams
@laurenpadams
Product Designer


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology, check out our jobs page or email us directly: lauren at change dot org

The widget that was

I remember the olden days of change.org, back when we were a four-person engineering team at a company that hadn’t yet figured out whether it wanted to be a blog or a platform for petitions.

Back in those days, we hired an outside contracting company to deliver our empowerment with a technology called “Flash” (you may have heard of it). The result was an embeddable widget that people could put on a Blogger site or AngelFire homepage. At the time, it was a really powerful and engaging experience that helped us bootstrap the Change.org community by being featured on everything from AlterNet to the tiniest mommy-blog.

Since then, we’ve built Change.org into the world’s largest platform for change, our team has grown dramatically and we constantly challenge ourselves to focus on the most effective features for our community. Knowing when to throw away old things is essential and is actually one of my favorite parts of engineering and company evolution.

That time has come for the Change.org embeddable widget (a.k.a. “e.change.org”), which we will disable in January 2014. It no longer works on modern browsers, we heard from our users that it was slow, and it creates a tiny sub-fraction of the traffic on our site. So it’s time to remove the embed code from your pages, and raise a glass to the widget-that-was, which will soon be relegated to the archive known as our git commit history.

We are always looking for better ways for people to embed our content and share Change.org campaigns (have you seen our API?). If you have ideas for what you’d like to see in embeddable tools to help your movement, leave a comment or send us a tweet!

Kyle VanderBeek
Principal Engineer

Mobile Developer Day with Facebook and Parse

Move fast and break things, but don’t let them get to production

We had the pleasure of attending Mobile Developer Day at Facebook’s Menlo Park headquarters. Parse and its tools were the main feature, along with a hackathon, but the highlights for us were the great talks on design and product, including an impromptu visit from Mark Zuckerberg. We thought it would be great to walk through some of the things we learned, with excerpts from the talks.

Why is mobile so important?

Mobile devices have become personal items that we carry with us everywhere we go. And when I say everywhere, I mean EVERYWHERE. Surveys suggest that 54% of users use smartphones in bed, 30% use them at the dinner table, and 39% use them in the bathroom. Adoption is growing at an exponential rate, and the devices are ubiquitous.

Why does mobile fail, and how can we make it better?

Most companies have very small mobile teams - generally one or two people or, worse yet, none. It’s understandable that resource-strapped startups haven’t allocated much to mobile, treating it as a secondary market.

But companies should realize by now that this is no longer the case. Taken together, mobile and tablet usage easily surpasses PC usage.

MVP or MDP

The desire to release quickly has always led to discussions about the MVP (Minimum Viable Product), but it’s worth asking instead what the MDP (Minimum Desirable Product) is: the product that pushes things forward and executes on the core features.

To achieve great design and desirability, allow space for creativity and let your design be optimistic and ambitious. Let designers craft the best-case scenarios, then critique the designs and align the scope.

Use the space to engage the user completely; your product should create the desired emotion in the user.

Help users narrow their choices, but do not force your choices on them. Since this often involves an ad or a conversion point, embed these in the regular flow and take users through an experience that ends where they need to be.

Release

Some companies have a vast user base and can leverage beta users to test new products and get feedback. For startups this is harder, but that should not mean you test in production. Release internally first, then to a small user base, and use A/B testing; first impressions mean a lot, so crafting a great experience is vital.

Release regularly; there is no magic number. With websites, this could be twice a day, but with apps this could be every 4 weeks. Continuous deployment and scheduled release cycles can help you achieve this.

With regular releases it is easy to get a small set of features out at a time and observe their effects, not just the desired ones but also the side effects.

Use the data to understand your user, but don’t drive your product entirely based on the data.

Lastly, take intuition, user feedback, and, most importantly, the interests of the people building the product into consideration when developing your product.

Praneeta Mhatre
http://github.com/Praneeta
Web Engineer


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology, check out our jobs page or email us directly: praneeta at change dot org

Promises and Error Handling

We’ve standardized on using promises to manage most of the asynchronous interactions in our JavaScript codebase. In addition to really enjoying the extra expressiveness and aggregation possibilities offered by promises, we’re benefitting greatly from richer error handling. We’ve had some surprises though, and this post explains some things which caught us out and some guidelines we’re following to avoid similar situations in future.

We use the excellent when.js library; however, this post should be relevant no matter what implementation you’re using.


1. Reject what you would normally throw

Rejecting a promise should be synonymous with throwing an exception in synchronous code, according to Domenic Denicola. We’ve previously built some promise interfaces which can reject with no value, or with a string error message. Rejecting with a string should be discouraged for the very same reasons as throwing strings. An empty rejection is the async equivalent of throw undefined, which (hopefully) nobody would consider doing.

These practices cause us additional problems with the likes of Express and Mocha since they expect their next and done callbacks to be invoked with an Error instance to trigger the failure flow, otherwise they consider it a successful operation. We, quite reasonably, are in the habit of chaining promise rejections straight onto these (e.g. .then(onResolve, next) or .catch(next)). If the promise rejects with no arguments then it won’t signal a failure when used like this! 

Guideline 1: Always reject promises with an Error instance. Do not reject with no arguments. Do not reject with non-Error objects, or primitive values.

It’s easy to feel like rejecting a promise is less ‘severe’ than throwing an exception, and this impulse can lead to promises being rejected where normally you would not throw an exception (e.g. validation checks). If the above guideline is difficult to follow in a specific case because you can’t think of an appropriate Error type to reject with, then perhaps it shouldn’t be a rejection after all.
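
To make Guideline 1 concrete, here is a small sketch using when.js (findUser and lookupUserById are hypothetical names, not from our codebase):

var when = require('when')

function findUser(id) {
  if (!id) {
    // Good: reject with an Error instance, just as you would throw one in sync code
    return when.reject(new Error('findUser requires an id'))

    // Bad: a string carries no stack trace, and an empty rejection is the async
    // equivalent of `throw undefined` - a chained .catch(next) or .catch(done)
    // may never signal the failure at all
    //   return when.reject('findUser requires an id')
    //   return when.reject()
  }
  return lookupUserById(id) // assumed to return a promise
}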

2. Understand how .catch works

We have run into trouble with catch (a.k.a. otherwise/fail for non-ES5 browsers). The documentation for it might lead you to think that somePromise.then(onResolve).catch(onReject) is equivalent to somePromise.then(onResolve, onReject). This appears true at first glance:

function onResolve(result) { console.log('OK', result) }
function onReject(err) { console.log('NOT OK', err.stack) }

// This:
somePromise.then(onResolve, onReject)

// Is equivalent to:
somePromise.then(onResolve).catch(onReject)

They differ, however, in how they respond to errors thrown in callbacks. If onResolve throws and we are using the .then(onResolve, onReject) form, onReject is not invoked, and the outer promise is rejected.

var outer = resolvedPromise.then(function onResolve(result) {
  throw new Error('this is an error')
}, function onReject(err) {
  // Never gets invoked
})

// outer is rejected with the 'this is an error' error

If onResolve throws and we are using the .then(onResolve).catch(onReject) form, onReject is invoked, and the outer promise is resolved.

var outer = resolvedPromise.then(function onResolve(result) {
  throw new Error('this is an error')
}).catch(function onReject(err) {
  console.log('FAILED', err)
  // => FAILED [Error: this is an error]
})

// outer is resolved

So the two forms treat a throwing onResolve differently: with .then(onResolve, onReject) the error rejects the outer promise, while with .then(onResolve).catch(onReject) it is routed to onReject, and the outer promise only rejects if onReject itself throws.

Guideline 2: Anticipate failures in your handlers. Consider whether your rejection handler should be invoked by failures in the resolution handler, or if there should be different behavior.

3. Use .finally for cleanup

.finally (.ensure for non-ES5 environments) allows you to perform an action after a promise completes, without modifying the returned promise (unless the finally handler throws).

var outer = resolvedPromise.then(function onResolve(result) {
  throw new Error('this is an error')
}).finally(function cleanup() {
  actionButton.enabled = true
})

// outer is rejected with the first Error, as if the finally handler wasn't even there

var outer = resolvedPromise.then(function onResolve(result) {
  console.log('resolved')
}).finally(function cleanup() {
  throw new Error('failed during cleanup')
})

// outer is rejected with the 'failed during cleanup' Error

As hinted at in the example, this could be especially useful on the browser for re-enabling action buttons once an operation completes, regardless of the outcome.

Guideline 3: Use .finally for cleanup.

4. Terminating the promise chain

Guideline 4: Either return your promise to someone else, or if the chain ends with you, call done to terminate it. (from the Q docs)

This has bitten us many times when using promises in Express handlers, and resulted in hard-to-debug hung requests. This handler will never respond to the user:

function handler(req, res, next) {
  service.doSomething().then(function onResolve(result) {
    throw new Error('this is an error')
    res.json({result: 'ok'})
  }, function onReject(err) {
    res.json({err: err.message}) // Never gets invoked
  })
}

Changing the then in the above code to done means that there will be no outer promise returned from this, and the error will result in an asynchronous uncaught exception, which will bring down the Node process. In theory this makes it unlikely that any such problem would make it into production, given how loudly and clearly it would fail during development and testing.
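
For reference, here is a sketch of that done variant (using the same hypothetical service.doSomething as above):

function handler(req, res, next) {
  service.doSomething().done(function onResolve(result) {
    throw new Error('this is an error') // now surfaces as an uncaught exception
    res.json({result: 'ok'})
  }, function onReject(err) {
    res.json({err: err.message}) // Never gets invoked
  })
}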

An alternative is to add a final .catch(next) to the promise chain to ensure that any error thrown in either handler will invoke the Express error handler:

function handler(req, res, next) {
  service.doSomething().then(function onResolve(result) {
    throw new Error('this is an error')
    res.json({result: 'ok'})
  }, function onReject(err) {
    res.json({err: err.message}) // Never gets invoked
  })
  .catch(next) // next is invoked with the 'this is an error' error
}

This goes against the above guideline, since we are creating a new promise rather than ending the promise chain. You could argue that we trust next not to throw and as such there is no chance for the outer promise to reject. In addition there is no possibility of a hanging request or an uncaught failure (unless you throw undefined in either handler!).

If the idea of done bringing down a Node process makes you uncomfortable enough to ignore this ‘golden rule’ of promise error-handling, then perhaps this is a good option. The important thing is that errors do not hang requests, or get quietly transformed into successes.

Summary

We’ve seen 4 simple guidelines which should feel familiar if you’ve dealt with synchronous exception handling best practices. In fact they could almost be generalized to apply to both sync and async exception handling:

  1. Throw meaningful errors (and a string is not an error)
  2. Be aware of downstream effects of errors on the flow of execution
  3. Clean up as soon as the error occurs
  4. Bubble exceptions up to a top level handler

This is a powerful feature of promises - letting us deal with errors in a style that is more natural to us, as long as we are actually mindful of this and remember to follow similar rules.

Jon Merrifield
http://github.com/jmerrifield
Engineer


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology, check out our jobs page or email us directly: jmerrifield at change dot org

Meet Dian, our new Product Manager!



Dian Andamari Rosanti Tjandrawinata* just joined Change.org as a Product Manager. Previously, Dian was at Flipboard, where she worked on both mobile and web products (and a ton of localization).

The Bay Area has been her home for the last 8 years, but Dian was born and raised in Jakarta, Indonesia. This means she LOVES spicy food and hot, humid weather. She also likes Southern food, baking, reading, making silly faces, singing her sentences, and long, quiet road trips.

Most of all, Dian loves building communities and bringing together the unexpected. She often finds herself using her PM skills to organize the chaos of a San Francisco warehouse, where she and 5 housemates are developing their home into a work/live/fun community space. When the house isn’t overrun with friends and strangers, you’ll find Dian lounging on a couch with @StarskyTheCat.

* Pronounced like “Dionne”. Also answers to “D”. Sometimes answers to and always orders coffee with “Diane”.

Un-credible. Well, well. In-believable. Tucker in the house…


Bob Tucker is our new Infrastructure Engineer. Bob joins us from BazaarVoice/PowerReviews, where he was a Senior DevOps Engineer. He lives in Oakland with his dog Hiro, an Asteroids machine, and an elderly Jeepster, all next door to an urban digerati-hippie co-op complete with bees, turkeys, chickens, and ducks.

Free time is spent on dog-friendly hikes, searching for and tinkering with anachronistic machines and clothing, pointing out how someone’s favorite band is derivative of a far cooler one from the recent past, and arguing that socks with sandals should carry mandatory minimum sentencing guidelines. Bob was once a bartender in Southern Ohio but apparently came to very different conclusions on how the world should be run than his more famous, thoroughly tanned peer.

Ernest Goes to Change


Ernest hails from a sleepy little town just south of SF called Los Angeles. Previously, Ernest was awesome-izing the moviegoing experience as a designer at Fandango. Like Chelsea, Ernest made his way to Change.org via Designer Fund. He was really drawn to Change.org’s mission, its ridiculously good-looking design team, and the opportunity to take the mobile experience to the next level.

On nights and weekends Ernest makes every attempt to stay active -  dancing, hiking, biking, and swimming his way to a healthy cardiovascular system.

Should you ever feel the need to bribe Ernest, he’s partial to well-crafted Belgian ales, fresh Thai coconuts, and big hunks o’ steak (medium rare, of course).

Test Driven Elephants at Data Week 2013

Last week I gave a talk at Data Week, held here in San Francisco at Fort Mason: “ETL Validation with Cascading on Elastic MapReduce” (hey, I’m an engineer, not a copywriter!). It focused on how Cascading provides a useful abstraction atop Hadoop that enables us to apply Test Driven Development in building and, more importantly, maintaining large, complex data transformations.

The basic idea is that Cascading abstracts complex, multi-step map/reduce jobs into pipes (the individual units that actually do the processing) and taps (which connect data to your pipe assemblies).  This lends itself to a practical TDD approach where you build up larger applications from individually isolated and tested components.

Developers from the agile world will already be familiar with the productivity boost that comes from being in “the flow” of TDD’s red/green/refactor cycle.  But if it takes 90+ minutes to get feedback from your Hadoop job, your cycle feels more like a crawl. Additionally, requiring a local or network-accessible Hadoop cluster in order to test your latest changes makes integrating with 3rd-party Continuous Integration (CI) services a lot trickier. By providing a local platform runner which can run both unit and integration tests, Cascading shortens your cycle time to a much more manageable level, and makes integration with 3rd-party CI services as easy as pushing code.

Additionally, by using Gradle to handle testing and building our applications, we get Continuous Deployment in the Hadoop space. This may sound a bit frightening until you get used to the peace-of-mind afforded by good test-driven design. We integrate with CircleCI which allows us to have after-test hooks. A green build on master executes a Gradle command to build a jar for Hadoop and upload it to S3. The next time our ETL pipeline runs, the data verification step will use the new jar when initiating a job on Elastic MapReduce.

We’re continuing to use Cascading here at Change.org for the safety and ease of use it provides in developing and maintaining complex Hadoop jobs over our increasingly large production data set. If problems like this and others in the distributed computing/applied data science space get you excited, we’d love to talk!  And come see us at Amazon Web Services’ re:Invent, where I’ll be presenting on how we built an automated, machine-learning-driven targeting application for email campaigning using aws-swf, our lightweight Ruby framework for distributed applications on Amazon SimpleWorkflow. We’d love to meet you!

Vijay Ramesh
Software Engineer, Data Science


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology, check out our jobs page or email me directly: vijay at change dot org