User Surveys at Change.org

We use online surveys on our website to learn about our users and their goals, identify areas for improvement, and measure user satisfaction. Surveys also let us test content, information architecture, and visual and interaction design. However, surveys are not always the best tool. Sometimes the data (meaning feedback) can be obtained through other resources and approaches, such as interviews (formal or informal), census figures, or observation. We prefer to use a survey when we need a quick and efficient way to get information, when we want to reach a large number of people and get statistically valid data about them, and when this information isn’t available through other sources.

There are 3 types of surveys:

1. Case study surveys

  • provide only specific information about the group

  • collect information only from a part of a group

2. Sampled surveys

  • ask a sample portion of a group to answer the questions

  • the results for the sample reflect the results of the entire group

3. Census surveys

  • go to every member of the group

  • give the most accurate information about the group

  • are not practical for large groups

We don’t conduct census surveys at Change.org. The most common and practical approach for us is running a sample survey. Sometimes we also conduct a case study survey, but it depends on the research goal and audience.  

The goal and audience

Defining the audience and the goal is one of the most important stages: it determines the survey’s structure and form. First we consider the purpose of the research, and then the target audience.

Goal

Focus on the survey goal first: what are the major decision points or areas of uncertainty in the usage of the product? Focus on those areas and clearly define what you hope to discover. Inform or involve the stakeholders in the development of the survey and set a deadline.

Audience

The next step is clarifying who has the answers to your questions. Which specific group? The audience defines your survey type. The most complicated stage in conducting a survey is compiling your sampling frame.

The sample (or sampling frame) is the group of people selected to be in our research. Choosing its size is hard: a sample should be large enough to represent the overall group, and proportional to it. (Statistics agencies sometimes use a sample size calculator to figure out how big the sample should be.) A general rule is that the larger the sample size, the more accurate a reflection of the whole it will be.
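
For illustration, here is a minimal sketch of such a calculator in JavaScript (a back-of-the-envelope version using the standard formula for a proportion at 95% confidence, not any agency’s actual tool):

function sampleSize(population, marginOfError) {
  var z = 1.96 // z-score for 95% confidence
  var p = 0.5  // worst-case assumption about the response proportion
  var n = (z * z * p * (1 - p)) / (marginOfError * marginOfError)
  // Finite-population correction: smaller groups need proportionally fewer responses
  return Math.ceil(n / (1 + (n - 1) / population))
}

console.log(sampleSize(100000, 0.05)) // => 383
console.log(sampleSize(500, 0.05))    // => 218

Notice how slowly the required sample grows once the group gets large; that is what makes a sample survey practical for us.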

Caveat: biased and self-selected responders should be considered as a separate group. Otherwise, the survey can cause more harm than good.

And, don’t forget about margin of error and standard deviation! In analyzing the results, we should consider the margin of error in a sample. When in doubt, go find yourself a statistics nerd to help talk you through these details.

The margin of error describes a confidence interval. When we conduct a survey, we’re making an assumption about what the larger population thinks. If a survey has a margin of error of 5%, it means that if you ran that questionnaire 100 times, asking a different sample of people each time, the overall percentage of people who responded the same way would remain within 5% of the original result.
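
As a rough illustration (the textbook formula for a yes/no question, not our production analysis), the margin of error shrinks as the sample grows:

function marginOfError(p, n) {
  var z = 1.96 // z-score for 95% confidence
  return z * Math.sqrt(p * (1 - p) / n)
}

console.log(marginOfError(0.5, 100)) // ~0.098: about a 10% margin of error
console.log(marginOfError(0.5, 400)) // ~0.049: about a 5% margin of error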


Standard deviation is the amount of variation or dispersion from the average (or how far from the normal). A low SD indicates that the data points tend to be very close to the mean. A high SD indicates that the data points are spread out over a large range of values.
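
Here’s a quick sketch of how you might compute it over, say, answers on a 1-5 rating scale:

function standardDeviation(values) {
  var mean = values.reduce(function(a, b) { return a + b }, 0) / values.length
  var variance = values.reduce(function(sum, v) {
    return sum + (v - mean) * (v - mean)
  }, 0) / values.length
  return Math.sqrt(variance)
}

console.log(standardDeviation([4, 4, 5, 4, 4])) // 0.4: answers cluster near the mean
console.log(standardDeviation([1, 5, 1, 5, 3])) // ~1.79: answers are spread out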

To summarize: defining the audience and the right user segmentation for the survey can be complicated, because you have to keep a lot of factors in mind to make your sample correct and reliable. The main rule to remember is that asking more people in one survey helps reduce the margin of error, and analyzing multiple completed forms gives a more accurate view of what people really think.

Gathering questions and compiling the survey

Before gathering questions and compiling the survey, remember an important rule of human nature: the more questions you ask, the fewer of the respondents who start the survey will complete the full questionnaire. Research shows that surveys work best with fewer than 35 questions (source). In our experience, the best response rates came from surveys with fewer than 12-16 questions. However, the most important criterion is not the number of questions but the total time users spend filling out the survey, which should not exceed 2-3 minutes.

There are many guidelines on how to formulate questions and how to lay out surveys to make them “respondent friendly”. The answers from user surveys must be relevant to the issues that are important. Therefore, if you can get the data from other sources (internal or external), avoid asking overlapping questions in the survey. Create a survey that asks the right questions to meet your research objective. Make the form as short, simple, and clear as possible. Follow the “4 Be” rule: Be Brief - Be Objective - Be Simple - Be Specific.

No matter the survey’s main purpose and issue areas, in every survey we want to cover the User Experience Honeycomb (by Peter Morville), which covers user experience (UX) basics. In order to provide a valuable and useful UX, the content on our website must be: useful, usable, desirable, findable, accessible, and credible. Therefore, at the bottom of each survey we ask a block of questions like How clear are the guides and instructions on our website? and How difficult is it to navigate?

We also try to follow a few generally accepted rules:

  • Use closed-ended questions (yes/no, multiple choice, rating scale, etc.) rather than open-ended ones (comments and essays). However, keep a few open questions. Closed-ended questions give you a better response rate, while open-ended ones provide more valuable feedback. So you have to juggle a bit here.

  • Consider what questions to use and when it is appropriate to use them. The sequence of questions should create a certain flow through the survey. I prefer to ask interesting questions at the beginning of the survey to catch the respondent’s attention; according to our observations, if a user starts to answer the survey, he/she will most likely make it to the end, so leading with interesting questions may help stimulate interest. The most answered (highest response rate) or most interesting questions on Change.org include: Do you use your real name?, Why did you start your petition?, Would you create another petition in the future?, Would you use an “I don’t support” button on some campaigns?

  • Develop trust between yourself and the respondents. Maintain a neutral tone in all your questions. Notify users how much time it will take to complete the form and how you’ll use the results. Stay away from assumptions, personal statements, and complicated question structures.

  • Avoid leading questions, loaded questions, jargon or specific internal terminology.

  • The common practice is to place demographic and/or other sensitive questions at the end of the survey; if they come at the beginning, participants may opt out early. However, we tested a few surveys with different placements of the demographic block, and both the block and the survey overall performed much better with it at the beginning.

  • Group similar questions together or in the same area of the survey. At Change.org we create different pages where we post different blocks of questions. The result is a demographic block, a close-your-campaign block, a start-your-petition block, etc. We name them accordingly: “Tell us about yourself”, “Your petition on Change.org”, “Please, share your experience with us”. When users see the title and a new page of questions, they can prepare themselves for a certain theme or topic.

  • Ask if a respondent is willing to answer more questions or provide additional feedback.

There are a lot of rules and techniques for compiling your survey in the most efficient way; I’ve summarized only a few important ones.

Finalizing and sending the survey

Customizing the survey design according to our brand and theme makes the survey look professional. The right colors, fonts, and background make the survey visually appealing and user-friendly. Think about how you’re going to distribute the survey:

  • as a weblink shared through email or social media

  • embedding the survey into the website

  • invitation popup asking to take the survey when someone visits a specific page on the website

  • a survey window popup, containing the survey when someone visits a specific page on the website.

Each of the forms above has its own specifics, advantages, and disadvantages. For instance, the survey window popup is the most convenient survey form, but it works best with only a few short questions. The embedded survey form looks very nice, but it might have its own formatting or interactivity limitations.

It is fairly common to offer incentives for completion in order to drive up the participation rate. A lot of companies use the subject line to emphasize the benefit to customers: tempting subjects like “Get your holiday free” or “Win your chance…” instead of “Can you fill out our survey?” improve the response rate. We don’t offer incentives, and prefer to emphasize that the survey is easy to fill out (“it only takes about 3 minutes”). In the absence of incentives, implying that the cost or effort is very low can have a similar effect.

Before sending out the survey, we want to make sure the survey design and settings work as expected, so we test the survey before it goes live. A small sample of test respondents (10% of the sampling frame) can help verify that the survey is working properly. If something goes wrong at this stage, it’s easy to revise and edit the questions and/or the survey design. At this stage we can also start analyzing results and revising or correcting the questions; if there are often-skipped questions, we can rephrase or get rid of them.

The final full send goes out to the whole sampling list, excluding the first test send. At this stage we can’t revise the survey or change its format. Usually, over 70% of results come in within the first few days after the full send. We wait a week before the final and official survey analysis. That being said, some results can come in even a few months later if your survey form hasn’t expired.

Learnings

No matter what tool you choose for conducting the survey, you should be able to share and export results in different formats. First we share results with the stakeholders, and then disseminate them to the rest of the teams. Once we have gathered the data, we can use it to evaluate the UX of the website, recommend improvements, discuss those recommendations, and so on.

In general, we can conduct an online survey at any stage of the development process. We use two main methods of conducting a survey, depending on whether the purpose is long-term or short-term. Once per year we launch a broad, global survey to cover mainly product questions and to ask about general experience. In addition, a few times per quarter we survey a very specific set of users with only a few questions.

One of the most important survey outcomes is establishing a relationship with our users. The next time we conduct a survey, the results should be better, because the best response rate comes from an audience that knows you.

Olga Chernenko

User Advocacy Data Specialist

Introducing Autocomplete to Improve Petition Targeting

Following our recent launch of the Decision Makers feature, decision makers are better equipped than ever to respond and work with petition creators to help improve their circumstances. Helping people target their petitions at the appropriate decision makers has therefore become even more important.

Unfortunately, targeting decision makers is difficult because many leaders, politicians and other decision makers share names with other people. A 2009 RapLeaf study found that there were over 10,000 people on Facebook named Juan Carlos, including the current King of Spain, two Argentinian politicians, a Pennsylvanian restaurant owner and a former Colombian minister. We need a system to identify which Juan Carlos we intend to petition if we want a response.

Our solution:  Autocomplete.  Based on what our user has typed, we display a list of the most likely decision makers they might want to petition.

Showing a list of relevant decision makers, whose emails we have confirmed in our database, accomplishes two goals:

  1. A better user experience.  The user of our platform sees visual reassurance and feels more confident about their petition reaching the decision maker.

  2. The petition is delivered to the correct decision maker, who will receive notifications about petition activity and is more likely to respond. The petition has a better chance of achieving victory.

Our autocomplete solution:

  • We created a searchable index of our decision makers table using a full-text search engine, Sphinx, and its Rails gem, Thinking Sphinx.  We tell Sphinx which data fields to index, which other fields to include in the results, and how to weight the results.  For example, publicly-visible decision makers appear earlier in the results than non-publicly-visible ones.  In the future, we plan to prioritize local decision makers depending on the user’s locale.

  • The user starts typing in the “Whom do you want to petition?” field, which triggers an AJAX request to the Node.js server for autocomplete suggestions.  In the future, we plan to cache the results of previous such queries in the browser’s local cache, to reduce the load this would cause on our servers. 

  • The Node.js server then requests search results from Sphinx (using the Node sphinxapi module), and sends the JSON list back to the web browser. (A minimal sketch of this endpoint follows the list.)

  • The browser displays the appropriate choices, along with hidden fields that precisely identify that petition target in our database.

  • We repeat this with every keystroke the user types to adjust their search.

  • If the user wants to petition somebody who is not in the list, the user can still add her or him as a new decision maker, whom we will store in our database.
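
To make the flow concrete, here is a sketch of that suggestion endpoint (the port, parameter names, and response shape are illustrative assumptions, not our production code):

var http = require('http')
var url = require('url')
var SphinxClient = require('sphinxapi') // the Node sphinxapi module

var sphinx = new SphinxClient()
sphinx.SetServer('localhost', 9312) // assumes searchd is running locally

http.createServer(function(req, res) {
  var partial = url.parse(req.url, true).query.q || ''
  // Ask Sphinx for the best-matching decision makers for the partial name
  sphinx.Query(partial, function(err, result) {
    if (err) {
      res.writeHead(500)
      return res.end()
    }
    // Send the matches back to the browser as a JSON list for the dropdown
    res.writeHead(200, {'Content-Type': 'application/json'})
    res.end(JSON.stringify(result.matches || []))
  })
}).listen(3000)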


We expect the autocomplete feature to increase the percentage of petitions that reach their intended decision makers, to increase the number of decision maker responses to petitions, and ultimately to help drive more victories.

Michael “Marick” Arick and Letitia “Tish” Lew
Web Engineers


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology check out our jobs page or email us directly: marick at change dot org or tish at change dot org 

Three years of Rails at Change

I’m a bit sentimental. Perhaps that’s not the best way to be in an industry where just about everything is ethereal and temporary. Over the years, I figure, most of what I’ve created has been edited, refactored, deprecated or outright binned when something different or better came along.

Even so, the process of creation is what I most enjoy, and looking back on my work is more about the evolution of a codebase rather than being able to look at HEAD and stake an “I built that” claim. So when that sentimental mood struck me recently, I created a Gource visualization of our main Rails application, spanning my time at Change.org (a little over 3 years).

Gource generates some pretty beautiful stuff and it’s fun to watch. As a participant, though, it also dredges up all sorts of memories. I see our team grow from 5 to 25, our jumps from Rails 2.3 all the way through 3.2, the pivot away from being a blog platform, the brutal initial i18n effort, and a surprising amount of effort from “YOUR NAME HERE” (which I have to assume is the software equivalent of underpants gnomes).

But mostly I see that evolutionary development process happening. The super-cool thing I wrote one day became the annoying legacy code later. I still take pride in every curly brace and variable name today, but I also find it useful to remember that, viewed longitudinally like this, it probably won’t matter. What matters is that I’m participating, solving problems, and pressing forward. Having a big ego about that piece of code today is a little silly because, in a functioning engineering organization, it won’t be “your” code for long at all.

Kyle VanderBeek
Principal Engineer

How to conduct user interviews

You probably already recognize the value of user research—testing your product with users can confirm suspicions, illuminate difficult interactions, and even spur new product ideas. (Not convinced? Check out Erika Hall’s new book, Just Enough Research, to catch up.) Even so, the process of conducting user interviews can seem dauntingly official, expensive, or time-consuming. It doesn’t need to be any of these. Follow these six basic steps to get started.

Identify subjects.
First, consider your primary audience. Are you targeting a particular age group, geographic location, or level of education? Focus your search as necessary. Then:

  • Ask people you know. Your housemate, neighbor, or coworker can offer surprisingly insightful feedback and you can’t beat the convenience. At Change.org, we occasionally offer unfilled interview slots to internal employees on other teams. They understand our product decisions and we identify their needs.
  • Head to coffee shops. If your interview is casual, ask people sitting alone at coffee shops. Most folks are eager to share their opinions, especially when offered a gift certificate (see step 2). At Change.org, we use this tactic frequently since it requires little planning but still gauges response.
  • Reach out to existing users. If you already have a user base, contact those people directly. Explain the value of their reactions, invite them to your office, and provide flexible time slots. When we want feedback from a particular group, like petition creators, we email our users directly.

Incentivize.
While it’s not required, we like offering a simple incentive, like a gift card. Take a friend to lunch, support the coffee shop in which you’re interviewing, or consider a product-related discount. Our San Francisco office is near a Whole Foods, so we treat visiting users to gift certificates there.

Set up and record.
We limit the group size to three Change.org employees plus the interviewee. Two-on-one is a better balance, but it’s often helpful to have engineers or product managers attend. Plan your questions in advance, but be willing to meander from the script. Designate a primary note-taker so at least one person can make eye contact. Capturing a video and screencast recording of the interview can be vital to share with your wider team. We like Screenflow and Silverback for this purpose.

Our design team captured this screencast of Morgane, an internal marketing employee, walking through a prototype.

Warm up.
To start, introduce everyone in the interview. Offer your guest a beverage. Remind your interviewee that all feedback, especially critical, is helpful. Encourage him/her to verbalize every thought. If you’re concerned about confidentiality, sign a nondisclosure agreement. (Don’t get stuck on NDAs. Use the Shake app to create one instantly.) Begin your interview with general questions: “What do you do for a living?” “What kind of mobile device do you have?” “How many hours per day do you spend online?”

Get to the meat of it.
Whether you’re asking general questions, showing mock-ups, or walking through a feature, resist the temptation to talk about yourself or your product. This is not a conversation: you should not explain how your product works or offer solutions if your user gets stuck. Simply acknowledge that you’re listening. To wrap up the interview, circle back and explain the answers to questions you deferred earlier. Pull for any additional feedback. Thank them for their time.

Dissect your results.
Supplement your notes from the interview while it’s fresh, highlighting the takeaways. If you recorded the session, bookmark key moments for easy reference. Share your notes and recording with your team. While one session is helpful, user research is most valuable when trends emerge. Continue collecting interviews and referencing notes. Adjust your product as necessary; rinse and repeat.

Lauren P. Adams
@laurenpadams
Product Designer


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology check out our jobs page or email us directly: lauren at change dot org

The widget that was

I remember the olden days of change.org, back when we were a four-person engineering team at a company that hadn’t yet figured out whether it wanted to be a blog or a platform for petitions.

Back in those days, we hired an outside contracting company to deliver our empowerment with a technology called “Flash” (you may have heard of it). The result was an embeddable widget that people could put on a Blogger site or AngelFire homepage. At the time, it was a really powerful and engaging experience that helped us bootstrap the Change.org community by being featured on everything from AlterNet to the tiniest mommy-blog.

Since then, we’ve built Change.org into the world’s largest platform for change, our team has grown dramatically and we constantly challenge ourselves to focus on the most effective features for our community. Knowing when to throw away old things is essential and is actually one of my favorite parts of engineering and company evolution.

That time has come for the Change.org embeddable widget (a.k.a. “e.change.org”), which we will disable in January 2014. It no longer works on modern browsers, we heard from our users that it was slow, and it creates a tiny sub-fraction of the traffic on our site. So it’s time to remove the embed code from your pages, and raise a glass to the widget-that-was, which will soon be relegated to the archive known as our git commit history.

We are always looking for better ways to let people embed our content and share change.org campaigns (have you seen our API?). If you have ideas for the embeddable tools you’d like to see to help your movement, leave a comment or send us a tweet!

Kyle VanderBeek
Principal Engineer

Mobile Developer Day with Facebook and Parse

Move fast and break things but don’t let them get to production

We had the pleasure of attending Mobile Developer Day at Facebook’s Menlo Park headquarters. Parse and its tools were the main feature, along with a hackathon, but the highlights for us were the great talks on design and product, including an impromptu visit from Mark Zuckerberg. We thought it would be great to walk through some of the things we learned, with some excerpts from the talks.

Why is mobile so important?

Mobile devices have become personal items that we carry with us everywhere we go. And when I say everywhere, I mean EVERYWHERE. Surveys suggest 54% of users use their smartphones in bed, 30% use them at the dinner table and 39% use them in the bathroom. Adoption is growing at an exponential rate and the devices are ubiquitous.

Why mobile fails & how can we make it better?

Most companies have very small mobile teams - generally 1 or 2 people, or worse yet, none. It’s understandable for resource-strapped startups not to allocate the necessary resources to mobile, which is considered a secondary market.

But companies should realize by now that this is no longer the case. If you put together the overall usage of mobile & tablets, it easily surpasses PC usage.

MVP or MDP

The desire to release quickly has always led to discussions of what the MVP (Minimum Viable Product) is, but what about the MDP (Minimum Desirable Product), which pushes the product forward and executes on core features?

To achieve great design and desirability, allow space for creativity and let your design be optimistic and ambitious. Let designers craft the best case scenarios then critique the designs and align the scope.

Use the space to engage the user completely as your product should be able to create the desired emotion in the user.

Help users narrow their choices, but do not force your choices on them. Since this often involves an ad or conversion point, embed these in the regular flow and take users through an experience that ends where they need to be.

Release

Some companies have a vast user database and can leverage beta users to test new products and get feedback. For startups it’s difficult, but that should not mean you test in production. If you release internally, then to a small user base, and use A/B testing, you can craft a great experience, which is vital since first impressions mean a lot.

Release regularly; there is no magic number. With websites, this could be twice a day, but with apps this could be every 4 weeks. Continuous deployment and scheduled release cycles can help you achieve this.

With regular releases it is easy to get a small set of features out at a time, and you can observe their effects, not just the desired ones but also the side effects.

Use the data to understand your user, but don’t drive your product entirely based on the data.

Lastly, take intuition, user feedback, and, most importantly, the interests of the people developing the product into consideration when developing your product.

Praneeta Mhatre
http://github.com/Praneeta
Web Engineer


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology check out our jobs page or email us directly: praneeta at change dot org

Promises and Error Handling

We’ve standardized on using promises to manage most of the asynchronous interactions in our JavaScript codebase. In addition to really enjoying the extra expressiveness and aggregation possibilities offered by promises, we’re benefitting greatly from richer error handling. We’ve had some surprises though, and this post explains some things which caught us out and some guidelines we’re following to avoid similar situations in future.

We use the excellent when.js library, but this post should be relevant no matter which implementation you’re using.


1. Reject what you would normally throw

Rejecting a promise should be synonymous with throwing an exception in synchronous code, according to Domenic Denicola. We’ve previously built some promise interfaces which can reject with no value, or with a string error message. Rejecting with a string should be discouraged for the very same reasons as throwing strings. An empty rejection is the async equivalent of throw undefined, which (hopefully) nobody would consider doing.

These practices cause us additional problems with the likes of Express and Mocha since they expect their next and done callbacks to be invoked with an Error instance to trigger the failure flow, otherwise they consider it a successful operation. We, quite reasonably, are in the habit of chaining promise rejections straight onto these (e.g. .then(onResolve, next) or .catch(next)). If the promise rejects with no arguments then it won’t signal a failure when used like this! 

Guideline 1: Always reject promises with an Error instance. Do not reject with no arguments. Do not reject with non-Error objects, or primitive values.

It’s easy to feel like rejecting a promise is less ‘severe’ than throwing an exception; this impulse can lead to promises being rejected where normally you would not throw an exception (e.g. validation checks). If the above guideline is difficult to follow in a specific case because you can’t think of an appropriate Error type to reject with, then perhaps it shouldn’t be a rejection after all.
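
For example (a minimal sketch using when.js):

var when = require('when')

function findUserBad(id) {
  // Bad: the async equivalent of throwing a string. There's no stack trace,
  // and callbacks like Express's next won't recognize it as a failure.
  return when.reject('user not found: ' + id)
}

function findUserGood(id) {
  // Good: reject with exactly what you would normally throw
  return when.reject(new Error('user not found: ' + id))
}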

2. Understand how .catch works

We have run into trouble with catch (a.k.a. otherwise/fail for non-ES5 browsers). The documentation for it might lead you to think that somePromise.then(onResolve).catch(onReject) is equivalent to somePromise.then(onResolve, onReject). This appears true at first glance:

function onResolve(result) { console.log('OK', result) }
function onReject(err) { console.log('NOT OK', err.stack) }

// This:
somePromise.then(onResolve, onReject)

// Is equivalent to:
somePromise.then(onResolve).catch(onReject)

They differ, however, in how they respond to errors thrown in callbacks. If onResolve throws and we are using the .then(onResolve, onReject) form, onReject is not invoked, and the outer promise is rejected.

var outer = resolvedPromise.then(function onResolve(result) {
  throw new Error('this is an error')
}, function onReject(err) {
  // Never gets invoked
})

// outer is rejected with the 'this is an error' error

If onResolve throws and we are using the .then(onResolve).catch(onReject) form, onReject is invoked, and the outer promise is resolved.

var outer = resolvedPromise.then(function onResolve(result) {
  throw new Error('this is an error')
}).catch(function onReject(err) {
  console.log('FAILED', err)
  // => FAILED [Error: this is an error]
})

// outer is resolved

In both forms, throwing inside a handler rejects the promise returned by that call with the thrown error; the difference is whether failures in the resolution handler ever reach your rejection handler.

Guideline 2: Anticipate failures in your handlers. Consider whether your rejection handler should be invoked by failures in the resolution handler, or if there should be different behavior.

3. Use .finally for cleanup

.finally (.ensure for non-ES5 environments) allows you to perform an action after a promise completes, without modifying the returned promise (unless the finally handler throws).

var outer = resolvedPromise.then(function onResolve(result) {
  throw new Error('this is an error')
}).finally(function cleanup() {
  actionButton.enabled = true
})

// outer is rejected with the first Error, as if the finally handler wasn't even there

var outer = resolvedPromise.then(function onResolve(result) {
  console.log('resolved')
}).finally(function cleanup() {
  throw new Error('failed during cleanup')
})

// outer is rejected with the 'failed during cleanup' Error

As hinted at in the example, this could be especially useful on the browser for re-enabling action buttons once an operation completes, regardless of the outcome.

Guideline 3: Use finally for cleanup.

4. Terminating the promise chain

Guideline 4: Either return your promise to someone else, or if the chain ends with you, call done to terminate it. (from the Q docs)

This has bitten us many times when using promises in Express handlers, and resulted in hard-to-debug hung requests. This handler will never respond to the user:

function handler(req, res, next) {
  service.doSomething().then(function onResolve(result) {
    throw new Error('this is an error')
    res.json({result: 'ok'}) // Never reached
  }, function onReject(err) {
    res.json({err: err.message}) // Never gets invoked
  })
}

Changing the then in the above code to done means that there will be no outer promise returned from this, and the error will result in an asynchronous uncaught exception, which will bring down the Node process. In theory this makes it unlikely that any such problem would make it into production, given how loudly and clearly it would fail during development and testing.
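
That version would look like this (a sketch, assuming when.js’s done method):

function handler(req, res, next) {
  service.doSomething().done(function onResolve(result) {
    throw new Error('this is an error')
    res.json({result: 'ok'})
  }, function onReject(err) {
    res.json({err: err.message})
  })
  // done returns nothing; the thrown error is rethrown asynchronously,
  // crashing the process instead of silently hanging the request
}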

An alternative is to add a final .catch(next) to the promise chain to ensure that any error thrown in either handler will invoke the Express error handler:

function handler(req, res, next) {
  service.doSomething().then(function onResolve(result) {
    throw new Error('this is an error')
    res.json({result: 'ok'}) // Never reached
  }, function onReject(err) {
    res.json({err: err.message}) // Never gets invoked
  })
  .catch(next) // next is invoked with the 'this is an error' error
}

This goes against the above guideline, since we are creating a new promise rather than ending the promise chain. You could argue that we trust next not to throw and as such there is no chance for the outer promise to reject. In addition there is no possibility of a hanging request or an uncaught failure (unless you throw undefined in either handler!).

If the idea of done bringing down a Node process makes you uncomfortable enough to ignore this ‘golden rule’ of promise error-handling, then perhaps this is a good option. The important thing is that errors do not hang requests, or get quietly transformed into successes.

Summary

We’ve seen 4 simple guidelines which should feel familiar if you’ve dealt with synchronous exception handling best practices. In fact they could almost be generalized to apply to both sync and async exception handling:

  1. Throw meaningful errors (and a string is not an error)
  2. Be aware of downstream effects of errors on the flow of execution
  3. Clean up as soon as the error occurs
  4. Bubble exceptions up to a top level handler

This is a powerful feature of promises - letting us deal with errors in a style that is more natural to us, as long as we are actually mindful of this, and remember to follow similar rules.

Jon Merrifield
http://github.com/jmerrifield
Engineer


We are hiring! If you want to come work with us and help empower people to Change the world while working on amazing technology check out our jobs page or email us directly: jmerrifield at change dot org

Meet Dian, our new Product Manager!



Dian Andamari Rosanti Tjandrawinata* just joined Change.org as a Product Manager. Previously, Dian was at Flipboard, where she worked on both mobile and web products (and a ton of localization).

The Bay Area has been her home for the last 8 years, but Dian was born and raised in Jakarta, Indonesia. This means she LOVES spicy food and hot, humid weather. She also likes Southern food, baking, reading, making silly faces, singing her sentences, and long, quiet road trips.

Most of all, Dian loves building communities and bringing together the unexpected. She often finds herself using her PM skills to organize the chaos of a San Francisco warehouse, where she and 5 housemates are developing their home into a work/live/fun community space. When the house isn’t overrun with friends and strangers, you’ll find Dian lounging on a couch with @StarskyTheCat.

* Pronounced like “Dionne”. Also answers to “D”. Sometimes answers to and always orders coffee with “Diane”.

Un-credible. Well, well. In-believable. Tucker in the house…


Bob Tucker is our new Infrastructure Engineer. Bob joins us from BazaarVoice/PowerReviews where he was a Senior DevOps Engineer. He lives in Oakland with his dog Hiro, an Asteroids machine and an elderly Jeepster, all next door to an urban digerati-hippie co-op complete with bees, turkeys, chickens and ducks.

Free time is spent on dog-friendly hikes, searching for and tinkering with anachronistic machines and clothing, pointing out how someone’s favorite band is derivative of a far-cooler one from the recent past, and arguing that socks with sandals should carry mandatory minimum sentencing guidelines. Bob was once a bartender in Southern Ohio but apparently came to very different conclusions on how the world should be run than his more famous, thoroughly tanned peer.

Ernest Goes to Change


Ernest hails from a sleepy little town just south of SF called Los Angeles. Previously, Ernest was awesome-izing the moviegoing experience as a designer at Fandango. Like Chelsea, Ernest made his way to Change.org via Designer Fund. He was really drawn to Change.org’s mission, its ridiculously good-looking design team, and the opportunity to take the mobile experience to the next level.

On nights and weekends Ernest makes every attempt to stay active - dancing, hiking, biking, and swimming his way to a healthy cardiovascular system.

Should you ever feel the need to bribe Ernest, he’s partial to well-crafted Belgian ales, fresh Thai coconuts, and big hunks o’ steak (medium rare, of course).