[User Testing Tips] Using Real Data in Mockups

We user researchers love to say, “test often, test always.” It’s rarely “too early” to test a design, whether it’s scribbled on a napkin or a clickable mockup. If you’re looking to test an early-stage mockup, we’ve seen huge success with including your users’ real data in the mockup. This gives you the opportunity to not only test something before it’s fully built out, but also make it easy on the user to complete the usability study.

What counts as real data?

Anything that makes the mockup relevant to the user is real data. It can be as simple as their usernames and icons, or real data you can gather from their current account in your product. For example, take Hootsuite, Buffer, or any other freemium social media tool. If you’re testing a new social publishing flow, you should know your users’ handles for Twitter, Facebook, and LinkedIn, and incorporate them into your mockups. If you don’t, start Googling. That information is readily available, and just waiting to be integrated into your mockups. Then, once a user sees the mockup, they won’t be thrown off trying to make sense of what they’re seeing.

Instead of using dummy content like Lorem Ipsum or “This is a tweet. Link here,” find some tweets that the user has sent out recently. They will understand the context better seeing their own Twitter avatar instead of a fake one or a blank placeholder. If you’re looking at analytics or someone’s profile, find that information to make it as realistic as possible for the user. This gives you the opportunity to test earlier and more often, without having to ship something.

Do I even have time for this?

Yes. A million yesses. One round of user testing (four tests, let’s say) with mockups that contain real data will replace multiple rounds of testing with dummy content. From a user researcher's perspective, it makes the job easier when guiding a user through the tasks we want them to complete in the test. We used to use HubSpot’s data in the mockups, and the users had a difficult time not starting each statement with “Well, if I was HubSpot …” which kept them from thinking about the task realistically. If it’s their own data, they’re not thinking too hard about what they’d do if they were someone else, and are completing tasks in their own element.

From the design perspective, I’ve asked a few of my colleagues and connections to chime in.

Amy Guan, a product designer at HubSpot, is a firm believer in including real data in mockups. Once we recruit users for the testing, we send their information her way and she customizes a version of her design for each and every tester. I asked Amy how this impacted her process and she explained...

“On average, I would say it takes about 20-40 minutes depending on the complexity of the mockups. For instance, we were recently testing a publishing calendar using real data that required finding the individual's scheduled pieces of content and adding it to the mockup. Some people would only have 2 scheduled items while others had content for every day! Other times, it's as simple as putting in their logo and a screenshot of a landing page they published. Both scenarios, regardless of complexity, truly help put the user in a more relaxed state of mind for the test -- ‘Ok, this is my content. I know what it is and all I need to do is analyze how it's being used.’”

I also asked Amy to elaborate about how using real data affects her design process…

“I consider real data as the final step before user testing. I typically create mockups using data that I make up (think lots of Lorem Ipsum and pictures of puppies) and use those mockups as the initial discussion piece when chatting with my team. It keeps the project fun for me and really speeds up the iterating process when I'm not referring back to real data to make sure I'm not fudging any information up. Once we get to the point of user testing, I then go back and replace the fake data with real data from the users. I find this to be faster because we'll typically have at least 3 users scheduled, allowing me to just quickly run through the mocks consecutively pressing copy and paste.”

Amy was kind enough to send an example our way. These mockups are from a test of a new way to edit content in HubSpot. Users had issues with the first version, particularly with understanding the “secondary content” section. The second mockup includes more content that a user would be familiar with, and with it they understood the “secondary content” section.

Example 1 - A mockup with dummy data

Example 2 - A mockup with real data 

 

Joshua Porter, UX designer and founder of What to Wear, finds using real data one of the more valuable things you can do when testing mockups.

“Real data is important to use because it is the primary reason why someone decides to use a given piece of software...they're not there because they like the UI.”

Real data makes everything better.

Really. We mean it. Your mockups, your testing, your iterations. It all becomes a little clearer once you test your mockups that include your users’ data. You’ll get more actionable takeaways from testing because you won’t be digging through the “if” and “maybe” statements from your users. You can get more concrete data and move on to the next iteration with ease.

 

Using Customer Happiness Surveys to Make Customers Happier (And Get Some Good Data)

At HubSpot, we are often gathering feedback from our customers. We give users opportunities to provide it while in the actual software, as well as through quarterly customer happiness surveys. Once collected, this data is readily available to the entire company, and I was on a mission to use it.

An example of a happy customer after a chat with us.


Have you ever filled out a survey and felt like no one was really looking at the results? This is pretty common when happiness surveys are sent out via email to a large customer base for any website, software or store. This is no different at HubSpot – some customers assume it’s sent out and collected by a third party with no real pull. While this is not the case (we have an employee dedicated to analyzing these surveys), there is no real follow-up with the customers after they provide these thoughts. That’s where the user research team comes in.

If a customer is filling out your happiness survey, it means they have some stake in your product. They are either happy or unhappy, but most likely willing to help you make it better. This is a huge opportunity for user research and the development team. It gives you the chance to not only gather product feedback, but to let them know that the team is listening and cares about what they’re saying. Use this as an opportunity to not only gauge happiness, but to make them happier.

Here are the main things I do when following up on customer feedback.

 

1. Find the right customers to contact

When reaching out using survey results, I work closely with the Product Manager of the feature we’re hoping to learn more about. Together, we select customers based on their Net Promoter Score. We chat with both happy and unhappy customers based on the scoring.

The example outlined in this post comes from a recent test where we wanted to learn more about our mobile application. We reached out to customers who were likely to recommend the desktop version of HubSpot, but not the mobile one. We wanted to dig into why the mobile app wasn’t getting the job done for these typically HubSpot-happy customers.
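For illustration, that NPS-based selection could be sketched in a few lines of code. The score thresholds below follow the standard NPS buckets (9-10 promoter, 7-8 passive, 0-6 detractor); the respondent data and field names are hypothetical, not HubSpot's actual survey schema:

```python
# Standard NPS buckets: 9-10 promoter, 7-8 passive, 0-6 detractor.
def nps_category(score):
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Hypothetical survey responses with separate desktop and mobile scores.
responses = [
    {"email": "a@example.com", "desktop": 10, "mobile": 4},
    {"email": "b@example.com", "desktop": 9,  "mobile": 9},
    {"email": "c@example.com", "desktop": 6,  "mobile": 2},
]

# Customers who promote the desktop product but detract from the mobile app:
# these are the "typically HubSpot-happy" customers worth a follow-up call.
to_contact = [
    r for r in responses
    if nps_category(r["desktop"]) == "promoter"
    and nps_category(r["mobile"]) == "detractor"
]
```

In this toy example, only the first respondent makes the follow-up list: a desktop promoter whose mobile score marks them as a detractor.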

 

2. Make it clear this request is coming from the product team

Customers hear from all different departments in your organization, but they’re likely not often contacted by the development team. There is an added sense of importance around being contacted by the team actually building the product. You are fostering a connection between the people building the product and the people using it. Use this to your advantage. This is not just another survey; this is a phone call with real people. Make that clear, and you’ll get a higher response rate.

An example of a typical email I send.

Next, I cross my fingers for some replies. We tend to get pretty great response rates when requesting clarification around customer survey responses.

 

3. Ask the right questions

For these interviews, our goal was to learn more about why users love the desktop product, but not necessarily the mobile offering. It could be as simple as their “must-have” feature is missing, or something more complex. Because of that, we wanted to dig deeper. This is a version of the script I used in the actual conversation with customers, in an attempt to learn more:

We are following up on your recent HubSpot customer survey results. It appears that you are fairly likely to recommend HubSpot to others, but that there are areas within the product and service that could use improvement. Today we’d like to focus specifically on your experiences with the HubSpot iOS app.

  • Have the customer briefly describe their role in their organization.
  • How did you find the application?
  • What is your typical use of the HubSpot iOS app?
  • If they need examples: do you typically use it during the day, or nights and weekends? Do you use the dashboard / sources or do you use contacts?
  • How often do you use the app?
  • Is there any key functionality missing for you from the app? How can we make the app better for you?
  • If there are other HubSpot users in your organization or amongst your professional network, would you recommend the iOS app to them?

Through this particular round of follow-ups, we found that most users had logged into the app, but it was missing “their” feature at the time. Because it didn’t have exactly what they needed, they rarely opened it again.

 

4. Repeat often

At HubSpot, we’re sending out happiness surveys every month. This gives us a lot of data to work with. The product manager and I go through these results and select the people to contact each month. If you’re only collecting this data quarterly or annually, consider doing it more often. You can really use it to your advantage when developing your product. It’s not a huge time commitment – the calls only need to be 15 to 30 minutes.

 

5. Grow your database

Use these calls as an opportunity to add to your beta and usability testing database. If you stumble upon an articulate user that really fits your persona, ask them if they want to be added to your database for future testing opportunities. Take advantage of talking with more customers than you’re used to; it’ll only help with future development.

 

The moral of this story is that while super helpful data is the obvious result of our customer happiness survey follow up testing, so much more comes with it. Smiles are put on customers’ faces and they have the realization that there really are people listening to their issues. If you’re lucky, you’ll even find some great candidates for future usability & beta testing, too.

How to Remotely Test Your Mobile App

The HubSpot usability team loves to get feedback on apps we’re developing to make our customers’ marketing lives easier. But testing in the mobile environment presents a unique set of challenges. We recently devised a system that allows us to get much more insight from our testing sprints with real mobile users, using real apps in their natural environment – in situ – and discovered a whole new set of questions we were able to ask as a result.

The problems with most prototyping tools

In the past, we used mobile prototypes on desktop computers for mobile testing. We found that this strategy gave us a fairly decent understanding of how our customers use our mobile app. But even though mobile prototyping tools are relatively sophisticated, they are not entirely realistic. When you test with a mobile prototype, you compromise on a lot of important aspects of the user experience.

Prototypes can’t:

  • Be tapped, swiped or zoomed, because there’s no finger interaction
  • Scroll, which means you can’t see more than a screen’s worth of content
  • Show the whole in-phone context of the app, such as where the app lives, and how the user accesses it
  • Recreate usage on a physical handheld device
  • Be installed from an App Store

In addition, we wanted the least intrusive way to actually see how people were using these prototypes, and doing in-person testing simply would not scale.

 

Trick 1: The Remote Hack

Rachel testing out the hack.


Our team discovered the first hack from the folks at Mailchimp, who shared a method of using the camera on your laptop or iPad to record what was happening on your iPhone. This simple hack allows you to see someone’s phone from nearly the same perspective as they see it themselves, and it mimics real use amazingly well (see picture). This solved part one of our dilemma, and enabled us to test with anybody who used the HubSpot mobile app, no matter where they were.

 

Trick 2: Use Proto.io or InVision

The next step was figuring out how to test with a mobile app that didn’t exist yet. Clickable mockups are reasonably useful for desktop testing, but presented an obstacle in the mobile world. After trying out several different options, we decided to use an interactive mobile prototype through the tool Proto.io because it did the best job of mimicking a fully functional mobile app that can be downloaded and used on a testing subject’s own phone. It made it easy for our team to create and customize prototypes for the varying devices and screen sizes of the testers, which made the testing experience the most realistic.

Another great app that does this is InVision, which our design team swears by for desktop app testing. It makes prototyping easy and has taken our process to the next level. We are able to iterate faster after getting customer feedback using these tools.

(Note: Now that our mobile app exists and is no longer in mockup form, we use TestFlight for our beta testing.)

 

Some unexpected insights

Once we had a system in place that would allow us to test mobile users using prototypes with some degree of accuracy and authenticity, we were off and running. Aside from the more general usability insights we gained from these in situ sessions, we also discovered that testing the users on actual phones made a huge difference in the scope of what we were able to discover during testing. We documented findings that would never have arisen in a less realistic testing environment.

For instance, we learned:

  • What other apps people use: Seeing what other apps our users have on their home screens helped us understand what marketers are doing on a day-to-day basis on their phones, what other apps they’re trying and using, and what they like and don’t like about them.
  • How people organize their apps: We got to see how our users group the HubSpot app with their other apps, whether in folders or just on separate screens. The folder names and the apps they grouped us with were especially insightful.
  • How people navigate between apps: Looking at how HubSpot users interact with their phones (tapping the status bar to scroll to the top, whether they use the multitasking bar, how they switch between apps, how they copy and paste, and so on) gave us a look into their basic actions with their phones.

Although we try not to, we sometimes go in thinking we know what we’ll find out from the research. But the best part of testing your app is discovering something you weren’t expecting. Testing users in their own native environment (like their own mobile phones) not only helps you see how they’d actually complete tasks in your app, but how they think about mobile as a whole. For any tech company, this data is hugely valuable and it’s worth watching your users interact with devices on their own terms. Give it a shot!

Recruiting Non-Users for Usability Tests

One of the most important aspects of conducting successful usability tests is having the best users possible. At HubSpot, we have a database of 300+ customers who are eager and willing to help us at the drop of a hat. However, there are plenty of situations where testing current users of our software would not be helpful, so we begin a search for non-customers.

Recruiting non-users for usability studies is much harder than the process of finding people who use your product. Some of the main issues I’ve encountered are:

  • They don’t owe you anything - you’re not helping them at the moment

  • If they don’t show for the test, it’s no loss to them

  • They are solely speaking with you for the gift at the end of the test

I’ve had my fair share of duds when recruiting for non-user testing, so I’ve adapted and created a process that works for me here at HubSpot. I learned a lot from Google Ventures’ Michael Margolis and his blog post on recruiting, and have built a flow that works best for me when finding participants for a non-user test.

Write a screener

Writing a screener eliminates a few steps of back and forth with potential users. Create a landing page and add a form with any information you’d need about a user to determine whether they’d be great for your upcoming study. Some questions I’ve included on my screener are:

  • A LinkedIn URL (to ensure the user is a professional and not a bot)

  • If they’re a solo marketer or part of a larger team (helps us figure out if they’re one of our buyer personas)

  • If they’ve tried the HubSpot software before

Once people submit the form, they’re sent to my email and I go through and decide if they’ll be a great candidate for the study. If I’m already full for this particular one, I’ll email them and ask if they’d be willing to participate in the next one.
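That first triage pass could even be automated before manual review. Here's a minimal sketch of what such a filter might look like; the field names and the qualification rules (professional LinkedIn profile, part of a marketing team, hasn't used HubSpot) are illustrative assumptions based on the screener questions above, not an actual HubSpot workflow:

```python
def qualifies(submission):
    """Quick first-pass triage of a screener submission before manual review."""
    # A LinkedIn URL suggests a real professional rather than a bot.
    has_linkedin = submission.get("linkedin_url", "").startswith("https://www.linkedin.com/")
    # For a non-user study: on a marketing team, and hasn't tried the product.
    return (
        has_linkedin
        and submission.get("team_size") == "team"
        and not submission.get("used_hubspot", False)
    )
```

A submission with a LinkedIn profile, a team role, and no prior HubSpot use would pass; a bot with no LinkedIn URL would be filtered out before it ever reaches your inbox.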

If you want help writing a screener, you can see mine here, as well as use this helpful worksheet that GV created.

Buy a few Craigslist postings

For HubSpot, the Craigslist route has seen little success. I’ve had a lot of bots fill out my form and have rarely gotten a truly qualified person for my tests. Our focus tends to be people in the marketing industry, and they are rarely scanning Craigslist for ways to make a quick $25. However, if you have a different audience that spends more time on websites like Craigslist, you very well might see success. It’s worth a shot.

Use your social media following

I’m lucky enough to have HubSpot’s giant social media following on my side when recruiting non-users. On Twitter alone, HubSpot has over 350K followers. When I am looking for users, I write up around 5 tweets and send them to our Social Media Manager. She then schedules them out over the week prior to the study. As I mentioned earlier, our main target is marketers and most consider HubSpot a great resource, so there are a lot of qualified non-users following us on Twitter.

Examples of the recruiting tweets we’ve sent out.


Next steps

There are a few other recruiting tactics we plan on exploring in upcoming tests.

Why You Should Start A Beta Program Today (And How To Do It)

If you’re looking for an easy way to get started with user research around your product, and to really find out how your product fits what your users need and want, a beta program is one of the quickest ways to gather feedback and get the product in the hands of users. If you’re just getting started, a beta program does take some time to get going. But after that, it’s basically just maintenance.

By definition, a beta test is a trial of software in the final stages of its development, carried out by someone not directly developing the product. Beta tests give you actionable feedback and useful data that will make your product measurably better. Once you see how valuable it is to get users’ eyes on features prior to releasing them, you won’t go back. Beta testing helps you see if users will actually use a feature, whereas other research, like customer stories, personas and usability testing, tells you if something is valuable and usable.

1. Create a beta program page with a form

You’ll need to have a place to send users who want to get involved with the beta program. Share this link with other members of your organization too, in case they have some willing users. This page should accomplish a few things:

  • Explain the benefits of joining the beta program (you gotta convince ‘em!)
  • Detail the requirements of being a beta program member
  • Give users a place to sign up

There are a ton of benefits to participating in a beta program, but there’s a chance your users don’t know what’s in it for them. Make sure you’re outlining that for them. Also, discuss the expectations that come with being some of the first eyes on your new features. Here’s an example email you can use to explain:

HubSpot beta testers get early access to the features the development team is working on. The program is exclusive – only made up of the most beta-tolerant and sophisticated of HubSpot users. We want to make the software better for you, so we want thoughtful feedback on each new release from our testers. How useful did you find it? What bugs still need fixing? What could be improved, added, or taken away?

Keep in mind, this program isn’t just a walk in the park. Being a part of the beta group at HubSpot is a commitment. It means you agree to use some software in your daily life that is bound to be a little rough around the edges. It also means you’re willing to take the time to help us smooth out those rough edges.

You can check out my beta site, as well as the form that users have to complete to sign up. When writing up your beta application form, there are a few essential form fields to include:

  • Ask users to rate their comfort level with bugs and issues
  • Ask users to rate their willingness to provide feedback
  • Ask users to indicate the aspects of the product they use most (if applicable)

You need to be able to segment your applicants by their level of beta tolerance, as well as by their most used features. This segmentation will help you create focused lists for each type of feature you might release to the beta program and ensure that your releases are as relevant to your individual beta users as possible.
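As a sketch of what that segmentation might look like in practice (the field names and the 1-5 rating scale are assumptions for illustration, not HubSpot's actual form):

```python
# Hypothetical beta applicants: self-rated bug tolerance and feedback
# willingness on a 1-5 scale, plus the features they use most.
applicants = [
    {"email": "a@example.com", "bug_tolerance": 5, "feedback": 4,
     "features": {"email", "landing_pages"}},
    {"email": "b@example.com", "bug_tolerance": 2, "feedback": 5,
     "features": {"analytics"}},
    {"email": "c@example.com", "bug_tolerance": 4, "feedback": 4,
     "features": {"landing_pages"}},
]

def beta_list(feature, min_tolerance=4, min_feedback=3):
    """Focused release list: beta-tolerant, feedback-willing users of one feature."""
    return [
        a["email"] for a in applicants
        if a["bug_tolerance"] >= min_tolerance
        and a["feedback"] >= min_feedback
        and feature in a["features"]
    ]
```

A release that only touches landing pages would then go out via `beta_list("landing_pages")`, which keeps the low-tolerance analytics user off the list; raising `min_tolerance` shrinks the list to your hardiest testers.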

2. Invite users to join the beta program

You probably already have a hefty email list of users who have opted in to hear from your organization. If your marketing team is wary about giving you this list, fight for it. Users love talking to the people who are building the product they use. They WANT to hear from you. Trust me. Send an email inviting your users to learn more about your brand new beta program. Include the benefits of getting involved and a link to your beta program page, then let the responses pour in. If you don’t want to reach out to users via email, you can include a message directly in your product or website asking users to join the program.

3. Turn on beta features for select users

This is when your beta program gets going. But before you release your first feature to your new beta program, you’ll want to do a few more things. First, choose which users to include, based on the segmentation you did in the first step. Start small. Segment your submissions based on comfort with issues and willingness to provide feedback. Even if that leaves only a few users who are truly “beta tolerant,” that’s perfect to get started with. If a feature is relevant for everyone in your program, include them all. If it’s only for a set product level or use case, only send it to those users. Quality of fit is more important than quantity of sample size.

Next, compose an email detailing the feature release you’re introducing. Make sure to include why this makes your users’ lives easier. Include screenshots and try to answer any questions in the email that you expect users will have. Give them a place to provide feedback, whether it be another form or a forums post. Reassure users that you really value their feedback and will get back to them if they have any questions. Here’s an example of a recent beta release email I sent out.

An example of a beta email I’ve sent.

4. Collect users’ feedback

We go about this a few ways here at HubSpot. For some beta releases, we compile all of the feedback into a document and share it with the team. For others, we direct users to a dedicated forum thread where they can discuss the feature with other beta testers and get direct answers from members of the product team. The forums post feeds discussion and gets testers sharing ideas with each other, and it shows them they’re not the only ones who love, or are struggling with, the beta tool. If you only have a small set of users, you can just request they email you their feedback and manage it yourself. Once your program expands, consider other options like a feedback form or forums post.


There are lots of long-term benefits to starting your beta program today. By the time you’re ready for more in-depth user research, you’ll have a cracking good database to get started with. You can ask your beta users to participate in usability testing. You can send product development surveys to your users to gather more general, large-scale information about them. A beta program is an excellent first step in collecting feedback and demonstrating the value of user research long-term. So why not start today?

Usability Testing at a Fast-Paced Company

Originally posted on the HubSpot Dev Blog.

At HubSpot, we have one of the fastest development teams around. Our dev team continuously deploys code, up to 100 times per day, so our product is constantly changing. This leads to several challenges for us on the UX team, whose job it is to ensure that the software is easy and enjoyable to use.

One big challenge is conducting usability testing in this crazy fast environment. As you may know, usability testing is often one of the first things dropped from the "must have" list of product release schedules. There are several reasons for this, all stemming from common assumptions about usability testing:

  • Usability testing takes too long or is too slow
  • Usability testing is too hard to get right
  • We can get the same data in other, easier ways

All of these assumptions are common but do not stand up to scrutiny. When done well, usability testing doesn't slow down the release process, it's not too hard to do, and it provides uniquely valuable information about your product that you simply can't get in other ways. 

So at HubSpot, we've refined our usability testing process to be as fast as possible. We continually test our software with every product team, being sure to implement testing in a way that does not slow any of our developers down. Here are eight things we've learned: 

Do a complete round of testing in one day

In order to keep up with the pace of the software and those developing it, we execute each usability testing sprint in one day. We schedule all the tests in one day, even if it becomes a really long day. This usually means four to five one-hour usability tests back to back, where we can quickly and clearly see the trends among the testers and provide the development team with key takeaways immediately. By keeping the testing to one day, we keep the development teams moving quickly and everyone can schedule around it easily. In addition, by consolidating all the testing to one day we make it easier for the product team members to attend the tests without disrupting other projects. 

Offer different levels of service

Sometimes even one full day of testing is too long or too much on some projects, so we offer several levels of service to our product teams:

  • Support Evaluation - A couple hours of talking and very fast usability testing with several support reps to attack a known technical issue with the software. Our Support folks watch and help customers interact with the software all day long and often see issues earlier than anybody.
  • Consultant Interview - A couple hours talking with several Marketing Consultants to discuss any domain or conceptual issues around the software. Our consultants are helping people do marketing every day and so they are familiar with any issues that crop up.
  • One day sprint - Our typical one day usability testing process that tackles both technical and domain-related issues.

Every so often, there won’t be enough time to commit to a complete testing sprint before releasing something new, so offering different levels of service before we ship can make all the difference.  

Set up a reliable remote testing environment

At HubSpot we conduct remote usability testing most of the time. While we do the occasional in-person test when the situation calls for it, we test with our customers all over the globe and remote testing is the only way to do that quickly. It is valuable to get insight and feedback from users from places as diverse as Massachusetts, Ireland, and New Zealand. By remotely testing our users, we’re able to extract the most value out of our one day testing sprints. Some of the most valuable benefits of our remote testing sessions are:

  • Being able to quickly set up three or four tests in a row
  • Saving time and money by not coordinating in-person testing
  • Setting up a test in less than 24 hours
  • Everyone who should view the test can view the test (even those who are working remotely)
  • No-shows are easy to overcome

Rather than having to coordinate in-person usability testing, which takes days to plan and conduct, we focus on remote testing because it speeds things up considerably.   

Keep the best users ready at a moment's notice

We are in a unique situation at HubSpot: we have first-hand access to all of our customers. We can reach out directly to the people actually using the software at any time. We have a detailed database of testers that we keep current notes on, and we know which features they use most often. The customers in our usability testing database are ready to help us at a moment's notice, so the recruiting process is quick and painless. By creating a database of the best, most easily accessible users, we shave lots of time off the testing process. We email them a day or two before the test, and they block off an hour in their schedule. We work with only the most helpful, eager customers for these sprints. Without this ability to recruit awesome users quickly, our testing process would last days rather than hours.

Agree on high level tasks and use cases ahead of time

A testing sprint cannot be successful without carefully set high-level tasks and use cases prior to the test. Instead of creating the tasks from scratch every time we test, we've created what we call "product research pages" to record and archive the use cases for each product. These documents are rich with information and research about each product we have, including things like use cases, user research, competitive research, direct and indirect competitor information, as well as the results of any previous testing we've done. By taking a quick glance at this page, anybody (even those unfamiliar) can quickly get up to speed about everything we know and don't know about a product. So when we need to do any sort of usability testing, a quick glance at a product research page gets us moving quickly. Our product and usability teams work to maintain the product research pages over time so they are always up to date. By having these pages we can draw up a task list for usability testing extremely quickly. 

Make sure the whole team is in the room

During our testing sprints, we make sure the product managers, interface designers and front-end developers are in the room observing and asking questions. The team then has the ability to ask the testers questions about their pain points, expectations and needs. Watching a user encounter issues and bugs firsthand is far more valuable to the development team than reading a page of notes or listening to a recording. There is no second-hand information being shared when everyone is present, so the team can immediately move to recommending actionable takeaways after testing.

Debrief immediately after testing and create a shared plan

To help make the one day of testing even more effective, we hold a debriefing meeting immediately after the last session. In this half hour we discuss observations, trends, bugs, usability issues and any inferences we make about why users did what they did. We also develop actionable takeaways and assign them to different members of the team. By figuring out the next steps, we can decide whether more testing is needed after the tweaks or whether the feature is ready for the beta group to test out. The whole group is kept in the loop as development progresses. This debrief allows the team to start implementing the fixes and recommendations the same day as the test.

Compile notes & videos

Even though we make sure all the right people are in the room for testing, we still often want and need to share the results with the company as a whole. Once the actionable takeaways are established, it is important to make the findings available to those who weren’t able to attend the testing, usually within 24 hours. At HubSpot, we have an internal wiki where we post all of our findings. A member of the UX team takes detailed notes during the testing sessions, and those are summarized to show the main points of the testing (e.g. 3/3 people could not find the ‘save’ button), as well as the next steps. We also put together a quick video highlighting the key insights for those who don’t have time to read the notes, which only takes an hour or two. This wrap-up makes it easy for anyone in the company to see the results within a day of the test.

Bringing it all together

Testing in a fast-paced environment isn’t easy. We often don't have weeks or even days to plan a usability test. Having a successful process in place and knowing how to manipulate it when under pressure is key to releasing the best possible product to customers. While we’re getting pretty good at performing extremely fast usability sprints, there is always room to grow. Stay tuned.