Understanding how — and where — users were interacting with the Basemap app led to significant improvements in navigation.
As a mapping application for outdoorsmen, Basemap offers a number of great features. But accessing those features in the outdoors was often complicated by inclement weather, rough terrain, and the need to juggle multiple objects at once.
The app's list-style menu and small icon touch targets led to frustration when users tried to navigate the app menu in the field.
In the preceding year, Basemap had released a number of new features. By optimizing the app menu, we had the dual opportunity to increase exposure to those new features — including paid features — and eliminate a user pain point, creating an all-around better experience.
As UX Designer and Manager of User Testing, I worked with CEO Jeffrey Balch, as well as the Marketing and Dev teams, to update the app's primary navigation.
To understand the challenges users faced and how they solved those challenges for themselves, I conducted user interviews, researched best practices for interaction design, performed a competitive analysis, and even field-tested the app myself to better put myself in our users' hiking boots.
DEFINING THE TARGET AUDIENCE
We cannot design for a user until we know who they are. By creating user personas to represent our target audience, we are able to better understand our users' personalities, habits, and motivators. As we had already defined our user groups with representative personas, I began my research with a quick review of the personas we were targeting — in this case, all of them, since navigation is a tool used by all of our customers.
GOING DIRECT TO THE SOURCE
With a firm grasp on who we were targeting, I reached out to our customer base, as well as our in-house staff, to find volunteers for one-on-one interviews. These interviews were informal and designed to understand how users navigated our app, where they struggled, and how our competitors were measuring up.
With an idea of how our users were navigating and which apps were doing a better job, I was ready to move on to the competitive analysis.
UNDERSTANDING THE COMPETITION
Between our user interviews and our targeted list of competitors, we had a comprehensive list of mobile apps to analyze.
I downloaded each app to both an Android device and an iPhone, and documented each app's primary navigation methods. I focused on three areas of use — interaction, features, and design. I also noted differences between device types, as each platform has its own underlying human interface guidelines and UI functionality.
ORGANIZING MY THOUGHTS
With a firm understanding of our users' needs and our competitors' strengths and weaknesses, I took to the whiteboard to brainstorm ideas. For me, this can take many forms: lists of which navigation items and features we should include, mind maps, or fast and loose visualizations of various concepts as they come to mind. When I feel I've ordered my thoughts, I usually proceed to the wireframing or design phases (or a stakeholder review). In this case, I was able to go a step further.
HIKING A MILE IN THE USER'S BOOTS
We knew from our users that using the app outdoors added a layer of complexity to every interaction, so I wanted to take advantage of the opportunity to get a first-hand understanding.
To this end, I took my phone on a walk through a park-like area near our offices in inclement weather. I made use of all of our primary features, including recording a track, and noted my observations about the experience before moving on to the design phase.
With a stronger understanding of the user's needs, it was time to begin exploring a variety of concepts for the design. I began with manual exploration using whiteboards and pen & paper, then moved into the digital realm once I felt I had a few strong candidates for testing.
As the first step in design, I took to the whiteboard to begin sketching possible concepts. This is a way for me to get non-viable ideas out of the way quickly. No point in wasting paper!
Once I had some solid ideas, I began sketching with pencil on graph paper to get a more complete picture of the possible options.
Once I felt I had three to five solid options, I reviewed them with our CEO, Jeff, and our lead Android developer (our lead iOS developer was unavailable).
We narrowed our options down to the top three to be made into interactive prototypes for testing. As I work in Adobe XD for this purpose, I started with the basic layouts and then elaborated on the wireframes to create low-fidelity, interactive prototypes.
After writing up our hypothesis and test plan, we utilized UserTesting.com to perform A/B/C Testing. Once I had analyzed both the quantitative and qualitative feedback, I presented my results to the entire team, along with my recommendations for moving forward.
DEFINING TEST OBJECTIVES
As the first step in the process, I always define the test goals and create a solid hypothesis for testing. Next, I nail down the study details, including a timeline for testing that can be communicated back to stakeholders to set expectations. Finally, I write the screener questions for target audience selection and prepare the test questions. Perhaps the most challenging part of this process is keeping the questions unbiased so that users are not led toward a specific answer.
THE ABCs OF TESTING
The human brain tends to recall the first and last items in a series or list better than the content in between. Because our test relied on users accessing their memory to compare their experience with each of the three prototypes, there was a possibility that this recall bias could skew the test results. To help offset this unavoidable bias, users were tested in three groups of five, with each group viewing the prototypes in a different order: group 1 tested the prototypes in A-B-C order, group 2 in B-C-A order, and group 3 in C-A-B order.
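For readers curious how this kind of rotation works in practice, here is a minimal sketch of counterbalanced order assignment. It is purely illustrative: the participant IDs and the assign_orders helper are hypothetical, and the actual study groups were set up directly through UserTesting.com rather than in code.

```python
# Minimal sketch of counterbalanced order assignment (illustrative only).
ORDERS = [
    ("A", "B", "C"),  # group 1
    ("B", "C", "A"),  # group 2
    ("C", "A", "B"),  # group 3
]

def assign_orders(participants, orders=ORDERS):
    """Rotate participants through the prototype orders so each order
    is seen by roughly the same number of testers."""
    return {p: orders[i % len(orders)] for i, p in enumerate(participants)}

# Hypothetical participant IDs: 15 testers split across three orders of five.
participants = [f"P{n:02d}" for n in range(1, 16)]
for person, order in assign_orders(participants).items():
    print(person, "->", "-".join(order))
```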
WHAT OUR USERS HAD TO SAY
While UserTesting.com does offer tools for reviewing test session videos, I prefer to watch and document each video myself. This allows me to make notes about the individuals taking the test (in case it's relevant), capture quotes that I feel sum up the user's experience and/or frustrations in a way that can be easily digested by stakeholders, and check users' recorded answers against their spoken answers, as it is not uncommon to find discrepancies or additional information that can alter the quantitative data. Plus, it's just fun!
WHAT OUR USERS ACTUALLY DID
In UserTesting.com's user-friendly account center, I was able to view graphs and charts of collated data from the multiple-choice and yes-no test questions, as well as all of the answers to each written test question. I was also able to download the information for our records. Once I had reviewed the quantitative data, I made a few alterations to the downloaded data based on qualitative feedback from the users: one had selected the wrong answer during testing, and another changed their mind later on but was unable to go back and change their answer. With the data corrected to accurately reflect the users' true preferences, I proceeded to gather the feedback for reporting.
REPORTING BACK TO STAKEHOLDERS
After the data was collected, I put together a presentation for stakeholders, including CEO Jeffrey Balch, for review. This presentation included a review of the hypothesis (which was proven true), an overview of the test process and efforts made to avoid bias, the qualitative and quantitative feedback from users (including direct quotes), a risk assessment, and the final recommendation to move forward with Prototype A. While users did express a preference for the simplicity of the design in Prototype C (which had fewer menu items), they actually had a more difficult time finding what they were looking for in the app, which ultimately led them to prefer Prototype A.
With stakeholder approval to move forward, I made a few final revisions to the design and reviewed it with both the Android and iOS development teams. During review, we were able to cover any questions about functionality, linking, and design elements.
One of the many reasons I enjoy working closely with developers is that they can often provide insight and suggestions for improvement, as in this case, where I was advised that there were some small pre-existing iOS elements we could reuse instead of creating new assets.
I was then able to test and approve an unreleased version of the app before it was uploaded to the app store.
Seeing all of the work and user testing we put into this menu update go live was very exciting. I felt confident that we had improved the user experience in a way that would have a lasting impact on the app as new features are added in the future.
This was Basemap's first foray into usability testing. While I knew there were some nerves over the cost of investing in a platform like UserTesting.com, the stakeholders at Basemap were deeply impressed by the depth of user feedback we were able to glean in such a short period of time. Not only did we discover what wasn't working with our existing menu, but we were also able to move forward with a better design. Unfortunately, due to the speed at which we were growing, it was difficult to quantify the impact this change had on the business. However, feedback directly from our app's subscribers suggested that our changes were a big hit.