Today we gathered in the studio to put our heads together and come up with some concrete frameworks for all the research we’d been doing. Before coming in, we each took on a single focus that we’d previously identified and did some secondary research on it; when we arrived, we quickly summarized our findings to each other before starting to put it all together.
Unfortunately, we quickly began to realize that what had seemed clear-cut and easy when it was explained in class was not quite so simple in execution. We (and the other four groups currently working in the studio, some of whom had been there for 6+ hours already) all tried to work out exactly what was expected of our frameworks, and what our goal was in displaying the information we’d come up with.
After a few headaches and a lot of note-writing to organize our thoughts, we put dry-erase marker to whiteboard and got to work. Joy had the brilliant idea of distilling the services we’d researched into a flow common to almost all of them: user needs, followed by the search through the service, then the evaluation of the products or services found, the decision, the subsequent experience, and then the review or feedback. While some sites stop partway through the process (Google, for example, does not ask for reviews of particularly successful or unsuccessful searches), almost all of them follow it.
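The flow above can be sketched as an ordered sequence of stages. This is a minimal illustration, not anything we actually built; the `FLOW_END` entries are hypothetical examples of where a given service might exit the flow (per the Google example above, it skips the review step):

```python
from enum import Enum

class Stage(Enum):
    """The common flow we identified, in order."""
    NEEDS = 1
    SEARCH = 2
    EVALUATION = 3
    DECISION = 4
    EXPERIENCE = 5
    FEEDBACK = 6

# Hypothetical exit points; services absent here follow the full flow.
FLOW_END = {"Google": Stage.EXPERIENCE}

def stages_covered(service: str) -> list[Stage]:
    """List the flow stages a service supports, in order."""
    end = FLOW_END.get(service, Stage.FEEDBACK)
    return [s for s in Stage if s.value <= end.value]
```

Modeling the flow as an ordered enum makes the key observation explicit: services differ mainly in where they stop, not in the order of the steps.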
From there, we found that a major discrepancy between current health tools and other popular web experiences is the stratification of information; that is, what information is shown at a top-level search, followed by a higher-detail view, and so on. Take a look at some high-level views from searches at popular websites with good UX.
They are, in order, AirBnB, Amazon, Indeed, and Google. We went over several more, but I think these four are sufficient to explain our thought process.
- AirBnB and Amazon are offering products, so the focus is on seeing what you would get: a picture, the price prominently displayed, and the number of reviews (and thus a measure of how many people have bought the same product). AirBnB also includes a picture of the people who own the space as well as what neighborhood it’s in, while Amazon shows that the product is the single best-selling item in the category.
- Indeed is offering an opportunity (to apply to a job), so the focus is on the position: the job title and some light clarification, a snippet of the job description, and the name, average rating, and reviews of the company offering it.
- Google is offering information, so it just gives the title of the page, the URL, a snippet, and some context-sensitive controls.
All of these sites offer things that are necessary to know while scanning at the high-level view, and more importantly, they leave out information that isn’t strictly necessary, so that all the information displayed is relevant. If the high-level view is deemed relevant, that’s when users will click to see a low-level details view.
Contrast this with the provider search on Premera’s site as of now.
In what possible situation would a user need the full address of every single place a doctor practices, at the high-level view? The name and distance in miles is enough for most users. In addition, why are some doctors marked as “accepting new patients”? For one thing, why do doctors who aren’t accepting new patients show up in my search at all? For another, why is that implicitly treated as the norm? A potential match who isn’t accepting new patients should either not appear in my search in the first place or be flagged very noticeably. Furthermore, while each entry is huge, most of the information patients would want to know doesn’t appear, and even if they do click for a detailed view…
…it isn’t terribly helpful. Informed by secondary research (including the graphs and data Premera gave us, which are confidential and can’t be reproduced here), we decided to map out the items that users want to know about providers on two axes: high-level vs. low-level (how immediate is the need for this information? should it be shown on the search view, or can it wait until the user is looking at details?) and user-provided vs. doctor-provided.
Pictured is our labor of love. We argued a lot about which items belonged on this framework and where they should be placed, but in the end, the data prevailed. We used articles and graphs charting user priorities to decide which information is high- or low-level, and UX and HCI articles (plus our own judgment as UX designers) to make the executive decisions about which items go in which view. We prefer to treat the levels of display as a spectrum rather than just two views, since one must also account for the mobile view and other constraints on how much information can be displayed.
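The two-axis framework with a display-level spectrum could be represented like this. The item names, their positions on the spectrum, and the sources shown are hypothetical placeholders; the real placements came from our research data:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfoItem:
    name: str
    level: float   # 0.0 = most high-level (search view) .. 1.0 = most low-level (detail view)
    source: str    # "user" (patient-provided) or "doctor" (doctor-provided)

# Illustrative placements only, not our actual framework.
ITEMS = [
    InfoItem("name", 0.0, "doctor"),
    InfoItem("distance_miles", 0.1, "doctor"),
    InfoItem("accepting_new_patients", 0.2, "doctor"),
    InfoItem("patient_reviews", 0.4, "user"),
    InfoItem("full_address", 0.8, "doctor"),
]

def view(items: list[InfoItem], budget: float) -> list[str]:
    """Return the item names that fit a display with the given level cutoff.
    A smaller budget models a more constrained view, e.g. mobile."""
    return [i.name for i in sorted(items, key=lambda i: i.level) if i.level <= budget]
```

Treating the level as a continuous value rather than a binary flag matches the spectrum idea: a desktop search view might use a budget of 0.5, a mobile view 0.2, and a detail page 1.0, all from the same framework.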
Note the “score” item. We knew there had to be some kind of compatibility ranking between provider and patient, but we felt that hand-waving it would be irresponsible, so for our second research framework we brainstormed the factors that would go into a compatibility score, based on user priorities and on what has worked in other matching services such as OKCupid.
We intentionally stayed away from using a Siren.mobi-style essay model for a number of reasons which we will probably elaborate on at a later date; the short answer is that we felt it would be fatiguing for users to read essays written by every single doctor they are looking at, especially compared to multiple choice questions and priorities, and getting providers to answer those essays would be incredibly difficult.
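A multiple-choice-with-priorities score of the kind described above could be sketched as follows. This is a minimal illustration under assumed names and weights, not our actual framework; the importance tiers are loosely inspired by OKCupid’s published weighting scheme:

```python
# Hypothetical weights for how much a patient says a question matters.
IMPORTANCE = {"irrelevant": 0, "a_little": 1, "somewhat": 10, "very": 50}

def satisfaction(answers: dict, acceptable: dict, importance: dict) -> float:
    """Fraction of possible importance points earned: for each question, full
    points if the provider's answer is one the patient marked acceptable."""
    earned = sum(
        IMPORTANCE[importance[q]]
        for q, ok in acceptable.items()
        if answers.get(q) in ok
    )
    possible = sum(IMPORTANCE[v] for v in importance.values())
    return earned / possible if possible else 0.0
```

For example, a patient who marks gender as “very” important and language as only “somewhat” important will see a provider who matches on gender but not language score 50/60 rather than a flat one-out-of-two, which is the point of weighting by stated priorities.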
Finally, after all this, we were ready to revise our research questions and our idea for a solution to the problem we’d identified.
From there, we were finally allowed to head home and get some much-needed rest. Now to make the actual presentation; we’ll be receiving feedback from another group tomorrow, at which point I’ll update this post.