Nov 2023
Product Design Lead
Data Scientists, Product Owner, Researchers, TPM, Developers
Experian’s Identity Management Center is a customizable, white-label B2B2C SaaS platform with over 100 million users. If you have ever been offered identity monitoring services by a company that experienced a breach, or have a membership (credit card, bank, Costco) that includes identity or credit monitoring as a premium benefit, you almost certainly had access to a version of the platform.
CyberAgent is the platform's flagship product. It is a Dark Web monitoring tool that protects users' identities by letting them enter personal data (e.g., SSN, email addresses, phone numbers) to be monitored. CyberAgent then searches our database of stolen data for that information. If a match is found, users are notified and advised on how to address it.
Despite the extensive information in the alerts, users struggled to understand what the Dark Web is and what a match there meant for them. Our most frequent support calls were from users who received CyberAgent alerts. We wanted to develop a score, similar to a credit score, to help users better understand their personal risk level from the Dark Web. We created the Identity Health Score using machine learning and our extensive data sets.
Utilize a score calculated by machine learning to help users better understand the level of risk posed to their identity by the Dark Web.
Before I engaged with the project, stakeholders and the Lead UX Researcher conducted two rounds of user interviews. The key insights from the interviews were that CyberAgent users don’t always understand:
I conducted an in-depth competitive analysis to see how others solved the problems our research uncovered and how we could improve on them in our solution. I identified several areas where we could best our competition:
At this point, I needed to weave many disparate insights into a cohesive experience. I would need to walk the user through several steps to help them better understand the Dark Web and see how our product helped them protect their identity.
Mapping the user journey was instrumental in helping me plot the user flow and determine where users needed additional context to improve understanding without being overwhelmed with information. It also demonstrated that, beyond being a stand-alone education tool, this product had the potential to drive engagement far more effectively than our current alerts. Finally, it revealed how interconnected the data would be, so I created a data flow for the development team to reference.
I partnered with a Researcher on three iterative rounds of concept testing to determine whether the concept was moving in the right direction. In a moderated test, five subjects were guided through a series of screens while answering questions about their thoughts and feelings on the concept and how they might use it. They then completed an unmoderated survey.
Throughout the iterations, my main focus was finding effective ways to deliver just enough information at the right moment. The subjects appreciated the overall concept, but they needed insight into what went into the score to understand the value.
I immediately realized that the algorithm alone could not capture everything needed to produce an accurate and meaningful score. For example, if we uncovered a user’s credit card number but they had already replaced the card, their score should be higher than if they had not.
To address this, I created a survey that allowed users to mark mitigating actions as completed and answer questions about habits unrelated to the Dark Web. The survey also demonstrated the level of personalization in the score and increased user understanding.
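As a sketch of the idea, the survey's completed mitigations can be treated as adjustments on top of the model's base score. Everything below — the scale, the action names, and the weights — is an illustrative assumption, not the production algorithm:

```python
# Hypothetical sketch: survey responses adjust the ML model's base score.
# Scale, action names, and weights are illustrative assumptions only.

BASE_SCORE = 580   # base score produced by the model (assumed scale)
MAX_SCORE = 850    # assumed top of the scale

# Mitigating actions the user marked "done" in the survey, with assumed weights.
SURVEY_ADJUSTMENTS = {
    "replaced_compromised_card": 40,
    "changed_exposed_password": 25,
    "enabled_two_factor_auth": 15,
}

def adjusted_score(base, completed_actions):
    """Raise the base score for each mitigating action marked 'done', capped at the scale's maximum."""
    bonus = sum(SURVEY_ADJUSTMENTS.get(action, 0) for action in completed_actions)
    return min(base + bonus, MAX_SCORE)

print(adjusted_score(BASE_SCORE, ["replaced_compromised_card"]))  # 620
```

In this framing, a user who already replaced a compromised card starts with a healthier score than one who has not, which is exactly the gap the survey was meant to close.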
To better understand the score and its value, users wanted to know more about what went into it, but they would not read large blocks of copy. The content needed to be scannable and succinct.
The survey proved helpful beyond its original purpose of increasing accuracy because the questions gave the users their first glimpse of the underlying data. I also used several UI elements to put more context at users’ fingertips without overwhelming them:
The Identity Health Score began as only a score, but our early research showed that users found it frustrating to have a score without a straightforward way to improve it.
I designed a checklist of actions the user could complete and mark “done” to mitigate risk. Each action expands to show more detailed information if desired. The actions were the questions from the initial survey that were not marked “done.” This amount of content would be a big lift for an organization without a dedicated writer. Fortunately, I realized we could leverage existing alert content for most of it. I wrote ten additional actions to cover user habits not related to Dark Web data.
The last step to understanding the value of the Identity Health Score was to see how completing different actions impacted the score, and to what extent. I supported this in a few ways:
To ensure that users saw their score change as they marked actions "done," we had the page automatically scroll to the score chart and show a small animation of the score moving higher.
I tagged each action as high, medium, or low impact (with a supporting footnote), highlighted high-impact actions, and placed them first by default.
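The default ordering can be sketched as a simple sort on the impact tier so high-impact actions surface first. The action titles and tier assignments below are illustrative assumptions:

```python
# Illustrative sketch: order recommended actions by impact tier, high first.
# Titles and tier assignments are assumptions, not the shipped content.

IMPACT_RANK = {"high": 0, "medium": 1, "low": 2}

actions = [
    {"title": "Review bank statements", "impact": "low", "done": False},
    {"title": "Freeze your credit", "impact": "high", "done": False},
    {"title": "Update your password", "impact": "medium", "done": False},
]

# Default ordering: high-impact actions appear at the top of the checklist.
actions.sort(key=lambda a: IMPACT_RANK[a["impact"]])

for action in actions:
    print(f'{action["title"]} ({action["impact"]} impact)')
```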
Initially planned to be a simple score, this product evolved into a robust identity protection tool and ongoing engagement driver with optional paths for upselling. Partners that offered our privacy tools (VPN and Password Manager) could enable CTAs within relevant recommended actions.
With 62 alert types and ten recommended actions, this product required extensive content but had no on-staff writer. Fortunately, I realized that portions of our existing alert content could be edited and reused.
Our original intention for the Identity Health Score was to produce only a score. Still, once users had one, they wanted to understand what it represented and how to maintain or improve it. To that end, I was able to repurpose existing alert content into a personalized action plan of small steps a user could undertake to improve their score. Completing the actions also revealed the relationship between the individual actions and the overall score, increasing the user’s comprehension.
I added the survey to account for any protective actions the user may have taken before enrolling, resulting in a more accurate initial score. However, testing revealed that the survey also highlighted the personal nature of the data (which users found compelling) and could be used to ask additional questions about the user’s online habits, making the score even more comprehensive.
Lastly, I leveraged multiple methods of delivering small amounts of information (individual actions, a survey, animations, tooltips, tabs, accordions) rather than overwhelming users with a wall of content. This inline, “just enough” content allowed them to absorb the information at their own pace and arrive at their “aha” moment.