At Capital One, I was the sole product designer on a three-month project focused on improving clarity and efficiency within the company’s internal supplier management platform — a tool used to manage supplier activities, contracts, and risk across the enterprise.
The platform had already unified data from multiple legacy systems, but users — especially Accountable Executives — still struggled to navigate its complexity and extract the insights they needed to make effective decisions.
My goal was to help the team solve two core problems: (1) lack of clarity in the platform’s entry experience, so users knew what mattered when they landed, and
(2) inefficiencies in critical workflows, especially around risk oversight and contract approvals.
Through targeted research and design exploration, we arrived at a new dashboard that surfaced the most important metrics and risks upfront — improving how users prioritized their time and decisions. By the end of the pilot and initial rollout, the platform saw a 41% increase in regular users, far exceeding expectations and laying the groundwork for future role-specific improvements.
Visuals are simplified recreations designed to respect NDA boundaries while accurately reflecting design decisions and outcomes.
Capital One’s supplier management platform was introduced to solve a major visibility issue: across the enterprise, different stakeholders — including third-party managers and internal teams — had been pulling supplier data from a patchwork of disconnected systems.
To stay on top of contract status, risk exposure, and supplier activity, users had to regularly check up to eight different platforms — or rely on spreadsheets that were often outdated or error-prone, undermining confidence. The process was time-consuming and easy to get wrong, even for experienced users.
The new platform successfully consolidated these sources into a single centralized hub. But internal research revealed it still wasn’t delivering the expected value or engagement.
Two patterns stood out among users who weren’t actively using the tool:
Even though the system unified data, it didn’t clearly reduce effort — so many users stuck with their established habits, like pulling reports manually, checking individual systems, or relying on email chains.
Some users arrived on the platform but didn’t know how to get started. The landing page showed only pinned suppliers and pending activities — offering little guidance on where to begin or how to extract meaningful value.
To address the latter group — users who landed on the platform without clear guidance on where to begin — we assessed the limitations of the platform’s entry experience.
Original landing page showing Pinned Suppliers and pending Activities, requiring prior system knowledge to navigate effectively.
But this surfaced a broader opportunity: a redesigned landing experience — in the form of a dashboard — could address the needs of both groups. For disoriented users, it could offer a clearer starting point. For hesitant users, it could provide a direct efficiency gain by surfacing essential data not just in one system, but on one screen. Done right, the dashboard could exceed the platform’s original goal of improving visibility, reducing errors, and saving valuable time.
With this in mind, we moved into focused research. While the platform served teams across the enterprise, we concentrated on the supplier management group — the most active user base — and prioritized the Accountable Executive (AE) role. AEs stood to benefit most from a consolidated view, and given their position in the hierarchy, improvements here would likely benefit users both above and below them, even before other persona-specific dashboards were developed.
I conducted nine 1:1 interviews with Accountable Executives (AEs) in the supplier management group to understand:
To synthesize what we heard, I created an affinity map to identify major patterns across participants. Themes were grouped into three clusters: Gaps in visibility, Fragmented data sources, and Manual workarounds — each pointing to a different facet of the same broader challenge.
While those clusters captured the shape of the problem, risk — whether general or tied to specific metrics — dominated across all of them. It came up far more often than any other theme.
Frequency of themes across interviews — with risk surfacing most often.
"What fires do I need to put out?"
That anxiety around risk shaped how AEs approached nearly every aspect of their work — from where they looked for answers to how they made decisions. The following insights illustrate how that pressure showed up in practice:
With supplier data spread across tools, users worried something important might fall through the cracks.
It was hard to spot trends or systemic issues when each supplier and metric lived in a separate tool, view, or workflow.
Some tracked data manually using spreadsheets or email threads — adding friction and undermining confidence in the data itself.
The research clarified the core need: users didn’t just need access to data — they needed clarity, confidence, and a way to see the full picture. Our next step was to design a dashboard that delivered exactly that.
AEs consistently prioritized eight key risk metrics for assessing individual suppliers, four of which also provided critical portfolio-wide insight. Portfolio Health and Supplier Risk drove the core design focus, while Contract Approvals emerged as a distinct “quick win”: approval delays, often buried in email threads, exposed the company to avoidable risk and were relatively easy to surface with the right dashboard hooks.
These insights directly shaped the design approach, ensuring the solution addressed the areas of highest urgency and user value.
To increase platform engagement and reduce risk-related friction, the dashboard needed to do three things:
This design brief led us to structure the dashboard around three core areas — Portfolio Health, Supplier Risk, and Contract Approvals — each directly tied to the patterns uncovered in research. These areas shaped both the information architecture and the interaction model, ensuring users could prioritize effectively, move quickly, and see meaningful results without needing to navigate elsewhere.
Design
1. Wireframe Progression
To evaluate layout directions, I started by testing a “scorecard” view: a card-based layout drawn from early stakeholder ideas and built with existing Capital One components. While the format offered good readability in isolation, it didn’t scale well across large supplier lists, so I moved toward a denser table layout that supported faster scanning, comparison, and interaction in a high-volume setting.
Card-style layout based on early stakeholder direction. Built using Capital One’s internal components. Clear at small scale, but hard to scan and inefficient at volume.
Early table structure. Better for comparison and navigation, but missing prioritization cues and interaction support.
2. Megatable Layout
To address the fragmented experience users faced, I introduced a dense “megatable” layout that combined supplier-level details and portfolio-level risk metrics into one scrollable interface. This replaced the disconnected pinned supplier list and activity tracker, enabling users to compare suppliers side by side and understand context without switching views. It became the foundation for all subsequent design decisions.
Combined supplier data, portfolio metrics, and interaction hooks into one cohesive screen — streamlining decision-making in a high-volume, high-risk environment.
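To make the structure concrete, here is a minimal sketch of the kind of row model a megatable like this implies. Every name here is a hypothetical illustration, not the platform’s actual schema:

```typescript
// Hypothetical row model: supplier detail and risk metrics live on one
// record, so the table can render, sort, and compare without cross-view
// lookups. All names are illustrative, not Capital One's real schema.
type RiskRating = "low" | "moderate" | "high" | "critical";

interface SupplierRow {
  supplierId: string;
  name: string;
  contractStatus: "active" | "pendingApproval" | "expiring";
  pendingApprovals: number;                 // feeds the approvals shortcut
  riskMetrics: Record<string, RiskRating>;  // the eight AE-prioritized metrics
  lastUpdated: string;                      // ISO timestamp of the last refresh
}
```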
3. Summary Graphs
To help users spot patterns in aggregate risk data, I added four summary graphs at the top of the page. These gave users a high-level overview of portfolio health, making it easier to flag system-wide issues or trends without scanning individual suppliers.
High-level risk trends gave users a faster path to prioritization and broader system awareness.
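Under the hood, graphs like these reduce to simple aggregation over the table’s rows. A sketch, reusing the hypothetical SupplierRow and RiskRating types from the earlier snippet:

```typescript
// Count suppliers per rating for one risk metric: the distribution a
// summary graph would plot. Reuses the hypothetical types sketched above.
function ratingDistribution(
  rows: SupplierRow[],
  metric: string
): Record<RiskRating, number> {
  const counts: Record<RiskRating, number> = {
    low: 0,
    moderate: 0,
    high: 0,
    critical: 0,
  };
  for (const row of rows) {
    const rating = row.riskMetrics[metric];
    if (rating !== undefined) counts[rating] += 1;
  }
  return counts;
}
```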
4. Interaction Elements
The dense UI could easily have become overwhelming. To make it usable, I layered in interaction enhancements that surfaced key priorities, reduced friction, and supported natural workflows. These additions helped users take confident action without losing context.
Key Interaction Enhancements
1. Supplier Count Confirmation
Helps users stay oriented by showing the total number of suppliers they’re monitoring, ensuring nothing is missing.
2. Approvals Shortcut
Adds a direct-action button so users can immediately address pending approvals without hunting through the system.
3. More Details Modals
Lets users click a small icon on summary graphs to open a modal with underlying data and a direct link to the relevant page.
4. Filters and Sorts
Provides flexible filtering and sorting so users can prioritize the most critical risks across potentially hundreds of suppliers.
5. Escalated High-Risk Indicators
Visually highlights critical risks with red, clickable cells. Clicking opens a modal with key drill-down data and direct links to deeper views in the system, helping users act without losing context (see the code sketch after the modal examples below).
Example modal opened from a summary graph, surfacing portfolio-level ratings for a specific risk metric.
Example modal opened from the megatable, showing supplier-specific risk details and linked case information.
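The modal pattern behind enhancements 3 and 5 reduces to a small contract: a clicked cell resolves to drill-down rows plus a deep link into the full system view. A hedged sketch with hypothetical names, building on the row model above:

```typescript
// What a clickable risk cell hands to the modal: data to show immediately,
// plus a deep link to the relevant detail page. Names are illustrative.
interface DrillDownPayload {
  title: string;
  rows: Array<{ label: string; value: string }>;
  deepLink: string; // route into the deeper system view
}

function buildRiskModal(supplier: SupplierRow, metric: string): DrillDownPayload {
  return {
    title: `${supplier.name}: ${metric}`,
    rows: [
      { label: "Current rating", value: supplier.riskMetrics[metric] ?? "n/a" },
      { label: "Last updated", value: supplier.lastUpdated },
    ],
    deepLink: `/suppliers/${supplier.supplierId}/risk/${encodeURIComponent(metric)}`,
  };
}
```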
5. Key Tradeoffs
Several constraints and competing needs shaped how the final design came together.
Rather than running separate usability tests, we treated the pilot as our primary evaluation phase. This let us assess the dashboard in real-world conditions, capturing performance, adoption, and edge cases that traditional task-based testing might have missed.
After securing signoff from subject matter experts that the design met functional and compliance standards, we launched the experience to 30 pilot users and asked for feedback two weeks in. We offered a short survey:
We also opened a direct Slack channel to allow informal feedback, which helped us gather context-rich input across roles. Responses were tracked centrally and reviewed by the product and engineering teams for feasibility and prioritization.
Excerpt from the pilot feedback tracker showing a sample of accepted, deferred, and future design suggestions.
- feedback from a pilot participant
Most feedback reinforced that the dashboard offered real value — but I still made three key refinements before full rollout:
Added export options to portfolio graphs so users could include them in decks without taking screenshots.
Introduced secondary and tertiary default sort logic to keep high-priority suppliers consistently near the top (sketched in code below).
Added hover tooltips to explain risk terms unfamiliar to adjacent roles outside the core AE group.
Each change was relatively lightweight but significantly improved usability at scale — and validated our approach of testing “in the wild” rather than in a lab.
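The sort refinement is the easiest of the three to see in code. A minimal sketch of chained primary, secondary, and tertiary comparators, again reusing the hypothetical types above; the specific keys are assumptions, not the shipped defaults:

```typescript
// Chained comparator: worst risk first, then most pending approvals, then
// name as a stable tiebreaker. Keys are illustrative; the real defaults
// were tuned from pilot feedback.
const severityRank: Record<RiskRating, number> = {
  critical: 0,
  high: 1,
  moderate: 2,
  low: 3,
};

function worstRating(row: SupplierRow): number {
  const ranks = Object.values(row.riskMetrics).map((r) => severityRank[r]);
  return ranks.length > 0 ? Math.min(...ranks) : severityRank.low;
}

function defaultSort(rows: SupplierRow[]): SupplierRow[] {
  return [...rows].sort(
    (a, b) =>
      worstRating(a) - worstRating(b) ||         // primary: highest severity first
      b.pendingApprovals - a.pendingApprovals || // secondary: approvals pending
      a.name.localeCompare(b.name)               // tertiary: stable alphabetical
  );
}
```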
In the first month after launch, the number of regular users increased by 41%, far exceeding expectations and signaling meaningful behavioral change.
Pilot users described the experience as transformative:
Feedback confirmed that the dashboard wasn’t just helping Accountable Executives — adoption spread to adjacent roles across the org, validating both the design model and its potential for future scaling.
This project clarified several principles I now bring to every design challenge.
Context is the real constraint
A simplified workflow isn’t enough if it doesn’t deliver immediate value in the user’s real environment. The dashboard worked because it surfaced risk clearly in a high-stakes, high-volume setting.
Testing in context revealed what labs couldn’t
Piloting the full experience let us observe adoption patterns, system strain, and role-specific differences we wouldn’t have seen in isolated tasks.
Clarity beat customization
Even without flexible widgets or drag-and-drop columns, users engaged deeply and spread the tool across roles — validating the strength of the interaction model itself.