Nextpoint’s Data Mining experience introduced Early Data Assessment (EDA) capabilities to help legal teams quickly analyze large volumes of raw data before formal review.
The product was designed to give litigation teams an earlier and more strategic understanding of what they were dealing with by surfacing patterns across custodians, file types, date ranges, metadata, and other signals before that data moved into downstream review workflows.
Rather than relying only on traditional document review after processing, the EDA dashboard aimed to make large-scale data more understandable at the earliest stage of a matter. By combining machine learning, advanced filtering, and interactive analytics, the experience helped users reduce scope, identify risk, and make more informed decisions earlier in the case lifecycle.
This work contributed to the foundation of a product that was later launched publicly as Nextpoint Data Mining, positioned around dramatically faster processing, real-time analytics, and large-scale native data analysis for complex ediscovery matters.
Hi-Fi mockup of the Figma prototype used for user testing.
In ediscovery, legal teams often need to make critical strategic decisions before a full document review begins, but the tooling at that stage tends to be limited, fragmented, or too technical for efficient early assessment.
Users needed a way to quickly understand what existed inside massive incoming datasets, including who the major custodians were, what kinds of files were present, where relevant time periods clustered, and how keyword or metadata-based patterns might change the scope of a matter. Without a strong early analysis layer, teams risked spending unnecessary time and money moving too much data into downstream workflows.
The challenge was to design a product that could make extremely large and complex datasets feel interpretable without overwhelming users. This required balancing power, flexibility, and clarity for legal professionals who needed fast insight, not just raw processing output.
I led UX design for the Early Data Assessment experience, working closely with product, engineering, and data science teams to understand how legal teams approached early case assessment and where existing workflows created friction.
This included industry research, persona development, and exploration of common ediscovery tasks to better understand how users reason about data before review. The strategy centered on making the dashboard useful as a first layer of analysis rather than as a secondary reporting surface after the fact.
I also partnered closely with data science to translate complex processing and machine learning outputs into user-facing structures that could support real decision-making. That meant designing not only for visibility into data, but for trust, scannability, and the ability to move from high-level patterns into deeper slices of analysis.
The design direction focused on giving legal teams a dashboard that felt both analytical and actionable. Rather than presenting users with raw system output, the experience was shaped around visual summaries, modular filtering, and search-driven exploration patterns that helped users progressively narrow large datasets.
I designed workflows that supported moving between overview and detail, allowing users to identify outliers, investigate trends, and create slices for further downstream use. Visualizations for custodians, file type distributions, timelines, keyword hotspots, and other signals were designed to help users orient quickly and act with more confidence.
Because this experience needed to work at very high scale, the interface emphasized clarity, information hierarchy, and performance-aware layouts that could support complex data environments without becoming visually overwhelming.
This project reinforced how important it is to make complex analytical output feel decision-ready rather than merely visible. In high-volume legal workflows, users do not just need more data. They need faster ways to understand what matters and act on it.
It also highlighted the value of close collaboration between UX, product, engineering, and data science. Many of the most useful product behaviors came from translating technical capabilities into structures that legal users could actually interpret and trust.
If this work were to continue, I would focus on:
Figma, Sketch, Zeplin, UserTesting.com, Mural
Slack, Zoom, Jira, Confluence