NEXTPOINT – EDA DASHBOARD

Overview

Nextpoint’s Data Mining experience introduced Early Data Assessment (EDA) capabilities to help legal teams quickly analyze large volumes of raw data before formal review.

The product was designed to give litigation teams an earlier and more strategic understanding of what they were dealing with by surfacing patterns across custodians, file types, date ranges, metadata, and other signals before that data moved into downstream review workflows.

Rather than relying only on traditional document review after processing, the EDA dashboard aimed to make large-scale data more understandable at the earliest stage of a matter. By combining machine learning, advanced filtering, and interactive analytics, the experience helped users reduce scope, identify risk, and make more informed decisions earlier in the case lifecycle.

This work contributed to the foundation of a product that was later launched publicly as Nextpoint Data Mining, positioned around dramatically faster processing, real-time analytics, and large-scale native data analysis for complex ediscovery matters.

Hi-Fi mockup of the Figma prototype used for user testing.

Goals for This Project

  • Design a dashboard that surfaced actionable insights in a visually accessible and intuitive format for legal professionals working with very large data volumes.
  • Enable users to search for and create custom data slices based on metadata, file types, keywords, custodians, and other filters.
  • Visualize large-scale datasets while maintaining clarity, performance, and responsiveness.
  • Support earlier strategic decision-making before data entered formal import and document review workflows.
  • Translate machine learning and processing outputs into interfaces that felt useful, credible, and understandable for legal users.

The Problem

In ediscovery, legal teams must often make critical strategic decisions before a full document review begins, yet the tooling available at that stage tends to be limited, fragmented, or too technical for efficient early assessment.

Users needed a way to quickly understand what existed inside massive incoming datasets, including who the major custodians were, what kinds of files were present, where relevant time periods clustered, and how keyword or metadata-based patterns might change the scope of a matter. Without a strong early analysis layer, teams risked spending unnecessary time and money moving too much data into downstream workflows.

The challenge was to design a product that could make extremely large and complex datasets feel interpretable without overwhelming users. This required balancing power, flexibility, and clarity for legal professionals who needed fast insight, not just raw processing output.

Research & Strategy

I led UX design for the Early Data Assessment experience, working closely with product, engineering, and data science teams to understand how legal teams approached early case assessment and where existing workflows created friction.

This included industry research, persona development, and exploration of common ediscovery tasks to better understand how users reason about data before review. The strategy centered on making the dashboard useful as a first layer of analysis rather than as a secondary reporting surface after the fact.

I also partnered closely with data science to translate complex processing and machine learning outputs into user-facing structures that could support real decision-making. That meant designing not only for visibility into data, but for trust, scannability, and the ability to move from high-level patterns into deeper slices of analysis.

Design Direction

The design direction focused on giving legal teams a dashboard that felt both analytical and actionable. Rather than presenting users with raw system output, the experience was shaped around visual summaries, modular filtering, and search-driven exploration patterns that helped users progressively narrow large datasets.

I designed workflows that supported moving between overview and detail, allowing users to identify outliers, investigate trends, and create slices for further downstream use. Visualizations for custodians, file type distributions, timelines, keyword hotspots, and other signals were designed to help users orient quickly and act with more confidence.
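To make the overview layer concrete: a dashboard chart such as the custodian or file-type distribution reduces to a simple aggregation over document metadata. The sketch below is purely illustrative — the `DocMeta` shape and its field names are assumptions, not Nextpoint's actual data model:

```typescript
// Minimal document-metadata shape, assumed for illustration only.
interface DocMeta {
  custodian: string;
  fileType: string;   // e.g. "email", "pdf", "spreadsheet"
  date: string;       // ISO-8601 date
}

// Count documents per value of a chosen metadata field —
// the basic aggregation behind a custodian or file-type chart.
function countBy(docs: DocMeta[], field: keyof DocMeta): Map<string, number> {
  const counts = new Map<string, number>();
  for (const doc of docs) {
    const key = doc[field];
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

const sample: DocMeta[] = [
  { custodian: "A. Rivera", fileType: "email", date: "2021-03-02" },
  { custodian: "A. Rivera", fileType: "pdf",   date: "2021-03-05" },
  { custodian: "B. Chen",   fileType: "email", date: "2021-04-11" },
];

// Overview view: two documents for A. Rivera, one for B. Chen.
const byCustodian = countBy(sample, "custodian");
```

Drilling from overview into detail then amounts to re-running the same aggregation over a filtered subset, which is what keeps the pattern fast enough for interactive exploration.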

Because this experience needed to work at very high scale, the interface emphasized clarity, information hierarchy, and performance-aware layouts that could support complex data environments without becoming visually overwhelming.

Impact

  • Helped define the foundational UX patterns and workflows for Nextpoint’s Data Mining / Early Data Assessment experience.
  • Created a first-layer analysis model that helped legal teams understand massive datasets earlier, before formal review began.
  • Established search, filtering, and slice-creation patterns that supported flexible exploration across metadata, keywords, custodians, and file types.
  • Contributed to a product foundation that was later launched publicly as Nextpoint Data Mining after beta validation with select clients.
  • Supported a product vision centered on faster processing, real-time analytics, and large-scale native data analysis for complex ediscovery matters.

My Role

As lead UX designer for the Early Data Assessment experience, I partnered with product, engineering, and data science teams to translate complex data signals into actionable, user-facing insights.

Methodologies & Responsibilities

  • Conducted industry research and persona development to understand common EDA workflows in legal technology.
  • Designed low- to high-fidelity prototypes visualizing custodian trends, file type distributions, date patterns, and query-based results.
  • Created flexible search interaction patterns allowing users to explore datasets through filters, keyword logic, and metadata attributes.
  • Partnered with data science to align machine learning outputs with usable UI components, including extracted text views, classification signals, and other analytical highlights.
  • Produced detailed design documentation, logic diagrams, and annotated specs to support engineering handoff.
  • Helped shape a scalable interaction model for moving from broad dashboard insights into narrower data slices for downstream use.

Key Features Designed

  • Interactive dashboard with charts and tables for custodians, file types, date ranges, and keyword hotspots.
  • Search-driven exploration tools, including modular filters and query builders for narrowing large datasets.
  • Machine learning integrations to surface extracted text, language classification, media detection, and other analytical insights.
  • Slice creation workflows for selecting, saving, and exporting custom data subsets for review.
  • Accessibility- and scalability-focused design supporting WCAG standards and high-volume data environments.
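One way to picture the slice-creation workflow is as a named combination of metadata filters and keyword terms applied to a collection. This is a hedged sketch under invented assumptions — the `Doc` and `Slice` shapes and the AND-based keyword logic are illustrative, not the product's implementation:

```typescript
// Hypothetical metadata shape; all field names are assumptions.
interface Doc {
  id: number;
  custodian: string;
  fileType: string;
  date: string;      // ISO-8601, so plain string comparison orders correctly
  text: string;
}

// A slice: a named, saveable combination of filters. Omitted fields match everything.
interface Slice {
  name: string;
  custodians?: string[];
  fileTypes?: string[];
  dateRange?: { from: string; to: string };
  keywords?: string[];   // every keyword must appear (AND logic, for illustration)
}

// Apply a slice to a collection, returning the matching subset.
function applySlice(docs: Doc[], slice: Slice): Doc[] {
  return docs.filter((d) =>
    (!slice.custodians || slice.custodians.includes(d.custodian)) &&
    (!slice.fileTypes || slice.fileTypes.includes(d.fileType)) &&
    (!slice.dateRange ||
      (d.date >= slice.dateRange.from && d.date <= slice.dateRange.to)) &&
    (!slice.keywords ||
      slice.keywords.every((k) => d.text.toLowerCase().includes(k.toLowerCase())))
  );
}

const corpus: Doc[] = [
  { id: 1, custodian: "A. Rivera", fileType: "email", date: "2021-03-02", text: "Q1 contract draft" },
  { id: 2, custodian: "B. Chen",   fileType: "pdf",   date: "2021-06-15", text: "Invoice for contract work" },
  { id: 3, custodian: "A. Rivera", fileType: "email", date: "2022-01-10", text: "Lunch plans" },
];

const slice: Slice = {
  name: "Rivera emails mentioning contracts, 2021",
  custodians: ["A. Rivera"],
  fileTypes: ["email"],
  dateRange: { from: "2021-01-01", to: "2021-12-31" },
  keywords: ["contract"],
};

const subset = applySlice(corpus, slice);
```

Modeling a slice as data rather than as an ad-hoc query is what makes it saveable and exportable for downstream review.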

Learnings & What I’d Do Next

This project reinforced how important it is to make complex analytical output feel decision-ready rather than merely visible. In high-volume legal workflows, users do not just need more data. They need faster ways to understand what matters and act on it.

It also highlighted the value of close collaboration between UX, product, engineering, and data science. Many of the most useful product behaviors came from translating technical capabilities into structures that legal users could actually interpret and trust.

If this work were to continue, I would focus on:

  • Validation at Scale: Study how legal teams use the dashboard across different matter sizes and data mixes to refine what should be surfaced by default.
  • Progressive Disclosure: Continue balancing overview and detail so users can move from high-level signals into deeper investigation without visual overload.
  • Workflow Continuity: Strengthen how data slices move from assessment into downstream review and production workflows.
  • Signal Explainability: Further improve how machine learning outputs are labeled and contextualized so users better understand what each signal means.
  • Performance-Aware Design: Continue refining the experience for extremely large datasets where speed and responsiveness are central to user trust.

Tools

Design & Research

Figma, Sketch, Zeplin, UserTesting.com, Mural

Communication

Slack, Zoom, Jira, Confluence