Case Study Presentation
Democratizing Machine Learning for Everyone
Anuja Harsha — Lead Product Designer
ML Functions · Cloud Software Group — WebFOCUS
I came in with zero ML background and got MIT certified to do this work. Over two years, I navigated engineering delays, layoffs, and resource constraints — leading design through it all.
Role & Expertise
Lead Product Designer (End-to-End Ownership)
Timeline
Jan 2023 – Jan 2025
SHIPPING 2026
Company
Cloud Software Group — WebFOCUS
Act I
Zero ML Knowledge. So I Got MIT Certified.
Turned a knowledge gap into an advantage. MIT certified. Months learning before designing.
I entered this project knowing nothing about machine learning. So I got obsessed.
MIT Professional Certificate
MIT xPRO, Boston · Product Design for Machine Learning & AI
- ML product lifecycle
- Design for AI systems
- Responsible ML practices
Weekly DS Embedding
Constant collaboration with Principal Data Scientist
- Model training logic
- Evaluation metrics
- Domain expertise transfer
AI-Accelerated Learning
Filled knowledge gaps in real-time
- Concept clarification
- Technical terminology
- Quick domain ramp-up
System Audit: The Baseline
Identified critical friction points in the legacy 9.2 workflow.

Fragmented Workflow
// 01_FRAGMENTATION
- ✕4+ scattered steps to train a single model
- ✕Drag "model pill" onto data flow canvas
- ✕Configure in disconnected popup windows
- ✕Hidden toolbar controls for execution

Hidden Configuration
// 02_HIDDEN_COMPLEXITY
- ✕Hyperparameters buried in right-click menus
- ✕Settings only accessible after training
- ✕No inline guidance or documentation
- ✕Technical jargon without explanation

Opaque Error States
// 03_DEAD_END_ERRORS
- ✕"Results not generated" with no explanation
- ✕No guidance on how to fix issues
- ✕Silent failures during model training
- ✕Users abandon workflow mid-process

Platform Disconnect
// 04_PLATFORM_DISCONNECT
- ✕No connection to main WebFOCUS workflows
- ✕Felt bolted-on, not platform-native
- ✕Multiple context switches required
- ✕No clear entry points from hub
Act II
No Users. No Access. So I Built a Proxy Network.
Three personas with conflicting needs. The existing experience served none of them well.
Gathering Insights Without Direct Access
How I built user empathy through proxy networks when direct research wasn't possible.
Enterprise security policy blocked direct access to end users. I could not interview the actual people using the tool.
Talked to everyone I could — support reps, technical staff, and data scientist friends outside work. If they touched ML, I picked their brain.
User Personas
Two distinct user types drove the dual-experience approach.
Techy Analyst
TECHNICAL · Self-sufficient power users who need right-click entry and advanced controls.
Financial Strategist
BUSINESS · Goal-oriented users who need guided workflows and plain-language explanations.
Dual-experience approach: Technical users get right-click entry + advanced controls. Business users get guided workflows + inline teaching.
Act III
From Black Box to 4-Step Guided Flow.
Designed a structured 4-step guided flow: Problem Type → Target → Predictors → Hyperparameters.
To increase adoption and discoverability, a landing page was a must.
It took real brainstorming to get there — our need for clean UX and easy decision-making vs. the legacy habit of dense tables for everything.
I introduced a **model card** pattern instead: each model gets its own card with the key details needed for an informed decision.
The System Blueprint
Physical sketches and architecture maps that defined how the new ML wizard would work within the WebFOCUS platform.

Sketching the "Train Model" wizard steps on paper.

Initial thoughts and requirements gathering.

Mapping the data scientist's journey through the system.

Connecting the disparate machine learning subsystems.

Defining decision trees for model training.

Conceptualizing the drag-and-drop interface.

Refining the step-by-step model configuration.

Solving for parameter fragmentation.

Exploring different layout options.

Visualizing the results dashboard.

The chosen direction for the ML workflow.

How the ML workflow redesign fits within the existing WebFOCUS shell and navigation.

How Train Model and Run Model connect and flow into each other by user type.

Entry points, data selection, and training steps within IQ Plugin.

Complete system architecture for all ML model types supported.
Mapping the Existing Black-Box Workflow
I documented every workflow step, every user decision point, and every place where users got stuck.
Data Selection
- >How users selected data
- >Data flow integration
- >Dataset compatibility
Training Configuration
- >How they configured training
- >Hyperparameter access
- >Model type selection
Model Execution
- >How models ran (or failed)
- >Tiny toolbar play icon
- >Hidden execution states
Results Interpretation
- >How results were interpreted
- >Confusing error messages
- >Unclear model outputs
The core problem wasn't the algorithm—it was the fragmented workflow. Users didn't know *where* to start or *how* to finish without error.
Three Critical Design Pivots
The mapping revealed the problems, but solving them required multiple iterations.
Split Train vs Run
Unified Train+Run experience mixing two mental models
Separate tabs for distinct mental models: Create vs Apply
RATIONALE: We were mixing two different mental models — creating models vs applying them. Splitting them simplified both UX and implementation.
Remove Data Flow from ML UI
Engineering wanted data flow visible inside ML UI
Data flow accessible elsewhere, ML UI stays focused
RATIONALE: After training models hundreds of times myself, I concluded the data flow was adding noise and risking users losing progress.
Model Cards vs Tables
Legacy UI used tables for everything, including models
Model cards for better scanning and comparison
RATIONALE: Despite initial pushback, cards became the primary way to choose a model, with tables reserved for dense tabular data like training logs.
These pivots weren't just UX decisions — they were architectural choices that simplified implementation while improving user experience. The structured guided flow emerged from our domain expert's answer to "What do you absolutely need?" → problem type, target, predictors, hyperparameters.
Act IV
10+ Iterations. One Breakthrough Screen.
10+ iterations on the confusion matrix. Productive tension with Data Science → best screen in the project.


Transformed a fragmented, error-prone technical process into a guided, linear wizard accessible to business users.
Before (original interface): Fragmented 4+ step workflow hidden behind context menus
After (redesign): Guided 4-step wizard with clear error handling
Balancing Model Control with Simplicity
Layered disclosure: serve multiple user types within a single experience.
Default Experience
Non-technical users
- Guided, safe, error-proof flow
- Clear steps, clear terminology
- Tooltips, wizards, onboarding text
- Inline explanations
- Progressive disclosure
Advanced Controls
Technical users
- Expandable sections for parameters
- Ability to adjust algorithms
- Feature selection options
- Training configuration
Expert Mode
Data scientists
- Full control available but not required
- Advanced tuning options
- Hyperparameter presets
- All within the same unified experience
Safety guardrails
- •Incompatible datasets blocked early (Step 1)
- •Inline warnings in plain language
- •Standardized error patterns from design system
- •Users always have a way out: go back, change input, or safely cancel
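The first guardrail, blocking incompatible datasets at Step 1 with plain-language warnings, might look something like this in code. A minimal sketch under assumed rules (minimum row count, supported column types); the function name, thresholds, and wording are hypothetical, not the product's actual validation logic:

```python
# Hypothetical Step 1 guardrail: reject incompatible datasets early and
# explain the problem in plain language, so users always have a way back.

def validate_dataset(columns: dict[str, str], min_rows: int, n_rows: int) -> list[str]:
    """Return plain-language warnings; an empty list means training can proceed."""
    warnings = []
    if n_rows < min_rows:
        warnings.append(
            f"This dataset has {n_rows} rows; at least {min_rows} are needed "
            "to train a reliable model. Try a larger dataset."
        )
    supported = {"number", "category"}
    unusable = [name for name, col_type in columns.items() if col_type not in supported]
    if unusable:
        warnings.append(
            f"These columns can't be used as predictors yet: {', '.join(unusable)}. "
            "You can remove them or pick a different dataset."
        )
    return warnings
```

Returning sentences instead of error codes is the point: the same check that blocks the workflow also tells the user how to fix it, which is what eliminated the dead-end "results not generated" pattern.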
Future Roadmap
- →Auto-suggesting best model type based on dataset
- →Deep hyperparameter presets and advanced tuning
- →Unified Train+Run single-screen concept
- →Heavier onboarding overlays / carousels
Default users never see expert controls unless needed. Experts can dive deep without wading through tutorials.
System Inspection: The Design Artifacts

Entry Points — Initial landing when no models exist
The Guided 4-Step Workflow
Every screen of the redesigned ML wizard.

Entry Point: Empty state with clear call-to-action
Legacy vs. Redesign
Legacy:
- ●Fragmented 4+ step workflow
- ●12+ clicks through data flows and menus
- ●Hidden hyperparameters behind right-click
- ●Confusing "results not generated" errors
Redesign:
- ●Structured 4-step guided flow
- ●7-9 clicks via right-click entry
- ●Clear error handling and validation
- ●Accessible for non-technical users


The Confusion Matrix
"The best screen in the entire UX revamp." — Principal Data Scientist
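For readers unfamiliar with the artifact: a confusion matrix tallies where a classifier was right and where it was wrong. The sketch below shows the computation plus the kind of plain-language framing the redesign aimed for; the function names, labels, and wording are illustrative, not the shipped screen:

```python
# Minimal confusion-matrix tally with a business-friendly summary.
# "positive" is the outcome of interest (e.g. a customer who churns).

def confusion_matrix(actual: list[str], predicted: list[str], positive: str) -> dict[str, int]:
    cells = {"true_positive": 0, "false_positive": 0,
             "false_negative": 0, "true_negative": 0}
    for a, p in zip(actual, predicted):
        if p == positive:
            cells["true_positive" if a == positive else "false_positive"] += 1
        else:
            cells["false_negative" if a == positive else "true_negative"] += 1
    return cells

def explain(cells: dict[str, int], positive: str) -> str:
    """Translate the four cells into one sentence a business user can act on."""
    right = cells["true_positive"] + cells["true_negative"]
    total = sum(cells.values())
    return (f"The model was right {right} of {total} times; "
            f"it missed {cells['false_negative']} real '{positive}' cases "
            f"and raised {cells['false_positive']} false alarms.")
```

The design challenge the 10+ iterations wrestled with lives in that second function: the four cells are trivial to compute, but "missed cases" and "false alarms" are what non-technical users actually need to weigh.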

Design System Components
Shared design system used across all three case studies. A unified language for coherence.





Act V
I Led This While Owning Three Other Products.
Dual-experience approach emerged from constraints. Led this while simultaneously owning ReportCaster and IQ Plugin.
After the January 2025 layoffs, I took on WebFOCUS Designer as well — juggling four major enterprise systems at once.
I was one of the few people left who understood the full ecosystem end-to-end.
The Explainability Deep Dive
How tackling the most complex ML visualization earned the trust to revamp the entire experience
My Principal Data Scientist handed me a screenshot from an external tool and said: "This is the explainability visualization I need in WebFOCUS. Can you figure it out?"
// REF_IMAGE: External explainability tool our Data Scientist wanted me to recreate for WebFOCUS
// NEW_DESIGN: Custom explainability visualization I created for WebFOCUS
The other designers I worked with before didn't really understand what they were doing—they just gave designs. But you sat with us, talked to us, and actually understood. That's why I trust you.
Marcus Horbach
Principal Data Scientist
Understanding before designing: I sat with the data scientists, learned the domain, and understood why explainability mattered before designing how to show it. This approach—earning trust through genuine understanding—unlocked the opportunity to redesign the entire ML training workflow from scratch.
Navigated PM Change. Earned DS Trust.
Weekly syncs with Principal Data Scientist. 10+ iterations on the confusion matrix alone.
DS Partnership
1:1s with Principal Data Scientist · ML concepts → UX decisions · Weekly UX + ML syncs
PM Transition
PM changed mid-project · Onboarded new PM to engineers · Early flows as shared language
Trust Earned
Principal DS became advocate · Owned ML + RC simultaneously · Leadership greenlit early
"Anuja turned one of our most complex ML flows into an experience users can finally understand." — Principal Data Scientist
Act VI
5/5 Validated. Patterns Became the Platform Standard.
5/5 SMEs found entry without help. Patterns became foundation for IQ Plugin and platform-wide AI strategy.
Impact & Validation
Demo-Ready at Scale
For the first time, ML was stable enough for 200+ person org-wide demos. Sales engineering could finally showcase the capability confidently.
Zero Abandonment
Eliminating dead-end errors didn't just save clicks; it stopped users from quitting. All SME testers completed the full workflow.
1 Core Pattern
The 4-step guided flow pattern was so robust it was directly inherited by the IQ Plugin, reducing future design/dev time.
New User Tier
Lowering the technical barrier allowed Business Analysts to self-serve, expanding the addressable market beyond just Data Scientists.
Voice of the Team
“The clarity of her designs, in spite of the underlying data science and machine learning complexity, is impressive and has greatly contributed to the success of our products. Her design solutions are rooted in a deep understanding of the purpose of the product.”
“During a User Acceptance Test session, Anuja observed me navigating the screen. I was highly impressed with Anuja's approach. Her design was clean, intuitive, and clearly addressed the needs of users across different skill levels.”
Honest Reflection
What I'd Push Harder For
Early Data Science Alignment: I would have established the "productive tension" earlier. By the time I challenged the Principal DS on the confusion matrix, we had already lost sprint cycles on less effective visualizations. Bringing design into the definition phase—not just the execution phase—would have saved ~3 weeks.
Where I'd Take It Next
Real-Time Visualization: Currently, users must finish training to see results. I'd implement streaming websocket updates to show the confusion matrix evolving during training. This would turn a passive wait time into an active monitoring experience, allowing users to abort bad runs early (saving compute costs).
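The streaming idea reduces to a training loop that emits snapshots instead of a single final result. A hedged sketch under toy assumptions (tallying correct/wrong per batch; in a real build each snapshot would be pushed over a websocket): every name and threshold here is hypothetical, not an existing backend:

```python
# Sketch of "streaming updates during training": yield a running tally after
# each batch so a live confusion-matrix view could redraw, and let users
# abort clearly bad runs early. Toy data and heuristics, not product code.

from typing import Iterator

def train_with_updates(batches: list[list[tuple[str, str]]]) -> Iterator[dict[str, int]]:
    """Yield a running tally of (actual, predicted) pairs after each batch."""
    tally = {"correct": 0, "wrong": 0}
    for batch in batches:
        for actual, predicted in batch:
            tally["correct" if actual == predicted else "wrong"] += 1
        yield dict(tally)  # snapshot a server could push to the client

def should_abort(snapshot: dict[str, int], min_seen: int = 50,
                 min_accuracy: float = 0.3) -> bool:
    """Abort-early heuristic: enough examples seen and accuracy clearly bad."""
    seen = snapshot["correct"] + snapshot["wrong"]
    return seen >= min_seen and snapshot["correct"] / seen < min_accuracy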
Next Case Study
Interested in working together?