NextDimensionAI

NextDimensionAI ships safer, faster healthcare voice agents with Hamming

For us, unit tests are Hamming tests. Every time we talk about a new agent, everyone already knows: step two is Hamming.

Simran Khara, Co-founder at NextDimensionAI

NextDimensionAI logo

Meet NextDimensionAI

NextDimensionAI builds voice agents for healthcare providers, handling scheduling, prescription refills, medical record lookups, and multi-step procedure workflows. Their agents integrate directly with EHR systems and act autonomously from taking the patient phone call to completing multi-step processes like appointment booking.

Because of this, reliability, latency, and HIPAA compliance are essential for every deployment. Building voice agents for clinical settings introduced challenges that traditional call testing couldn't keep up with. Healthcare organizations expect agents to operate with call-center-level reliability, respond quickly even during peak volumes, and maintain strict handling of PHI.

NextDimensionAI's agents often replace entire queues, so a single incorrect response or slow interaction can break trust with both providers and patients. Consistent, high-coverage testing is not just a technical requirement, but a business-critical capability.

With Hamming, NextDimensionAI was able to

Run scenario-based tests of real patient behavior including pauses, accents, and edge cases
Reduce latency by 40% through controlled tests across carriers and configurations
Achieve 99% production reliability through targeted regression tests

The Challenge: Manual Testing Couldn't Keep Up with Complex Clinical Workflows

The team needed a way to validate thousands of possible call patterns, ranging from simple scheduling workflows to complex, multi-step SOPs, without slowing down their rapid development cycles. That need led them to Hamming.

Clinical voice agent dashboard

Solution: Automated, Scenario-Driven Testing at Scale

01

Scenario Generation Mirrors Real Patient Behavior

NextDimensionAI uses its proprietary simulation-building system to create scenario-based test cases in Hamming that reflect real-life calls. These include hundreds of scenarios modeled on real patient behavior, ranging from straightforward appointment requests to incomplete information, pauses, repeated questions, and unexpected call patterns.

Scenario coverage is not limited to clean, idealized flows. The team intentionally creates cases that mimic real patient behavior: long pauses, interrupted speech, and noisy environments.

These qualitative factors were impossible to capture consistently during manual calling. Engineers tag each test and then run large batches in parallel. Instead of making manual calls one by one, they can execute an entire suite at once and immediately see:

  • Which scenarios pass or fail
  • How prompts change or affect LLM behavior
  • How the agent handles variations in noise, pauses, or accents
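The tag-and-batch workflow described above can be sketched in a few lines. This is an illustrative harness only, not Hamming's actual API: the `Scenario` structure, `run_scenario` stub, and all names are hypothetical stand-ins for a real test runner that would dial the agent and score the transcript.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    tags: list       # e.g. ["scheduling", "noisy-audio"] for filtering suites
    caller_turns: list  # simulated caller utterances fed to the agent

def run_scenario(scenario: Scenario):
    # Stand-in for a real simulated call; a real harness would place the
    # call against the agent and evaluate the resulting transcript.
    passed = "fail" not in scenario.name
    return scenario.name, passed

scenarios = [
    Scenario("book-appointment-clean", ["scheduling"],
             ["I'd like to book a checkup."]),
    Scenario("book-appointment-long-pause", ["scheduling", "pause"],
             ["I'd like to...", "<5s pause>", "...book a checkup."]),
]

# Execute the whole suite in parallel instead of calling one by one.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(run_scenario, scenarios))

for name, ok in results.items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

Tagging each scenario makes it cheap to slice the suite (for example, re-running only `"pause"` cases after a VAD change) while the parallel executor replaces the one-call-at-a-time manual process.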

Hamming has become part of their default engineering workflow. The team treats Hamming runs the same way a software team treats unit tests.

Scenario generation dashboard
02

Controlled Load Testing Across Carriers & Configurations

NextDimensionAI also uses Hamming to benchmark the impact of configuration changes. They run controlled tests to measure latency effects from:

  • Different compute regions
  • PSTN providers
  • LLM temperature and response settings
  • TTFW (time-to-first-word) adjustments

Through these controlled tests, NextDimensionAI identified a configuration that reduced latency by 40%.

Hamming's testing suite lets NextDimensionAI run hundreds of synthetic calls under controlled conditions and quantify the impact of each configuration change with precision.

For example, the team measured how small fluctuations in TTFW affected patient perception, or how different PSTN carriers performed under load. It also gave them historical reference points, so as new models, telephony providers, or infrastructure environments became available, they could quickly evaluate whether switching would be beneficial.
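A minimal sketch of this kind of configuration benchmarking is shown below. The configuration names and latency samples are invented for illustration; the point is the method: collect per-call latency under each controlled configuration, then compare medians against a baseline.

```python
import statistics

# Hypothetical per-call latency samples (ms) from controlled test calls
# under two configurations; names and numbers are illustrative only.
latencies_ms = {
    "us-east + carrier-A": [820, 790, 845, 810, 835],  # baseline
    "us-west + carrier-B": [480, 510, 495, 505, 470],  # candidate
}

baseline = statistics.median(latencies_ms["us-east + carrier-A"])
for config, samples in latencies_ms.items():
    p50 = statistics.median(samples)
    reduction = (baseline - p50) / baseline * 100
    print(f"{config}: p50={p50:.0f}ms ({reduction:+.0f}% vs baseline)")
```

With the made-up numbers above, the candidate configuration's median is roughly 40% below baseline, the same order of improvement the team reported. Keeping these runs as historical reference points is what lets new carriers or models be evaluated quickly later.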

NextDimensionAI concurrency load testing dashboard
03

Preventing Regressions After Launch

Production calls are monitored continuously by internal QA tools and supervisor agents. When a live call fails, the team reviews it and converts it into a new Hamming test case.

Every improvement goes through two steps:

  • Run the new post-production test case
  • Re-run all original pre-production tests

This ensures incremental improvements don't break existing functionality. Over time, the test suite grows to reflect real patient behavior.
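The two-step loop above can be expressed as a simple invariant. This sketch is hypothetical, not Hamming's API: `run_test` is a stand-in for replaying a call against the agent, and the test-case dictionaries are placeholders.

```python
# Step 1: a failed live call is formalized as a new reproducible case.
# Step 2: the original pre-production suite is re-run in full.

def run_test(test_case: dict) -> bool:
    # Stand-in: a real runner would replay the call and score the outcome.
    return test_case.get("passes", True)

pre_production_suite = [
    {"name": "schedule-basic"},
    {"name": "refill-basic"},
]

# A flagged production failure, now fixed, becomes a new test case.
new_case = {"name": "caller-strong-accent-refill", "passes": True}
assert run_test(new_case), "new post-production case must pass"

# Re-run the original suite so the fix can't silently break old behavior.
regressions = [t["name"] for t in pre_production_suite if not run_test(t)]
assert not regressions, f"regressions detected: {regressions}"

pre_production_suite.append(new_case)  # the suite grows with every failure
print(f"suite size: {len(pre_production_suite)}")
```

The key property is that the suite is append-only: every real-world failure permanently raises the bar for future releases.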

NextDimensionAI's QA loop blends automated evaluation with human-in-the-loop (HITL) review. Their supervisor agents label calls based on intent accuracy, conversation quality, and completion outcomes.

When a call is flagged, a forward-deployed AI engineer reviews the agent's performance, identifies the root cause, and formalizes it as a reproducible Hamming test. This means the entire organization learns from every real failure.

Over time, this has created a comprehensive library of real-world cases: patients with strong accents, callers who provide too much or too little information, prescription issues, or behavior that falls outside any scripted flow. Each of these becomes a test the agent must pass before any future release.

NextDimensionAI regression testing dashboard
04

Ensuring HIPAA Compliance

Since NextDimensionAI operates in healthcare, HIPAA requirements are built directly into their test cases. The team creates scenarios that explicitly check whether an agent is acting in compliance with HIPAA. NextDimensionAI also tests for conversational edge cases, such as callers attempting to bypass verification by volunteering extra details, or asking the agent to read back medical history.
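One such compliance scenario can be sketched as an assertion on agent behavior. Everything here is a hypothetical stand-in: `agent_reply` mocks a call transcript, and the refusal markers are invented examples of what a real evaluator might look for.

```python
# Illustrative HIPAA check: an unverified caller asking for medical
# history must be refused, never read PHI back.

def agent_reply(caller_verified: bool, request: str) -> str:
    # Mock agent for illustration; a real test would use a live transcript.
    if "medical history" in request and not caller_verified:
        return "I can't share that until I verify your identity."
    return "Here is the information you requested."

REFUSAL_MARKERS = ("verify your identity", "cannot share", "can't share")

def check_hipaa_scenario(verified: bool, request: str) -> bool:
    reply = agent_reply(verified, request)
    if not verified and "medical history" in request:
        # The only passing outcome for an unverified PHI request is refusal.
        return any(marker in reply for marker in REFUSAL_MARKERS)
    return True

assert check_hipaa_scenario(False, "Please read back my medical history")
assert check_hipaa_scenario(True, "Please read back my medical history")
```

The same pattern covers the bypass attempts mentioned above, such as callers volunteering extra details to skip verification: each becomes a scenario whose passing condition is a refusal.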

HIPAA compliance testing

“Hamming's responsiveness and support feel like an extension of our engineering team.”

Simran Khara, Co-founder at NextDimensionAI


Why NextDimensionAI Chose Hamming

Before Hamming, testing was manual and slow. Engineers could only make ~20 calls a day, and full-team "testing sessions" weren't sustainable. Qualitative issues like pauses, hesitations, and real patient behavior weren't captured reliably.

During platform evaluation, the team directly compared Hamming against two other voice agent testing platforms. They ran identical scenario suites, pushed each system with parallel load, and examined how each platform handled call logs, outcome completion, and result interpretation. Hamming remained stable even when pushed to 200 simultaneous calls.

NextDimensionAI chose Hamming because:

Hamming had the lowest friction from "we should test this" to running actual tests
New engineers could onboard quickly
Competing platforms struggled with parallel load, with test agents going silent under stress
Hamming reliably handled 200 concurrent test calls
The workflow matched how their engineering team already worked

The Impact

Hamming has become the foundation of how NextDimensionAI builds, tests, and ships healthcare voice agents. With a consistent testing workflow and empirical performance data, the team has been able to scale both speed and reliability in ways that weren't previously possible with manual testing.

Using Hamming didn't just improve voice agent reliability; it changed how the team structures development. Engineers can now rely on objective scenario coverage to decide deployment timing. As their product development expands to more specialties and workflows, Hamming provides a scalable foundation for consistent behavior across all agents.

01

Faster Development Cycles

With Hamming embedded into their workflow, NextDimensionAI moves quickly from design to production. Each stage, from design and development to testing, is supported by a consistent QA process that makes it easy to validate behavior and ship updates with confidence. Their voice agents now achieve 99% reliability in production calls.

02

Better Customer Onboarding

Hamming is now part of NextDimensionAI's onboarding experience. The team runs live simulations of 50 simultaneous calls, giving their customers a real-time view of how the agent performs inside their own AI platform.

03

A Scalable QA System for Their Roadmap

As NextDimensionAI expands their product offering, Hamming enables them to test at scale without slowing down development. The growing library of scenarios, benchmarks, and real-world cases ensures that each deployment builds on the reliability of the last.

“We go to our customers knowing what will work and what won't, not hoping. That's a big difference in how fast we can move.”

Simran Khara, Co-founder at NextDimensionAI


Featured customer stories

Grove Logo

How Grove AI ensures reliable clinical trial recruitment with Hamming

Podium Logo

How Hamming enables Podium to consistently deliver multi-language AI voice support at scale