NEMIC Expert Advisor: Jennet Toyjanova on Getting Design Inputs Right

Jennet Toyjanova, PhD - NEMIC Expert Advisor, Combination Products

Director of Product Development Management, Nabsys

Introduction

Picture this: a product team has spent months carefully developing a device. They’ve brainstormed, engineered, debated requirements, and finally reached the design verification phase. Spirits are high. The team runs their tests, ready to confirm the device meets all requirements — only to discover it doesn’t. The product falls short, and months of work unravel in an instant.

Unfortunately, this isn’t rare; it happens with alarming frequency (source). And failing at verification isn’t even the worst case: imagine the device actually makes it to market, only to fail in the hands of patients and clinicians. That’s when recalls happen, and nearly 37,000 medical devices have been recalled since 2012 (FDA data). The financial impact is staggering, but the reputational and human cost is even higher.

Why do brilliant, experienced teams keep making this mistake? Medical device development is undeniably complex, but one repeat offender is surprisingly simple: vague or incomplete design inputs. It is one of the most common (and costly) mistakes teams make: projects leap into verification testing before the inputs are locked down and every product team has committed to them. Those inputs should be clear, measurable, and tied directly to user needs.

When requirements are not clear and measurable, verification becomes a guessing game and regulators eventually uncover what the team missed. In this post, I’ll unpack some of the biggest traps experienced design teams fall into with design inputs and verification. More importantly, I’ll share practical strategies that not only prevent costly rework but also smooth the path to regulatory approval and give teams confidence that they’re building the right product for the right reasons.

Where Teams Go Wrong

Even the best teams stumble when design inputs and verification get blurred. Here are the most common — and costly — mistakes I’ve seen:

1. Writing vague, non-testable inputs
A requirement like “device should be easy to use” sounds good on paper, but it’s meaningless in practice. What does easy mean? One-handed operation? A task completed in under 10 seconds? Less than two user errors during a simulated use test? Unless the requirement can be objectively measured, the verification team is left guessing — and regulators won’t accept guesswork.
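
To make that concrete, here’s a minimal sketch in Python of what “easy to use” looks like once it’s decomposed into objectively measurable inputs. All IDs, metrics, and thresholds below are hypothetical, chosen only to illustrate the shape of a testable requirement:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str      # unique ID so the input stays traceable
    statement: str   # what the device shall do
    metric: str      # what gets measured during verification
    acceptance: str  # objective pass/fail criterion

# "Device should be easy to use" is not verifiable as written.
# Decomposed into testable inputs (all values illustrative only):
usability_inputs = [
    Requirement("UI-001", "Device shall be operable with one hand",
                "One-handed task completion in simulated use",
                ">= 95% of participants complete the task one-handed"),
    Requirement("UI-002", "Device shall be quick to operate",
                "Time to complete the delivery task",
                "<= 10 seconds for >= 90% of participants"),
    Requirement("UI-003", "Use errors shall be rare",
                "Use errors observed per participant",
                "< 2 errors per participant, none critical"),
]

for r in usability_inputs:
    print(f"{r.req_id}: {r.statement} -- pass if {r.acceptance}")
```

Each row now gives the verification team a pass/fail criterion instead of an adjective.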

2. Jumping into test design too early
In the rush to show progress, teams sometimes begin drafting verification protocols before finalizing design inputs. The result? Circular logic — tests that only prove what they were designed to measure, not what regulators or users actually care about. And since inputs almost always shift later, verification plans have to be torn up and rewritten, wasting time, budget, and morale.

3. Assuming verification equals clinical relevance
Verification tells you whether design outputs match design inputs, but it doesn’t guarantee that the device works in real-world conditions. A drug delivery system may pass flow-rate testing in the lab but fail in clinical simulation because user technique wasn’t included in the verification test plan. That’s the critical difference between verification (“Did we build it right?”) and validation (“Did we build the right thing?”). Overlooking this distinction leads to nasty surprises at FDA or CE review.

Mini Case Study: The Pain of Rework

I once worked with a team developing an electromechanical device. Their original design input read:

  • “Device should be safe under normal electrical use.”

This requirement sounded reasonable, but it wasn’t SMART: it was neither specific, measurable, nor traceable. When regulators reviewed the submission, they asked the obvious follow-ups: What exactly does “safe” mean? Safe at what voltage? Under what conditions? Against what leakage-current limit?

Because the input wasn’t clearly tied to ISO electrical safety standards, the team had to go back and completely redefine their requirements. The updated input became:

  • “The device shall comply with IEC 60601-1 leakage current requirements, maintaining patient leakage current ≤ 100 µA under normal conditions and ≤ 500 µA under single-fault conditions.”

That’s a SMART input — specific, measurable, and directly traceable to both user safety and regulatory standards. Unfortunately, the change came late. Six months of verification work had to be redone, pushing the launch back by nearly a year.
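
Because the rewritten input carries explicit numeric limits, it translates almost directly into an automated verification check. Here’s a minimal sketch; the `measure_leakage_ua` helper and its readings are stand-ins for a real safety analyzer, not part of any actual test setup:

```python
# Limits taken from the rewritten design input above
# (IEC 60601-1 patient leakage current).
LIMITS_UA = {
    "normal": 100.0,        # <= 100 uA under normal conditions
    "single_fault": 500.0,  # <= 500 uA under single-fault conditions
}

def measure_leakage_ua(condition: str) -> float:
    """Stand-in for a real safety-analyzer reading (illustrative data)."""
    readings = {"normal": 42.0, "single_fault": 310.0}
    return readings[condition]

def verify_leakage() -> dict:
    """Return pass/fail per condition, traceable to the design input."""
    results = {}
    for condition, limit in LIMITS_UA.items():
        measured = measure_leakage_ua(condition)
        results[condition] = {
            "measured_ua": measured,
            "limit_ua": limit,
            "pass": measured <= limit,
        }
    return results

if __name__ == "__main__":
    for condition, result in verify_leakage().items():
        print(condition, result)
```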

The lesson? Regulators scrutinize your requirements, not just your test results, and a SMART input written from the start would have saved the team months of rework.

Potential Lessons from Recent Recalls

When I look at recent FDA recall data (https://datadashboard.fda.gov/oii/cd/recalls.htm), I can’t help but see echoes of design input gaps. Now, to be clear: I’m not claiming design inputs were definitively the cause (recall notices rarely spell that out in detail). But from my perspective, the failure modes line up closely with issues that could have been addressed through stronger, clearer requirements.

  • Philips Azurion interventional X-ray systems (2024)
    The FDA noted a “potential loss of imaging functionality,” with the root cause tagged as Device Design. To me, this points back to whether design inputs around restart behavior, system resilience, and hazard mitigation were fully defined. Stronger inputs in these areas might have reduced the risk.

  • Medtronic Pipeline Vantage embolization devices (2025, Class I)
    These devices failed to properly attach or stay attached during patient use, causing serious clinical events. While the FDA summary doesn’t explicitly list “design” as the root cause, this maps directly to performance/apposition requirements that should have been locked into design inputs and validated under realistic vessel geometries.

  • Stryker Mako robotic system (2024)
    The recall cited increased error codes when switching applications without restart, with root cause noted as Device Design. This appears to be a gap in requirements for state management and “hot-switch” workflows. If those inputs had been explicit — and verified — the risk might have been caught earlier.

These examples don’t prove that design inputs were solely at fault, but the pattern is clear: when requirements are vague, incomplete, or don’t fully capture real-world conditions, problems surface later in ways that feel eerily familiar. Even the largest, most sophisticated organizations aren’t immune to this.

How to Get It Right

Avoiding painful rework and recalls starts with clarity and discipline from day one. Here are practices that consistently separate successful teams from struggling ones:

1. Make design inputs SMART
Specific, Measurable, Achievable, Relevant, and Traceable.
Bad input: “Device should deliver medication quickly.”
Good input: “Device shall deliver 2.0 mL ± 5% within 10 seconds at ambient 20–25°C.”
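
An input written this way maps one-to-one onto verification assertions. A minimal pytest-style sketch, where the measured values are placeholders for real instrument readings:

```python
# Each clause of the SMART input becomes one objective check.
# All measured values are placeholders for real test data.
def test_delivery_requirement():
    delivered_ml = 2.04    # measured delivered volume (placeholder)
    delivery_time_s = 8.7  # measured delivery time (placeholder)
    ambient_c = 22.5       # recorded ambient temperature (placeholder)

    assert 20.0 <= ambient_c <= 25.0        # test ran within specified conditions
    assert abs(delivered_ml - 2.0) <= 0.10  # 2.0 mL ± 5%
    assert delivery_time_s <= 10.0          # delivered within 10 seconds
```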

2. Build traceability from day one
A strong design history file should show a clean chain from user need → design input → output → verification method. A well-built traceability matrix isn’t just for audits — it’s a map that keeps your whole team aligned.
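
Even a lightweight matrix can be kept machine-checkable so gaps surface early. A minimal sketch with hypothetical record IDs; a real project would pull these rows from its requirements management tool:

```python
# Hypothetical traceability records: user need -> input -> output -> method.
trace_matrix = [
    {"need": "N-01", "input": "DI-001", "output": "DO-014",
     "verification": "VT-007 (flow-rate bench test)"},
    {"need": "N-01", "input": "DI-002", "output": "DO-015",
     "verification": None},  # gap: no verification method yet
    {"need": "N-02", "input": "DI-003", "output": "DO-021",
     "verification": "VT-010 (leakage current test)"},
]

# Flag any design input that cannot be traced to a verification method.
gaps = [row["input"] for row in trace_matrix if not row["verification"]]
if gaps:
    print("Inputs missing a verification method:", ", ".join(gaps))
```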

3. Involve the right people early
Don’t let engineers write inputs in isolation. Clinicians can surface overlooked use cases, regulatory staff can flag applicable standards, and quality experts can tie inputs to risk management. Collaboration here avoids rewriting requirements midstream.

4. Link verification to risk management
Verification should prove not just performance, but safety. Connecting tests to your FMEA or risk analysis shows regulators you’re prioritizing what matters most — and it keeps surprises from surfacing late.
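
One simple way to make that linkage auditable is to record which verification tests cover each risk item and flag the gaps automatically. A minimal sketch; the risk IDs, severities, and test IDs are all hypothetical:

```python
# Hypothetical FMEA rows: risk ID, severity (1-5), and covering tests.
fmea = [
    {"risk": "R-01", "severity": 5, "tests": ["VT-007", "VT-012"]},
    {"risk": "R-02", "severity": 4, "tests": []},   # uncovered mitigation
    {"risk": "R-03", "severity": 2, "tests": ["VT-010"]},
]

# Surface high-severity risks with no linked verification test,
# so gaps are caught at planning time rather than at review.
uncovered = [row["risk"] for row in fmea
             if row["severity"] >= 4 and not row["tests"]]
print("High-severity risks without verification coverage:", uncovered)
```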

5. Treat verification as a design tool
The best teams don’t wait until the end. They use early feasibility and engineering testing as “dry runs” to refine inputs and test methods before locking them in. This mindset turns verification into a design enabler rather than a compliance burden.

Conclusion

Verification is only as strong as the design inputs it measures against. When inputs are vague, tests become meaningless; when they’re clear and traceable, verification becomes a powerful tool for both compliance and innovation.

The cost of getting this wrong is high: delays, recalls, and lost trust. But the payoff for getting it right is even higher: faster approvals, fewer surprises, and confidence that your team is building the right thing the right way.

So, ask yourself: when was the last time your team challenged whether your design inputs were truly testable?

Reference: https://www.medicaldesignbriefs.com/component/content/article/9924-34454-162



Jennet Toyjanova - Director of Product Development Management, Nabsys

