
Harriet Kamendi, PhD, DABT
I design FDA evidence that protects capital · Regulatory Toxicologist, PhD DABT · Medical Devices · Diagnostics · Drugs · Biologics · Medical Foods · Founder & CEO, Kandih BioScience
Your 510(k) Timeline Is Decided Months Before You File
Week seven of FDA review.
The email lands.
Subject: Additional Information Request
“Provide clarification regarding exposure assumptions relative to intended use.”
The toxicological risk assessment modeled once-daily adult use on intact skin.
The Instructions for Use allowed multiple daily uses across broader patient populations — including compromised tissue.
The lab data was fine. The cytotoxicity results were clean. The submission was complete.
But the architecture wasn’t aligned.
And that’s what stopped the clock.
I’ve seen this repeatedly in early-stage startups. Most 510(k) delays are not caused by failed testing.
They are caused by upstream structural decisions that surface during review.
Why Do 510(k) Submissions Get Delayed?
In practice, most 510(k) delays are triggered by misalignment between:
• Toxicological risk assessment (TRA) assumptions
• Intended use and labeling (IFU)
• Material comparability versus the predicate
• ISO 10993 contact categorization
• Extractables/leachables exposure modeling
These gaps often appear during FDA Additional Information (AI) requests.
The issue is rarely missing data.
It is usually inconsistent reasoning.
Here are the five decisions that quietly determine whether your review proceeds efficiently — or stalls.
1. Why Does an FDA AI Letter Often Focus on Exposure Assumptions?
Under ISO 10993-1 and ISO 14971, biocompatibility evaluation is driven by intended use and foreseeable use.
Exposure modeling must mirror labeling.
In the opening scenario:
The TRA assumed:
• Adult population
• Once-daily exposure
• Intact skin contact
The IFU allowed:
• Multiple daily uses
• Broader populations
• Potential compromised tissue contact
When use frequency changes, exposure changes. When exposure changes, toxicological justification changes.
FDA reviewers look for internal consistency first. If labeling and toxicology diverge, the entire safety narrative becomes unstable.
Founder insight: If marketing modifies intended use late in development, toxicology must be updated immediately. Submission consistency is non-negotiable.
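The arithmetic behind this is simple but unforgiving. Here is a minimal sketch (all numbers are hypothetical, chosen only for illustration) of how a change in use frequency scales daily exposure and shrinks the margin of safety:

```python
# Hypothetical numbers for illustration only -- not from any real submission.

def daily_exposure_ug(dose_per_use_ug: float, uses_per_day: int) -> float:
    """Worst-case daily exposure to a leachable, in micrograms."""
    return dose_per_use_ug * uses_per_day

def margin_of_safety(tolerable_intake_ug_day: float, exposure_ug_day: float) -> float:
    """Margin of safety = tolerable intake / estimated exposure."""
    return tolerable_intake_ug_day / exposure_ug_day

DOSE_PER_USE_UG = 50.0    # hypothetical leachable dose per application
TOLERABLE_INTAKE = 600.0  # hypothetical tolerable intake (ug/day)

# TRA assumption: once daily.  IFU worst case: four times daily.
tra_exposure = daily_exposure_ug(DOSE_PER_USE_UG, uses_per_day=1)
ifu_exposure = daily_exposure_ug(DOSE_PER_USE_UG, uses_per_day=4)

print(margin_of_safety(TOLERABLE_INTAKE, tra_exposure))  # 12.0
print(margin_of_safety(TOLERABLE_INTAKE, ifu_exposure))  # 3.0
```

Same compound, same data, same tolerable intake. The labeled use frequency alone cut the margin of safety by a factor of four.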
2. What Happens When “Same Material” Isn’t Fully Documented?
One engineering team confidently stated:
“It’s the same polyurethane as the predicate.”
On paper, yes.
In practice:
• Additives differed
• Supplier grade differed
• Sterilization changed from EtO to gamma
Under ISO 10993-1, evaluation applies to the finished device — including processing and sterilization.
Gamma irradiation alters degradation chemistry. Additives influence extractables. Supplier variation affects impurity profiles.
FDA requested clarification on chemical characterization relative to the predicate.
There was no formal comparability analysis.
Additional analytical work followed.
Founder insight: Predicate equivalence requires documented material comparability — not assumption. If you cannot table the differences clearly, reviewers will.
3. How Does ISO 10993 Contact Category Impact a 510(k)?
A device was categorized as surface contact.
Human factors testing later revealed foreseeable use on compromised tissue.
Under ISO 10993-1, contact type and duration are defined by worst-case foreseeable use.
Surface contact and breached barrier contact drive different biological endpoint expectations.
When the contact category shifts:
• The test matrix shifts.
• The risk assessment shifts.
• The regulatory justification shifts.
This is not a minor revision.
It is structural.
Founder insight: Contact categorization should be finalized only after realistic use-case mapping. Optimistic classification is a common source of rework.
4. Why “Running the Standard Matrix” Is Not a Strategy
A Series A company inherited predicate testing and told their CRO:
“Run the standard biocompatibility matrix.”
They repeated irritation, sensitization, and systemic toxicity studies without conducting a formal gap analysis.
Under ISO 10993-1, biocompatibility is risk-based. Testing must be justified relative to:
• Materials
• Processing
• Intended use
• Predicate differences
Six months later, FDA did not question missing endpoints.
They asked:
“Provide rationale for selected endpoints and justification for omission of others.”
The team had replicated data.
They had not documented reasoning.
Founder insight: Over-testing without structured gap analysis is not de-risking. It signals lack of regulatory architecture.
5. What Triggers AI Questions About Extractables and AET?
Another team submitted a technically strong extractables report.
Clear peak identification. Defined extraction conditions. Robust analytics.
FDA responded:
“Provide justification that identified extractables do not pose toxicological risk at maximum clinical exposure.”
The issue was alignment.
Chemistry calculated the Analytical Evaluation Threshold (AET) based on device mass.
Toxicology modeled exposure based on repeated clinical use.
Frequency assumptions diverged.
Under ISO 10993-17 and principles reflected in USP <1661>, extractables evaluation must explicitly connect:
Compound identification → Estimated patient exposure → Toxicological threshold → Margin of safety.
If chemistry and toxicology operate separately, this linkage becomes fragile.
Founder insight: Extractables data without integrated exposure modeling invites AI requests. Toxicology must co-own E&L from project kickoff.
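One way to make that linkage explicit is to derive the AET and the exposure estimate from the same clinical-use assumptions. A simplified sketch follows; all values are hypothetical, and real AET derivations under ISO 10993-17/-18 add factors this sketch omits (uncertainty factors, body-mass normalization, analytical recovery corrections):

```python
# Simplified illustration only; hypothetical values.
# Real AET calculations under ISO 10993-17/-18 also apply uncertainty
# factors and analytical-recovery corrections not shown here.

def aet_ug_per_ml(dbt_ug_per_day: float,
                  devices_extracted: int,
                  extract_volume_ml: float,
                  devices_per_patient_day: float) -> float:
    """Analytical Evaluation Threshold: the extract concentration at which
    a compound could deliver the dose-based threshold (DBT) to a patient."""
    return (dbt_ug_per_day * devices_extracted) / (
        extract_volume_ml * devices_per_patient_day)

# Chemistry's assumption: one device per patient per day.
aet_chem = aet_ug_per_ml(1.5, devices_extracted=3,
                         extract_volume_ml=30.0, devices_per_patient_day=1.0)

# Toxicology's clinical model: three uses per day -> threshold tightens 3x.
aet_tox = aet_ug_per_ml(1.5, devices_extracted=3,
                        extract_volume_ml=30.0, devices_per_patient_day=3.0)

print(aet_chem)  # 0.15 ug/mL
print(aet_tox)   # 0.05 ug/mL
```

Peaks reported as "below AET" at 0.15 µg/mL may be well above the threshold toxicology needs at 0.05 µg/mL. That gap is exactly what an AI letter surfaces.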
Common Causes of 510(k) Toxicology Delays
Most AI letters related to biocompatibility stem from:
• Misalignment between IFU and toxicological exposure assumptions
• Incomplete material comparability documentation
• Incorrect ISO 10993 contact categorization
• Absence of a formal risk-based gap analysis
• Failure to link extractables data to toxicological thresholds and clinical exposure frequency
These are architectural gaps, not laboratory failures.
The Structural Pattern
In each case:
The device was not unsafe. The lab data was not defective.
The reasoning was incomplete.
Toxicology in a 510(k) is not a checklist of tests.
It is the integrated logic that connects:
Materials → Processing → Intended use → Exposure → Biological response → Margin of safety.
When that logic is aligned early, review is efficient.
When it isn’t, you don’t fix a study.
You rebuild the safety story under review pressure.
Final Thought for Founders
Most founders assume regulatory delays happen at submission.
In reality, they are seeded months earlier in design decisions that feel operational, not toxicological.
Toxicology is not a testing package.
It is regulatory architecture.
And architecture determines whether your 90-day review stays 90 days — or quietly becomes something else.
