AI Vulnerability
Quantuma scores portfolio companies on their vulnerability to AI disruption on a 1–5 scale. Investment teams use these scores for portfolio monitoring, underwriting, and investor communications. AI Vulnerability assessments currently cover companies in the software sector only.
Where it appears
The AI Vulnerability — Software Exposure section appears on the Lender Overview page. It shows:
- Software Exposure — Total fair value of software holdings (e.g., $43.6M)
- % of Portfolio — Software as a share of the BDC’s total portfolio (e.g., 30%)
- Software Companies — Count of software holdings scored
- Scored date — When the analysis was run (e.g., “Scored Mar 2026 via Quantuma AI”)
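The summary fields above are simple aggregates over the portfolio's software holdings. A minimal sketch of the calculation (holding names, values, and the dict field names are illustrative, not real portfolio data or the actual Quantuma schema):

```python
def software_exposure_summary(holdings: list[dict], total_portfolio_fv: float) -> dict:
    """Aggregate software holdings into the Lender Overview summary fields.

    Each holding is assumed to be a dict with "sector" and "fair_value" keys.
    """
    software = [h for h in holdings if h["sector"] == "software"]
    exposure = sum(h["fair_value"] for h in software)
    return {
        "software_exposure": exposure,                      # total fair value of software holdings
        "pct_of_portfolio": exposure / total_portfolio_fv,  # software share of the BDC's portfolio
        "software_companies": len(software),                # count of software holdings scored
    }

# Illustrative numbers chosen to reproduce the example figures ($43.6M, ~30%):
summary = software_exposure_summary(
    [{"sector": "software", "fair_value": 25_000_000},
     {"sector": "software", "fair_value": 18_600_000},
     {"sector": "healthcare", "fair_value": 30_000_000}],
    total_portfolio_fv=145_300_000,
)
```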
Scoring scale
| Score | Label | Description |
|---|---|---|
| 1 | Minimal | AI has little relevance. Physical-world operations, regulated infrastructure, or deeply embedded systems with no viable AI substitute. |
| 2 | Low | AI can improve efficiency but doesn’t threaten the core business. Strong defensibility through data, regulation, or customer lock-in. |
| 3 | Moderate | AI creates real competitive pressure. Some product areas are exposed, but defensible elements exist. Company must adapt but can survive. |
| 4 | High | AI threatens the core business model. Weak moats, emerging AI-native competitors, and pricing pressure. |
| 5 | Severe | AI can replicate or eliminate the core value proposition. Business model at existential risk within 2–5 years. |
Scoring dimensions
Five dimensions, each scored 1–5. Equal weight. Composite = simple average, rounded to nearest integer.
1. Data & IP Moat
Does the company have proprietary data or systems of record that AI cannot easily replicate?
- 1 — System of record with years of proprietary data and compounding network effects
- 3 — Some proprietary data, but core logic could be rebuilt on public data or foundation models
- 5 — No data moat. Product wraps third-party data or public LLM capabilities.
2. Customer Embeddedness & Switching Costs
How deeply is the product embedded in customer workflows?
- 1 — Mission-critical system of record. 18+ month implementation. Customer cannot operate without it.
- 3 — Important but not mission-critical. 3–6 month migration possible.
- 5 — Commodity service. Customer could replace overnight.
3. Regulatory & Error Tolerance
Does regulation or high error cost slow AI substitution?
- 1 — Heavily regulated. AI errors = legal liability. Human oversight legally required.
- 3 — Light regulation. Industry norms (not laws) slow adoption.
- 5 — Unregulated AND customers will switch to cheaper AI alternatives the moment quality is “good enough.”
4. Competitive Landscape Exposure
Are well-funded AI-native competitors emerging?
- 1 — No meaningful AI-native competitors. Structural barriers to entry.
- 3 — AI-native competitors exist with meaningful capital but haven’t displaced incumbents.
- 5 — AI-native alternatives already cheaper and comparable. Major platforms offering substitutes.
5. Revenue Model Resilience
How does the revenue model hold up if AI compresses pricing?
- 1 — Usage/outcome-based pricing. AI makes the company more productive, so revenue increases.
- 3 — Annual subscription with competitive renewals.
- 5 — Per-seat pricing where AI directly reduces headcount needs. Revenue self-destructs as AI scales.
Composite score
Score = (D1 + D2 + D3 + D4 + D5) / 5
Round to the nearest integer for the 1–5 rating. The decimal is available for sorting and comparison.