CORE3

State of digital assets risk: What we learned by measuring what nobody else does


Intro

In December 2025, CORE3 released pilot scores for 49 projects. Each was assigned a Probability of Loss metric and reviewed through multiple Proof of Opinion assessments by certified researchers.

This article reports what we observed.

The dataset was built to be mixed on purpose. Memecoins sit beside Layer 1s. RWA protocols appear alongside anonymous cryptocurrencies and oracles. The goal was simple: see whether the PoL method works across categories, not just within one niche.

It does. And the results surprised us.

What the findings show

The CORE3 pilot covers the evaluation of 49 projects through 98 assessments.

⚠️ Disclaimer: Some results will read as false negatives.

CORE3’s scoring model treats undisclosed controls as missing. If a project runs monitoring but hasn’t made it public, the score reflects that transparency gap. The choice serves two aims.

First, it shows how much risk analysis in crypto rests on guesswork. Investors and analysts infer safety from reputation, social cues, and loose claims. When documentation is absent, stories fill the space. 

Second, it pushes disclosure. The industry lives with a contradiction. Teams produce dense paperwork for regulators to access institutional capital. Yet they share almost none of the same rigor with the investors whose capital they want: security practices stay private, operating controls stay internal, and treasury behavior stays vague.

Then a failure hits. A project that signaled compliance turns out to have no live monitoring, no written response plan, and none of the insurance its claimed compliance implies.

CORE3 scores what can be verified because visibility is the issue. A project with strong controls that keeps them private creates the same information gap as a project that never built them. From the outside, they are indistinguishable.

The method rewards disclosure. And it puts a question back on the table: if the controls are real, why can’t anyone verify them?

Here is the public posture of risk in crypto as of 2026.

Only one project from the set holds ISO 27001 certification: Chainlink.

That’s 2% of the dataset. ISO standards assume a conventional corporate setup. Most crypto projects may not fit that mold. The absence may point to weak controls, or it may show a mismatch between enterprise frameworks and decentralized systems.

Why no ISO 27001?

  • Culture plays a role. Crypto has favored speed over process for years. Ship first, clean it up later. Formal compliance feels at odds with that instinct, even when the controls would lower risk.
  • Cost is another factor. An ISO audit often starts around $50,000, then continues with software, annual reviews, and documentation upkeep. For teams without near-term institutional deals, that spend competes directly with building the product.
  • There’s also a belief it doesn’t matter. Projects outside traditional finance face no hard requirement. Without partners or regulators asking for it, ISO compliance stays on the shelf.

73% of crypto projects listed on CORE3 lack real-time monitoring.

Monitoring catches anomalies in the first seconds of an incident, often the span that decides the damage. Hyperliquid and Typus Finance lost millions to attacks that real-time anomaly detection might have stopped. Missing monitoring doesn’t trigger an exploit, but it puts teams on the same clock as the attacker.

  • Our data noted a secondary pattern: in some cases, monitoring tools exist but aren't configured. Teams pay subscription fees for alerting systems that were never set up to alert. 

41% of assessed projects run no active bug bounty 

That group includes Axie Infinity, Zcash, Hyperliquid, and Official Trump. Some are meme tokens with no claim to institutional rigor. Others are long-running protocols. The meaning of that gap depends on what each project claims to be.

An active bug bounty gives a hacker an alternative: report the vulnerability and collect a reward instead of exploiting it. It also lets the project test its code against experienced white-hat researchers, strengthening its overall security posture.

Why no bug bounty?

  • For teams that present themselves as mature, a bounty can feel like admitting flaws. Saying “this code is battle-tested” while inviting outsiders to break it creates a messaging conflict. Some teams avoid it altogether.
  • Cost matters here too. Serious bounties pay serious money. To attract capable security researchers, rewards usually run from $10,000 to $250,000, depending on severity. For projects without a security line item in the budget, that competes with shipping features and buying attention.

86% of the projects in the pilot set carry no insurance coverage

That figure matters less for its size than for who sits outside it.

The six insured projects: Ethereum; Hyperliquid, launched in 2023 by a team with no public identities; Aster; Ether.fi; Uniswap; and AAVE.

Insurance providers aren’t persuaded by whitepapers. They examine operating discipline: monitoring, key custody, incident response, and documented controls. An insurer won’t underwrite something likely to fail, so coverage itself signals a foundation of risk mitigation. That’s why a project with anonymous founders and real systems can pass, while a well-known name without them cannot.

Insurance matters because it determines what happens after an incident. When a protocol is insured, users can expect reimbursement. When it isn’t, users are left hoping for compensation from whatever unaffected funds the project holds, which is rarely enough for a proper refund.

In our set, just six projects carry coverage. The other 43 lack it for three reasons:

  1. Some are ineligible. They lack the basics insurers require: monitoring, key management, incident response. Without those, no insurance is possible.
  2. Some aren’t interested. For memecoins or experiments, insurance isn’t worth the hassle. So they think. 
  3. Others are constrained by cost. Premiums usually run 2–5% of covered value each year. For smaller treasuries, that’s a real tradeoff against development, incentives, or liquidity.

 

Three patterns that contradict common assumptions

We started preparing this article with several biases common in the crypto industry — assumptions we wanted to confirm or challenge with probability of loss data.

Doxxed teams don’t score much better than anonymous ones.

The average Probability of Loss for public teams is 49; for anonymous teams, 51. That’s a ~2-point gap across all scored projects.

Putting a name on a project signals accountability. It puts reputation on the line. What it doesn’t guarantee is that risk controls follow. Markets often treat transparency as a proxy for maturity. The scores indicate they don’t correlate.

Fully regulated projects score higher risk than the dataset mean.

The projects with 100% regulatory compliance (Ondo Finance, RealT, and World Liberty Financial) average 63 PoL. The average across the whole set is 50 PoL.

RealT scores 78 PoL with perfect regulatory standing and a security domain score of 0.00. 

Our conclusion is that compliance-first projects put their full stake in institutional approval, but miss signaling that when institutions start acquiring RWAs, the smart contracts will withstand cybersecurity risk. Also worth noting: both MiCA and the emerging US mix of practices expect a solid security posture, so that when institutional liquidity flows in, it stays profitable for institutional investors rather than for hackers from the DPRK.

Project age doesn't predict operational maturity.

Some protocols operating since 2017 score higher risk than 2023 launches. Longevity might indicate resilience. It might also indicate that nothing has tested the project yet. 

Secret Network, launched in 2020, now has a PoL of 72. Hivemapper, operating since 2022, scores 74. Pi Network dates back to 2019, with a probability of loss of 76.

Meanwhile, Ether.fi, launched in 2023, scores 39; Mantle, launched the same year, scores 41; Hyperliquid, a 2023 launch with initially anonymous founders, holds 45.

One explanation is timing. Between 2017 and 2020, expectations were low, and many early protocols failed outright during the largest hack cycles. Survivors either weren’t targeted or had the practices but never declared them. Newer projects launched into a harsher environment: they couldn’t attract liquidity on ideas alone, and public risk controls were expected from day one. These tendencies show up in the probabilities of loss.
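The mismatch between age and score is visible even in the handful of examples cited above. A quick sketch, using only the figures from this section:

```python
# (launch year, PoL) pairs as cited in this section.
projects = {
    "Pi Network": (2019, 76),
    "Secret Network": (2020, 72),
    "Hivemapper": (2022, 74),
    "Ether.fi": (2023, 39),
    "Mantle": (2023, 41),
    "Hyperliquid": (2023, 45),
}

# Older launches score *worse* here: average PoL by era
# (Hivemapper, 2022, falls between the two cohorts and is excluded).
old = [pol for yr, pol in projects.values() if yr <= 2020]
new = [pol for yr, pol in projects.values() if yr >= 2023]
print(sum(old) / len(old), sum(new) / len(new))  # 74.0 vs ~41.7
```

Six data points prove nothing on their own, but the direction runs against the "older is safer" intuition.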

In which categories does risk concentrate?

The dataset includes L1s, L2s, DeFi protocols, RWA platforms, memecoins, oracles, and infrastructure projects. Risk distribution varies by category, but not always in predictable ways.

Layer 1s score the lowest and the highest probabilities of loss. Ethereum scores 6 PoL, the lowest in the dataset, reflecting extensive documentation across all six risk domains. Dogecoin scores 66 PoL; Pi Network scores 76 PoL. Same category, wildly different operational realities.

Memecoins aren't uniformly high-risk. Pepe scores 46, slightly more risk than Hyperliquid (45) and significantly better than Floki (60). Official Trump scores 58. The memecoin label doesn't determine risk profile. What the team has actually built does.

DeFi protocols show the widest variance. 

The spread from lowest to highest risk within DeFi alone, 20 to 62 PoL, is wider than the gap between Ethereum and most memecoins.

Uniswap scores 20 PoL. AAVE scores 24. Both have documented insurance coverage, real-time monitoring, and transparent treasury management. At the other end: Synapse scores 62. Meteora scores 62. GMX sits at 52.

A label like “DeFi” tells you little. Within that single category, scores span 42 points. Infrastructure separates Uniswap from Synapse: insurance, monitoring, auditor quality, treasury management.
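The spread is trivial to reproduce from the scores published above (the full dataset lives in the app):

```python
# DeFi PoL scores cited in this article (0-100; lower is better).
defi_pol = {
    "Uniswap": 20,
    "AAVE": 24,
    "GMX": 52,
    "Synapse": 62,
    "Meteora": 62,
}

lo, hi = min(defi_pol.values()), max(defi_pol.values())
spread = hi - lo
print(f"DeFi spread: {lo}-{hi} PoL ({spread} points)")
```

The same five-line check works for any category slice, which is the practical argument for publishing raw scores rather than verdicts.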

What this data lets you do

Composite scores compress six risk domains into a single number. That compression is useful for comparison, but it's dangerous to rely on the composite alone for understanding.

Arbitrum scores 41 for probability of loss overall. Its security domain score is 14 — half what Ethereum scores. The weighted average smooths this out. The sub-score exposes it.

A low security sub-score doesn't mean exploitation is imminent. It means specific security infrastructure is absent or undocumented. Absence creates exposure. Exposure doesn't guarantee incident. Some of the most documented projects in crypto history have been exploited. Some of the least documented have operated for years without incident.

Sub-scores tell you where the gaps are. They don't tell you when — or whether — those gaps will matter. That uncertainty is the point. Risk is probabilistic, not deterministic.

But here's what changes when you can see the breakdown: you can make decisions based on what matters to you.

An institution evaluating counterparty exposure cares about different parameters than a builder trying to identify where their operational gaps are. A researcher comparing L2 security postures needs different data than a fund manager assessing portfolio concentration risk.

One composite number doesn't serve all those needs. The sub-scores do.

Probability of loss creates the space for decision-making

Most risk analysis in crypto tells you what to think: ratings, recommendations, verdicts.

CORE3 shows you what exists. 

When you see a project's PoL of 41, you could stop there. Or you could see that the 41 probability of loss comes from strong financial infrastructure and weaker security coverage. You could compare it to a project with the same composite score but opposite domain distribution. You could make your own judgment about which risk profile fits your use case.

CORE3 is a system that shows you what's there so you can decide for yourself.

It best suits B2B users, retail investors, and project builders who actually want to understand what they're interacting with: probability of loss removes one layer of noise between the question and the answer.

What CORE3 measures

CORE3 measures the probability of loss. The platform doesn’t measure safety or investment attractiveness. These are different things.

Low PoL means a project has done everything measurable to reduce risk: documented infrastructure exists across the security, financial, operational, reputational, regulatory, and dependency domains. It doesn't mean black swan events can't happen, but it does mean the project has minimized the surface area where ordinary failures occur.

Conversely, high PoL means specific risk factors are present or undocumented. It doesn't mean collapse is coming. It means exposure exists that the project could address but hasn't.

The metric quantifies what exists, but it doesn't predict what will happen. Prediction is marketing, while measurement is operational.

Outlook to the future

Forty-nine projects is a starting point, but the methodology scales. In Q2 2026, we’re moving CORE3 out of the pilot phase with broader coverage: 300–1000 projects with PoL scores. For us, that means a shared language for risk will finally exist in the Web3 space.

Test the findings in the app: app.core3.io

Author

Dmytro Zaporozhchenko, CORE3 content lead, has a background in public relations for cybersecurity firms, centralized exchanges, and DeFi projects. 



Subscribe to our newsletter

Get early access to CORE3 updates, Web3 security insights, and exclusive blockchain content