Trustless Index Automation Brainstorming
To-do:
- Refine scoring criteria
- Draw parallels to existing rubric where applicable
- Build out automation workflow
- Determine what can be automated
- What are the capabilities of Cipherbot?
- How to connect to existing website/database?
- Modular variable-based design for ease of adding/modifying data sources for new/existing entries
- Choose data sources (See references from deep dives)
General Automation Framework
- Data Sources: APIs from reliable, real-time platforms:
- Blockchain Explorers
- Analytics Sites: Chainspect.app, Nakaflow.io, CoinGecko, Messari.io, Dune
- Other: Lido.fi or StakingRewards.com, Crypto51.app, MEVWatch.info, Bitnodes.io.
- Cipherbot to monitor and flag security/censorship/downtime incidents, feeding them into a database for human review.
- Scoring Logic:
- Break each dimension into 3-5 quantitative subsections.
- Assign a sub-score (1.0-10.0) to each subsection based on the rubric thresholds.
- Each dimension score = Average of sub-scores.
- Final Trustless Index = Weighted average of all 6 dimension scores.
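The scoring logic above can be sketched in a few lines. The dimension names, sample sub-scores, and weights below are placeholders for illustration, not final values:

```python
# Sketch of the scoring logic: sub-scores (1.0-10.0) average into a
# dimension score, and dimension scores combine into a weighted index.
from statistics import mean

def dimension_score(sub_scores):
    """Dimension score = average of its 1.0-10.0 sub-scores."""
    return round(mean(sub_scores), 1)

def trustless_index(dim_scores, weights):
    """Final Trustless Index = weighted average of dimension scores."""
    total = sum(weights.values())
    return round(sum(dim_scores[d] * w for d, w in weights.items()) / total, 1)

# Example with made-up numbers:
scores = {"decentralization": dimension_score([8.0, 9.5, 7.0]),  # -> 8.2
          "speed": dimension_score([6.0, 7.0])}                  # -> 6.5
print(trustless_index(scores, {"decentralization": 2.0, "speed": 1.0}))  # 7.6
```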
- Automation Pipeline:
- Fetch data via APIs
- Compute sub-scores and averages.
- Store in a database
- Display on https://cipherindex.one/trustless-index
- Human Oversight: For non-quantitative elements like historical exploits, Cipherbot flags potential incidents. A human reviews/approves to adjust a "penalty factor" for each verified incident.
- Set thresholds (Depends on incident type & severity):
- 0 incidents = no penalty
- Chain-Specific Adaptation: Use a config file per chain to map explorers/APIs.
- Edge Cases: If data is unavailable, fallback to cached values or flag as "stale." Start with top chains and expand.
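The per-chain config and stale-data fallback could look roughly like this. The endpoint URL, field names, and cache shape are all made up for illustration:

```python
# Hedged sketch: one config entry per chain mapping to its explorers/APIs,
# with a cache fallback that marks values as "stale" when the live fetch fails.
CHAIN_CONFIG = {
    "ethereum": {
        "explorer_api": "https://api.example-explorer.io/eth",  # placeholder URL
        "analytics": ["chainspect", "mevwatch"],
    },
}

CACHE = {}  # last known good value per (chain, metric)

def fetch_metric(chain, metric, fetcher):
    """Try the live API; on failure fall back to cache and flag as stale."""
    try:
        value = fetcher(CHAIN_CONFIG[chain], metric)
        CACHE[(chain, metric)] = value
        return {"value": value, "stale": False}
    except Exception:
        if (chain, metric) in CACHE:
            return {"value": CACHE[(chain, metric)], "stale": True}
        return {"value": None, "stale": True}
```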
1. Decentralization
Measures distribution and diversity of validators/nodes.
- Subsections:
- Validator/Node Count: Total active validators/nodes.
- Data: Chainspect API, explorer APIs.
- Scoring: Relative (Highest = 10, lowest = 1)
- Stake/Hashrate Distribution: % controlled by top 10 entities.
- Data: CoinGecko rich lists, explorer APIs.
- Scoring: <5% by top 10 = 10.0; 5-10% = 9.0-9.9; >50% = 1.0.
- Nakamoto Coefficient: Minimum entities to control 33% stake/51% hashrate.
- Data: Chainspect, Nakaflow.io, explorer APIs.
- Scoring: Relative (Highest = 10, lowest = 1)
- Geographic/Jurisdictional Diversity (Bonus/Penalty): # of countries with >1% nodes.
- Data: Ethernodes, Bitnodes.io, Chainspect, etc.
- Scoring: >50 countries = +1.0 bonus; <10 = -1.0 penalty (applied to average).
- Overall Calculation: (Validator Count + Distribution + NC) / 3 + Geographic Bonus
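A minimal sketch of the Decentralization calculation: average the three core sub-scores, then apply the geographic bonus/penalty. Clamping the result to the 1.0-10.0 scale is my assumption:

```python
def decentralization_score(validator_count, distribution, nakamoto_coeff,
                           geo_adjustment=0.0):
    """(Validator Count + Distribution + NC) / 3, plus geographic bonus/penalty."""
    base = (validator_count + distribution + nakamoto_coeff) / 3
    return max(1.0, min(10.0, base + geo_adjustment))  # clamp to 1.0-10.0

print(decentralization_score(8.0, 9.0, 7.0, geo_adjustment=1.0))  # 9.0
```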
2. Censorship Resistance
Evaluates resistance to blocking/alteration. Quantitative via compliance metrics; historical via database.
- Subsections:
- OFAC-Compliant Validators: % of validators censoring (MEV-boost relays).
- Data: MEVWatch.info API (Ethereum), Chainspect (general compliance dashboards).
- Scoring: 0% = 10.0; <1% = 9.0-9.9; >50% = 1.0.
- Protocol Features Enabling Censorship: Presence of built-in freezes/blacklists/clawbacks (0-1 binary, weighted by severity).
- Data: whitepaper/docs
- Scoring: None = 10.0; Optional (token-level) = 5.0-7.0; Mandatory = 0.0-3.0. Rework
- Historical Censorship Incidents: # of verified events
- Data: Cipherbot flags; human-reviewed database tally.
- Scoring: 0 = 10.0; 1-2 isolated = 8.0-9.0; >5 or network-wide = 1.0-3.0. Rework
- Validator Set Influence on Censorship: Cross-reference with Decentralization's Nakamoto Coefficient (lower NC = higher risk).
- Data: Reuse from Decentralization.
- Scoring: NC >100 = 10.0; NC 1-10 = 1.0-4.0.
- Overall Calculation: Average, with historical incidents as a multiplier (e.g., >2 incidents = -20% to total). Rework
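Since the multiplier is still marked "Rework", here is one possible reading: average the quantitative sub-scores, then cut the total by 20% once verified incidents exceed 2. The exact penalty schedule is an assumption to refine:

```python
from statistics import mean

def censorship_score(ofac, protocol_features, validator_influence, incidents):
    """Average of sub-scores, reduced 20% when verified incidents exceed 2."""
    base = mean([ofac, protocol_features, validator_influence])
    penalty = 0.20 if incidents > 2 else 0.0  # assumed threshold/size
    return round(base * (1 - penalty), 1)

print(censorship_score(10.0, 8.0, 6.0, incidents=3))  # 6.4
```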
3. Immutability
Assesses resistance to changes/reversals. Quantitative via upgrade frequency; historical via database.
- Subsections:
- History of State Reversals/Rollbacks/Clawbacks: # of verified incidents.
- Data: Cipherbot flags; human-reviewed.
- Scoring: 0 = 10.0; 1-2 = 7.0-8.9; >5 = 1.0. Rework
- Upgrade/Hard Fork Frequency: # per year (higher = less immutable).
- Data: roadmap/docs; GitHub
- Scoring: 0-1/year = 10.0; 1-2 = 8.0-9.0; >5 = 1.0-3.0. Rework
- Presence of Admin Keys/Emergency Halts: Binary check for mutable controls.
- Data: scan contracts/code/docs for "admin" or "pause" functions
- Scoring: None = 10.0; Present but unused = 5.0-7.0; Used historically = 1.0-3.0.
- Cipherbot monitoring for usage to keep updated
- Historical Halts/Outages with State Impact: # of downtime events affecting immutability.
- Data: Cipherbot; human verification
- Scoring: 0 = 10.0; 1-3 minor = 7.0-8.9; Frequent = 1.0.
- Overall Calculation: Average
4. Security
Measures attack resistance. Quantitative via economic metrics; incidents via database.
- Subsections:
- Economic Security: Staked value or 51% attack cost (higher = better).
- Data: StakingRewards.com API, Crypto51.app, CoinGecko.
- Scoring: Relative (Highest = 10, lowest = 1)
- History of Consensus Attacks/Exploits: # of verified L1 exploits/51% attacks.
- Data: Cipherbot flags; human-verified
- Scoring: 0 = 10.0; 1-2 minor (no losses) = 7.0-8.9; Major losses = 1.0.
- Uptime/Availability: % uptime since launch
- Data: Chainspect dashboards, explorer status APIs.
- Scoring: 99.99%+ = 10.0; 99-99.9% = 8.0-9.0; <95% = 1.0.
- Audits: # of audits + bounty max.
- Data: docs/GitHub for audit links; DefiLlama for bounty data.
- Scoring: 3+ audits = 10.0; 2 audits = 8.0; 1 audit = 5.0; none = 1.0.
- Bug Bounty Bonus: Active Bug Bounty Program: +1
- Overall Calculation: (Economic Security + Consensus Attacks + Uptime + Audits) / 4 + Bug Bounty Bonus
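Reading the bug-bounty bonus as a +1.0 applied outside the four-way average and capping the result at 10.0 (my interpretation of the bonus line above):

```python
def security_score(economic, attacks, uptime, audits, bounty_bonus=0.0):
    """(Economic + Attacks + Uptime + Audits) / 4, plus bug-bounty bonus."""
    base = (economic + attacks + uptime + audits) / 4
    return min(10.0, base + bounty_bonus)  # cap at 10.0 (assumption)

print(security_score(8.0, 10.0, 9.0, 5.0, bounty_bonus=1.0))  # 9.0
```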
5. Speed
Fully quantitative from real-time metrics.
- Subsections:
- Average TPS: Real-world average over last 24h/7d.
- Data: Chainspect.app, explorers
- Scoring: Relative (Highest = 10, lowest = 1)
- Max TPS: Recorded max (theoretical if unavailable).
- Data: Same as above; web_search for benchmarks.
- Scoring: Relative (Highest = 10, lowest = 1)
- Block Time: Average block time
- Data: Explorer APIs
- Scoring: Relative (Lowest = 10, highest = 1; shorter block time is better)
- Finality Time: Time to irreversible confirmation.
- Data: Chainspect, docs
- Scoring: Relative (Lowest = 10, highest = 1; faster finality is better)
- Load Handling (Penalty): % tx failures under peak load (from historical spikes).
- Data: Cipherbot monitors; Chainspect congestion metrics.
- Scoring: 0% failures = no penalty; >10% = -1.0; >20% = -2.0; etc. (applied to the average)
- Overall Calculation: (Avg. TPS + Max TPS + Block Time + Finality) / 4 - Load Handling Penalty
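The Speed calculation above as a sketch: average the four relative sub-scores, subtract the load-handling penalty. Flooring the result at 1.0 is my assumption:

```python
def speed_score(avg_tps, max_tps, block_time, finality, load_penalty=0.0):
    """(Avg. TPS + Max TPS + Block Time + Finality) / 4 - Load Handling Penalty."""
    base = (avg_tps + max_tps + block_time + finality) / 4
    return max(1.0, base - load_penalty)  # floor at 1.0 (assumption)

print(speed_score(9.0, 8.0, 7.0, 6.0, load_penalty=1.0))  # 6.5
```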
6. Distribution (Ownership)
Measures token fairness. Quantitative via holder stats.
- Subsections:
- Top Holder Concentration: % held by top 10/100 addresses.
- Data: CoinGecko rich lists, explorers
- Scoring: <5% top 10 = 10.0; 5-10% = 9.0; >50% = 1.0.
- Gini Coefficient: Measure of inequality (0 = equal; 1 = one holder owns all).
- Data: compute from rich list data
- Scoring: <0.5 = 10.0; 0.5-0.7 = 7.0-8.9; >0.9 = 1.0.
- Premine/Insider Allocation: % premined/allocated to team/VCs at launch.
- Data: Whitepaper/docs; CoinGecko historical data.
- Scoring: 0% = 10.0; <10% = 9.0; >50% = 1.0.
- Unique Holders/Addresses: Total active addresses with balance >0.
- Data: Explorers, Chainspect
- Scoring: Relative (Highest = 10, lowest = 1)
- Overall Calculation: Average
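The Gini coefficient subsection can be computed directly from rich-list balances. One caveat: a true Gini needs every holder, so a rich-list-only input understates inequality; treat this as an approximation:

```python
def gini(balances):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = one holder owns all."""
    xs = sorted(balances)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over sorted values: G = 2*sum(i*x_i)/(n*total) - (n+1)/n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))  # 0.0 (perfect equality)
print(gini([0, 0, 0, 1]))  # 0.75 (one holder owns everything)
```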
Final Score: Weighted Average
How much should each dimension be weighted? For "trustlessness", Speed doesn't seem quite as important as Immutability/Decentralization...
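One possible weighting to start the discussion, reflecting the note that Speed matters less for trustlessness than Immutability or Decentralization. Every number here is a placeholder to refine:

```python
# Illustrative dimension weights (must sum to 1.0); not final values.
WEIGHTS = {
    "decentralization": 0.25,
    "censorship_resistance": 0.20,
    "immutability": 0.20,
    "security": 0.20,
    "distribution": 0.10,
    "speed": 0.05,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
```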