PawNET

Protocol for Anonymous Web Networking

PawNET is a fully decentralized network protocol that replaces HTTP/HTTPS, DNS, domain registrars, and certificate authorities entirely. No central servers. No central control. No entity owns it.

PawNET is NOT a single protocol you interact with directly. It is a base layer, on top of which specialized sub-protocols are built. End users always interact with one of the sub-protocols, not with PawNET base directly.


1. Sub-Protocols

PawNET Host

Distributed hosting, seeding, gossip-based content delivery. Viewers become seeders.

PawNET Web

Document and page browsing layer. Like HTTP but native to PawNET. Handles how pages are requested and rendered.

PawNET Anonymous

Anonymity layer with onion-style routing. No single node knows both sender and destination.

PawNET Omni

The everyday all-in-one sub-protocol. Web, hosting, chat, and calling. The entry point for general everyday users.


2. Addressing — pwd

PawNET does not use IP addresses at the application layer. It uses its own addressing system called pwd (PawNET Address).

Format: pwd://XXXXX:Y

  • First segment: 00000–99999 (100,000 addresses per tier)
  • Second segment: 0–99 (100 tiers)
  • Total capacity: 10,000,000 unique address spaces
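The format above can be checked with a small parser. This is a sketch under the assumption that the first segment is always a zero-padded five-digit number; the function name `parse_pwd` is illustrative, not part of the spec:

```python
import re

# Matches pwd://XXXXX:Y where XXXXX is 00000-99999 and Y is 0-99.
PWD_PATTERN = re.compile(r"^pwd://(\d{5}):(\d{1,2})$")

def parse_pwd(address: str) -> tuple[int, int]:
    """Parse a pwd address into (segment, tier), validating both ranges."""
    match = PWD_PATTERN.match(address)
    if match is None:
        raise ValueError(f"not a valid pwd address: {address!r}")
    segment, tier = int(match.group(1)), int(match.group(2))
    # Segment range 00000-99999 is guaranteed by the regex;
    # the two-digit cap on the tier likewise guarantees 0-99.
    return segment, tier
```

For example, `parse_pwd("pwd://12345:0")` yields `(12345, 0)`, while a malformed address such as `pwd://123:0` is rejected.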

pwd is permanently bound to the node's public key via cryptographic signature. All routing uses pwd; IP addresses are completely abstracted away. pwd addresses are permanent and portable, following the node regardless of physical location or underlying transport.

pwd Discovery via Gossip: Every node maintains a complete local cache of all active pwd addresses through continuous gossip. Node generates keypair, claims pwd address (e.g. pwd://12345:0), creates announcement packet (pwd address + SHA-256(public_key) + public key + digital signature), broadcasts to all direct neighbors. Neighbors verify signature, cache the announcement, propagate it onward until every active node has it cached.

When a new node joins, its neighbors immediately transfer their complete pwd cache. Full network knowledge, instantly, no central directory.

Re-announcement: Once per day maximum (rate-limited at protocol level). Offline/Online sync: When a node goes offline it broadcasts that. When it comes back, neighbors send everything it missed. As long as one node stayed online, complete network state is preserved.

pwd announcements are NEVER garbage collected. pwd is routing infrastructure; losing it breaks connectivity. That said, pwd as specified here is a temporary IP-style replacement: I expect to replace it, or at least this version of it, in a future revision.

Storage Estimates: 10,000 nodes ~1.5 MB — 1,000,000 nodes ~150 MB — 10,000,000 nodes ~1.5 GB. Scales linearly with actual network population, not the 10M address space.

Optional — Merkle Root Verification: Nodes can generate a Merkle root of their complete pwd cache and compare with neighbors to confirm identical state. If they differ, exchange missing announcements. Mathematical proof of complete sync.
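A minimal sketch of the cache comparison, assuming announcements are compared as raw bytes and sorted first so the root is order-independent (the spec does not fix a pairing scheme; duplicating the last node on odd levels is one common convention):

```python
import hashlib

def merkle_root(announcements: list[bytes]) -> bytes:
    """Compute a Merkle root over a node's cached pwd announcements.

    Sorting first makes the root independent of insertion order, so two
    nodes with identical cache contents always produce identical roots.
    """
    if not announcements:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(a).digest() for a in sorted(announcements)]
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd level: duplicate last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Two neighbors exchange only the 32-byte root; if the roots match, their caches are provably identical without transferring any announcements.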

3. Cryptographic Domain Ownership

Domains in PawNET are not registered with any authority. They are generated mathematically and owned cryptographically.

Domain Generation: User generates an Ed25519 keypair. Domain identifier = SHA-256(public_key), 32 bytes, permanent, unique. User assigns a human-readable name that maps to that hash. Domain is bound to the keypair; only the private key holder can sign valid announcements.
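The identifier derivation can be sketched with the standard library. A real implementation would generate the Ed25519 keypair with a crypto library (e.g. PyNaCl or `cryptography`), so a placeholder key stands in here:

```python
import hashlib

def domain_id(public_key: bytes) -> bytes:
    """Domain identifier = SHA-256(public_key): 32 bytes, permanent, unique."""
    return hashlib.sha256(public_key).digest()

# Placeholder 32-byte value standing in for a real Ed25519 public key.
fake_pubkey = bytes(range(32))
identifier = domain_id(fake_pubkey)
assert len(identifier) == 32   # always exactly 32 bytes
```

The human-readable name is just a label mapped to this hash; the hash, not the name, is what the keypair cryptographically binds.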

Domain Announcement Packet (~200 bytes): Domain identifier (SHA-256 of public key, 32 bytes) — Human-readable name — Hosting pwd addresses — Public key (32 bytes for Ed25519) — Timestamp (for conflict resolution) — Ed25519 signature of the entire announcement (64 bytes).

Why It Can't Be Spoofed: Attacker tries to broadcast “alices-blog is now at pwd://99999:99”. Network verifies: does SHA-256(public_key) match alices-blog's domain ID? Does the signature validate? No, attacker doesn't have Alice's private key. Rejected by the entire network. Mathematically impossible to spoof without the private key.

Domain Name Conflicts: Before creating a domain, nodes search cache + query neighbors (higher TTL than normal). Search rate-limited to once per 30 minutes. If duplicate human-readable name found with different crypto ID: newer domain gets 48-hour deletion notice, then auto cache eviction. Prevents squatting and accidental collisions.

Ownership Properties: No renewal fees, no expiration, no registrar. Cannot be seized (no central authority to compel). Cannot be stolen (no registrar exists). Cannot expire (no payment required). Transferable only by transferring the private key itself.

4. Gossip Protocol

All information propagation in PawNET — domain announcements, pwd announcements, re-announcements — happens through gossip. No central coordination ever.

Nodes periodically exchange info with neighbors (typically every 1–10 minutes, implementation-configurable). Propagation Flow: Node A creates announcement → gossips to neighbors B, C, D → neighbors verify cryptographic signature → valid announcements cached locally → included in next gossip cycle (B gossips to E, F, G; C to H, I, J; etc.) → exponential spread until entire network saturated.

Loop Prevention: Nodes track announcement IDs already seen. Before forwarding: “Have I gossiped this?” If yes, skip. If no, forward and mark seen. Announcements naturally stop propagating once saturated.
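The seen-set check can be sketched as follows; the `Node` class and its field names are illustrative, not part of the spec. Because every node marks an announcement before forwarding it, propagation terminates even on a topology full of cycles:

```python
class Node:
    """Minimal gossip node demonstrating loop prevention."""
    def __init__(self, name: str):
        self.name = name
        self.neighbors: list["Node"] = []
        self.seen: set[str] = set()        # announcement IDs already gossiped

    def receive(self, announcement_id: str) -> None:
        if announcement_id in self.seen:
            return                          # "Have I gossiped this?" Yes: skip.
        self.seen.add(announcement_id)      # mark seen, then forward
        for neighbor in self.neighbors:
            neighbor.receive(announcement_id)
```

On a fully connected triangle A-B-C, one `receive` call at A saturates all three nodes and then stops, rather than looping forever.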

Bandwidth: Single domain announcement ~200 bytes — Single pwd announcement ~150 bytes — 50,000 cached domains gossiped every 10 min: ~10 MB per cycle — Actual bandwidth: ~60 MB/hour for well-connected nodes in large networks.

Neighbor Selection: 5–20 neighbors per node depending on network size. Mix of nearby (low latency) and distant (network diversity). More neighbors = more redundancy, more bandwidth.

5. Popularity-Based Caching & Garbage Collection

Domain announcements (NOT pwd announcements) are subject to garbage collection based on usage.

Cache Metrics Per Domain: Last-accessed timestamp — Access count — Popularity score: access_count / time_since_first_cache — First-cached timestamp.

Timeout Formula: timeout_hours = base_timeout * (1 + log(popularity_score))
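Worked out in code, assuming a base-10 log (the spec does not fix the base) and clamping so very unpopular domains never receive a negative timeout:

```python
import math

def timeout_hours(base_timeout: float, popularity_score: float) -> float:
    """timeout_hours = base_timeout * (1 + log(popularity_score)).

    Base-10 log is an assumption; scores at or below zero fall back to
    the base timeout, and the result is floored at zero.
    """
    if popularity_score <= 0:
        return base_timeout
    return max(base_timeout * (1 + math.log10(popularity_score)), 0.0)
```

With a 24-hour base, a score of 1 keeps the base timeout, a score of 10 doubles it to 48 hours, and a tiny score clamps to zero (immediate eligibility for collection).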

Popularity Tiers: Ultra-popular (10,000+ req/day) — Never garbage collected. Popular (1,000 req/day) — 6-month timeout. Moderate (100 req/day) — 1-month timeout. Low (10 req/day) — 1-week timeout. Rare (1 req/month) — 24–48 hour timeout.

Collection Process: Background task (runs ~hourly): scan cache for domains past timeout threshold. Check: current_time - last_accessed > timeout_hours. If true: remove silently. No network announcement.

Re-Caching After Collection: User requests garbage-collected domain → cache miss → query fallback triggers → query propagates through network with TTL → if found anywhere, domain mapping returns to requester → requester + all nodes along return path re-cache it (path reinforcement). Network auto-optimizes around actual usage with no central planning. Popular content becomes ubiquitous. Abandoned content disappears. Natural content lifecycle. Storage stays manageable (~10 MB for 50,000 domains).

6. Query Fallback Mechanism

For when local cache doesn't have a domain (garbage collected, new domain still propagating, node just joined).

Query Packet: Domain identifier (SHA-256 hash of target) — TTL counter (default 7–10, configurable) — Proof-of-Work solution (anti-spam) — Requester pwd address (for response routing) — Query ID (deduplication).

Propagation: Node needs domain X, not in cache. Solves PoW puzzle. Sends query to all neighbors with TTL=10. Each neighbor: check local cache. If found → send response back to requester. If not → decrement TTL, forward to their neighbors (not back to sender). TTL=0 → drop. Exponential search fans outward. First node with domain sends response back. All nodes on return path re-cache the domain (path reinforcement).

Anti-Spam: PoW per query: ~100–500ms CPU time (negligible for legitimate use). 1,000 queries = 100–500 seconds (expensive for spammers). Rate limiting: nodes can cap query forwarding (e.g. max 100/minute). With branching factor 10, TTL=10 bounds the flood at roughly 10^10 forwarded copies in the worst case, far beyond any realistic network size.
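A hashcash-style puzzle matches the cost profile described, although the spec does not fix the exact PoW construction. This sketch asks for a nonce whose hash has a given number of leading zero bits (8 bits here for a fast demo; real difficulty would be calibrated to the 100–500ms target):

```python
import hashlib
from itertools import count

def solve_pow(query: bytes, difficulty_bits: int = 8) -> int:
    """Find a nonce so SHA-256(query + nonce) has `difficulty_bits`
    leading zero bits. Hashcash-style stand-in, not the official puzzle."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(query + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(query: bytes, nonce: int, difficulty_bits: int = 8) -> bool:
    """Verification is a single hash, regardless of solving cost."""
    digest = hashlib.sha256(query + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: solving costs many hashes, verifying costs one, so every forwarding node can cheaply reject queries without valid proofs.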

Query Failure: If TTL exhausted with no response: domain doesn't exist, was fully garbage collected network-wide, or network partition. User sees “Domain not found.”

7. Distributed Hosting Model

Viewers are not passive consumers. While viewing content, they serve portions to other users. This is a protocol requirement, not optional.

Owner Configuration: Hosting percentage: 50%–100%. 50%: owner serves half, viewers collectively serve the other half. 100%: owner serves everything (traditional model, viewer participation off). Configurable at any time, update broadcast via gossip.

Request Distribution: Even split, not random. Distribution is deterministic based on who's in the active viewer pool.

Scaling Dynamics: 10 viewers at 50%: owner serves 50%, each viewer serves ~5%. 1,000 viewers at 50%: owner serves 50%, each viewer serves ~0.05%. More viewers = more distributed capacity = lighter per-viewer burden. More popular = more robust. Inverts the traditional “popularity kills your server” problem.
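The per-viewer arithmetic above reduces to a one-line function (name and signature are illustrative):

```python
def per_viewer_share(hosting_percentage: float, viewer_count: int) -> float:
    """Percentage of requests each viewer serves under an even split.

    The owner serves `hosting_percentage`; viewers split the remainder
    evenly across the active pool.
    """
    if viewer_count == 0:
        return 0.0
    return (100.0 - hosting_percentage) / viewer_count
```

At 50% owner hosting, 10 viewers each carry 5% and 1,000 viewers each carry 0.05%, which is the inversion described: more viewers means a lighter load per viewer.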

Active Viewer Tracking: Origin server maintains list of pwd addresses currently viewing. Viewers send heartbeat every 30–60 seconds (“still viewing”). Missed heartbeat → removed from pool. Content closed → explicit “stopped viewing” message or timeout.
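A sketch of the pool bookkeeping, with an assumed 90-second cutoff (the spec gives a 30–60 second heartbeat interval but does not fix the exact timeout):

```python
class ViewerPool:
    """Active viewer tracking keyed by pwd address."""
    def __init__(self, timeout_seconds: float = 90.0):
        self.timeout = timeout_seconds
        self.last_seen: dict[str, float] = {}   # pwd address -> last heartbeat

    def heartbeat(self, pwd_address: str, now: float) -> None:
        """Record a 'still viewing' heartbeat."""
        self.last_seen[pwd_address] = now

    def stopped_viewing(self, pwd_address: str) -> None:
        """Handle an explicit 'stopped viewing' message."""
        self.last_seen.pop(pwd_address, None)

    def active(self, now: float) -> list[str]:
        """Prune viewers that missed their heartbeat; return the live pool."""
        self.last_seen = {addr: t for addr, t in self.last_seen.items()
                          if now - t <= self.timeout}
        return sorted(self.last_seen)
```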

Mandatory Participation: Not optional. While viewing = must serve. Refusal to host = refusal to view. No leech mode. Protocol-level enforcement.

Fallback: Viewer disconnects mid-transfer → timeout → re-route to origin server. Origin goes completely offline → viewers can still serve cached portions to each other (partial availability during outage). Network partition → viewers in same partition serve each other (cache islands), heals when partition resolves.

8. Re-Announcement System

Rate Limit: Maximum once per 24 hours per domain. Protocol-level enforcement via timestamps. Nodes reject announcements that violate this.

Use Cases: Domain garbage collected, re-announce to restore — Network partition recovery — Address update (domain moved to different pwd addresses) — Periodic ownership confirmation.

Process: Identical to initial announcement. New timestamp, signed with private key, broadcast to neighbors, gossip propagates it. All nodes verify signature + rate limit compliance. Valid → update cache. Invalid (too frequent) → reject + log spam attempt.
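The rate-limit check itself reduces to a timestamp comparison; signature verification is assumed to have already passed, and the function name is illustrative:

```python
def accept_reannouncement(cached_ts: float, new_ts: float,
                          min_interval: float = 24 * 3600) -> bool:
    """Accept a re-announcement only if at least 24 hours have passed
    since the announcement currently cached for this domain."""
    return new_ts - cached_ts >= min_interval
```

A node receiving a valid-signature announcement only 23 hours after the cached one rejects it and logs the spam attempt, exactly as described above.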

Auto-Scheduling: Content server software can automatically re-announce every 23 hours. Ensures permanent network presence with no manual intervention.

9. Transport Variants

9.1 PawNET Mesh — Physical radio mesh. Zero internet dependency. True infrastructure independence.

  • WiFi Mesh (802.11s): ~100m between nodes
  • LoRa: 2–10km range, low bandwidth (0.3–50 kbps)
  • Long-range WiFi (directional antennas): 10–50km with line-of-sight
  • Hybrid: LoRa for announcements, WiFi for content

Properties: True peer-to-peer mesh, every node is client and router. Self-healing topology (routes around failed nodes). Zero ISP dependency, zero DNS, zero CA, zero internet. Government must physically destroy individual nodes to shut down. No chokepoints for censorship or surveillance.

Practical Scale: Neighborhood (20–100 nodes, WiFi mesh) — Town (100–1,000 nodes, WiFi + LoRa) — Large city (1,000–10,000 nodes, multi-tier) — Regional (10,000+ nodes with backbone links). Real-world precedents: Guifi.net (37,000+ nodes in Catalonia), NYC Mesh, Freifunk (Germany).

9.2 PawNET Global — Internet overlay. Worldwide reach, not infrastructure-independent.

Transport: TCP/IP over standard internet. TLS 1.3 or Noise Protocol for all node-to-node communication. Default port configurable (can use 443 to avoid ISP blocking). Persistent TCP sockets between neighbors.

NAT Traversal: STUN (discovers public IP/port mapping) — Relay nodes (volunteer nodes with public IPs forward traffic) — UPnP (auto-configures router port forwarding) — Hole punching (simultaneous connection attempts).

Trade-offs vs Mesh: Pros: Global reach, no special hardware, easy deployment, high bandwidth. Cons: ISP dependency, fails during internet outages, ISP can see PawNET traffic (not content), governments can attempt to block.

vs Tor: PawNET is faster (direct/one-hop routing), not anonymity-focused by default, has distributed hosting model, uses cryptographic permanent domains vs .onion services.

Network Instances: Mesh instances are typically geographically isolated (Oslo Mesh, Tokyo Mesh). Global can form one worldwide instance or multiple regional/topical instances. Instances cannot communicate unless explicitly bridged via PawMB (PawNET Mesh Bridge), opt-in infrastructure.

10. Sub-Protocol Specifications

10.1 PawNET Host: The distributed hosting sub-protocol. This is the original core of what PawNET was designed around. Uses the full distributed hosting model with gossip-based content delivery, viewer seeding, and cryptographic domain ownership. Best suited for hosting websites, files, media, and any content that benefits from the “more viewers = more robust” scaling model.

10.2 PawNET Web: The document and page browsing sub-protocol. Defines how pages and resources are requested, structured, and rendered over PawNET. Analogous to HTTP but native to PawNET's addressing and routing. Does not require or use HTTP. Intended to support HTML-equivalent documents, stylesheets, scripts, and media.

10.3 PawNET Anonymous: Full anonymity layer. Adds onion-style routing on top of base PawNET so that no single node knows both the sender and destination of any given transmission. The cryptographic foundation is already present in base PawNET; Anonymous extends it with layered routing. Slower than other sub-protocols due to multi-hop routing. Intended for users who need strong anonymity guarantees.

10.4 PawNET Omni: The everyday all-in-one sub-protocol. Designed for mass adoption and general use. Supports web browsing, content hosting, text chat, and voice/video calling (planned). Omni is not a stripped-down version of PawNET; it is a practical repackaging of the base protocol focused on what most people actually need. Security is moderate, not maximal: the goal is to be a better replacement for HTTPS, not a full decentralization revolution. Omni is the most important sub-protocol for network growth. It is the entry point for non-technical users, and since PawNET is gossip-based, network size directly affects reliability and content availability. More Omni users = stronger network for everyone.

11. Security — Known Attack Vectors & Mitigations

Sybil Attacks: Problem: Attacker creates 10,000 fake nodes, floods network with garbage. Mitigations: PoW on all announcements (mass spam expensive); rate limiting by IP subnet (/24 limited to X announcements/hour); neighbor reputation tracking (disconnect bad actors); economic deterrence (sustaining thousands of nodes costs real compute and bandwidth).

Eclipse Attacks: Problem: Malicious nodes surround victim, feed only attacker-controlled data. Mitigations: Diverse neighbor selection (different IP ranges, geographic regions); random exploration (periodically connect to random nodes); majority consensus queries (compare Merkle roots across many random nodes, majority wins); trusted bootstrap seed list (optional).

Network Partitioning / Split Brain: Problem: Network splits, duplicate domains created in different segments. Mitigations: Timestamp-based conflict resolution (earlier timestamp wins on merge); SHA-256 hash space makes accidental crypto ID collisions astronomically improbable; manual resolution possible.
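The earlier-timestamp-wins rule is a one-liner; the announcement dict shape here is illustrative, not the wire format:

```python
def resolve_conflict(announcement_a: dict, announcement_b: dict) -> dict:
    """On partition merge, keep the announcement with the earlier
    timestamp for a duplicated human-readable name."""
    return min(announcement_a, announcement_b, key=lambda a: a["timestamp"])
```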

Large File Cold Start: Problem: 10GB video with zero viewers, origin must serve entire first request. Mitigations: Progressive distribution (first viewer helps host for second viewer, chain builds); chunked distribution (early viewers cache + serve chunks already downloaded); pre-seeding (creator runs multiple nodes to bootstrap).

Mobile Devices: Problem: Phones sleep, limited battery, switch networks constantly. Mitigations: Lightweight mode (mobile doesn't participate in full gossip, queries neighbors when needed); aggressive caching while awake; mobile nodes exempt from distributed hosting or minimal participation; desktop nodes do heavy lifting.

12. PawNET vs HTTP/HTTPS

  • Domain ownership: HTTP/HTTPS: Rented, can be seized, stolen, expired. PawNET: Cryptographic, permanent, seizure-impossible.
  • Censorship resistance: HTTP/HTTPS: DNS hijack, registrar compulsion, server seizure. PawNET: No DNS, no registrar, distributed hosting.
  • Cost: HTTP/HTTPS: $10–50/yr domain + $5–100+/mo hosting + CDN. PawNET: Free domain, free hosting at scale.
  • Viral scaling: HTTP/HTTPS: Kills your server. PawNET: More viewers = more servers = more robust.
  • Privacy: HTTP/HTTPS: DNS leaks to ISP, central server logs everything. PawNET: No DNS queries, distributed logs, pwd hides destination.
  • Setup complexity: HTTP/HTTPS: Hours to days. PawNET: Minutes.
  • Surveillance resistance: HTTP/HTTPS: Trivial via DNS monitoring. PawNET: Requires extensive cross-node correlation.

HTTP/HTTPS's only real advantage is network effects and historical inertia — 5+ billion users, 35 years of content. That is purely historical, not technical. PawNET's only real challenge is the bootstrap problem, not protocol quality.

13. Philosophy

PawNET is neutral infrastructure. The protocol neither judges nor restricts content. Just as electricity doesn't care what it powers, PawNET doesn't care what it carries.

Natural consequences replace imposed regulation: good content gets accessed → stays cached → remains available. Bad/unwanted content gets ignored → gets garbage collected → disappears. No content police, no algorithmic suppression, no deplatforming.

The trade-off is honest: protocol-level neutrality means some harmful content will exist and cannot be removed at the infrastructure layer. PawNET accepts this and relies on social mechanisms (reputation, community norms, individual filtering) rather than technological enforcement.

Legally: PawNET has no headquarters, no legal entity, no responsible party. Governments cannot compel what does not exist. Likely outcome: similar to BitTorrent and Tor, widely used, criminalized in some jurisdictions, persistent because of technical decentralization.

14. Open Problems

  • pwd routing implementation: theory is solid, actual packet routing through mesh to correct pwd address needs prototyping
  • Gossip scaling benchmarks: need simulation at 10k, 100k, 1M nodes for real bandwidth/convergence data
  • PoW difficulty calibration: needs math, too easy = doesn't stop spam, too hard = legitimate announcements become annoying
  • Bootstrap / “why join” problem: no killer app identified yet for early adoption when network has near-zero content. PawNET Omni is the strongest candidate for solving this due to its all-in-one accessibility.
  • Dynamic/interactive content: current spec handles static files well; databases, real-time interaction, authentication not yet addressed
  • Sub-protocol interop: rules for how sub-protocols interact when a node runs multiple (e.g. a node running both Host and Omni) are not yet defined
  • Distributed hosting legal liability: nodes serving content they haven't reviewed creates potential legal exposure in some jurisdictions, not resolved at protocol level
  • Mobile client architecture: exact rules for mobile participation (exempt from hosting vs minimal participation) not finalized
  • PawNET Omni calling: voice/video calling support is planned but not yet specced, latency and NAT traversal requirements differ significantly from file hosting