PALM is a purely peer-to-peer, local-first distributed storage system that allows devices within the same network to pool disk resources and store large data objects without centralized servers.
Data is fragmented using threshold-based encoding and distributed across multiple nodes such that no single node can reconstruct the original data alone. As long as a minimum number of fragments remain accessible, the data can always be recovered, even in the presence of node churn or partial network failure.
- **Pure P2P (No Central Server)**: Every node is equal. No coordinators, no single point of failure.
- **Local-First Storage**: Optimized for LANs, campuses, communities, and offline-friendly environments.
- **Threshold-Based Encoding (k-of-n)**: Files are split into `n` fragments; any `k` fragments can reconstruct the original file.
- **Privacy by Design**: Individual fragments are meaningless on their own.
- **Fault Tolerant**: Tolerates node failures, disconnects, and churn.
- **Bandwidth Efficient**: Uses local network speeds instead of cloud round-trips.
Think of PALM like a palm fruit bunch:
- The entire bunch represents the original file
- Each fruit is a data fragment
- Losing a few fruits doesn't matter
- You only need enough fruits to extract the oil (recover the file)
No single fruit tells you anything useful on its own.
- A file is split and encoded into `n` fragments using erasure coding.
- A threshold `k` is chosen such that `k ≤ n`.
- Fragments are distributed across different peers in the local network.
- Placement is decentralized and adaptive.
- Each peer stores fragments contributed by other peers.
- No peer holds enough data to reconstruct the file alone.
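One way to satisfy the "no peer can reconstruct alone" rule is to cap how many fragments of a file any single peer may hold. The sketch below is a hypothetical placement rule (round-robin with a per-peer cap), not PALM's actual adaptive strategy; the `place` function and peer names are illustrative only.

```rust
use std::collections::HashMap;

// Hypothetical placement rule (not PALM's actual strategy): deal fragments
// round-robin across peers, refusing any plan in which a single peer would
// end up holding k or more fragments of the same file.
fn place(fragment_ids: &[u64], peers: &[&str], k: usize) -> Option<HashMap<String, Vec<u64>>> {
    if peers.is_empty() || fragment_ids.len().div_ceil(peers.len()) >= k {
        return None; // too few peers: someone would reach the threshold
    }
    let mut plan: HashMap<String, Vec<u64>> = HashMap::new();
    for (i, &frag) in fragment_ids.iter().enumerate() {
        plan.entry(peers[i % peers.len()].to_string())
            .or_default()
            .push(frag);
    }
    Some(plan)
}

fn main() {
    let frags = [1u64, 2, 3, 4]; // n = 4 fragment IDs
    let peers = ["peer-a", "peer-b", "peer-c", "peer-d", "peer-e"];
    let plan = place(&frags, &peers, 2).expect("enough peers for k = 2");
    // No peer holds enough fragments (k = 2) to reconstruct alone.
    assert!(plan.values().all(|v| v.len() < 2));
    println!("placed {} fragments across {} peers", frags.len(), plan.len());
}
```

A real placement strategy would also weigh peer capacity, uptime, and locality; the point here is only the invariant that every peer stays below `k`.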
- When a file is requested, the node fetches fragments from peers.
- Once `k` valid fragments are collected, the file is reconstructed.
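The encode/recover cycle above can be sketched end to end. This is a toy `k`-of-`n` scheme built on polynomial interpolation over GF(256), in the spirit of Reed–Solomon; it is not PALM's actual encoder (a real implementation would use an optimized erasure-coding library), and all function names here are illustrative.

```rust
// Toy k-of-n erasure coding over GF(256) -- illustration only.

// Multiply in GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1.
fn gf_mul(mut a: u8, mut b: u8) -> u8 {
    let mut p = 0u8;
    for _ in 0..8 {
        if b & 1 != 0 {
            p ^= a;
        }
        let carry = a & 0x80;
        a <<= 1;
        if carry != 0 {
            a ^= 0x1B;
        }
        b >>= 1;
    }
    p
}

// a^{-1} = a^254, since the multiplicative group of GF(256) has order 255.
fn gf_inv(a: u8) -> u8 {
    let mut r = 1u8;
    for _ in 0..254 {
        r = gf_mul(r, a);
    }
    r
}

// Evaluate the unique degree < points.len() polynomial through `points` at `x`.
fn lagrange_eval(points: &[(u8, u8)], x: u8) -> u8 {
    let mut acc = 0u8;
    for (i, &(xi, yi)) in points.iter().enumerate() {
        let mut num = 1u8;
        let mut den = 1u8;
        for (j, &(xj, _)) in points.iter().enumerate() {
            if i != j {
                num = gf_mul(num, x ^ xj); // subtraction == XOR in GF(2^8)
                den = gf_mul(den, xi ^ xj);
            }
        }
        acc ^= gf_mul(yi, gf_mul(num, gf_inv(den)));
    }
    acc
}

fn main() {
    let (k, n) = (2usize, 4usize);
    // Two data shards (k = 2): the file "PALM" split column-wise.
    let data: [&[u8]; 2] = [b"PA", b"LM"];
    let cols = data[0].len();

    // Encode: per byte column, fit a polynomial through the data points
    // (x = 1..=k) and evaluate it at x = 1..=n to produce n fragments.
    let mut frags = vec![vec![0u8; cols]; n];
    for c in 0..cols {
        let pts: Vec<(u8, u8)> = (0..k).map(|i| ((i + 1) as u8, data[i][c])).collect();
        for f in 0..n {
            frags[f][c] = lagrange_eval(&pts, (f + 1) as u8);
        }
    }

    // Lose fragments 0 and 2; recover the data from survivors 1 and 3.
    let survivors = [1usize, 3];
    for c in 0..cols {
        let pts: Vec<(u8, u8)> =
            survivors.iter().map(|&f| ((f + 1) as u8, frags[f][c])).collect();
        assert_eq!(lagrange_eval(&pts, 1), data[0][c]);
        assert_eq!(lagrange_eval(&pts, 2), data[1][c]);
    }
    println!("recovered the file from {} of {} fragments", k, n);
}
```

Any `k` of the `n` fragments determine the same degree-`< k` polynomial, which is why losing `n - k` fragments is harmless and why fewer than `k` fragments reveal nothing about the missing columns.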
- No master node
- No global index
- Discovery happens locally
- Language: Rust
- Encoding: Reed–Solomon erasure coding
- Networking: QUIC (encrypted, fast)
- Discovery: mDNS (automatic local network discovery)
- Storage: Content-addressed fragments
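Content addressing means a fragment's identifier is derived from its bytes, so a fetched fragment can be verified by re-hashing. The sketch below illustrates the idea with `std`'s `DefaultHasher` standing in for a cryptographic hash (a real system would use something like SHA-256 or BLAKE3); the `FragmentStore` type is hypothetical, not PALM's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustration only: DefaultHasher stands in for a cryptographic hash.
// It is NOT collision-resistant and must not be used for real content addressing.
fn fragment_id(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

struct FragmentStore {
    frags: HashMap<u64, Vec<u8>>,
}

impl FragmentStore {
    fn new() -> Self {
        Self { frags: HashMap::new() }
    }

    // Storing returns the content address; identical fragments deduplicate.
    fn put(&mut self, bytes: Vec<u8>) -> u64 {
        let id = fragment_id(&bytes);
        self.frags.insert(id, bytes);
        id
    }

    // A fragment fetched by address is verified by re-hashing its bytes.
    fn get(&self, id: u64) -> Option<&Vec<u8>> {
        self.frags.get(&id).filter(|b| fragment_id(b.as_slice()) == id)
    }
}

fn main() {
    let mut store = FragmentStore::new();
    let id = store.put(b"fragment-0".to_vec());
    assert_eq!(store.get(id).map(|b| b.as_slice()), Some(&b"fragment-0"[..]));
    println!("fragment {:016x} stored and verified", id);
}
```

Because the address is the hash, a peer serving a tampered fragment is detected immediately, with no need to trust the peer itself.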
- Local campus or community storage
- Disaster-resilient data sharing
- Low-bandwidth or offline environments
- Edge computing clusters
- Research & distributed systems learning
| System | Difference |
|---|---|
| IPFS | Internet-oriented, global |
| Storj | Central coordination |
| Tahoe-LAFS | Heavier setup |
| PALM | Local-first, lightweight, no servers |
🚧 Active development
- Pool system with coordinator/member roles
- mDNS-based local network discovery
- QUIC networking layer (secure & fast)
- CLI interface (create-pool, discover-pools, join-pool)
- Fragment encoding with Reed-Solomon
- Fragment placement strategy
- Recovery mechanism
Recommended: Use the watch script

```bash
# Create a pool (auto-reloads on code changes)
./watch.sh create

# Discover pools (auto-reloads on code changes)
./watch.sh discover

# Join a pool (auto-reloads on code changes)
./watch.sh join <pool-id> <coordinator-ip:port>

# Run tests (auto-reloads)
./watch.sh test
```

Alternative: Using Makefile

```bash
make dev-create    # Create pool with auto-reload
make dev-discover  # Discover pools with auto-reload
make dev-join POOL_ID=<uuid> COORDINATOR=<ip:port>

# Or without auto-reload
make create
make discover
make build
```

See DEVELOPMENT.md for more options (just, cargo-watch, etc.)
```bash
# Create a pool
cargo run -- create-pool --name "my-pool" --max-members 10 --port 5000

# Discover pools on local network
cargo run -- discover-pools --timeout 5

# Join an existing pool
cargo run -- join-pool --pool-id <uuid> --coordinator <ip:port>
```

```bash
cargo build --release
./target/release/banana-protocol create-pool
```

Terminal 1 - Start a pool:

```bash
./watch.sh create
# Output shows pool ID and address
```

Terminal 2 - Discover it:

```bash
./watch.sh discover
# Shows all available pools
```

Terminal 3 - Join it:

```bash
./watch.sh join <pool-id-from-terminal-1> <address-from-terminal-1>
# Successfully connected!
```

Any code changes in src/ will auto-rebuild and restart!
Contributions, ideas, and discussions are welcome. This project is exploratory and research-driven.
- DEVELOPMENT.md - Detailed development setup and workflows
- Architecture docs - Coming soon
MIT (TBD)