MeshCore Routing Architecture

MeshCore uses a demand-driven path-based routing protocol. Unlike Meshtastic's flooding approach, MeshCore establishes explicit routes before sending data.

Route Request (RREQ) Mechanism

Route discovery works in four steps:

  1. When Node A wants to reach Node D for the first time, it broadcasts an RREQ (Route Request) packet.
  2. Each intermediate node rebroadcasts the RREQ, appending its identity — building a path record as the packet propagates.
  3. When the RREQ reaches Node D (or any intermediate node that already knows a route to D), that node sends a Route Reply (RREP) back along the reverse path.
  4. Node A receives the RREP and now has a complete route: A → B → C → D.

Route Caching

Discovered routes are cached in each node's routing table. Subsequent messages to the same destination use the cached route without re-discovery, reducing overhead on established links.
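A route cache of this kind can be sketched as a lookup that only falls back to discovery on a miss. The class and method names here are illustrative assumptions, not MeshCore APIs.

```python
class RouteCache:
    """Per-node routing table: destination -> cached full path."""

    def __init__(self):
        self.table = {}

    def get_route(self, dest, discover):
        if dest not in self.table:           # cache miss: run an RREQ/RREP cycle
            self.table[dest] = discover(dest)
        return self.table[dest]              # cache hit: no flood needed

    def invalidate(self, dest):
        self.table.pop(dest, None)           # drop a stale route (e.g. after an RERR)

cache = RouteCache()
calls = []

def fake_discovery(dest):
    calls.append(dest)                       # count how often we actually flood
    return ["A", "B", "C", dest]

cache.get_route("D", fake_discovery)
cache.get_route("D", fake_discovery)         # served from cache, no second flood
print(len(calls))                            # 1
```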

Route Maintenance

If a packet fails to reach the next hop, the forwarding node sends a Route Error (RERR) message back toward the source. The source node then initiates a new route discovery cycle, ensuring the network self-heals after topology changes.
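The maintenance cycle can be sketched as: forwarding fails at some hop, an RERR propagates back, and the source drops the cached route so the next send triggers rediscovery. The failure model and names below are assumptions for illustration.

```python
def forward(route: list, routing_table: dict, link_up) -> str:
    """Try each hop of a cached route; on failure, simulate the RERR
    reaching the source, which invalidates the broken route."""
    for a, b in zip(route, route[1:]):
        if not link_up(a, b):
            routing_table.pop(route[-1], None)   # RERR: drop the stale route
            return f"RERR from {a}: link {a}->{b} down"
    return "delivered"

table = {"D": ["A", "B", "C", "D"]}

# Suppose the B->C link has gone down after the route was cached.
print(forward(table["D"], table, lambda a, b: (a, b) != ("B", "C")))
print("D" in table)   # False: the source must re-run route discovery
```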

Advantages Over Flooding

  • Only packets on the established route traverse the mesh — significantly less airtime consumption in large networks.
  • Scales better: a 100-node MeshCore network consumes far less channel capacity than a 100-node Meshtastic network.
  • Repeater nodes can handle more traffic since they are not blindly rebroadcasting everything.

Disadvantages

  • Route discovery adds latency before first contact with a new destination.
  • Route tables require memory on each node.
  • Topology changes can invalidate cached routes, requiring re-discovery.

Repeater vs. Client Roles

MeshCore explicitly distinguishes between two node types:

  • Repeater nodes (infrastructure): participate fully in route forwarding and carry the routing load of the network.
  • Client nodes (endpoints): user devices that generate and receive messages but do not forward traffic for others.

This separation makes the protocol more efficient: dedicated infrastructure carries the routing load, so adding more client devices does not degrade backbone performance.
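The role split can be sketched as a per-packet decision: a node delivers packets addressed to itself, but only repeaters relay packets for others. The role names mirror the text; the handler logic is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    role: str            # "repeater" (infrastructure) or "client" (endpoint)

    def handle(self, packet_dest: str) -> str:
        if packet_dest == self.node_id:
            return "deliver"           # packet is addressed to us
        if self.role == "repeater":
            return "forward"           # infrastructure carries the routing load
        return "drop"                  # clients never relay for others

repeater, client = Node("R1", "repeater"), Node("C1", "client")
print(repeater.handle("C9"))   # forward
print(client.handle("C9"))     # drop
```

Because clients never relay, adding endpoints grows only the edge of the network, not the forwarding burden on the backbone.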