The Terrestrial Assumption Problem

Every routing protocol we have was designed for a network that stays put. Links fail occasionally, but the topology is fundamentally stable. In low Earth orbit, the topology is never stable. So what breaks, what doesn't, and does anyone actually know?

We've been building packet networks for over forty years. BGP, OSPF, IS-IS, MPLS. These things work. They've been proven at scales nobody imagined and refined over decades of ops. The engineering community has a deep intuition for how they behave.

So when satellite broadband constellations started becoming real infrastructure, everyone naturally assumed it was the same problem. Links go up, links go down. We have protocols that handle that. The hardware and latencies are different, sure, but the packet forwarding machinery underneath is the exact same stuff we've been running forever.

That assumption could be right. Or it could be completely backwards. The trouble is, nobody has actually proven it either way.

The commercial operators have proven that satellite networking works at scale. Starlink has millions of subscribers, Kuiper is launching, OneWeb is operational. But none of them have published how their networks actually operate at the protocol level. Starlink's internal architecture is essentially a black box. What we think we know about it comes entirely from inference: edge observations, latency measurements, the occasional conference keynote. That isn't a controlled experiment.

The state of the debate

There are serious people on multiple sides of this. RFC 9717, published by Tony Li in January 2025, makes the case that standard IS-IS with Segment Routing handles orbital dynamics without protocol changes. The argument is thoughtful and technically grounded: IS-IS already handles rapid topology changes, SR eliminates the need for distributed label distribution, and the combination should converge fast enough to track constellation geometry without special-purpose modifications. It is a credible position from a credible author with a long history of routing protocol design.

Then you have Aalyria building Spacetime. They built a temporal SDN layer with manifest-based routing, specifically because they believe standard protocols fundamentally cannot do the job. If topology changes are predictable and scheduled, their argument goes, you shouldn't be using a reactive protocol to figure them out on the fly. It's a massive architectural investment based entirely on the premise that terrestrial protocols are a dead end on orbit.

Starlink operates the largest constellation in existence using what appears to be centralized ground-based topology computation with MPLS forwarding. It demonstrably works. But the implementation is entirely proprietary and cannot be reproduced or compared against alternatives by anyone outside the company.

The actual gap

Every one of these positions is argued from theory, from proprietary internal testing, or from small-scale simulation. None of them have been validated against real routing implementations on realistic orbital topologies. And none of them are reproducible by anyone outside the building.

What terrestrial routing actually assumes

The specific question worth asking is not whether terrestrial protocols can technically run on a satellite constellation. They can, and do. The question is whether they were designed for a topology regime where change is the steady state rather than the exception.

OSPF SPF timers, BGP MRAI, LDP label distribution: these were all calibrated for a world where convergence is rare and needs to finish cleanly. BGP's entire timer architecture treats route changes as exceptional events. MRAI timers exist to rate-limit updates. Route flap damping penalizes intermittent reachability to suppress unstable prefixes. But in a constellation, intermittent reachability over a particular path isn't a malfunction. It's physics.
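To make that concrete, here is a deliberately simplified sketch of the interaction. It is not any vendor's BGP implementation; the 30-second rate limit and the 12-second churn interval are illustrative assumptions. The model assumes each update carries the state current at send time, and asks what fraction of the time the advertised route lags reality when changes arrive faster than updates are allowed to go out.

```python
MRAI = 30.0            # assumed min-route-advertisement interval, seconds
CHURN_INTERVAL = 12.0  # assumed interval between best-path changes, seconds

def staleness(duration=3600.0):
    """Fraction of time the advertised route lags the true best path when
    changes arrive every CHURN_INTERVAL but updates are rate-limited to
    one per MRAI. An update reflects all changes up to its send time."""
    changes = [i * CHURN_INTERVAL for i in range(int(duration / CHURN_INTERVAL))]
    last_send = -MRAI      # lets the very first update go out immediately
    pending_since = None   # oldest change not yet advertised
    stale = 0.0
    for change in changes:
        if pending_since is not None:
            send_at = max(pending_since, last_send + MRAI)
            if send_at <= change:          # an update fired before this change
                stale += send_at - pending_since
                last_send = send_at
                pending_since = None
        if pending_since is None:          # otherwise merge into pending state
            pending_since = change
    if pending_since is not None:          # flush the final pending update
        send_at = max(pending_since, last_send + MRAI)
        stale += min(send_at, duration) - pending_since
    return stale / duration

print(f"advertised state stale ~{staleness():.0%} of the time")
```

Under these assumed numbers the advertised state is stale most of the time, which is the point: a rate limiter designed to calm occasional flaps becomes a permanent lag when change is the steady state.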

RFC 9717 attacks some of this directly. Running IS-IS with SR dodges the label distribution problem entirely, and it probably handles isolated link events just fine. But what happens under the continuous, overlapping churn of a full-scale constellation? Dozens of ground links cycling at once while cross-plane ISLs flicker near the poles might be a very different story.
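How continuous is that churn? A back-of-envelope calculation gives a sense of the cadence. This assumes a circular orbit and ignores Earth's rotation; the 550 km altitude and 25-degree elevation mask are illustrative assumptions, not any operator's published parameters.

```python
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
RE = 6371.0        # km, mean Earth radius

def max_pass_minutes(alt_km=550.0, min_elev_deg=25.0):
    """Best-case duration a ground station can hold a link to one LEO
    satellite: time spent inside the visibility cone on an overhead pass."""
    a = RE + alt_km
    period_s = 2 * math.pi * math.sqrt(a**3 / MU)   # circular orbital period
    e = math.radians(min_elev_deg)
    # Earth-central half-angle of the visibility cone above the elevation mask
    half = math.acos(RE * math.cos(e) / a) - e
    # fraction of the orbit spent inside that cone, converted to minutes
    return period_s * (2 * half) / (2 * math.pi) / 60.0

print(f"best-case pass: {max_pass_minutes():.1f} min")
```

Under these assumptions a directly-overhead pass lasts on the order of four to five minutes, and most passes are shorter. Every ground link in the system is on a timer of that scale, permanently.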

What actually needs answering

The honest answer is: we don't know, and we ought to find out empirically instead of arguing from first principles.

Does the IS-IS-plus-SR design from RFC 9717 handle a 60-node constellation cleanly? What does the convergence time distribution actually look like during a polar crossing event? At what constellation size or ground station density do reactive protocols start showing measurable forwarding gaps? Is there a specific topology geometry where centralized, proactive routing undeniably wins?
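A crude sense of scale for the size question: if link lifetimes are a few minutes and every link teardown and setup is a separate IGP event, the event rate grows linearly with constellation size. All numbers below are assumptions chosen only to illustrate the arithmetic.

```python
SATS = 60                  # assumed constellation size
GROUND_LINKS_PER_SAT = 1   # assumed average simultaneous gateway links per sat
PASS_MINUTES = 4.5         # assumed mean ground-link lifetime, minutes

# each link's lifecycle contributes two IGP events: one up, one down
events_per_minute = SATS * GROUND_LINKS_PER_SAT * 2 / PASS_MINUTES
print(f"~{events_per_minute:.0f} link events/min, before any ISL churn")
```

Tens of topology events per minute, sustained indefinitely, is a regime terrestrial IGP deployments rarely see outside of an outage. Whether that rate sits inside or outside IS-IS's comfortable operating envelope is exactly the kind of thing that should be measured rather than asserted.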

To answer those, you need an environment where you can deploy real routing code on a topology driven by actual orbital mechanics, and measure the results.

That is what this lab is for.

What's next

Next up: the architecture of NodalArc and NodalPath, initial results from running IS-IS against real orbital dynamics, and what the measurements show about where the interesting boundary conditions actually are.