From Orbital Math to Carrier Events

Post #002 defined the building blocks: satellite types, constellation geometry, ground stations, routing stacks. This post is about what happens when you turn them on and the network starts moving. Specifically, how orbital mechanics become the link up, link down, and latency shift events that your routing protocol actually sees.

Why the physics matters to your routing protocol

If you run networks on the ground, link events are essentially random. A fiber gets cut, an optic degrades, a line card fails. You can't predict when any of that is going to happen, so your routing protocol is reactive by design. Something breaks, the protocol detects it, the network reconverges. That's how it's always worked.

On orbit, every single link event is deterministic. You know exactly when an ISL will go out of range. When a ground station loses visibility. How the latency on every link shifts, second by second, for the next several orbits. The topology is in constant motion, but it's not random motion. It's physics you can calculate in advance.

Think about what that means. Imagine if you knew exactly when every fiber cut in your terrestrial network was going to happen, down to the millisecond, weeks in advance. Would you still run your IGP the same way? Would you still wait for the link to drop, detect the failure, flood an LSA, and reconverge? Or would you do something with that information?

That question is at the center of the reactive vs. proactive routing debate. The Orbital Mechanics Engine (OME) makes that predictability available. It takes the constellation primitives from post #002 and turns them into a continuous stream of precisely timed physical events. Everything after this point depends on what the OME produces.

What the OME computes

The OME takes in a constellation definition, the same YAML from post #002, and does two things with it: position propagation and visibility determination.

Position propagation is the orbital math. Given a satellite's orbital parameters (altitude, inclination, RAAN, true anomaly), the engine uses Keplerian propagation to compute where the satellite is at any given moment. It does this for every satellite in the constellation, continuously, producing position snapshots that include latitude, longitude, altitude, and velocity.
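To make that concrete, here's a minimal sketch of circular-orbit Keplerian propagation in Python. It's two-body physics only, ignoring perturbations like J2 and drag, and the function name and interface are illustrative rather than the OME's actual API:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

def propagate_circular(alt_m, inc_deg, raan_deg, anomaly_deg, t_s):
    """Position of a satellite in a circular orbit after t_s seconds.

    Minimal Keplerian propagation: two-body physics only, no J2 or drag.
    Returns an Earth-centered inertial (ECI) position vector in meters.
    """
    a = R_EARTH + alt_m                      # semi-major axis
    n = math.sqrt(MU_EARTH / a ** 3)         # mean motion, rad/s
    # For a circular orbit the anomaly advances uniformly with time.
    u = math.radians(anomaly_deg) + n * t_s  # argument of latitude
    inc = math.radians(inc_deg)
    raan = math.radians(raan_deg)
    # Rotate from the orbital plane into ECI coordinates.
    x = a * (math.cos(raan) * math.cos(u) - math.sin(raan) * math.sin(u) * math.cos(inc))
    y = a * (math.sin(raan) * math.cos(u) + math.cos(raan) * math.sin(u) * math.cos(inc))
    z = a * math.sin(u) * math.sin(inc)
    return (x, y, z)
```

Converting that ECI vector into the latitude/longitude snapshots mentioned above additionally requires accounting for Earth's rotation, which is elided here.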

Visibility determination is where it gets interesting for networking. The OME takes those positions and asks: for every possible pair of nodes (satellite to satellite, satellite to ground station), can they see each other right now? And if they can, what does the link actually look like?

"Can they see each other" is not a simple yes or no. For two satellites, line-of-sight means the Earth isn't in the way. But a clear line of sight doesn't mean a usable link. The OME also computes the range between them to determine propagation delay. It computes the angular rate to see how fast they're moving relative to each other. Finally, it checks if that angular rate is within the tracking limits of the ISL hardware defined in the satellite type.

That last piece is where the slew rate story from post #002 pays off. Remember the iridium-next satellite type with cross-plane antennas limited to 2.5 deg/s? The OME is what actually enforces that. It computes the angular rate between two satellites at their current positions, checks it against the terminal's max tracking rate, and if the rate exceeds what the hardware can handle, the link is not viable even though the two satellites can technically see each other. That's the polar seam, computed from first principles every time step.
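The angular-rate check is a small amount of vector math on top of the relative position and velocity. A sketch, with illustrative names and the 2.5 deg/s figure from post #002 as the default limit:

```python
import math

def angular_rate_deg_s(r_rel, v_rel):
    """Apparent angular rate of one satellite as seen from the other.

    r_rel: relative position vector (m), v_rel: relative velocity (m/s).
    omega = |r x v| / |r|^2, the rotation rate of the line-of-sight vector.
    """
    cx = r_rel[1] * v_rel[2] - r_rel[2] * v_rel[1]
    cy = r_rel[2] * v_rel[0] - r_rel[0] * v_rel[2]
    cz = r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]
    range_sq = sum(c * c for c in r_rel)
    return math.degrees(math.sqrt(cx * cx + cy * cy + cz * cz) / range_sq)

def link_viable(r_rel, v_rel, max_rate_deg_s=2.5):
    """Enforce the terminal's tracking limit from the satellite type."""
    return angular_rate_deg_s(r_rel, v_rel) <= max_rate_deg_s
```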

The OME publishes two kinds of events. PositionEvents are full-constellation snapshots: where every satellite and ground station is right now. VisibilityEvents are the link-level changes: this pair of nodes just became visible, or just lost visibility, or the link characteristics changed enough to matter. These events get published on a ZeroMQ channel. The OME doesn't care who's listening. It just computes the physics and publishes what happened.
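On the wire, that can be as simple as a ZeroMQ PUB socket pushing JSON. The port number, topic, and field names below are placeholders, not the OME's actual schema:

```python
import json
import time
import zmq

# Hypothetical port and message schema for illustration.
ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

def publish_visibility_event(node_a, node_b, visible, range_km, rate_deg_s):
    """Fire-and-forget publish; the OME never knows who is subscribed."""
    event = {
        "type": "VisibilityEvent",
        "ts": time.time(),
        "pair": [node_a, node_b],
        "visible": visible,
        "range_km": range_km,
        "angular_rate_deg_s": rate_deg_s,
    }
    pub.send_multipart([b"visibility", json.dumps(event).encode()])
```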

From orbital events to network events

The Topology Observer subscribes to the OME's event stream and turns VisibilityEvents into the things a routing protocol actually understands: interfaces appearing, interfaces disappearing, and link characteristics changing.

Here's what that looks like: the OME publishes a VisibilityEvent saying "satellite P02S03 and satellite P02S04 can now see each other at a range of 3,200 km, angular rate within tracking limits." The TO receives that and does the actual networking work:

It creates a veth pair between the two Kubernetes pods running those satellites' FRR instances. That veth pair is the link. As far as FRR is concerned, an interface just came up. Carrier detect. The routing protocol sees the adjacency form, exchanges hellos, and incorporates the new link into its topology database. IS-IS floods an LSP, OSPF floods an LSA. Normal stuff.
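The veth plumbing itself is a handful of ip commands. A sketch, assuming the TO has already resolved each pod's network namespace (the Kubernetes lookup is elided, and the interface names are illustrative):

```python
import subprocess

def run(cmd):
    """Thin wrapper so each step is visible; raises if a command fails."""
    subprocess.run(cmd, check=True)

def bring_up_isl(ns_a, ns_b, if_a="isl0", if_b="isl0"):
    """Create a veth pair and push one end into each pod's netns.

    ns_a / ns_b are the pods' network namespace names. FRR in each pod
    just sees an interface come up: carrier detect, adjacency, hellos.
    """
    run(["ip", "link", "add", "veth-a", "type", "veth", "peer", "name", "veth-b"])
    run(["ip", "link", "set", "veth-a", "netns", ns_a, "name", if_a])
    run(["ip", "link", "set", "veth-b", "netns", ns_b, "name", if_b])
    run(["ip", "netns", "exec", ns_a, "ip", "link", "set", if_a, "up"])
    run(["ip", "netns", "exec", ns_b, "ip", "link", "set", if_b, "up"])
```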

But the TO isn't done. It also applies traffic control shaping to that veth pair using tc netem for latency and tc tbf for bandwidth. At 3,200 km range, the one-way propagation delay is about 10.7 milliseconds (speed of light in vacuum). The TO sets that as the netem delay. The bandwidth gets set based on the ISL terminal spec from the satellite type definition. Now the link doesn't just exist, it has the right physical characteristics for its current geometry. And it keeps updating. As the two satellites move relative to each other, the range changes, and the TO adjusts the netem delay to match. A link that started at 3,200 km and 10.7 ms might drift to 4,100 km and 13.7 ms over the next few minutes. The tc parameters follow the physics in real time.
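The shaping step follows the same pattern: compute the light-time for the current range, then drive tc. A sketch along those lines, using netem as the root qdisc with tbf as its child; the queue sizing is illustrative, not a tuned value:

```python
import subprocess

C_M_PER_S = 299_792_458  # speed of light in vacuum

def propagation_delay_ms(range_km):
    """One-way light-time for the current slant range: 3,200 km -> ~10.7 ms."""
    return range_km * 1000 / C_M_PER_S * 1000

def shape_link(ns, ifname, range_km, rate_mbit):
    """Apply netem delay and tbf bandwidth inside the pod's namespace.

    'change' keeps updates in place as the geometry drifts; the first
    call after link creation would use 'add' instead.
    """
    delay = f"{propagation_delay_ms(range_km):.1f}ms"
    subprocess.run(["ip", "netns", "exec", ns, "tc", "qdisc", "change",
                    "dev", ifname, "root", "handle", "1:", "netem",
                    "delay", delay], check=True)
    subprocess.run(["ip", "netns", "exec", ns, "tc", "qdisc", "change",
                    "dev", ifname, "parent", "1:1", "handle", "10:", "tbf",
                    "rate", f"{rate_mbit}mbit", "burst", "32kb",
                    "latency", "50ms"], check=True)
```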

When the OME later publishes a VisibilityEvent saying that pair has gone out of range or the angular rate has exceeded tracking limits, the TO tears down the veth pair. FRR sees carrier loss. The adjacency drops. The routing protocol does whatever it does: floods the topology change, recomputes SPF, updates the forwarding table. The TO doesn't care what the routing protocol does about it. That's the routing protocol's problem.
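Teardown is the cheap side. Deleting one end of a veth pair removes its peer too, so a single command inside either namespace produces carrier loss on both sides (again with illustrative names):

```python
import subprocess

def tear_down_isl(ns, ifname="isl0"):
    """Deleting one end of a veth pair also removes the peer end.

    FRR on both sides sees carrier loss and drops the adjacency.
    """
    subprocess.run(["ip", "netns", "exec", ns,
                    "ip", "link", "del", ifname], check=True)
```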

This separation matters. The TO never makes a routing decision or influences path selection. It translates physical reality into interface events and gets out of the way.

Latency isn't static

On the ground, the latency on a link is effectively constant. A fiber between New York and Chicago is about 15 milliseconds. It was 15 milliseconds yesterday, it'll be 15 milliseconds tomorrow. Your routing protocol can treat link delay as a fixed property of the interface. Most do. IS-IS TE and OSPF TE can carry delay metrics, but in practice those values get set once and nobody touches them unless something physical changes.

On orbit, latency is changing all the time. Two satellites in adjacent planes might be 1,500 km apart at one point in the orbit and 4,800 km apart ten minutes later. That's a propagation delay swing from about 5 ms to about 16 ms on a single link, over the course of minutes. Multiply that by every link in the constellation and you've got a network where the delay characteristics of every path are continuously drifting.

The OME computes the range between every connected pair on a configurable interval. The TO takes those range updates and adjusts the tc netem delay on each veth pair to match. So a link doesn't just exist or not exist. While it exists, its latency is moving. A path that was optimal five minutes ago might not be anymore, not because anything failed, but because the geometry shifted and a different set of hops is now shorter.
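In pseudocode terms, the delay-tracking side of the TO is just a loop over active links on the OME's update interval. A sketch, with every name hypothetical:

```python
import time

def follow_geometry(links, get_range_km, update_delay, interval_s=1.0):
    """Re-shape every active link on a fixed interval.

    links: iterable of (namespace, ifname, pair) tuples the TO tracks.
    get_range_km: callback returning the latest OME range for a pair.
    update_delay: applies the new netem delay, e.g. shape_link above.
    """
    while True:
        for ns, ifname, pair in links:
            update_delay(ns, ifname, get_range_km(pair))
        time.sleep(interval_s)
```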

Whether the routing protocol notices this depends entirely on the protocol and its configuration. IS-IS with TE extensions can carry delay information, but how often does it re-flood that? What's the granularity? Is SPF going to recompute for a 3 ms latency change on one link? Probably not. Should it? That depends on the traffic and the SLA. Finding out is why the lab exists.

Everything is on the bus

You might have noticed a pattern in the last few sections. The OME publishes events but doesn't know who's listening. The TO consumes those events and publishes its own, but doesn't know what routing stack is running. The MI watches everything but never interferes with anything. None of these components know about each other. That's on purpose.

The whole system runs on a ZeroMQ message bus. The OME publishes position and visibility events on one channel. The TO publishes link state changes on another. The MI publishes convergence and measurement events on a third. Each component has its own port, its own message format, and no dependencies on anything downstream.

The browser console you see in the screenshots is just one consumer. It's a FastAPI service called VS-API that subscribes to all three channels, builds up an in-memory picture of the constellation state, and pushes snapshots to the frontend over a WebSocket at about 1 Hz. It also exposes a REST API for historical state, link event queries, convergence metrics, and path traces.

But that's just the visualization we built. The event streams are there for anyone to tap. You could write your own analysis pipeline that subscribes to the TO's link events and does something completely different with them. A custom dashboard that only cares about convergence timing would work just as well. You could record the raw event stream and replay it later against a different analysis tool. The bus doesn't care. It's just moving messages.
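A consumer can be a dozen lines of Python. This one subscribes to a hypothetical link-state channel and prints every flap; the endpoint, topic, and field names are placeholders for whatever the TO actually publishes:

```python
import json
import zmq

# Hypothetical endpoint and topic; substitute the TO's actual channel.
ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5557")
sub.setsockopt(zmq.SUBSCRIBE, b"linkstate")

# A minimal consumer: log every link event with its timestamp. Anything
# else on the bus keeps flowing; this subscriber is invisible to the TO.
while True:
    topic, payload = sub.recv_multipart()
    event = json.loads(payload)
    if event.get("type") == "LinkStateChange":
        print(event["ts"], event["pair"], event["state"])
```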

This is the same design principle from post #002 carried into the runtime. The primitives are independent YAML files you can mix and match. The runtime components are independent processes you can swap, extend, or replace. NodalPath, which we'll get to in post #004, is exactly this: another consumer of the OME's event stream, doing something completely different with it.

What's next

Post #004 picks up where the OME leaves off. If the topology is deterministic and you can compute the future, should you use that to install forwarding state before the link events happen? That's what NodalPath does, and whether it actually produces better outcomes than letting IS-IS reconverge is the experiment.