Train locally
Each approved party trains on its own data, inside its own boundary. Records never cross the wall.
→Share the updates. Not the data. Train across parties without moving records out of the rooms they belong in, and keep the proof on the round.
Every collaborator visible before the round can start.
Local sites train. Only the model updates travel.
Aggregation proof and budget left, on one record.
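The round described above (parties train locally, only updates travel, the gains are averaged) can be sketched in a few lines. Everything here is illustrative: the `Party` class, `run_round`, and the toy gradient step are assumptions for the sketch, not the platform's API.

```python
# A minimal sketch of one federated round: each party trains on its own
# records, and only the weight update leaves the site.
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    records: list[float]   # stays inside the party's boundary

    def local_update(self, global_weight: float) -> float:
        # One toy gradient step toward the local mean; only the
        # resulting delta is shared, never self.records.
        local_mean = sum(self.records) / len(self.records)
        return 0.1 * (local_mean - global_weight)

def run_round(parties: list[Party], global_weight: float) -> float:
    # Only the updates travel; they are averaged into the shared model.
    updates = [p.local_update(global_weight) for p in parties]
    return global_weight + sum(updates) / len(updates)

parties = [Party("a", [1.0, 2.0]), Party("b", [3.0, 5.0])]
w = run_round(parties, global_weight=0.0)
```

In a real deployment the update would be a gradient or weight delta and the average would happen under secure aggregation, so no single party's update is visible to the server in the clear.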
Train locally. Share updates only. Aggregate the gain, with the proof on the round.
→Updates leave the site, records don't. Secure aggregation and a privacy budget keep the spend-down visible.
→The round leaves with a proof hash, participant health, and remaining budget — attached to the release.
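The budget spend-down can be sketched as a simple accountant that charges each round and refuses to start one it cannot afford. This assumes plain epsilon composition; real deployments use tighter accountants, and the class and method names here are hypothetical.

```python
# A sketch of per-round differential-privacy budget accounting,
# assuming simple additive (epsilon) composition.
class PrivacyBudget:
    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> float:
        """Spend epsilon for one round; refuse if headroom is exhausted."""
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted; round cannot start")
        self.spent += epsilon
        return self.total - self.spent   # remaining headroom, on the record

budget = PrivacyBudget(total_epsilon=1.0)
remaining = budget.charge(0.3)   # headroom left after the round
```

The point of the accountant is that the spend-down is visible before the round starts, not discovered after it.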
A round reads like every other release on the platform. Signal, review, gate — with the budget and the proof attached.
Federation reads like the rest of the platform — every round leaves a record the next one has to clear.
Local updates submitted by each approved party, never the records behind them.
Differential privacy budget spend-down and remaining headroom, on every round.
Who joined, who dropped, communication cost, and the secure aggregation status.
Proof hash, participant health, and verdict — attached to the release record.
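The release record listed above might look like this sketch: a deterministic hash over the round's metadata, so the same inputs always reproduce the same proof on review. All field names and the schema are assumptions for illustration, not the platform's actual record format.

```python
# A sketch of the record a round leaves behind: participant health,
# remaining budget, verdict, and a proof hash over all of it.
import hashlib
import json

def round_record(round_id, joined, dropped, budget_remaining, verdict):
    body = {
        "round": round_id,
        "joined": sorted(joined),
        "dropped": sorted(dropped),
        "budget_remaining": budget_remaining,
        "verdict": verdict,
    }
    # Deterministic serialization so the hash is reproducible on review.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "proof_hash": digest}

rec = round_record("r-7", ["a", "b", "c"], ["c"], 0.7, "approved")
```

Because the hash is computed over a canonical serialization, the next round can verify the previous record before it clears.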
Test the run. Review the hard cases. Recruit the right specialist. Remember what each party can share. Approve what's right.
Fine-tune with the rubric, the reviewers, and the data you already keep.
→Deterministic environments for agents that need to be tested before they ship.
→Generated populations that fill the gaps without crossing the wall.
Bring the parties. Bring the boundaries. We'll handle the rounds, the budget, and the proof.