
Differential privacy has become the gold standard for protecting individual data in analytics and machine learning, but it still relies on outdated assumptions about how people trust one another. Traditional models of differential privacy force developers into a binary: either assume a single trusted curator (central DP) or assume that no one trusts anyone (local DP). In reality, trust is more complex and far more contextual.
A new model called Trust Graph Differential Privacy (TGDP) aims to better reflect these realities. Introduced in a recent paper presented at ITCS 2025, TGDP models user trust as a graph, where edges represent the individuals with whom each person is willing to share data. The framework interpolates between the central and local models, allowing systems to reason about privacy guarantees under more realistic, fine-grained trust assumptions.
Differential Privacy: A Quick Recap
Differential privacy (DP) is a mathematical framework that provides provable guarantees about the privacy of individuals in a dataset. In its most basic form, DP ensures that the inclusion or exclusion of a single individual’s data has a negligible impact on the result of any analysis. That way, no adversary can confidently infer whether any one person’s data was used, even with auxiliary information.
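Formally, and using the standard textbook definition rather than anything specific to the ITCS paper, a randomized algorithm M is ε-differentially private if, for any two datasets D and D′ that differ in one person’s data and any set of possible outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]. The smaller the privacy parameter ε, the less any single person’s data can shift the output distribution.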
There are two dominant models here:
- Central Differential Privacy: In this model, a trusted data curator collects raw data from users and adds noise before performing computations. It offers strong utility because the curator has access to the full dataset but requires users to place complete trust in a single party.
- Local Differential Privacy: In this approach, each user’s device adds noise to their data before sending it anywhere. It removes the need for a trusted intermediary but significantly reduces utility due to the higher noise required at the individual level.
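To make that utility gap concrete, here is a minimal sketch that estimates a sum of values in [0, 1] under each model using the standard Laplace mechanism. It is a textbook illustration, not code from the paper, and the variable names are my own:

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 1.0
values = rng.random(10_000)  # each user holds one value in [0, 1]

# Central DP: a trusted curator sums the raw values, then adds a single dose
# of Laplace noise calibrated to the sum's sensitivity (1, since each person
# contributes at most 1).
central_estimate = values.sum() + rng.laplace(scale=1.0 / epsilon)

# Local DP (simplified): every user perturbs their own value before it leaves
# their device, so noise is added 10,000 times and the total error grows with
# the square root of the number of users.
noisy_values = values + rng.laplace(scale=1.0 / epsilon, size=values.shape)
local_estimate = noisy_values.sum()

print(f"true sum:         {values.sum():.1f}")
print(f"central DP error: {abs(central_estimate - values.sum()):.1f}")
print(f"local DP error:   {abs(local_estimate - values.sum()):.1f}")
```

With ε = 1 and 10,000 users, the central estimate is typically off by only a unit or two, while the local estimate is off by a number on the order of a hundred, since its noise accumulates across every user.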
Both models involve tradeoffs. Central DP offers better accuracy but assumes complete trust in a central authority; local DP offers stronger user autonomy but at the cost of degraded utility. What they share is a rigid view of trust: either everyone trusts the center, or no one trusts anyone. As powerful as they are, they encode overly simplistic assumptions about how trust works in real-world systems.
This binary framing limits the applicability of DP in systems where trust is selective and asymmetric, an everyday reality in social, organizational, and federated environments. Trust Graph DP proposes a new path forward.
Enter Trust Graph Differential Privacy (TGDP)
In real-world systems, from social platforms and federated networks to collaborative analytics, trust is rarely absolute. It’s partial, contextual, and often asymmetric. As the authors of the paper note, a person might share sensitive data with close friends or collaborators but not with unknown parties. Organizations often establish selective data-sharing agreements based on institutional relationships rather than blanket trust.
That’s where Trust Graph Differential Privacy (TGDP) comes in. By introducing a way to model partial trust formally as a graph, TGDP opens up a new middle ground, allowing systems to preserve privacy while respecting the complex, real-world ways in which people and organizations share data.
How It Works
In a trust graph, each node represents a user, and an edge between two users indicates a trust relationship. These edges can be based on anything from social ties to institutional agreements or federated device configurations. TGDP uses this structure to define which users’ data can be seen, aggregated, or used in computations, while still providing differential privacy guarantees with respect to all users outside a given trust circle.
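As a concrete illustration, a trust graph can be represented by nothing fancier than an adjacency map from each user to the set of users they trust. This is a hypothetical sketch in Python; none of the names below come from the paper, and trust is modeled as symmetric for simplicity:

```python
# A minimal trust graph: each user maps to the set of users they trust.
# Trust is modeled here as a mutual (undirected) edge, following the
# article's description; the paper's formal model may differ in details.
TrustGraph = dict[str, set[str]]

def add_trust(graph: TrustGraph, a: str, b: str) -> None:
    """Record a mutual trust relationship between users a and b."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def trusted_recipients(graph: TrustGraph, user: str) -> set[str]:
    """Who may see `user`'s raw data: the user plus their trusted neighbors."""
    return {user} | graph.get(user, set())
```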
Instead of asking, “Do you trust the system?” TGDP asks, “Whom do you trust within the system?”
This graph-based approach allows TGDP to interpolate between the extremes of the central and local models:
- A star graph, where all users are connected to a single central node (e.g., a curator or platform), maps to the central DP model. Everyone trusts one party.
- A fully disconnected graph, where no edges exist, corresponds to the local DP model. No one trusts anyone.
- Any graph structure in between—clusters of friends, institutional hierarchies, neighborhood-style federations—enables a gradient of trust and a gradient of utility.
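Continuing the toy representation above, the two extremes and one middle ground might be constructed like this (again purely illustrative, with invented user names):

```python
users = [f"user{i}" for i in range(6)]

# Star graph: every user trusts one central party -> recovers central DP.
star: TrustGraph = {}
for u in users:
    add_trust(star, u, "curator")

# Edgeless graph: no one trusts anyone -> recovers local DP.
isolated: TrustGraph = {u: set() for u in users}

# In between: small trusted clusters, such as pairs of close collaborators.
clusters: TrustGraph = {}
for a, b in [("user0", "user1"), ("user2", "user3"), ("user4", "user5")]:
    add_trust(clusters, a, b)

print(trusted_recipients(star, "user0"))      # {'user0', 'curator'}
print(trusted_recipients(isolated, "user0"))  # {'user0'}
print(trusted_recipients(clusters, "user0"))  # {'user0', 'user1'}
```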
By tailoring privacy guarantees to each user’s local trust environment, TGDP can offer higher utility than local DP while maintaining more realistic privacy boundaries than central DP. It reflects a philosophical shift as much as a technical one: from privacy as a global policy to privacy as a networked, context-aware contract.
How Trust Affects Accuracy
In TGDP, privacy is tied to trust, but so is performance. The more people you trust (and who trust each other), the more accurately you can compute things without compromising privacy.
To study this, the authors look at a simple example: adding up everyone’s data.
If every user hands their raw data to someone they trust, and only those trusted users report noisy partial results, you can obtain a reasonably accurate estimate of the total. How accurate it is depends on how the trust graph is structured.
Two graph ideas help explain this:
- A dominating set is a group of users such that everyone outside the group trusts at least one of its members. If just those users report noisy data, you get a strong result: fewer trusted reporters means less noise overall (the sketch after this list shows the idea).
- The packing number measures how fragmented the graph is: roughly, the number of users whose trust circles don’t overlap with anyone else’s. If the graph splinters into too many non-overlapping trust circles, that limits the accuracy of any result, regardless of the approach.
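Here is a rough sketch of the dominating-set idea for the summation task, reusing the toy TrustGraph from earlier. It is meant to show why fewer reporters means less total noise, not to reproduce the paper’s actual protocol; values are assumed to lie in [0, 1], and every user is assumed to appear as a key in the graph:

```python
import numpy as np

def dominating_set(graph: TrustGraph) -> set[str]:
    """Greedy approximation: repeatedly pick the user who covers the most
    still-uncovered users (themselves plus their trusted neighbors)."""
    uncovered = set(graph)
    chosen: set[str] = set()
    while uncovered:
        best = max(graph, key=lambda u: len(({u} | graph[u]) & uncovered))
        chosen.add(best)
        uncovered -= {best} | graph[best]
    return chosen

def noisy_sum(graph: TrustGraph, values: dict[str, float], epsilon: float) -> float:
    """Each user hands their raw value to a trusted reporter in the dominating
    set; each reporter releases one noisy partial sum. Total noise scales with
    the number of reporters rather than the number of users."""
    rng = np.random.default_rng()
    reporters = dominating_set(graph)
    partial = {r: 0.0 for r in reporters}
    for user, value in values.items():
        # Keep the value if the user is a reporter; otherwise send it to a
        # trusted neighbor who is (one exists, by the dominating property).
        target = user if user in reporters else next(iter(graph[user] & reporters))
        partial[target] += value
    # One dose of Laplace noise per reporter (sensitivity 1 per partial sum,
    # since each value is in [0, 1]).
    return sum(p + rng.laplace(scale=1.0 / epsilon) for p in partial.values())
```

On a star graph, the greedy step picks just the curator, so a single dose of noise is added, matching central DP; on an edgeless graph, every user is their own reporter, so noise is added once per user, matching local DP.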
In short, more shared trust means better results. Less trust means more noise or no result at all.
From Theory to Practice: Why TGDP Matters
The benefits of TGDP extend beyond academic models. Its trust-aware approach maps directly onto real-world systems where data is shared selectively, such as social platforms, healthcare collaborations, federated learning, and distributed AI training.
Take federated learning: devices compute model updates locally, and a server aggregates them. TGDP allows updates to be shared only through trusted peers, preserving privacy without flooding the system with noise. The same principle applies to any setting where users or institutions need fine-grained control over who sees what.
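As a hypothetical sketch of that idea (not the paper’s protocol, and with invented names such as report_through_peer and clip_norm), a device’s update could travel only to a peer it trusts, and that peer could add noise once for the whole group before anything reaches the server:

```python
import numpy as np

def report_through_peer(updates: list[np.ndarray], epsilon: float,
                        clip_norm: float = 1.0) -> np.ndarray:
    """A trusted peer aggregates the raw model updates of the devices that
    trust it, clips each one, and releases a single noisy sum to the server."""
    rng = np.random.default_rng()
    total = np.zeros_like(updates[0])
    for u in updates:
        # Clip each update's L1 norm so one device's influence is bounded,
        # which is what calibrates the Laplace noise below.
        l1 = np.abs(u).sum()
        total += u * min(1.0, clip_norm / max(l1, 1e-12))
    # One dose of per-coordinate Laplace noise for the whole group,
    # instead of one dose per device as in pure local DP.
    return total + rng.laplace(scale=clip_norm / epsilon, size=total.shape)
```

Untrusted parties, including the server, would only ever see the noisy aggregate, while devices inside a trust circle see each other’s raw updates, which is exactly the boundary the trust graph is meant to encode.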
TGDP reflects how people already manage privacy: selectively, contextually, and relationally. As platforms and policies move away from centralized data collection and toward federated, user-controlled systems, TGDP may offer a path to stronger privacy without sacrificing accuracy, aligned with how trust works. By formalizing partial, asymmetric trust as part of the privacy framework itself, it bridges the gap between rigid theoretical models and messy real-world dynamics.

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain – clearly – what it is they do.