
Reputation and Trust in Peer-to-Peer Networks

This article deals with the problem of trusting strangers in a network, and with solutions to it. In a network where you transact with identities you do not know, you cannot tell in advance whether you will really get what you want. To solve this problem you need a system that can tell you how suitable a peer is for your needs. Such a system is called a trust system or reputation system.

Example Scenarios

Filesharing: Peer A is part of a filesharing network and wants to get some data. Peer A searches and finds a couple of peers providing that data, and needs to select a transaction partner. How to choose depends on the P2P network used; usually factors like speed or distance drive that decision. When it is important to get reliable data, the reputation of the providing peer becomes an important factor.

"Expert" network: Peer A is part of a network that answers questions by having experts take part in it. Peer A has a question and some "experts" can answer it. It would be very useful if Peer A could somehow know how credible those experts are.

Network protection: A P2P network is usually open to everyone, so it is also open to malicious users who want to damage it. Depending on the architecture, they can do so by flooding, providing wrong data, manipulating searches and so on. For the network to keep running well it needs to protect itself somehow. Restricting users depending on their reputation is one way to do so.

Reputation and Trust in general

Reputation is the general opinion of the public towards an entity. An entity's reputation can vary across topics and across people. Usually someone with a high reputation is trusted more than someone with a low reputation.

Trust represents a relation between expectation and reality. Trust between transaction partners usually begins with no trust at all, but it can be influenced by facts and by the opinions of others, and it builds up over time. It stabilises at a level that represents the whole history of the relation. It can also change immediately given a strong difference between expectation and reality. Usually there is a big difference between not trusting yet (due to lack of knowledge) and not trusting anymore (due to a bad history).
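The dynamics described here (slow build-up, sharp drop on a strong expectation/reality mismatch) can be sketched as a simple update rule. The function name and the constants are illustrative assumptions, not something prescribed by the article.

```python
# Illustrative trust-update rule: trust rises slowly with each met
# expectation and drops sharply on a strong violation, mirroring the
# difference between "not trusting yet" and "not trusting anymore".

def update_trust(trust, outcome, gain=0.1, penalty=0.5):
    """Return new trust in [0, 1]. outcome=1 means expectation met,
    outcome=0 means a strong expectation/reality mismatch."""
    if outcome:
        return trust + gain * (1.0 - trust)   # slow build-up toward 1
    return trust * (1.0 - penalty)            # immediate sharp drop

t = 0.0                        # relationship starts with no trust at all
for _ in range(10):
    t = update_trust(t, 1)     # ten good transactions build trust up
high = t
t = update_trust(t, 0)         # a single betrayal halves the trust
```

With these (assumed) constants, ten good transactions raise trust to about 0.65, and one betrayal cuts it in half, so recovering takes far longer than losing it.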

Reputation System

The goals of a reputation system are to mitigate bad behaviour and to encourage good behaviour. To achieve this the system needs knowledge about past behaviour, means to score and rank participants, and means to react to that score. In a way, our society itself is such a reputation system.

Taxonomy of Trust

How can trust be measured?

Design-Characteristics for Reputationsystems in P2P Networks


Selfish users try to benefit without contributing. This is not really a threat, but behaviour that needs to be minimized in order to build a useful network.

Malicious users want to damage and destroy a network by using different techniques, which can be categorized as follows:

  • Traitors
  • Collusion
  • Front Peers
  • Whitewashers
  • Denial of Service

Components of a Trust/Reputation System

Information Gathering and Sharing


Sufficiently persistent identities are needed in order to get useful information about an identity's reputation. It is therefore important to consider how a P2P network handles identities.

Anonymity can vary along a scale. High anonymity results in very little information about a user's history.

Spoof resistance should be considered to prevent adversaries from impersonating other peers' identities.

Unforgeability refers to IDs that cannot be created by an ordinary peer. Usually unforgeable IDs are created by some trusted (potentially central) system entity. This property can prevent whitewashing.
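One common way to get spoof resistance without a central authority is a "self-certifying" ID derived from a public key. The sketch below is an assumption-laden illustration (the names and the truncated hash length are made up); a real system would additionally require the peer to prove key ownership with an actual signature scheme such as Ed25519.

```python
# Illustrative self-certifying peer ID: the ID is a hash of the peer's
# public key, so claiming someone else's ID requires their key material.

import hashlib

def peer_id(public_key_bytes: bytes) -> str:
    # Binding the ID to the key makes spoofing a hash-preimage problem.
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def verify_claim(claimed_id: str, public_key_bytes: bytes) -> bool:
    # Anyone can check that a presented key really matches the claimed ID.
    return peer_id(public_key_bytes) == claimed_id

alice_key = b"alice-public-key"       # stand-in for real key bytes
alice_id = peer_id(alice_key)
ok = verify_claim(alice_id, alice_key)            # True
spoof = verify_claim(alice_id, b"mallory-key")    # False: wrong key
```

Note that such IDs are spoof-resistant but still forgeable in the whitewashing sense: a peer can always generate a fresh key pair, which is why the article mentions trusted entities for unforgeable IDs.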

Information Sharing

In general, quantity and quality of information are diametrically opposed. As the amount of information gathered increases, the credibility of each piece of information usually decreases.

Trust can be computed purely locally, which means one does not have any information about strangers.

External means that one can ask users whom one knows from outside the network and trusts a priori.

Transitive means that one can ask a peer (a neighbour) for its opinion about a third peer, and so on.

Global means that information about every user in the network is collected by all users. This is the most comprehensive as well as the most complex solution.

In conclusion, a peer's reputation is based on information collected about that peer from one or multiple sources. The primary sources are: personal experience, external trusted sources, one-hop trusted peers, multi-hop trusted peers, and a global system. Each source provides increasingly more information about peers. However, that information becomes increasingly less credible as well.
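The trade-off above — more hops reach more peers but yield less credible information — can be sketched with a common (assumed, not from this article) model of transitive trust: trust along a chain of neighbours is the product of the direct trust values on the path, so it shrinks with every hop.

```python
# Minimal sketch of transitive trust: my trust in a distant peer through a
# chain of neighbours is the product of the direct trust values along the
# path, so each extra hop reduces credibility.

def transitive_trust(path_trust):
    """path_trust: direct trust values along the chain, e.g.
    [me->A, A->B, B->target], each in [0, 1]."""
    result = 1.0
    for t in path_trust:
        result *= t
    return result

one_hop = transitive_trust([0.9])                 # 0.9
three_hops = transitive_trust([0.9, 0.9, 0.9])    # 0.729, less credible
```

Multiplication is only one possible combination rule; other systems use minimums or weighted averages, but all share the property that longer chains carry less weight.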

Dealing with Strangers

There are global and local strangers. Strangers can either be trusted from the beginning (optimistic) or not trusted by default until they have proven to behave well (pessimistic). Both approaches have drawbacks: in the optimistic approach each peer is likely to be cheated by a stranger, whereas in the pessimistic approach new peers won't be able to build up a good reputation. One way of solving this is to adapt the strategy depending on the amount of fraud in the network. Using a "generosity" metric based on recent stranger transactions, an agent estimates the probability of being cheated by the next stranger and uses that probability to decide whether to trust it.
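The adaptive strategy can be sketched as follows. The class name, window size, and the exact decision rule (trust with probability equal to the observed cooperation rate) are illustrative assumptions about how such a "generosity" metric might be realised.

```python
# Sketch of adaptive stranger handling: keep a sliding window of recent
# stranger transactions, estimate the cheat rate from it, and trust the
# next stranger with probability equal to the observed cooperation rate.

import random
from collections import deque

class StrangerPolicy:
    def __init__(self, window=10):
        self.history = deque(maxlen=window)   # 1 = honest, 0 = cheated

    def record(self, honest: bool):
        self.history.append(1 if honest else 0)

    def cheat_probability(self) -> float:
        if not self.history:
            return 0.0                        # optimistic before any data
        return 1.0 - sum(self.history) / len(self.history)

    def trust_next_stranger(self, rng=random.random) -> bool:
        return rng() < 1.0 - self.cheat_probability()

policy = StrangerPolicy()
for _ in range(8):
    policy.record(True)
for _ in range(2):
    policy.record(False)
# 2 of the last 10 stranger transactions were fraud -> 20% cheat estimate
```

As fraud in the network rises, the window fills with bad outcomes and the agent drifts toward the pessimistic strategy; when fraud subsides, it becomes generous again.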

Scoring and Ranking

Having collected information about a peer's history, a reputation score should be computed. This can be done by an interested agent, by a centralized entity or by a collective of peers. The score helps an agent decide whether another peer is suitable for the requested service.


It has to be considered which inputs should be used for a reputation score function. Ideally the amounts of cooperation and of defection should both be considered. Counting defections alone is useless against selfish peers, so one can try to compute a contribution rate based on positively answered requests. When malicious peers are around, defection is more obvious, so a defection rate can be computed. This value should outweigh the cooperation rate.

Quality and quantity should be parameters of a scoring function: if a peer defects on a valuable transaction, it should be given a worse reputation than if it had defected on (maybe a few) transactions of low value.

Another parameter should be time. More recent behaviour should have more weight than behaviour from long ago, so that a sudden change in a peer's behaviour can be discovered and reflected in its reputation.
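The three inputs just discussed can be combined in one sketch of a scoring function: transaction value weights each event, an exponential decay discounts old behaviour, and defection is weighted more heavily than cooperation. All constants (half-life, defect weight) are assumptions for illustration.

```python
# Illustrative reputation score: value-weighted events, exponential
# recency decay, and defection outweighing cooperation by a fixed factor.

import math

def reputation_score(events, now, half_life=100.0, defect_weight=3.0):
    """events: list of (timestamp, value, cooperated) tuples."""
    score = 0.0
    for ts, value, cooperated in events:
        decay = math.exp(-math.log(2) * (now - ts) / half_life)  # recency
        if cooperated:
            score += decay * value
        else:
            score -= decay * value * defect_weight  # defect weighs more
    return score

# A recent defection hurts much more than an old one of the same value:
recent_defect = reputation_score([(90, 1.0, True), (99, 1.0, False)], now=100)
old_defect    = reputation_score([(10, 1.0, False), (99, 1.0, True)], now=100)
```

The half-life controls how quickly a sudden behaviour change shows up in the score; a shorter half-life reacts faster but forgets good history sooner.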


A reputation score could be binary (trust, no trust) or some sort of scale. It is useful to maintain a peer's score as multiple component scores, so that each peer can form its own opinion based on the given information and its preferences. Possible component scores are quality, quantity, good behaviour, bad behaviour and so on, depending on whatever information the system collects.

Peer Selection

Deciding whether or not to trust an unknown peer can be done using a selection threshold: if a peer's score is below the threshold, it is not selected; otherwise it is. If more than one peer is able to provide the service, the peer with the highest score should be chosen.
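This selection rule is short enough to state directly in code; the threshold value here is an arbitrary assumption.

```python
# Threshold-based peer selection: drop peers below the threshold, then
# pick the highest-scoring remaining provider.

def select_peer(candidates, threshold=0.5):
    """candidates: dict of peer_id -> reputation score. Returns the
    chosen peer_id, or None if no peer clears the threshold."""
    eligible = {p: s for p, s in candidates.items() if s >= threshold}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

chosen = select_peer({"A": 0.9, "B": 0.6, "C": 0.3})   # picks "A"
nobody = select_peer({"C": 0.3, "D": 0.1})             # None: all too low
```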

Taking Action


Mechanisms to encourage cooperation are called incentive schemes. They are very effective in combating selfishness, since a peer gains a benefit from contributing. They are also useful against malicious peers, since those will have to contribute more the more they want to damage the network. The most common types of incentives are money and improved service. Speed refers to higher bandwidth or lower latency as a reward for good work. Quality can be used as an incentive if there is a way of controlling the quality of the service provided (for example streaming audio). Quantity could mean that a good peer is rewarded with higher connectivity or may use a service more often. Money means that a peer is rewarded with money, usually via a micropayment system.
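As a concrete illustration of the "speed" incentive, a provider might divide its upload bandwidth among requesters in proportion to their reputation scores. The function and all numbers below are made-up examples, not a scheme from the article.

```python
# Illustrative "improved service" incentive: split a fixed bandwidth
# budget among requesters in proportion to their reputation scores, so
# contributors get faster service.

def allocate_bandwidth(total_kbps, requesters):
    """requesters: dict of peer_id -> reputation score (>= 0)."""
    total_score = sum(requesters.values())
    if total_score == 0:
        share = total_kbps / len(requesters)   # no history: split evenly
        return {p: share for p in requesters}
    return {p: total_kbps * s / total_score for p, s in requesters.items()}

alloc = allocate_bandwidth(1000, {"good": 0.8, "new": 0.2})
# good peer gets 800 kbps, the newcomer 200 kbps
```

The even split for unknown peers reflects the "dealing with strangers" trade-off above: newcomers still get some service, so they have a chance to build a reputation.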


While incentives work well against selfish behaviour, means of punishment are needed to combat bad behaviour. The reputation system can detect actively malicious peers and react by warning others (for example by giving such a peer a bad reputation score) or by banning it temporarily or permanently from the network.


There are a couple of reputation systems that address some of the points discussed above. One example is the EigenTrust algorithm.
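The core idea of EigenTrust can be sketched in a few lines: each peer's local trust values are normalized into a matrix C, and the global trust vector is the principal left eigenvector of C, computed by power iteration. This centralized sketch omits EigenTrust's pre-trusted peers and its distributed computation, and the example matrix is made up.

```python
# Minimal centralized sketch of the EigenTrust idea: global trust is the
# principal left eigenvector of the normalized local-trust matrix,
# obtained by iterating t <- C^T t from a uniform start vector.

def eigentrust(local_trust, iterations=50):
    """local_trust: square matrix, local_trust[i][j] = i's trust in j."""
    n = len(local_trust)
    # Row-normalize so each peer's outgoing trust values sum to 1.
    c = []
    for row in local_trust:
        s = sum(row)
        c.append([v / s for v in row] if s else [1.0 / n] * n)
    t = [1.0 / n] * n                      # uniform start vector
    for _ in range(iterations):
        t = [sum(c[i][j] * t[i] for i in range(n)) for j in range(n)]
    return t

# Peer 2 receives high local trust from both others, so it ends up with
# the highest global trust value.
scores = eigentrust([[0, 1, 4],
                     [2, 0, 8],
                     [3, 2, 0]])
```

Because each row of C sums to 1, the iteration preserves the total trust mass, and the result can be read as a probability distribution over peers.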

When anonymity is needed, any reputation system that collects information about the history of an identity is inappropriate. One solution is a reputation currency, which is not bound to any identity: it can be moved from one entity to another and can be given as a reward for good behaviour. But this raises a couple of real-world money problems. Where and by whom is the currency created? Can it be created by the transacting peers? How long is it valid? What about inflation? Furthermore, this currency can be earned in one context and spent in another where the owner is not qualified.


The EigenTrust Algorithm

Further information

  • Literature
    • A Survey of Trust Management and Resource Discovery Technologies in Peer-to-Peer Applications, (PDF)
    • Paper TrustMetrics Survey (HTML)
    • A Survey of Trust and Reputation Systems for Online Service Provision (PDF)
    • Reputation by Roger Dingledine, Michael J Freedman, David Molnar, David Parkes, Paul Syverson (HTML)