ARM4SNS:ReputationFunctions

=PageRank=
*<math>P</math>: set of hyperlinked webpages<br>
*<math>u,v</math>: webpages in <math>P</math><br>
*<math>N^{-}(u)</math>: set of webpages pointing to <math>u</math><br>
*<math>N^{+}(v)</math>: set of webpages that <math>v</math> points to<br>
*the PageRank is: <math>R(u)=cE(u)+c\sum_{v\in N^{-}(u)} {R(v)\over{|N^{+}(v)|}}</math> (1.)<br>
*<math>c</math> is chosen such that <math>\|R\|_{1}=1</math><br>
*<math>E</math> is a vector over <math>P</math> corresponding to a source of rank<br>
*first term of function (1.) <math> cE(u) </math> gives the rank value based on the initial rank<br>
*second term of (1.) <math>c\sum_{v\in N^{-}(u)} {R(v)\over{|N^{+}(v)|}}</math> gives the rank value as a function of the hyperlinks pointing at <math> u </math><br>
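The recursion in (1.) is usually solved by fixed-point iteration. A minimal sketch in Python (the graph, the damping value and all names are illustrative assumptions, not from the source; every page is assumed to appear as a key of <code>links</code>):

```python
def pagerank(links, c=0.85, iterations=50):
    """Iterate R(u) = c*E(u) + c * sum over v in N-(u) of R(v)/|N+(v)|.

    `links` maps each page v to the set N+(v) of pages v points to.
    E is taken uniform here and c acts as a damping/normalisation factor.
    """
    pages = list(links)
    n = len(pages)
    E = {u: 1.0 / n for u in pages}            # uniform source of rank
    R = dict(E)                                 # initial rank
    for _ in range(iterations):
        nxt = {u: c * E[u] for u in pages}      # first term: c*E(u)
        for v in pages:
            out = links[v]
            for u in out:                       # second term: spread R(v) over N+(v)
                nxt[u] += c * R[v] / len(out)
        R = nxt
    return R
```

With a uniform <math>E</math> this behaves like the familiar damped PageRank: pages with more incoming links accumulate a higher rank.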

=Beta=
'''Reputation Function'''<br>
Let <math>r^{X}_{T}</math> and <math>s^{X}_{T}</math> represent the collective amount of positive and negative feedback about a Target T provided by an agent (or collection of agents) denoted by X. Then the function <math>\varphi(p|r^{X}_{T},s^{X}_{T})</math> defined by<br>
<math>
\varphi(p|r^{X}_{T},s^{X}_{T})=\frac{\Gamma(r^{X}_{T}+s^{X}_{T}+2)}{\Gamma(r^{X}_{T}+1)\Gamma(s^{X}_{T}+1)}p^{r^{X}_{T}}(1-p)^{s^{X}_{T}},\qquad \mbox{where}\ 0 \leq p \leq 1,\ 0 \leq r^{X}_{T},\ 0 \leq s^{X}_{T}
</math><br>
is called T's reputation function by X. The tuple <math>(r^{X}_{T},s^{X}_{T})</math> will be called T's reputation parameters by X. <br>For simplicity <math>\varphi^{X}_{T} = \varphi(p|r^{X}_{T},s^{X}_{T})</math>.<br><br>
The '''probability expectation value''' of the reputation function can be expressed as:<br>
<math>
E(\varphi(p|r^{X}_{T},s^{X}_{T}))=\frac{r^{X}_{T}+1}{r^{X}_{T}+s^{X}_{T}+2}
</math><br><br>
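Both formulas involve only the gamma function, so they can be checked with the Python standard library alone. A sketch (the feedback counts are made up for illustration):

```python
from math import gamma

def phi(p, r, s):
    """Reputation function phi(p | r, s): a Beta(r+1, s+1) density."""
    norm = gamma(r + s + 2) / (gamma(r + 1) * gamma(s + 1))
    return norm * p ** r * (1 - p) ** s

def expectation(r, s):
    """Probability expectation value of phi: (r+1)/(r+s+2)."""
    return (r + 1) / (r + s + 2)

# e.g. 8 positive and 2 negative feedback items about T
print(expectation(8, 2))  # 0.75
```

Note that with no feedback at all (<math>r=s=0</math>) the expectation is 0.5, i.e. complete ignorance about T.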
'''Reputation Rating'''<br>
For human users, a simpler representation than the reputation function is needed.<br>
Let <math>r^{X}_{T}</math> and <math>s^{X}_{T}</math> represent the collective amount of positive and negative feedback about a Target T provided by an agent (or collection of agents) denoted by X. Then the function <math>Rep(r^{X}_{T},s^{X}_{T})</math> defined by<br>
<math>
Rep(r^{X}_{T},s^{X}_{T})= (E(\varphi(p|r^{X}_{T},s^{X}_{T}))-0.5)\cdot 2 = \frac{r^{X}_{T}-s^{X}_{T}}{r^{X}_{T}+s^{X}_{T}+2}
</math><br>
is called T's reputation rating by X. For simplicity <math>Rep^{X}_{T}=Rep(r^{X}_{T},s^{X}_{T})</math><br><br>
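A quick sketch confirming that the two forms of the rating agree (the values are again made up):

```python
def rep(r, s):
    """Reputation rating: rescales the expectation from [0, 1] to [-1, 1]."""
    return (r - s) / (r + s + 2)

def rep_via_expectation(r, s):
    # same value computed as (E - 0.5) * 2, with E = (r+1)/(r+s+2)
    return ((r + 1) / (r + s + 2) - 0.5) * 2
```

A rating of 0 thus means either no feedback or perfectly balanced feedback; only the sign and magnitude of the surplus matter.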
'''Combining Feedback'''<br>
Let <math>\varphi(p|r^{X}_{T},s^{X}_{T})</math> and <math>\varphi(p|r^{Y}_{T},s^{Y}_{T})</math> be two different reputation functions on T resulting from X's and Y's feedback respectively. The reputation function <math>\varphi(p|r^{X,Y}_{T},s^{X,Y}_{T})</math> defined by:<br>
1. <math>r^{X,Y}_{T}=r^{X}_{T}+r^{Y}_{T}</math><br>
2. <math>s^{X,Y}_{T}=s^{X}_{T}+s^{Y}_{T}</math><br>
is then called T's combined reputation function by X and Y. By using '<math>\oplus</math>' to designate this operator (distinct from the discounting operator '<math>\otimes</math>'), we get
<math>
\varphi(p|r^{X,Y}_{T},s^{X,Y}_{T})=\varphi(p|r^{X}_{T},s^{X}_{T}) \oplus \varphi(p|r^{Y}_{T},s^{Y}_{T})
</math>.<br><br>
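Combination therefore amounts to adding the feedback counts. A one-line sketch (tuples hold the reputation parameters; names are illustrative):

```python
def combine(xt, yt):
    """Combine two sets of reputation parameters about T: counts simply add."""
    (r_x, s_x), (r_y, s_y) = xt, yt
    return (r_x + r_y, s_x + s_y)
```

Because addition is commutative and associative, the order in which agents' feedback is combined does not affect the result.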
'''Belief Discounting'''<br>
This model uses a metric called ''opinion'' to describe beliefs about the truth of statements. An opinion is a tuple <math>\omega^{A}_{x} = (b,d,u)</math>, where b, d and u represent ''belief'', ''disbelief'' and ''uncertainty''. These parameters satisfy <math> b+d+u=1</math> where <math> b,d,u \in [0,1]</math>.<br>
Let X and Y be two agents where <math>\omega^{X}_{Y}=(b^{X}_{Y},d^{X}_{Y},u^{X}_{Y})</math> is X's opinion about Y's advice, and let T be the Target agent where <math>\omega^{Y}_{T}=(b^{Y}_{T},d^{Y}_{T},u^{Y}_{T})</math> is Y's opinion about T expressed as advice to X. Let <math>\omega^{X:Y}_{T}=(b^{X:Y}_{T},d^{X:Y}_{T},u^{X:Y}_{T})</math> be the opinion such that:<br>
1. <math>b^{X:Y}_{T}=b^{X}_{Y}b^{Y}_{T}</math>,<br>
2. <math>d^{X:Y}_{T}=b^{X}_{Y}d^{Y}_{T}</math>,<br>
3. <math>u^{X:Y}_{T}=d^{X}_{Y}+u^{X}_{Y}+b^{X}_{Y}u^{Y}_{T}</math>,<br>
then <math>\omega^{X:Y}_{T}</math> is called the discounting of <math>\omega^{Y}_{T}</math> by <math>\omega^{X}_{Y}</math> expressing X's opinion about T as a result of Y's advice to X. By using '<math>\otimes</math>' to designate this operator, we can write <math>\omega^{X:Y}_{T}=\omega^{X}_{Y}\otimes\omega^{Y}_{T} </math>.<br>
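A sketch of the discounting operator on opinion triples; the example opinions below are made up, but the code illustrates that the result is again a well-formed opinion (components sum to 1):

```python
def discount(w_xy, w_yt):
    """Discount Y's opinion about T by X's opinion about Y's advice."""
    b_xy, d_xy, u_xy = w_xy  # X's opinion about Y
    b_yt, d_yt, u_yt = w_yt  # Y's opinion about T
    return (b_xy * b_yt,                   # belief
            b_xy * d_yt,                   # disbelief
            d_xy + u_xy + b_xy * u_yt)     # uncertainty
```

Note that both X's disbelief and X's uncertainty about Y are pushed entirely into uncertainty about T: distrusted advice makes X more uncertain, not more negative, about T.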
The author of '''BETA''' provides a mapping between the opinion metric and the reputation parameters, defined by:<br>
<math>b=\frac{r}{r+s+2}</math>,<br>
<math>d=\frac{s}{r+s+2}</math>,<br>
<math>u=\frac{2}{r+s+2}</math>.<br>
Using this mapping, we obtain the following definition of the discounting operator for reputation functions.<br><br>

'''Reputation Discounting'''<br>
Let X, Y and T be three agents where <math>\varphi(p|r^{X}_{Y},s^{X}_{Y})</math> is Y's reputation function by X, and <math>\varphi(p|r^{Y}_{T},s^{Y}_{T})</math> is T's reputation function by Y. Let <math>\varphi(p|r^{X:Y}_{T},s^{X:Y}_{T})</math> be the reputation function such that:<br>
1. <math> r^{X:Y}_{T}=\frac{2r^{X}_{Y}r^{Y}_{T}}{(s^{X}_{Y}+2)(r^{Y}_{T}+s^{Y}_{T}+2)+2r^{X}_{Y}}</math>,<br>
2. <math> s^{X:Y}_{T}=\frac{2r^{X}_{Y}s^{Y}_{T}}{(s^{X}_{Y}+2)(r^{Y}_{T}+s^{Y}_{T}+2)+2r^{X}_{Y}}</math>,<br>
then it is called T's discounted reputation function by X through Y. By using the symbol '<math>\otimes</math>' to designate this operator, we can write
<math>
\varphi(p|r^{X:Y}_{T},s^{X:Y}_{T})=\varphi(p|r^{X}_{Y},s^{X}_{Y})\otimes \varphi(p|r^{Y}_{T},s^{Y}_{T})
</math>. In the short notation this can be written as: <math>\varphi^{X:Y}_{T}= \varphi^{X}_{Y} \otimes \varphi^{Y}_{T}</math>. <br><br>
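These formulas can be cross-checked against the opinion mapping: translating both reputation functions to opinions, applying belief discounting, and translating back (<math>r=2b/u</math>, <math>s=2d/u</math>) yields the same parameters. A self-contained sketch (the example counts are made up):

```python
def discount_reputation(r_xy, s_xy, r_yt, s_yt):
    """T's discounted reputation parameters by X through Y (closed form)."""
    denom = (s_xy + 2) * (r_yt + s_yt + 2) + 2 * r_xy
    return (2 * r_xy * r_yt / denom, 2 * r_xy * s_yt / denom)

def discount_via_opinions(r_xy, s_xy, r_yt, s_yt):
    """Same result obtained through the opinion mapping and belief discounting."""
    def opinion(r, s):
        k = r + s + 2
        return (r / k, s / k, 2 / k)       # (b, d, u)
    b1, d1, u1 = opinion(r_xy, s_xy)       # X's opinion about Y
    b2, d2, u2 = opinion(r_yt, s_yt)       # Y's opinion about T
    b = b1 * b2
    d = b1 * d2
    u = d1 + u1 + b1 * u2
    return (2 * b / u, 2 * d / u)          # invert the mapping: r = 2b/u, s = 2d/u
```

Agreement of the two routes is what justifies writing both operators with the same discounting symbol.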

'''Forgetting'''<br>

Latest revision as of 08:28, 28 February 2006

