Paper | |
---|---|
description | scientific paper published in CEUR-WS Volume 3197 |
id | Vol-3197/paper4 |
wikidataid | Q117341769 |
title | Trust Graphs for Belief Revision: Framework and Implementation |
pdfUrl | https://ceur-ws.org/Vol-3197/paper4.pdf |
dblpUrl | https://dblp.org/rec/conf/nmr/HunterT22 |
volume | Vol-3197 |
Trust Graphs for Belief Revision: Framework and Implementation

Aaron Hunter1,*, Sam Tadey1
1 British Columbia Institute of Technology, Burnaby, Canada

Abstract
Trust plays a role in the process of belief revision. When information is reported by another agent, it should only be believed if the reporting agent is trusted as an authority over some relevant domain. In practice, an agent will be trusted on a particular topic if they have provided accurate information on that topic in the past. In this paper, we demonstrate how an agent can construct a model of knowledge-based trust based on the accuracy of past reports. We then show how this model of trust can be used in conjunction with Ordinal Conditional Functions to define two approaches to trust-influenced belief revision. In the first approach, strength of trust and strength of belief are assumed to be incomparable, as they are on different scales. In the second approach, they are aggregated in a natural manner. We then describe a software tool for modelling and reasoning about trust and belief change. Our software allows a trust graph to be updated incrementally by looking at the accuracy of past reports. After constructing a trust graph, the software can then compute the result of AGM-style belief revision using two different approaches to incorporating trust.

Keywords: belief revision, knowledge representation, trust

1. Introduction

Belief revision is concerned with the manner in which an agent incorporates new information that may be inconsistent with their current beliefs. In general, the belief revision literature assumes that new information is more reliable than the initial beliefs; in this case, new information must always be believed following belief revision. However, in many practical situations this is not a reasonable assumption. In practice, we need to take into account the extent to which the source of the new information is trusted. In this paper, we demonstrate how an agent can actually build trust in a source, based on past reports.1

Suppose that an agent believes 𝜑 to be true, and they are being told by an agent 𝑅 that 𝜑 is not true. In this kind of situation, we will use ranking functions to represent both the initial strength of belief in 𝜑 and the level of trust in 𝑅. Significantly, however, the trust in 𝑅 is not uniform over all formulas. Each information source is trusted to different degrees on different topics. The extent to which 𝑅 is trusted on a particular topic is determined by how frequently they have made accurate reports on that topic in the past.

In the rest of the paper, we proceed as follows. In the next section, we give a motivating example that will be used throughout the paper. We then review formal preliminaries related to belief revision and trust. We then introduce trust graphs, our formal model of trust. We define a simple approach for building a trust graph from past reports, and prove some basic results. We then demonstrate how trust rankings can influence belief revision in two different ways. First, we consider the naive case, where the strength of trust is independent of the strength of belief. Second, we consider the more complex case, where strength of trust is aggregated with strength of belief.

Finally, we describe an implemented software tool that automates this entire process. The software presented here is a useful addition to the relatively small collection of existing belief revision solvers, because it extends the class of practical problems that we can model and solve. To the best of our knowledge, the software presented in this paper is the first implemented system that incrementally builds a model of trust that is specifically intended to inform the process of belief revision.

NMR 2022: 20th International Workshop on Non-Monotonic Reasoning, August 07–09, 2022, Haifa, Israel.
* Corresponding author. aaronhunter@bcit.ca (A. Hunter); samtadey10@gmail.com (S. Tadey).
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org).
1 The content of sections 1-4 of this paper has previously been published in [1]. The implementation discussed in section 5 is new, and was not discussed in that paper.
2. Preliminaries

2.1. Motivating Example

Consider a situation where there are two rooms 𝐴 and 𝐵 located inside a building. There are two agents, which we call Absent and Present. Informally, Absent is not in the building, whereas Present is in the building. These agents communicate about the status of the lights in each room. For simplicity, we say that 𝐴 is true when the light is on in room 𝐴, and we say 𝐵 is true when the light is on in room 𝐵.

We focus on the beliefs of Absent, who initially thinks that the light in room 𝐴 is on and the light in room 𝐵 is off. Now suppose that Present sends a message that asserts the light is off in 𝐴 and the light is on in room 𝐵. If Present is completely trusted, this is not a problem; the report simply leads Absent to believe they were incorrect about the lights.

But suppose that Present has given incorrect reports in the past. We can collect these reports, and check to see when they have been correct and when they have been incorrect. For example, suppose that Present is always correct about the light status in room 𝐴, whereas they are often incorrect about the light status in room 𝐵. We might draw the conclusion that they are normally physically in room 𝐴, and that they are too lazy to walk to another room to check the lights.

Formally, Absent does not need a plausible story to explain the mistakes in the reports; they need some mechanism for modelling trust over different propositions. By looking at the accuracy of reports on different topics, they build a model of trust that allows information reported by Present to be incorporated appropriately. In this paper, we develop formal machinery that is suitable for capturing all facets of this seemingly simple example.

2.2. Belief Revision

We assume an underlying set V of propositional variables. A formula is a propositional combination of elements of V, using the usual connectives →, ∧, ∨, ¬. We will assume that V is finite in this paper, though that need not be the case in general. A state is a propositional interpretation over V, which assigns boolean values to all variables. We will normally specify a state by giving the set of variables that are true. A belief state is a set of states, informally representing the set of states that an agent considers possible. We let |𝜑| denote the set of states where the formula 𝜑 is true.

The dominant approach to belief revision is the AGM approach. A revision operator is a function * that maps a belief state 𝐾 and a formula 𝜑 to a new belief state 𝐾 * 𝜑. An AGM revision operator is a revision operator that satisfies the so-called AGM postulates. We refer the reader to [2] for a complete introduction to the AGM theory of belief revision.

Although we are concerned with AGM revision at times, in this paper we actually define the beliefs of an agent in terms of Ordinal Conditional Functions (OCFs) [3], which are also called ranking functions. An OCF is a function 𝜅 that maps every state 𝑠 to an ordinal 𝜅(𝑠). Informally, if 𝜅(𝑠1) < 𝜅(𝑠2), this is understood to mean that the agent considers it more likely that 𝑠1 is the actual state, as opposed to 𝑠2. Note that 𝜅 defines a belief state 𝐵𝑒𝑙(𝜅) as follows:

𝐵𝑒𝑙(𝜅) = {𝑠 | 𝜅(𝑠) is minimal}.

We can also define a revision operator * associated with 𝜅 as follows:

𝐵𝑒𝑙(𝜅) * 𝜑 = min_𝜅(|𝜑|).

The operator on belief states specified in this manner defines an AGM belief revision operator, for any underlying OCF.
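To make the OCF machinery concrete, the following is a minimal sketch (ours, not part of the paper's T-BEL code) of an OCF over a two-variable vocabulary, with states encoded as bitmasks. It computes 𝐵𝑒𝑙(𝜅) and the associated revision operator; the ranks in the example anticipate Example 3 later in the paper, and the class and method names are hypothetical.

```java
import java.util.*;
import java.util.function.IntPredicate;

// Minimal OCF sketch: states are bitmasks over n variables,
// kappa[s] is the rank of state s (lower = more plausible).
public class Ocf {
    final int[] kappa;

    Ocf(int[] kappa) { this.kappa = kappa; }

    // Bel(kappa): the set of states with minimal rank.
    Set<Integer> bel() {
        int min = Arrays.stream(kappa).min().getAsInt();
        Set<Integer> out = new TreeSet<>();
        for (int s = 0; s < kappa.length; s++)
            if (kappa[s] == min) out.add(s);
        return out;
    }

    // Bel(kappa) * phi = min_kappa(|phi|): the kappa-minimal models of phi.
    Set<Integer> revise(IntPredicate phi) {
        int min = Integer.MAX_VALUE;
        for (int s = 0; s < kappa.length; s++)
            if (phi.test(s)) min = Math.min(min, kappa[s]);
        Set<Integer> out = new TreeSet<>();
        for (int s = 0; s < kappa.length; s++)
            if (phi.test(s) && kappa[s] == min) out.add(s);
        return out;
    }

    public static void main(String[] args) {
        // Vocabulary {A, B}: bit 1 = A, bit 0 = B, so state 0b10 = {A}.
        // Ranks as in Example 3: k({A})=0, k({B})=1, k({A,B})=1, k({})=2.
        Ocf k = new Ocf(new int[]{2, 1, 0, 1}); // index order: 00, 01, 10, 11
        System.out.println(k.bel());                     // [2] = {A}
        System.out.println(k.revise(s -> (s & 1) == 1)); // revise by B: [1, 3]
    }
}
```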
2.3. Trust

The notion of trust plays an important role in many applications, including security [4, 5] and multi-agent systems [6, 7]. In this paper, we are concerned primarily with knowledge-based trust. That is, we are concerned with the extent to which one agent trusts another to have the knowledge required to report correctly on particular statements. This is distinct from trust related to honesty or deception.

We refer occasionally to trust-sensitive belief revision operators [8]. Trust-sensitive belief revision operators are defined with respect to a trust partition over states. The equivalence classes of a trust partition Π consist of states that can not be distinguished by a particular reporting agent. In our motivating example, we might define a trust partition for Present that consists of two equivalence classes: one that includes all states where the light is on in room 𝐴, and one that includes all states where the light is off in room 𝐴. In this case, Present is informally trusted to be able to tell if the light in room 𝐴 is on or off. However, Present is not trusted to be able to tell if the light in room 𝐵 is on or off.

A trust-sensitive revision operator *_Π is defined with respect to a given AGM revision operator * and a trust partition Π. The operator *_Π operates in two steps when an agent is given a report 𝜑. First, we find the set Π(𝜑) of all states that are related by Π to a model of 𝜑. Then we perform regular AGM revision with this expanded set of states as input. Hence, the model of trust is essentially used to pre-process the formula for revision, by expanding it to ignore distinctions that we do not trust the reporter to be able to make.

2.4. Trust Rankings

We can also define trust in terms of a distance function between states. The notion of distance required is generally an ultrametric.

Definition 1. An ultrametric is a binary function 𝑑 over a set 𝑋, such that for all 𝑥, 𝑦, 𝑧 ∈ 𝑋:
• 𝑑(𝑥, 𝑦) ≥ 0.
• 𝑑(𝑥, 𝑦) = 0 if and only if 𝑥 = 𝑦.
• 𝑑(𝑥, 𝑦) = 𝑑(𝑦, 𝑥).
• 𝑑(𝑥, 𝑧) ≤ max{𝑑(𝑥, 𝑦), 𝑑(𝑦, 𝑧)}.

If we remove condition 2, then 𝑑 is a pseudo-ultrametric. The following definition of a trust ranking is given in [9].

Definition 2. For any propositional vocabulary, a trust ranking is a pseudo-ultrametric over the set 𝑆 of all states.

A trust ranking is intended to capture the degree to which an agent is trusted to distinguish between states. If 𝑑(𝑠1, 𝑠2) is large, this means the agent can be trusted to distinguish the states 𝑠1 and 𝑠2. However, if the distance is small, they can not be trusted to draw this distinction.

3. Building Trust

3.1. Trust Graphs

We now turn to our main problem: building a notion of trust from data. We assume throughout this paper a fixed, finite vocabulary V. All states, belief states, and formulas will be defined with respect to this underlying vocabulary.

Definition 3. Let 𝑆 be the set of states over V. A trust graph over 𝑆 is a pair ⟨𝑆, 𝑤⟩, where 𝑤 : 𝑆 × 𝑆 → ℕ.

Hence, a trust graph is just a complete weighted graph over the set of states. Informally, a trust graph represents the trust held in another agent. The weight on the edge between two states 𝑠1 and 𝑠2 is an indication of how strongly the agent is trusted to directly distinguish between those states.

Example 1. Consider the motivating example, in the case where Absent trusts Present more strongly to check if the light in room 𝐴 is on as opposed to room 𝐵. This could be captured by the trust graph in Figure 1, by having a higher weight on edges that connect states that differ on the value of 𝐴. Note that the minimax distance 𝑑 can easily be calculated from this graph.

[Figure 1: A Trust Graph. Edges connecting states that differ on 𝐴 have weight 2; the edges ({𝐴,𝐵}, {𝐴}) and ({𝐵}, ∅) have weight 1.]

The edge weights represent how strongly a reporting agent is trusted to distinguish between a pair of states. If the weight is high, we interpret this to mean that the agent is strongly trusted to distinguish between the states. If the weight is low, then the reporting agent is not trusted to distinguish the states.
In order to build a notion of trust in an agent, we need to have a history of the past reports that agent has provided. Our basic approach is to assume that we start with a set of statements that a reporting agent has made in the past, along with an indication of whether the reports were correct or not.

Definition 4. A report is a pair (𝜑, 𝑖), where 𝜑 is a formula and 𝑖 ∈ {0, 1}. A report history is a multi-set of reports.

We let Φ, possibly with subscripts, range over report histories. A report history Φ represents all of the claims that an agent has truthfully or falsely made in the past. Informally, if (𝜑, 1) ∈ Φ then the agent in question has reported 𝜑 in the past in a situation where 𝜑 was shown to be true. On the other hand, (𝜑, 0) ∈ Φ means that 𝜑 has been reported in a situation where it was false.

3.2. Construction from Reports

Suppose we start with a trust graph in which the reporting agent is essentially trusted to be able to distinguish all states, with a default confidence level. For each true report in the history, we strengthen our trust in the reporting agent's ability to distinguish certain states. For each false report, we weaken our trust.

Definition 5. For any 𝑛 > 0, the initial trust graph is 𝑇_𝑛 = ⟨𝑆, 𝑤⟩, where 𝑆 is the set of states, and 𝑤 is defined such that 𝑤(𝑠, 𝑡) = 0 if 𝑠 = 𝑡 and 𝑤(𝑠, 𝑡) = 𝑛 otherwise.

The idea of the initial trust graph is that the reporting agent is trusted to distinguish between all states equally well.

We are now interested in giving a procedure that takes a report history and returns a trust graph; this is presented in Algorithm 1. The algorithm looks at each report in the history 𝑅, and it increases the weight on edges where there have been true reports and decreases the weight on edges where there have been false reports.

Algorithm 1 Construct_from(𝑅)
  Input: 𝑅, a report history.
  𝑛 ← size of 𝑅.
  𝑇 = ⟨𝑆, 𝑤⟩ is the initial trust graph for 𝑛.
  while 𝑅 ≠ ∅ do
    Get some (𝜑, 𝑖) ∈ 𝑅
    if 𝑖 = 0 then
      𝑤(𝑠1, 𝑠2) ← 𝑤(𝑠1, 𝑠2) − 1 for all 𝑠1, 𝑠2 such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑
    else
      𝑤(𝑠1, 𝑠2) ← 𝑤(𝑠1, 𝑠2) + 1 for all 𝑠1, 𝑠2 such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑
    end if
    𝑅 ← 𝑅 − (𝜑, 𝑖)
  end while
  Return ⟨𝑆, 𝑤⟩.

Proposition 1. Given a report history 𝑅, the weighted graph returned by Algorithm 1 is a trust graph.

This result relies on the fact that 𝑤 only returns non-negative values; this is guaranteed by the choice of 𝑛 for the initial trust graph.
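The following is a minimal Java sketch of Algorithm 1, under the same bitmask encoding of states used above. The reports in the main method anticipate Example 2 below; the names (TrustGraphBuilder, Report) are ours, not taken from T-BEL.

```java
import java.util.*;
import java.util.function.IntPredicate;

// Sketch of Algorithm 1: build a trust graph from a report history.
// States are bitmasks 0..2^nVars-1; a report is a formula (given as a
// predicate on states) plus a flag saying whether it proved true.
public class TrustGraphBuilder {
    record Report(IntPredicate phi, boolean correct) {}

    static int[][] constructFrom(int nVars, List<Report> history) {
        int states = 1 << nVars;
        int n = history.size();
        // Initial trust graph T_n: 0 on the diagonal, n elsewhere.
        int[][] w = new int[states][states];
        for (int s = 0; s < states; s++)
            for (int t = 0; t < states; t++)
                if (s != t) w[s][t] = n;
        // Each report adjusts every edge whose endpoints disagree on phi.
        for (Report r : history) {
            int delta = r.correct() ? 1 : -1;
            for (int s1 = 0; s1 < states; s1++)
                for (int s2 = 0; s2 < states; s2++)
                    if (r.phi().test(s1) && !r.phi().test(s2)) {
                        w[s1][s2] += delta;
                        w[s2][s1] += delta; // keep the graph symmetric
                    }
        }
        return w;
    }

    public static void main(String[] args) {
        // Example 2: vocabulary {A, B} (bit 1 = A, bit 0 = B) with
        // reports (A,1), (A,1), (B,0), (A&B,1).
        List<Report> h = List.of(
            new Report(s -> (s & 2) != 0, true),
            new Report(s -> (s & 2) != 0, true),
            new Report(s -> (s & 1) != 0, false),
            new Report(s -> s == 3, true));
        int[][] w = constructFrom(2, h);
        System.out.println(w[1][0]); // edge ({B}, {}) -> 3, as in Figure 2
    }
}
```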
Example 2. We return to our running example. Suppose that we have no initial assumptions about the trust held in Present, and that the report history 𝑅 consists of the following reports:

⟨𝐴, 1⟩, ⟨𝐴, 1⟩, ⟨𝐵, 0⟩, ⟨𝐴 ∧ 𝐵, 1⟩

Since our report history has size 4, the initial trust graph would look like Figure 1, except that all edge weights would be 4. After the first report, the edge weights would be increased on the following edges:

({𝐴,𝐵}, ∅), ({𝐴,𝐵}, {𝐵}), ({𝐴}, ∅), ({𝐴}, {𝐵}).

The same thing would happen after the second report. On the third report, we would subtract one from the following edges:

({𝐴,𝐵}, ∅), ({𝐴,𝐵}, {𝐴}), ({𝐵}, ∅), ({𝐴}, {𝐵}).

Finally, the fourth report would add one to the following edges:

({𝐴,𝐵}, ∅), ({𝐴,𝐵}, {𝐴}), ({𝐴,𝐵}, {𝐵}).

The final trust graph is given in Figure 2. Based on this graph, Present is least trusted to distinguish the states {𝐵} and ∅. This is because the positive reports were all related to the truth of 𝐴, and the only false report was a report about the truth of 𝐵. Hence, the graph is intuitively plausible.

[Figure 2: Graph Construction. Final edge weights: 𝑤({𝐴,𝐵},{𝐴}) = 4, 𝑤({𝐴,𝐵},{𝐵}) = 7, 𝑤({𝐴,𝐵},∅) = 6, 𝑤({𝐴},∅) = 6, 𝑤({𝐴},{𝐵}) = 5, 𝑤({𝐵},∅) = 3.]

3.3. Basic Results

We have defined an approach to building trust graphs from reports. We remark that the edge weights will not be used directly when it comes to belief revision. For belief revision, what we need is a single trust ranking that is derived from the trust graph. However, constructing the graph allows us to define the ranking function as a sort of consequence of the reports. In this section, we show the construction satisfies some desirable properties.

First, we define the trust ranking associated with a trust graph.

Definition 6. For any trust graph 𝑇 = ⟨𝑆, 𝑤⟩, let 𝑑_𝑇 denote the minimax distance between states.

The distance 𝑑_𝑇 captures an overall trust ranking that can be used to inform belief revision. Informally, even if an agent is not trusted to distinguish two states directly, they may be trusted to distinguish them based on a path in the graph. The important feature of such a path is the minimax weight. The following is a basic result about the notion of distance defined by a trust graph.

Proposition 2. For any trust graph 𝑇 = ⟨𝑆, 𝑤⟩, the function 𝑑_𝑇 is a pseudo-ultrametric on 𝑆.

Recall from Section 2 that a pseudo-ultrametric over states can be used to define a ranking that is suitable for reasoning about trust. We remark that, in fact, every ultrametric over a finite set is actually equivalent up to isomorphism to an ultrametric defined by the minimax distance over some weighted graph. This means that every trust ranking can be defined by a trust graph.
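The paper does not spell out how 𝑑_𝑇 is computed. The sketch below assumes the standard reading of minimax (bottleneck) distance, where 𝑑_𝑇(𝑠, 𝑡) is the minimum over paths from 𝑠 to 𝑡 of the maximum edge weight along the path, which is what makes 𝑑_𝑇 a pseudo-ultrametric; it is computed here with a Floyd-Warshall-style pass. The class and method names are ours.

```java
import java.util.Arrays;

// Sketch of the minimax distance d_T from Definition 6, assuming
// d_T(s,t) = min over paths from s to t of the maximum edge weight
// on the path. Computed with a Floyd-Warshall-style relaxation.
public class MinimaxDistance {
    static int[][] minimax(int[][] w) {
        int n = w.length;
        int[][] d = new int[n][];
        for (int i = 0; i < n; i++) d[i] = Arrays.copyOf(w[i], n);
        // Relax: a path through k costs the worse of its two halves.
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    d[i][j] = Math.min(d[i][j], Math.max(d[i][k], d[k][j]));
        return d;
    }

    public static void main(String[] args) {
        // Weights from Figure 2; state order: {}, {B}, {A}, {A,B}.
        int[][] w = {
            {0, 3, 6, 6},
            {3, 0, 5, 7},
            {6, 5, 0, 4},
            {6, 7, 4, 0}};
        int[][] d = minimax(w);
        // The direct edge ({B},{A,B}) weighs 7, but the path through
        // {A} has maximum weight 5, so d({B},{A,B}) = 5.
        System.out.println(d[1][3]);
    }
}
```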
The next result shows that there is nothing particularly special about the trust graphs constructed by our algorithm.

Proposition 3. Every weighted graph over 𝑆 is the trust graph obtained from some report history 𝑅.

This can be proven by a simple construction where each report only modifies a single edge weight.

In the next results, we adopt some simplifying notation. If 𝑅 is a report history and 𝑟 is a report, we let 𝑅 · 𝑟 denote the multiset obtained by adding 𝑟 to 𝑅. Also, if 𝑅 is a report history, we let 𝑇(𝑅) denote the trust graph obtained from 𝑅 and we let 𝑑_𝑅 denote the distance 𝑑 defined by 𝑇(𝑅).

As stated, Algorithm 1 can only construct a trust graph starting from scratch. However, the following proposition states that we can iteratively modify a trust graph as we get new reports.

Proposition 4. Let 𝑅 be a report history and let 𝑟 be a report. Then 𝑇(𝑅 · 𝑟) is identical to the trust graph obtained by modifying 𝑇(𝑅) as follows:
• Incrementing weights between states that disagree on 𝜑, if 𝑟 is a positive report.
• Decrementing weights between states that disagree on 𝜑, if 𝑟 is a negative report.
• Defining a new minimax distance 𝑑 in accordance with the new edge weights.

Hence, rather than viewing trust graphs as something created with no a priori knowledge, we can think of trust graphs as a simple model of trust together with an operation that tweaks the weights to respond to a new report.

One desirable feature of our construction is that a report of (𝜑, 0) should make the reporting agent less trustworthy with regard to reports about the truth of 𝜑. The next proposition shows that this is indeed the case.

Proposition 5. Let 𝑅 be a report history, and let 𝑠1 and 𝑠2 be states such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑. Then

𝑑_𝑅(𝑠1, 𝑠2) ≥ 𝑑_{𝑅·(𝜑,0)}(𝑠1, 𝑠2).

We have a similar result for positive reports.

Proposition 6. Let 𝑅 be a report history, and let 𝑠1 and 𝑠2 be states such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑. Then

𝑑_𝑅(𝑠1, 𝑠2) ≤ 𝑑_{𝑅·(𝜑,1)}(𝑠1, 𝑠2).

Taken together, these results indicate that negative (resp. positive) reports of 𝜑 make the reporting agent less (resp. more) trusted with respect to 𝜑. We remark that the inequalities in the previous propositions would be strict if we were considering actual edge weights; but they are not strict for 𝑑_𝑅, since there may be multiple paths between states.
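Returning to Proposition 4, the incremental view amounts to a one-report update of an existing weight matrix. A minimal sketch under our earlier bitmask encoding (the method name addReport is hypothetical):

```java
import java.util.function.IntPredicate;

// Sketch of the incremental update behind Proposition 4: rather than
// rebuilding the graph from the whole history, adjust the weights of
// an existing trust graph in place when one new report arrives.
// (The initial weight n is fixed up front, so weights can go negative
// if more false reports arrive than n anticipated.)
public class IncrementalUpdate {
    static void addReport(int[][] w, IntPredicate phi, boolean correct) {
        int delta = correct ? 1 : -1;
        for (int s1 = 0; s1 < w.length; s1++)
            for (int s2 = 0; s2 < w.length; s2++)
                if (phi.test(s1) && !phi.test(s2)) {
                    w[s1][s2] += delta;
                    w[s2][s1] += delta;
                }
        // The minimax distance d_T must then be recomputed from w.
    }
}
```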
We have seen that trust graphs define a distance over states that represents a general notion of trust that is implicit in the graph. Significantly, trust graphs can be constructed in a straightforward way by looking at past reports; the implicitly defined trust ranking is based on the accuracy of these reports. In the next section, we consider how the notion of trust defined by a trust graph can be used to construct different approaches to revision.

4. Using Trust Graphs

4.1. Example Revisited

Consider again our example involving reports about the lights in a building. We previously pointed out that Absent might not actually trust the reports from Present, and we gave an approach to construct a trust graph. Informally, when talking about trust, we might make assertions of the following form:

1. Present is not trusted to know which room they are in.
2. Present is not trusted to check two rooms at once.

These kinds of assertions give us a hint about how belief revision might occur. For example, in the first case, Absent would interpret a report to mean that exactly one of the rooms is lit.

Note, however, that a trust graph does not simply give a binary notion of trust; it defines a distance function that indicates strength of trust in various distinctions. Similarly, the beliefs of an agent might be held with different levels of strength. So, even if we have a trust graph, there are still problems with incorporating reports related to comparing strength of belief with strength of trust.

In our example, if Absent just left the building, they might believe very strongly that the light in room 𝐴 must be off. They might believe this so strongly that they disregard Present's report entirely. But disregarding reports is not the only option. It might be the case that the exact strength of Absent's beliefs needs to be considered. Suppose Absent believes the light in room 𝐴 is off with a medium degree of strength. In that case, a report from a weakly trusted agent will not change their beliefs, whereas a report from a strongly trusted agent would be more convincing. Moreover, Absent also needs to have a strength ranking over possible alternatives. Hence, this is not simply a binary comparison between strength of belief and strength of trust. In order to model the interaction between belief and trust, we need a precise formal account that permits a comparison of the two. We also need to account for the way that Present develops a reputation, either for laziness or inaccuracy.

4.2. Naive Revision with Trust Graphs

In the remainder of this paper, we assume that the beliefs of an agent are represented by an OCF. We show how a trust graph allows us to capture an approach to belief revision that takes trust into account. In fact, the approach in this section depends only on a pseudo-ultrametric 𝑑_𝑇 defined by a trust graph.

For any pseudo-ultrametric 𝑑, we define a family of revision operators *_𝑛.

Definition 7. Let 𝜅 be an OCF and let 𝑑 be a pseudo-ultrametric over 𝑆. For each 𝑛, the operator *_𝑛 is defined such that 𝐵𝑒𝑙(𝜅) *_𝑛 𝜑 is equal to:

min_𝜅{𝑠 | there exists 𝑡 such that 𝑑(𝑡, 𝑠) ≤ 𝑛 and 𝑡 ⊨ 𝜑}.

From the theory of metric spaces, we have the following.

Proposition 7. For any pseudo-ultrametric 𝑑 over a set 𝑋, if 𝑛 ∈ ℕ then the collection of sets 𝑌_𝑥 = {𝑦 | 𝑑(𝑥, 𝑦) ≤ 𝑛} is a partition of 𝑋.

The next result relates these revision operators to trust-sensitive revision operators. A parallel result is proved in [9], although the result here is stated in terms of OCFs rather than AGM revision.

Proposition 8. Let 𝜅 be an OCF and let 𝑇 be a trust graph. For any formula 𝜑 and any 𝑛:

𝐵𝑒𝑙(𝜅) *_𝑛 𝜑 = 𝐵𝑒𝑙(𝜅) *_Π 𝜑

where Π is the partition defined by (𝑑_𝑇, 𝑛) and *_Π is the trust-sensitive revision operator associated with Π.

Hence 𝜅 and 𝑑_𝑇 define a set of trust-sensitive revision operators. The parameter 𝑛 specifies how close two states must be to be considered indistinguishable in the partition.

We refer to the operators *_𝑛 as naive trust-sensitive revision operators in this paper. These operators are naive in the sense that they do not allow us to take into account the relative magnitudes of the values in 𝜅 and the distances given by 𝑑_𝑇. In other words, the scales of 𝜅 and 𝑑_𝑇 are not compared; it does not matter if the initial strength of belief is high or low. This makes sense in applications where the magnitudes in 𝜅 and 𝑑_𝑇 are seen as independent.

Example 3. We refer back to our motivating example. Suppose that the initial beliefs of Absent are given by 𝜅 such that:

𝜅({𝐴}) = 0, 𝜅({𝐵}) = 1, 𝜅({𝐴,𝐵}) = 1, 𝜅(∅) = 2.

Hence the initial belief set for Absent is {𝐴}. Now suppose that Present passes a message that asserts ¬𝐴 ∧ 𝐵; in other words, the light is off in 𝐴 while it is on in 𝐵. If this information was given to Absent as infallible sensory data, then the result could be determined easily with regular AGM revision. But this is not sensory data; this is a report, and trust can play a role in how it is incorporated.

To make this concrete, suppose that Absent thinks that Present is generally lazy and unaware of the room that they are in. It is unlikely, therefore, that Present would run quickly from one room to another to verify the status of the light in both. So perhaps the trust graph 𝑇 constructed from past reports defines the distance function 𝑑_𝑇 from {𝐵} as follows:

𝑑_𝑇({𝐵}, {𝐴}) = 1, 𝑑_𝑇({𝐵}, {𝐵}) = 0, 𝑑_𝑇({𝐵}, {𝐴,𝐵}) = 10, 𝑑_𝑇({𝐵}, ∅) = 5.

This distance function does indeed encode the fact that Present is not strongly trusted to distinguish {𝐴} and {𝐵}; this is because they do not always know where they are.

We have supposed that Present reports ¬𝐴 ∧ 𝐵. So, what should Absent believe? It depends on the threshold 𝑛. If we set 𝑛 = 3, then by Proposition 8, *_3 is the trust-sensitive revision operator defined by the partition with cells {{𝐴}, {𝐵}} and {{𝐴,𝐵}, ∅}. Since {𝐴} and {𝐵} are in the same cell, it follows that revision by 𝐵 is equivalent to revision by 𝐴 ∨ 𝐵. Hence:

𝐵𝑒𝑙(𝜅) *_3 𝐵 = {{𝐴}}.

This is a belief state containing just one state; so Absent believes that the most plausible state is the unique state where only the light in room 𝐴 is on. Hence, if Present reports that the light in room 𝐵 is on, it will not change the beliefs of Absent at all.

For naive operators, it does not matter how strongly Absent believes the light in room 𝐴 is on. It only matters whether or not the reporting agent can distinguish between particular states.
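The following is a minimal sketch of the naive operator *_𝑛, reproducing the outcome of Example 3. The paper only specifies the distances from {𝐵}, so the other entries of the matrix below are hypothetical filler; they are unused here because the report's only model is {𝐵}.

```java
import java.util.*;
import java.util.function.IntPredicate;

// Sketch of the naive operator *_n from Definition 7: expand the
// report to every state within distance n of one of its models, then
// take the kappa-minimal states of the expanded set.
public class NaiveRevision {
    static Set<Integer> reviseNaive(int[] kappa, int[][] d,
                                    IntPredicate phi, int n) {
        int states = kappa.length;
        boolean[] expanded = new boolean[states];
        for (int t = 0; t < states; t++)
            if (phi.test(t))
                for (int s = 0; s < states; s++)
                    if (d[t][s] <= n) expanded[s] = true;
        int min = Integer.MAX_VALUE;
        for (int s = 0; s < states; s++)
            if (expanded[s]) min = Math.min(min, kappa[s]);
        Set<Integer> out = new TreeSet<>();
        for (int s = 0; s < states; s++)
            if (expanded[s] && kappa[s] == min) out.add(s);
        return out;
    }

    public static void main(String[] args) {
        // Example 3, state order {}, {B}, {A}, {A,B}.
        int[] kappa = {2, 1, 0, 1};
        int[][] d = {          // only the row/column for {B} is from
            {0, 5, 5, 6},      // the paper; the rest is filler
            {5, 0, 1, 10},
            {5, 1, 0, 10},
            {6, 10, 10, 0}};
        // Report -A & B (only model: {B} = state 1), threshold n = 3:
        System.out.println(reviseNaive(kappa, d, s -> s == 1, 3)); // [2] = {A}
    }
}
```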
4.3. General Revision with Trust Graphs

In the previous section, we considered the case where strength of belief and strength of trust are incomparable; the magnitudes of the values are not on the same scale. In this case, we can not meaningfully combine the numeric values assigned by 𝜅 with the numeric distances given by a trust graph; we essentially have two orderings that have to be merged in some way. This is the general setting of AGM revision, and of trust-sensitive revision.

However, there is an alternative way to define revision that actually takes the numeric ranks into account. First, we define a new OCF, given some initial beliefs and a trust distance function.

Definition 8. Let 𝜅 be an OCF and let 𝑑 be a pseudo-ultrametric. For any 𝑠 ∈ 𝑆:

𝜅_𝜑^𝑑(𝑠) = 𝜅(𝑠) + min{𝑑(𝑠, 𝑡) | 𝑡 ⊨ 𝜑}.

The OCF 𝜅_𝜑^𝑑 combines the a priori belief in the likelihood of 𝑠 along with a measure indicating how easily 𝑠 can be distinguished from a model of 𝜑. Essentially, this definition uses 𝑑 to construct a ranking function over states centred on |𝜑|. This ranking is aggregated with 𝜅, by adding the two ranking functions together.

Given this definition, we can define a new revision operator.

Definition 9. Let 𝜅 be an OCF and let 𝑑 be a pseudo-ultrametric. For any formula 𝜑, define ∘_𝑑 such that

𝐵𝑒𝑙(𝜅) ∘_𝑑 𝜑 = {𝑠 | 𝜅_𝜑^𝑑(𝑠) is minimal}.

This new definition lets the initial strength of belief be traded off with perceived expertise. We return to our example.

Example 4. Consider the light-reporting example again, with the initial belief state 𝜅 and the distance function 𝑑_𝑇 specified in Example 3. Now suppose again that Present reports 𝜑 = ¬𝐴 ∧ 𝐵, i.e. that only the light in room 𝐵 is on. We calculate 𝜅_𝜑^𝑑(𝑠) for all states 𝑠 in the following table.

𝑠 | 𝜅(𝑠) | 𝑑({𝐵}, 𝑠) | 𝜅_𝜑^𝑑(𝑠)
---|---|---|---
{𝐴} | 0 | 1 | 1
{𝐵} | 1 | 0 | 1
{𝐴,𝐵} | 1 | 10 | 11
∅ | 2 | 5 | 7

Since the first two rows both have minimal values, it follows that

𝐵𝑒𝑙(𝜅) ∘_𝑑 (¬𝐴 ∧ 𝐵) = {{𝐴}, {𝐵}}.

Following revision, Absent believes exactly one light is on.

This example demonstrates how the strength of belief and the strength of trust can interact. The given result occurs because the strength of belief in {𝐴} is identical to the strength of trust in the report of {𝐵}. Increasing or decreasing either measure of strength will cause the result to be different. Note also that this approach gives a full OCF as a result, so we have a ranking of alternative states as well.
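A matching sketch of general revision (Definitions 8 and 9), reproducing Example 4; as before, only the distances from {𝐵} come from the paper, and the remaining matrix entries are hypothetical filler that this call never reads.

```java
import java.util.*;
import java.util.function.IntPredicate;

// Sketch of general revision: aggregate the prior rank kappa(s) with
// the distance from s to the nearest model of phi, then keep the
// states with minimal aggregated rank.
public class GeneralRevision {
    static Set<Integer> reviseGeneral(int[] kappa, int[][] d, IntPredicate phi) {
        int states = kappa.length;
        int[] agg = new int[states];
        for (int s = 0; s < states; s++) {
            int nearest = Integer.MAX_VALUE;
            for (int t = 0; t < states; t++)
                if (phi.test(t)) nearest = Math.min(nearest, d[s][t]);
            agg[s] = kappa[s] + nearest; // kappa_phi^d(s)
        }
        int min = Arrays.stream(agg).min().getAsInt();
        Set<Integer> out = new TreeSet<>();
        for (int s = 0; s < states; s++)
            if (agg[s] == min) out.add(s);
        return out;
    }

    public static void main(String[] args) {
        // Example 4, state order {}, {B}, {A}, {A,B}.
        int[] kappa = {2, 1, 0, 1};
        int[][] d = {
            {0, 5, 5, 6},
            {5, 0, 1, 10},
            {5, 1, 0, 10},
            {6, 10, 10, 0}};
        // Report -A & B: only model is {B} (state 1).
        System.out.println(reviseGeneral(kappa, d, s -> s == 1)); // [1, 2]
    }
}
```

The printed result [1, 2] is the belief state {{𝐵}, {𝐴}}, matching the table in Example 4, where the two minimal aggregated ranks tie at 1.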
5. Implementation

5.1. Functionality

We describe T-BEL, a Java application for modelling the dynamics of trust and belief. The core functionality of T-BEL is as follows. It allows a user to create a trust graph that captures the distinctions an information source is trusted to make. It allows a user to enter a series of reports, which might be correct or incorrect. These reports trigger an update to the trust graph. Finally, the user can calculate the result of belief revision, in a manner that accounts for the influence of trust.

Note that the steps listed above need not be done sequentially. The interface for the software provides several panels for different actions: initializing a trust graph, manipulating the trust graph, visualizing the trust graph, and performing revision. The only constraint is that the vocabulary needs to be provided to initialize the trust graph. After the initial trust graph is constructed, a user can jump between different panels. For example, one could add new information about past reports at any time, even after revision has been performed.

In the following sections, we describe the basic usage of the software.
5.2. Constructing a Trust Graph

In order to perform belief revision using T-BEL, we first need to initialize a trust graph. This is done through the panel in Figure 3. The user simply enters a propositional vocabulary as a comma-delimited sequence of strings. Optionally, one can specify an initial trust value; this is the weight that will be assigned to all edges in the trust graph. If it is not specified, it will default to 1.

[Figure 3: Initializing a Trust Graph.]

The panel in Figure 4 is used for visualizing and manipulating the trust graph. After the trust graph has been generated, it is displayed on the left side as a matrix that gives the weight between every pair of states. The values in this matrix can be edited manually, but this is not the preferred way to change the values. The main goal of T-BEL is to allow trust to be built incrementally by adding reports. This is done through the report entry section in Figure 4. Reports are entered as formulas in a simple variant of propositional logic, using the keyboard-friendly symbols & (conjunction), | (disjunction) and − (negation). The reports are tagged with 1 (positive) or 0 (negative). By default, when the Add Reports button is pressed, the matrix on the left updates the values in accordance with the following update rules:

Update Rule 1. For each pair of states 𝑠1, 𝑠2 such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑, decrease the value 𝑤(𝑠1, 𝑠2) to 𝑤(𝑠1, 𝑠2) − 1.

Update Rule 2. For each pair of states 𝑠1, 𝑠2 such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑, increase the value 𝑤(𝑠1, 𝑠2) to 𝑤(𝑠1, 𝑠2) + 1.

These update rules correspond to the construction of a trust graph in Algorithm 1. However, we remark that T-BEL is not restricted to these updates. If the user would like to specify different update rules, this can be done by providing a text file specifying new update rules.

[Figure 4: The Trust Panel.]

There is one remaining feature to mention in this panel: the Distance Checker. We will see in the next section that we actually do not use the values in the trust matrix directly; we use the minimax distance generated from these values. As such, we provide the user with a simple mechanism for checking minimax distance for testing and experimentation.

5.3. Specifying an Epistemic State

As noted previously, epistemic states are represented in T-BEL using ranking functions. The software provides two different ways to specify an epistemic state.

The first way to specify an epistemic state is by explicitly specifying a total pre-order over all states. This is done by creating an external text file that lists a "level" for all states, starting from 0. For example, if we had two variables 𝐴 and 𝐵, then one example input file could be specified as follows:

2
0:00
1:10

The first line indicates that there are 2 variables. The second line says that the state where 𝐴 and 𝐵 are both false is the most plausible, so it is the only state in level 0. The next line specifies the states in level 1. Any states not listed are put in level 2. A ranking over states specified in this manner gives us enough information to perform belief revision.

Manually specifying a complete ranking in this manner can be problematic, because it is time consuming and it is easy to make mistakes. As such, we also give the user the ability to experiment with revision simply by entering a belief state as a set of formulas through an input box in the main interface. For example, we could enter the beliefs by giving this list of formulas:

A&B
A|-B

To generate a ranking function from such a list, T-BEL finds all satisfying assignments. In the example given, the only satisfying assignment occurs when 𝐴 and 𝐵 are both true. By default, T-BEL then uses the Hamming distance from the set of satisfying assignments to create a full ranking. In other words, the default approach defines a ranking that corresponds to Dalal's revision operator [10]. However, T-BEL also provides a flexible mechanism for reading alternative rankings from a file input.
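A minimal sketch of this default ranking construction, assuming the straightforward reading of the Hamming-based approach (rank each state by its bit distance to the nearest model of the belief state); the class name is ours, not T-BEL's.

```java
import java.util.Arrays;
import java.util.function.IntPredicate;

// Sketch of the default ranking construction: rank each state by its
// Hamming distance to the nearest model of the belief state, which
// yields the ranking underlying Dalal's operator.
public class HammingRanking {
    static int[] rankByHamming(int nVars, IntPredicate beliefs) {
        int states = 1 << nVars;
        int[] kappa = new int[states];
        Arrays.fill(kappa, Integer.MAX_VALUE);
        for (int s = 0; s < states; s++)
            for (int t = 0; t < states; t++)
                if (beliefs.test(t))
                    // Hamming distance = number of differing bits.
                    kappa[s] = Math.min(kappa[s], Integer.bitCount(s ^ t));
        return kappa;
    }

    public static void main(String[] args) {
        // Beliefs A&B and A|-B: the only model is 11 (A and B true).
        int[] kappa = rankByHamming(2, s -> s == 0b11);
        System.out.println(Arrays.toString(kappa)); // [2, 1, 1, 0]
    }
}
```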
5.4. Calculating the Result of Revision

T-BEL implements both naive revision and general revision; the user chooses the mechanism to be used in the menu in Figure 3.

If Naive Revision is selected, then the user needs to enter a threshold value. Following Proposition 8, this threshold value defines a trust-sensitive revision operator. This operator is used to calculate the result of belief revision when the Naive option is selected. The result of revision is displayed as a formula, capturing the minimal states in the new ranking. We note that the software can be used to perform more than one revision when the Hamming distance has been specified for the ranking. However, for file-based rankings, iterated revision is not possible.

We can also specify that we want to use general revision in the dropdown menu in Figure 3. In this case, if 𝜅 is the original ranking function, 𝑑 is the minimax distance and 𝜑 is the formula for revision, then we can define a new function:

𝜅_𝜑^𝑑(𝑠) = 𝜅(𝑠) + min{𝑑(𝑠, 𝑡) | 𝑡 ⊨ 𝜑}.

By normalizing this function, we define a new ranking function that represents the beliefs following revision. The result of belief revision is displayed as a formula. However, general revision can be iterated, because the full ranking is maintained after each revision.

5.5. Step by Step Example

Assume we want to work with the vocabulary {𝑎, 𝑏}, as well as past reports of (𝑎 ∨ 𝑏, 1) and (𝑎, 1). Assume further that we would like to start with the belief state (𝑎 ∧ 𝑏) and then revise by (𝑎 ∧ ¬𝑏) ∨ (¬𝑎 ∧ 𝑏). Using T-BEL, we can solve this problem through the following steps:

1. Enter the vocabulary 𝑎, 𝑏 and a default value of 5.
2. Enter reports (𝑎|𝑏, 1) and (𝑎, 1), then click Add Reports.
3. Select Naive revision with threshold 3.
4. Enter the belief state 𝑎&𝑏 and the formula (𝑎&−𝑏)|(−𝑎&𝑏).
5. Click Revise.

The default value in step 1 should be set so that it is at least as high as the number of reports. However, beyond that constraint, it will not impact the results. After step 2, the values in the matrix representing the trust graph will be as follows:

 | 00 | 01 | 10 | 11
00 | 0 | 6 | 7 | 7
01 | 6 | 0 | 6 | 6
10 | 7 | 6 | 0 | 5
11 | 7 | 6 | 5 | 0

The revision panel following the example is shown in Figure 5, showing the input and the fact that the beliefs are unchanged after revision. It can easily be verified that this is correct.

[Figure 5: Revision Output.]

5.6. Performance

The question of run time is a challenging one to address for any implemented belief revision system, due to the well known complexity of revision [11]. The problem is even worse when we add trust graphs, which become very large as the vocabulary size increases.

The present implementation has made many implementation choices in order to optimize performance. For example, we represent a trust map internally as a hashmap of hashmaps; the lookup time is very fast. Another place where we focus on efficiency is in the translation from formulas to belief states, where we use a DPLL solver to find satisfying assignments. However, the run time for T-BEL still becomes slow as the vocabulary size increases. It is a useful prototype for reasoning about small examples, and for demonstrating the utility of trust graphs. In future work, we will look to improve run time by integrating a competition-level ALLSAT solver for the hard calculations [12].

6. Discussion

6.1. Related Work

This work fits in the general tradition of formalisms that address notions of trust and credibility for belief revision. There are alternative approaches as well, based on non-prioritized and credibility-limited revision [13, 14, 15]. The notion of trust has been explored in the setting of Dynamic Epistemic Logic (DEL), by adding an explicit measure of trust to formulas [16]. Finally, since we are primarily concerned with trust based on expertise, the formalism presented here is also related to recent work on truth discovery [17].

But fundamentally, this work is really about building trust in a source based on the knowledge demonstrated in past reports; our goal is to develop a formal model of knowledge-based trust. To the best of our knowledge, this problem has not been explored previously in the context of formal belief change operators. However, it has been explored in some practical settings, such as the formulation of search engine results [18].

The software introduced here can be seen as an extension of the GenB system [19]. GenB is a general solver for revision with a limited capacity to capture trust; T-BEL is significantly more sophisticated when it comes to representing and reasoning about the dynamics of trust and belief.

6.2. Conclusion

In this paper, we have addressed the problem of building trust from past reports. We demonstrated that, in the context of OCFs, trust can be interpreted in two ways. First, if the scale used for the strength of belief is deemed to be independent of the distance metric, then we can use a trust ranking to define a family of naive revision operators for trust-sensitive revision. On the other hand, if strength of trust and strength of belief are considered to be comparable on the same scale, then we have shown how the two can be aggregated to define a new approach to trust-influenced belief revision.

We have also described a tool for solving belief change problems influenced by trust. The focus is on building trust from reports, and then performing belief revision. Our software provides a simple interface that can be used to build a trust graph iteratively; this graph is then used to adjust the behaviour of a formal belief change operator to account for trust. We suggest that this tool is an important step towards demonstrating the utility of belief change operators for solving practical problems with partially trusted information sources.

There are many directions for future research. Beyond expanding the formal theory, we are primarily interested in practical applications of this work. In future work, we intend to improve run time performance, apply the tool to concrete problems in the evaluation of web resources, and connect our approach to related work on learning with respect to trust.
References

[1] A. Hunter, Building trust for belief revision, in: Proceedings of the Pacific Rim Conference on Artificial Intelligence (PRICAI), 2021, pp. 543-555.
[2] C. E. Alchourrón, P. Gärdenfors, D. Makinson, On the logic of theory change: Partial meet functions for contraction and revision, Journal of Symbolic Logic 50 (1985) 510-530.
[3] W. Spohn, Ordinal conditional functions: A dynamic theory of epistemic states, in: W. Harper, B. Skyrms (Eds.), Causation in Decision, Belief Change, and Statistics, vol. II, Kluwer Academic Publishers, 1988, pp. 105-134.
[4] M. Carbone, M. Nielsen, V. Sassone, A formal model for trust in dynamic networks, in: International Conference on Software Engineering and Formal Methods, 2003, pp. 54-61.
[5] K. Krukow, M. Nielsen, Trust structures, International Journal of Information Security 6 (2007) 153-181.
[6] T. D. Huynh, N. R. Jennings, N. R. Shadbolt, An integrated trust and reputation model for open multi-agent systems, Autonomous Agents and Multi-Agent Systems 13 (2006) 119-154.
[7] A. Salehi-Abari, T. White, Towards con-resistant trust models for distributed agent systems, in: Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009, pp. 272-277.
[8] R. Booth, A. Hunter, Trust as a precursor to belief revision, Journal of Artificial Intelligence Research 61 (2018) 699-722.
[9] A. Hunter, R. Booth, Trust-sensitive belief revision, in: Q. Yang, M. J. Wooldridge (Eds.), Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina, July 25-31, 2015, AAAI Press, 2015, pp. 3062-3068.
[10] M. Dalal, Investigations into a theory of knowledge base revision, in: Proceedings of the National Conference on Artificial Intelligence (AAAI), 1988, pp. 475-479.
[11] T. Eiter, G. Gottlob, On the complexity of propositional knowledge base revision, updates and counterfactuals, Artificial Intelligence 57 (1992) 227-270.
[12] T. Toda, T. Soh, Implementing efficient all solutions SAT solvers, ACM Journal of Experimental Algorithmics 21 (2016) 1-44.
[13] S. O. Hansson, E. L. Fermé, J. Cantwell, M. A. Falappa, Credibility limited revision, Journal of Symbolic Logic 66 (2001) 1581-1596.
[14] R. Booth, E. Fermé, S. Konieczny, R. P. Pérez, Credibility-limited revision operators in propositional logic, in: Principles of Knowledge Representation and Reasoning: Proceedings of the Thirteenth International Conference (KR 2012), Rome, Italy, June 10-14, 2012.
[15] G. Bonanno, Credible information, allowable information and belief revision (extended abstract), in: L. S. Moss (Ed.), Proceedings of the Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2019), Toulouse, France, 17-19 July 2019, volume 297 of EPTCS, 2019, pp. 82-90.
[16] F. Liu, E. Lorini, Reasoning about belief, evidence and trust in a multi-agent setting, in: PRIMA 2017: Principles and Practice of Multi-Agent Systems, 20th International Conference, Nice, France, October 30 - November 3, 2017, Proceedings, volume 10621 of Lecture Notes in Computer Science, Springer, 2017, pp. 71-89.
[17] J. Singleton, R. Booth, An axiomatic approach to truth discovery, in: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2020, pp. 2011-2013.
[18] X. Dong, E. Gabrilovich, K. Murphy, V. Dang, W. Horn, C. Lugaresi, S. Sun, W. Zhang, Knowledge-based trust: Estimating the trustworthiness of web sources, Proceedings of the VLDB Endowment 8 (2015).
[19] A. Hunter, E. Tsang, GenB: A general solver for AGM revision, in: Proceedings of the European Conference on Logics in Artificial Intelligence (JELIA), 2016, pp. 564-569.
Trust Graphs for Belief Revision: Framework and Implementation Aaron Hunter1,* , Sam Tadey1 1 British Columbia Institute of Technology Burnaby, Canada Abstract Trust plays a role in the process of belief revision. When information is reported by another agent, it should only be believed if the reporting agent is trusted as an authority over some relevant domain. In practice, an agent will be trusted on a particular topic if they have provided accurate information on that topic in the past. In this paper, we demonstrate how an agent can construct a model of knowledge-based trust based on the accuracy of past reports. We then show how this model of trust can be used in conjunction with Ordinal Conditional Functions to define two approaches to trust-influenced belief revision. In the first approach, strength of trust and strength of belief are assumed to be incomparable as they are on different scales. In the second approach, they are aggregated in a natural manner. We then describe a software tool for modelling and reasoning about trust and belief change. Our software allows a trust graph to be updated incrementally by looking at the accuracy of past reports. After constructing a trust graph, the software can then compute the result of AGM-style belief revision using two different approaches to incorporating trust. Keywords belief revision, knowledge representation, trust 1. Introduction that topic in the past. In the rest of the paper, we proceed as follows. In Belief revision is concerned with the manner in which an the next section, we give a motivating example that will agent incorporates new information that may be incon- be used throughout the paper. We then review formal sistent with their current beliefs. In general, the belief preliminaries related to belief revision and trust. We revision literature assumes that new information is more then introduce trust graphs, our formal model of trust. reliable than the initial beliefs; in this case, new informa- We define a simple approach for building a trust graph tion must always be believed following belief revision. from past revisions, and prove some basic results. We However, in many practical situations this is not a rea- then demonstrate how trust rankings can influence belief sonable assumption. In practice, we need to take into revision in two different ways. First, we consider the account the extent to which the source of the new infor- naive case, where the strength of trust is independent mation is trusted. In this paper, we demonstrate how an of the strength of belief. Second, we consider the more agent can actually build trust in a source, based on past complex case, where strength of trust is aggregated with reports.1 strength of belief. Suppose that an agent believes 𝜑 to be true, and they Finally, we describe and implemented software tool are being told by an agent 𝑅 that 𝜑 is not true. In this kind that automates this entire process. The software pre- of situation, we will use ranking functions to represent sented here is a useful addition to the relatively small both the initial strength of belief in 𝜑 as well as the level collection existing belief revision solvers, because it ex- of trust in 𝑅. Significantly, however, the trust in 𝑅 is not tends the class of practical problems that we can model uniform over all formulas. Each information source is and solve. To the best of our knowledge, the software pre- trusted to different degrees on different topics. 
The extent sented in this paper is the first implemented system that to which 𝑅 is trusted on a particular topic is determined incrementally builds a model of trust that is specifically by how frequently they have made accurate reports on intended to inform the process of belief revision. NMR 2022: 20th International Workshop on Non-Monotonic Reasoning, August 07–09, 2022, Haifa, Israel 2. Preliminaries * Corresponding author. $ aaronhunter@bcit.ca (A. Hunter); samtadey10@gmail.com 2.1. Motivating Example (S. Tadey) © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Consider a situation where there are two rooms 𝐴 and 𝐵 CEUR Workshop Proceedings (CEUR-WS.org) located inside a building. There are two agents, which we CEUR http://ceur-ws.org Workshop ISSN 1613-0073 Proceedings 1 The content of sections 1-4 of this paper have previously been call Absent and Present. Informally, Absent is not in the published in [1]. The implementation discussed in section 5 is new, and was not discussed in that paper. building whereas Present is in the building. These agents 39 �communicate about the status of the lights in each room. that the agent considers it more likely that 𝑠1 is the actual For simplicity, we say that 𝐴 is true when the light is on state, as opposed to 𝑠2 . Note that 𝜅 defines a belief state in room 𝐴 and we say 𝐵 is true when the light is on in 𝐵𝑒𝑙(𝜅) as follows: room 𝐵. We focus on the beliefs of Absent, who initially thinks 𝐵𝑒𝑙(𝜅) = {𝑠 | 𝜅(𝑠) is minimal }. that the light in room 𝐴 is on and the light in room 𝐵 We can also define a revision operator * associated with is off. Now suppose that Present sends a message that 𝜅 as follows: asserts the light is off in 𝐴 and the light is on in room 𝐵. If Present is completely trusted, this is not a problem; the 𝐵𝑒𝑙(𝜅) * 𝜑 = min(|𝜑|). 𝜅 report simply leads Absent to believe they were incorrect about the lights. The operator on belief states specified in this manner de- But suppose that Present has given incorrect reports in fines an AGM belief revision operator, for any underlying the past. We can collect these reports, and check to see OCF. when they have been correct and when they have been incorrect. For example, suppose that Present is always 2.3. Trust correct about the light status in room 𝐴, whereas they are often incorrect about the light status in room 𝐵. We The notion of trust plays an important role in many ap- might draw the conclusion that they are normally physi- plications, including security [4, 5] and multi-agent sys- cally in the room 𝐴, and that they are too lazy to walk tems [6, 7]. In this paper, we are concerned primarily to a another room to check the lights. with knowledge-based trust. That is, we are concerned Formally, Absent does not need a plausible story to ex- with the extent to which one agent trusts another to have plain the mistakes in the reports; they need some mecha- the knowledge required to be trusted on particular state- nism for modelling trust over different propositions. By ments. This is distinct from trust related to honesty or looking at the accuracy of reports on different topics, deception. they build a model of trust that allows information re- We refer occasionally to trust-senstive belief revision ported from Present to be incorporated appropriately. In operators [8]. 
Trust-sensitive belief revision operators this paper, we develop formal machinery that is suitable are defined with respect to a trust-partition over states. for capturing all facets of this seemingly simple example. The equivalence classes of a trust partition Π consist of states that can not be distinguished by a particular reporting agent. In our motivating example, we might 2.2. Belief Revision define a trust partition for Present that consists of two We assume an underlying set V of propositional vari- equivalence classes: one that includes all states where ables. A formula is a propositional combination of ele- the light is on in room 𝐴, and one that includes all states ments of V, using the usual connectives →, ∧, ∨, ¬. We where the light is off in room 𝐴. In this case, Present is will assume that V is finite in this paper, though that informally trusted to be able to tell if the light in room 𝐴 need not be the case in general. A state is a propositional is on or off. However, Present is not trusted to be able to interpretation over V, which assigns boolean values to tell if the light in room 𝐵 is on or off. all variables. We will normally specify a state by giving A trust-sensitive revision operator *Π is defined with the set of variables that are true. A belief state is a set respect to a given AGM revision operator * and a trust of states, informally representing the set of states that partition Π. The operator *Π operates in two steps when an agent considers possible. We let |𝜑| denote the set of an agent is given a report 𝜑. First, we find the set Π(𝜑) states where the formula 𝜑 is true. of all states that are related by Π to a model of 𝜑. Then The dominant approach to belief revision is the AGM we perform regular AGM revision with this expanded approach. A revision operator is a function * that maps set of states as input. Hence, the model of trust is essen- a belief state 𝐾 and a formula 𝜑 to a new belief state tially used to pre-process the formula for revision, by 𝐾 * 𝜑. An AGM revision operator is a revision operator expanding it to ignore distinctions that we do not trust that satisfies the so-called AGM postulates. We refer the the reporter to be able to make. reader to [2] for a complete introduction to the AGM theory of belief revision. 2.4. Trust Rankings Although we are concerned with AGM revision at times, in this paper we actually define the beliefs of an We can also define trust in terms of a distance function be- agent in terms of Ordinal Conditional Functions (OCFs) tween states. The notion of distance required is generally [3], which are also called ranking functions. An OCF is an ultrametric. a function 𝜅 that maps every state 𝑠 to an ordinal 𝜅(𝑠). Definition 1. An ultrametric is a binary function 𝑑 over Informally, if 𝜅(𝑠1 ) < 𝜅(𝑠2 ), this is understood to mean a set 𝑋, such that for all 𝑥, 𝑦, 𝑧 ∈ 𝑋: 40 � 1 The edge weights represent how strongly a reporting 𝐴, 𝐵 𝐴 agent is trusted to distinguish between a pair of states. If the weight is high, we interpret this to mean that the 2 2 2 2 agent is strongly trusted to distinguish between the states. If the weight is low, then the reporting agent is not trusted to distinguish the states. ∅ 𝐵 In order to build a notion of trust in an agent, we 1 need to have a history of the past reports that agent has provided. 
Our basic approach is to assume that we Figure 1: A Trust Graph start with a set of statements that a reporting agent has made in the past, along with an indication of whether the reports were correct or not. • 𝑑(𝑥, 𝑦) ≥ 0. Definition 4. A report is a pair (𝜑, 𝑖), where 𝜑 is a for- • 𝑑(𝑥, 𝑦) = 0 if and only if 𝑥 = 𝑦. mula and 𝑖 ∈ {0, 1}. A report history is a multi-set of • 𝑑(𝑥, 𝑦) = 𝑑(𝑦, 𝑥). reports. • 𝑑(𝑥, 𝑧) ≤ max{𝑑(𝑥, 𝑦), 𝑑(𝑦, 𝑧)}. We let Φ, possibly with subscripts, range over report If we remove condition 2, then 𝑑 is a pseudo-ultrametric. histories. A report history Φ represents all of the claims The following definition of a trust ranking is given in [9]. that an agent has truthfully or falsely claimed in the past. Informally, if (𝜑, 1) ∈ Φ then the agent in question has Definition 2. For any propositional vocabulary, a trust reported 𝜑 in the past in a situation where 𝜑 was shown ranking is a pseudo-ultrametric over the set 𝑆 of all states. to be true. On the other hand, (𝜑, 0) ∈ Φ means that 𝜑 A trust ranking is intended to capture the degree to which has been reported in a situation where it was false. an agent is trusted to distinguish between states in a graph. If 𝑑(𝑠1 , 𝑠2 ) is large, this means the agent can be 3.2. Construction from Reports trusted to distinguish the states 𝑠1 and 𝑠2 . However, if the distance is small, they can not be trusted to draw this Suppose we start with a trust graph in which the report- distinction. ing agent is essentially trusted to be able to distinguish all states, with a default confidence level. For each true report in the history, we strengthen our trust in the re- 3. Building Trust porting agent’s ability to distinguish certain states. For each false report, we weaken our trust. 3.1. Trust Graphs We now turn to our main problem: building a notion Definition 5. For any 𝑛 > 0, the initial trust graph of trust from data. We assume throughout this paper a 𝑇𝑛 = ⟨𝑆, 𝑤⟩ where 𝑆 is the set of states, and 𝑤 is defined fixed, finite vocabulary V. All states, belief states, and such that 𝑤(𝑠, 𝑡) = 0 if 𝑠 = 𝑡 and 𝑤(𝑠, 𝑡) = 𝑛 otherwise. formulas will be defined with respect to this underlying The idea of the initial trust graph is that the reporting vocabulary. agent is trusted to distinguish between all states equally Definition 3. Let 𝑆 be the set of states over V. A trust well. graph over 𝑆 is a pair ⟨𝑆, 𝑤⟩, where 𝑤 : 𝑆 × 𝑆 → N. We are now interested in giving a procedure that takes a report history, and returns a trust graph; this is pre- Hence, a trust graph is just a complete weighted graph sented in Algorithm 1. The algorithm looks at each report along with a distance between states. Informally, a trust in the history 𝑅, and it increases the weight on edges graph represents the trust held in another agent. The where there have been true reports and decreases the weight on the edge between two states 𝑠1 and 𝑠2 is an weight on edges where there have been false reports. indication of how strongly the agent is trusted to directly distinguish between those states. Proposition 1. Given a report history 𝑅, the weighted graph returned by Algorithm 1 is a trust graph. Example 1. Consider the motivating example, in the case where Absent trusts Present more strongly to check if the This result relies on the fact that 𝑤 only returns non- light in room 𝐴 is on as opposed to room 𝐵. This could negative values; this is guaranteed by the choice of 𝑛 for be captured by the trust graph in Figure 1, by having a the initial trust graph. 
Example 2. We return to our running example. Suppose that we have no initial assumptions about the trust held in Present, and that the report history 𝑅 consists of the following reports:

⟨𝐴, 1⟩, ⟨𝐴, 1⟩, ⟨𝐵, 0⟩, ⟨𝐴 ∧ 𝐵, 1⟩

Since our report history has size 4, the initial trust graph would look like Figure 1, except that all edge weights would be 4. After the first report, the edge weights would be increased on the following edges:

({𝐴, 𝐵}, ∅), ({𝐴, 𝐵}, {𝐵}), ({𝐴}, ∅), ({𝐴}, {𝐵}).

The same thing would happen after the second report. On the third report, we would subtract one from the following edges:

({𝐴, 𝐵}, ∅), ({𝐴, 𝐵}, {𝐴}), ({𝐵}, ∅), ({𝐴}, {𝐵}).

Finally, the fourth report would add one to the following edges:

({𝐴, 𝐵}, ∅), ({𝐴, 𝐵}, {𝐴}), ({𝐴, 𝐵}, {𝐵}).

The final trust graph is given in Figure 2. Based on this graph, Present is least trusted to distinguish the states {𝐵} and ∅. This is because the positive reports were all related to the truth of 𝐴, and the only false report was a report about the truth of 𝐵. Hence, the graph is intuitively plausible.

Figure 2: Graph Construction. The trust graph built from the report history of Example 2: 𝑤({𝐴, 𝐵}, {𝐴}) = 4, 𝑤({𝐴, 𝐵}, {𝐵}) = 7, 𝑤({𝐴, 𝐵}, ∅) = 6, 𝑤({𝐴}, {𝐵}) = 5, 𝑤({𝐴}, ∅) = 6, 𝑤({𝐵}, ∅) = 3.

3.3. Basic Results

We have defined an approach to building trust graphs from reports. We remark that the edge weights will not be used directly when it comes to belief revision. For belief revision, what we need is a single trust ranking that is derived from the trust graph. However, constructing the graph allows us to define the ranking function as a sort of consequence of the reports. In this section, we show that the construction satisfies some desirable properties.

First, we define the trust ranking associated with a trust graph.

Definition 6. For any trust graph 𝑇 = ⟨𝑆, 𝑤⟩, let 𝑑𝑇 denote the minimax distance between states; that is, 𝑑𝑇(𝑠, 𝑡) is the least, over all paths from 𝑠 to 𝑡 in the graph, of the maximum edge weight occurring on the path.

The distance 𝑑𝑇 captures an overall trust ranking that can be used to inform belief revision. Informally, even if an agent is not trusted to distinguish two states directly, they may be trusted to distinguish them based on a path in the graph. The important feature of such a path is the minimax weight. The following is a basic result about the notion of distance defined by a trust graph.

Proposition 2. For any trust graph 𝑇 = ⟨𝑆, 𝑤⟩, the function 𝑑𝑇 is a pseudo-ultrametric on 𝑆.

Recall from Section 2 that a pseudo-ultrametric over states can be used to define a ranking that is suitable for reasoning about trust. We remark that, in fact, every ultrametric over a finite set is actually equivalent up to isomorphism to an ultrametric defined by the minimax distance over some weighted graph. This means that every trust ranking can be defined by a trust graph.

The next result shows that there is nothing particularly special about the trust graphs constructed by our algorithm.

Proposition 3. Every weighted graph over 𝑆 is the trust graph obtained from some report history 𝑅.

This can be proven by a simple construction where each report only modifies a single edge weight.
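As an illustration of Definition 6, here is a small Python sketch that computes the minimax distance 𝑑𝑇 from a weight table by a Floyd-Warshall style relaxation; this is a standard computation, and the function name is our own.

def minimax_distance(S, w):
    """Compute d_T(s, t): the minimum over all paths from s to t of the
    maximum edge weight on the path (0 on the diagonal).
    S is a list of states; w maps ordered pairs of states to edge weights."""
    d = {(s, t): (0 if s == t else w[(s, t)]) for s in S for t in S}
    # Floyd-Warshall with (min, max) in place of (min, +).
    for k in S:
        for s in S:
            for t in S:
                via_k = max(d[(s, k)], d[(k, t)])
                if via_k < d[(s, t)]:
                    d[(s, t)] = via_k
    return d

For the graph of Figure 2, this should yield, for instance, 𝑑𝑇({𝐵}, ∅) = 3 (the direct edge) and 𝑑𝑇({𝐴, 𝐵}, {𝐴}) = 4, since no indirect path improves on these direct weights.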
In the next results, we adopt some simplifying notation. If 𝑅 is a report history and 𝑟 is a report, we let 𝑅 · 𝑟 denote the multiset obtained by adding 𝑟 to 𝑅. Also, if 𝑅 is a report history, we let 𝑇(𝑅) denote the trust graph obtained from 𝑅 and we let 𝑑𝑅 denote the distance defined by 𝑇(𝑅).

As stated, Algorithm 1 can only construct a trust graph starting from scratch. However, the following proposition states that we can iteratively modify a trust graph as we get new reports.

Proposition 4. Let 𝑅 be a report history and let 𝑟 = (𝜑, 𝑖) be a report. Then 𝑇(𝑅 · 𝑟) is identical to the trust graph obtained by modifying 𝑇(𝑅) as follows:

• Increment the weights between states that disagree on 𝜑, if 𝑟 is a positive report.
• Decrement the weights between states that disagree on 𝜑, if 𝑟 is a negative report.
• Define a new minimax distance 𝑑 in accordance with the new edge weights.

Hence, rather than viewing trust graphs as something created with no a priori knowledge, we can think of a trust graph as a simple model of trust together with an operation that tweaks the weights in response to a new report.

One desirable feature of our construction is that a report of (𝜑, 0) should make the reporting agent less trustworthy with regard to reports about the truth of 𝜑. The next proposition shows that this is indeed the case.

Proposition 5. Let 𝑅 be a report history, and let 𝑠1 and 𝑠2 be states such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑. Then

𝑑𝑅(𝑠1, 𝑠2) ≥ 𝑑𝑅·(𝜑,0)(𝑠1, 𝑠2).

We have a similar result for positive reports.

Proposition 6. Let 𝑅 be a report history, and let 𝑠1 and 𝑠2 be states such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑. Then

𝑑𝑅(𝑠1, 𝑠2) ≤ 𝑑𝑅·(𝜑,1)(𝑠1, 𝑠2).

Taken together, these results indicate that negative (resp. positive) reports of 𝜑 make the reporting agent less (resp. more) trusted with respect to 𝜑. We remark that the inequalities in the previous theorems would be strict if we were considering actual edge weights; but they are not strict for 𝑑𝑅, since there may be multiple paths between states.
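A minimal sketch of the incremental update described in Proposition 4, under the same illustrative state representation as above; the function name and report encoding are our own assumptions.

def update_with_report(S, w, phi_models, i):
    """Adjust a trust graph in place for one new report (phi, i): increment
    edge weights across the models/non-models boundary of phi for a positive
    report (i = 1), and decrement them for a negative report (i = 0).
    The minimax distance d_T is then recomputed from the new weights."""
    delta = 1 if i == 1 else -1
    for s in S:
        for t in S:
            if (s in phi_models) != (t in phi_models):
                w[(s, t)] += delta
    return w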
We have seen that trust graphs define a distance over states that represents a general notion of trust that is implicit in the graph. Significantly, trust graphs can be constructed in a straightforward way by looking at past reports; the implicitly defined trust ranking is based on the accuracy of these reports. In the next section, we consider how the notion of trust defined by a trust graph can be used to construct different approaches to revision.

4. Using Trust Graphs

4.1. Example Revisited

Consider again our example involving reports about the lights in a building. We previously pointed out that Absent might not actually trust the reports from Present, and we gave an approach to construct a trust graph. Informally, when talking about trust, we might make assertions of the following form:

1. Present is not trusted to know which room they are in.
2. Present is not trusted to check two rooms at once.

These kinds of assertions give us a hint about how belief revision might occur. For example, in the first case, Absent would interpret a report to mean that exactly one of the rooms is lit.

Note, however, that a trust graph does not simply give a binary notion of trust; it defines a distance function that indicates strength of trust in various distinctions. Similarly, the beliefs of an agent might be held with different levels of strength. So, even if we have a trust graph, there are still problems with incorporating reports related to comparing strength of belief with strength of trust.

In our example, if Absent just left the building, they might believe very strongly that the light in room 𝐴 must be off. They might believe this so strongly that they disregard Present's report entirely. But disregarding reports is not the only option. It might be the case that the exact strength of Absent's beliefs needs to be considered. Suppose Absent believes the light in room 𝐴 is off with a medium degree of strength. In that case, a report from a weakly trusted agent will not change their beliefs, whereas a report from a strongly trusted agent would be more convincing. Moreover, Absent also needs to have a strength ranking over possible alternatives. Hence, this is not simply a binary comparison between strength of belief and strength of trust. In order to model the interaction between belief and trust, we need a precise formal account that permits a comparison of the two. We also need to account for the way that Present develops a reputation, either for laziness or inaccuracy.

4.2. Naive Revision with Trust Graphs

In the remainder of this paper, we assume that the beliefs of an agent are represented by an OCF. We show how a trust graph allows us to capture an approach to belief revision that takes trust into account. In fact, the approach in this section depends only on a pseudo-ultrametric 𝑑𝑇 defined by a trust graph.

For any pseudo-ultrametric 𝑑, we define a family of revision operators *𝑛.

Definition 7. Let 𝜅 be an OCF and let 𝑑 be a pseudo-ultrametric over 𝑆. For each 𝑛, the operator *𝑛 is defined such that 𝐵𝑒𝑙(𝜅) *𝑛 𝜑 is equal to:

min𝜅 {𝑠 | there exists 𝑡 such that 𝑑(𝑡, 𝑠) ≤ 𝑛 and 𝑡 ⊨ 𝜑}

where min𝜅 selects the 𝜅-minimal states of the given set.

From the theory of metric spaces, we have the following.

Proposition 7. For any pseudo-ultrametric 𝑑 over a set 𝑋, if 𝑛 ∈ N then the collection of sets 𝑌𝑥 = {𝑦 | 𝑑(𝑥, 𝑦) ≤ 𝑛} is a partition of 𝑋.

The next result relates these revision operators to trust-sensitive revision operators. A parallel result is proved in [9], although the result here is stated in terms of OCFs rather than AGM revision.

Proposition 8. Let 𝜅 be an OCF and let 𝑇 be a trust graph. For any formula 𝜑 and any 𝑛:

𝐵𝑒𝑙(𝜅) *𝑛 𝜑 = 𝐵𝑒𝑙(𝜅) *Π 𝜑

where Π is the partition defined by (𝑑𝑇, 𝑛) and *Π is the trust-sensitive revision operator associated with Π.

Hence 𝜅 and 𝑑𝑇 define a set of trust-sensitive revision operators. The parameter 𝑛 specifies how close two states must be to be considered indistinguishable in the partition. We refer to the operators *𝑛 as naive trust-sensitive revision operators in this paper. These operators are naive in the sense that they do not allow us to take into account the relative magnitudes of the values in 𝜅 and the distances given by 𝑑𝑇. In other words, the scales of 𝜅 and 𝑑𝑇 are not compared; it does not matter if the initial strength of belief is high or low. This makes sense in applications where the magnitudes in 𝜅 and 𝑑𝑇 are seen as independent.
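A sketch of the naive operator of Definition 7 in the same illustrative Python setting; here 𝜅 is a dictionary from states to ranks, d is a distance function, and phi_models is the set of models of the report.

def naive_revise(S, kappa, d, phi_models, n):
    """Definition 7: Bel(kappa) *_n phi is the set of kappa-minimal states
    within distance n of some model of phi. Assumes phi is satisfiable."""
    candidates = [s for s in S
                  if any(d(t, s) <= n for t in phi_models)]
    best = min(kappa[s] for s in candidates)
    return {s for s in candidates if kappa[s] == best}

Informally, with an ultrametric 𝑑 the threshold 𝑛 = 0 collapses to regular OCF revision by 𝜑, while a threshold larger than every distance expands the input to all of 𝑆, leaving the beliefs unchanged.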
Example 3. We refer back to our motivating example. Suppose that the initial beliefs of Absent are given by 𝜅 such that:

𝜅({𝐴}) = 0, 𝜅({𝐵}) = 1, 𝜅({𝐴, 𝐵}) = 1, 𝜅(∅) = 2.

Hence the initial belief set for Absent is {𝐴}. Now suppose that Present passes a message that asserts ¬𝐴 ∧ 𝐵; in other words, the light is off in room 𝐴 while it is on in room 𝐵. If this information was given to Absent as infallible sensory data, then the result could be determined easily with regular AGM revision. But this is not sensory data; this is a report, and trust can play a role in how it is incorporated.

To make this concrete, suppose that Absent thinks that Present is generally lazy and unaware of the room that they are in. It is therefore unlikely that Present would run quickly from one room to another to verify the status of the light in both. So perhaps the trust graph 𝑇 constructed from past reports defines the distance function 𝑑𝑇 from {𝐵} as follows:

𝑑𝑇({𝐵}, {𝐴}) = 1, 𝑑𝑇({𝐵}, {𝐵}) = 0, 𝑑𝑇({𝐵}, {𝐴, 𝐵}) = 10, 𝑑𝑇({𝐵}, ∅) = 5.

This distance function does indeed encode the fact that Present is not strongly trusted to distinguish {𝐴} and {𝐵}; this is because they do not always know where they are.

We have supposed that Present reports ¬𝐴 ∧ 𝐵. So, what should Absent believe? It depends on the threshold 𝑛. If we set 𝑛 = 3, then by Proposition 8, *3 is the trust-sensitive revision operator defined by the partition with cells {{𝐴}, {𝐵}} and {{𝐴, 𝐵}, ∅}. Since {𝐴} and {𝐵} are in the same cell, it follows that revision by 𝐵 is equivalent to revision by 𝐴 ∨ 𝐵. Hence:

𝐵𝑒𝑙(𝜅) *3 𝐵 = {{𝐴}}.

This is a belief state containing just one state; so Absent believes that the most plausible state is the unique state where only the light in room 𝐴 is on. Hence, if Present reports that the light in room 𝐵 is on, it will not change the beliefs of Absent at all.

For naive operators, it does not matter how strongly Absent believes the light in room 𝐴 is on. It only matters whether or not the reporting agent can distinguish between particular states.

4.3. General Revision with Trust Graphs

In the previous section, we considered the case where strength of belief and strength of trust are incomparable; the magnitudes of the values are not on the same scale. In this case, we can not meaningfully combine the numeric values assigned by 𝜅 with the numeric distances given by a trust graph; we essentially have two orderings that have to be merged in some way. This is the general setting of AGM revision, and of trust-sensitive revision.

However, there is an alternative way to define revision that actually takes the numeric ranks into account. First, we define a new OCF, given some initial beliefs and a trust distance function.

Definition 8. Let 𝜅 be an OCF and let 𝑑 be a pseudo-ultrametric. For any 𝑠 ∈ 𝑆:

𝜅𝜑𝑑(𝑠) = 𝜅(𝑠) + min{𝑑(𝑠, 𝑡) | 𝑡 ⊨ 𝜑}.

The OCF 𝜅𝜑𝑑 combines the a priori belief in the likelihood of 𝑠 with a measure indicating how easily 𝑠 can be distinguished from a model of 𝜑. Essentially, this definition uses 𝑑 to construct a ranking function over states centered on |𝜑|. This ranking is aggregated with 𝜅 by adding the two ranking functions together.

Given this definition, we can define a new revision operator.

Definition 9. Let 𝜅 be an OCF and let 𝑑 be a pseudo-ultrametric. For any formula 𝜑, define ∘𝑑 such that

𝐵𝑒𝑙(𝜅) ∘𝑑 𝜑 = {𝑠 | 𝜅𝜑𝑑(𝑠) is minimal}.

This new definition lets the initial strength of belief be traded off with perceived expertise.
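The aggregation in Definitions 8 and 9 is easy to state executably; the following sketch (under the same illustrative conventions as above) computes 𝜅𝜑𝑑 and the resulting revision.

def general_revise(S, kappa, d, phi_models):
    """Definitions 8 and 9: rank each state by kappa(s) plus its distance to
    the nearest model of phi, then return the states with minimal new rank."""
    new_rank = {s: kappa[s] + min(d(s, t) for t in phi_models) for s in S}
    best = min(new_rank.values())
    return {s for s in new_rank if new_rank[s] == best}

Applied to the data of Example 4 below, the two rows with 𝜅𝜑𝑑 = 1 should give exactly the result {{𝐴}, {𝐵}}.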
We return to our example.

Example 4. Consider the light-reporting example again, with the initial belief state 𝜅 and the distance function 𝑑𝑇 specified in Example 3. Now suppose again that Present reports 𝜑 = ¬𝐴 ∧ 𝐵, i.e. that only the light in room 𝐵 is on. We calculate 𝜅𝜑𝑑(𝑠) for all states 𝑠 in the following table.

𝑠         𝜅(𝑠)   𝑑({𝐵}, 𝑠)   𝜅𝜑𝑑(𝑠)
{𝐴}       0      1           1
{𝐵}       1      0           1
{𝐴, 𝐵}    1      10          11
∅         2      5           7

Since the first two rows both have minimal values, it follows that

𝐵𝑒𝑙(𝜅) ∘𝑑 ¬𝐴 ∧ 𝐵 = {{𝐴}, {𝐵}}.

Following revision, Absent believes exactly one light is on.

This example demonstrates how the strength of belief and the strength of trust can interact. The given result occurs because the strength of belief in {𝐴} is identical to the strength of trust in the report of {𝐵}. Increasing or decreasing either measure of strength will cause the result to be different. Note also that this approach gives a full OCF as a result, so we have a ranking of alternative states as well.

5. Implementation

5.1. Functionality

We describe T-BEL, a Java application for modelling the dynamics of trust and belief. The core functionality of T-BEL is as follows. It allows a user to create a trust graph that captures the distinctions an information source is trusted to make. It allows a user to enter a series of reports, which might be correct or incorrect. These reports trigger an update to the trust graph. Finally, the user can calculate the result of belief revision, in a manner that accounts for the influence of trust.

Note that the steps listed above need not be done sequentially. The interface for the software provides several panels for different actions: initializing a trust graph, manipulating the trust graph, visualizing the trust graph, and performing revision. The only constraint is that the vocabulary needs to be provided to initialize the trust graph. After the initial trust graph is constructed, a user can jump between different panels. For example, one could add new information about past reports at any time, even after revision has been performed.

In the following sections, we describe the basic usage of the software.

5.2. Constructing a Trust Graph

In order to perform belief revision using T-BEL, we first need to initialize a trust graph. This is done through the panel in Figure 3. The user simply enters a propositional vocabulary as a comma-delimited sequence of strings. Optionally, one can specify an initial trust value; this is the weight that will be assigned to all edges in the trust graph. If it is not specified, it will default to 1.

Figure 3: Initializing a Trust Graph

The panel in Figure 4 is used for visualizing and manipulating the trust graph. After the trust graph has been generated, it is displayed on the left side as a matrix that gives the weight between every pair of states. The values in this matrix can be edited manually, but this is not the preferred way to change the values. The main goal of T-BEL is to allow trust to be built incrementally by adding reports. This is done through the report entry section in Figure 4. Reports are entered as formulas in a simple variant of propositional logic, using the keyboard-friendly symbols & (conjunction), | (disjunction) and − (negation). The reports are tagged with 1 (positive) or 0 (negative). By default, when the Add Reports button is pressed, the matrix on the left updates the values in accordance with the following update rules:

Update Rule 1 (negative reports). For each pair of states 𝑠1, 𝑠2 such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑, decrease the value 𝑤(𝑠1, 𝑠2) to 𝑤(𝑠1, 𝑠2) − 1.

Update Rule 2 (positive reports). For each pair of states 𝑠1, 𝑠2 such that 𝑠1 ⊨ 𝜑 and 𝑠2 ⊭ 𝜑, increase the value 𝑤(𝑠1, 𝑠2) to 𝑤(𝑠1, 𝑠2) + 1.

Figure 4: The Trust Panel

These update rules correspond to the construction of a trust graph in Algorithm 1. However, we remark that T-BEL is not restricted to these updates. If the user would like to specify different update rules, this can be done by providing a text file specifying new update rules.

There is one remaining feature to mention in this panel: the Distance Checker. We will see in the next section that we actually do not use the values in the trust matrix directly; we use the minimax distance generated from these values. As such, we provide the user with a simple mechanism for checking minimax distances, for testing and experimentation.
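To illustrate the report-entry syntax, here is a small sketch of how formulas in the keyboard-friendly notation could be evaluated over states; this is our own illustrative translation, not the parser actually used in T-BEL.

from itertools import chain, combinations

def models(formula, vocab):
    """Enumerate the states (as frozensets of true variables) satisfying a
    formula written with & (and), | (or) and - (not), e.g. "a&-b".
    Assumes variable names are valid Python identifiers."""
    expr = formula.replace('&', ' and ').replace('|', ' or ').replace('-', ' not ')
    result = []
    for state in chain.from_iterable(combinations(vocab, k)
                                     for k in range(len(vocab) + 1)):
        env = {v: (v in state) for v in vocab}
        # eval is acceptable for a trusted sketch; a real implementation
        # would parse the formula properly.
        if eval(expr, {}, env):
            result.append(frozenset(state))
    return result

print(models('(a&-b)|(-a&b)', ['a', 'b']))
# [frozenset({'a'}), frozenset({'b'})]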
5.3. Specifying an Epistemic State

As noted previously, epistemic states are represented in T-BEL using ranking functions. The software provides two different ways to specify an epistemic state.

The first way to specify an epistemic state is by explicitly specifying a total pre-order over all states. This is done by creating an external text file that lists a "level" for states, starting from level 0. For example, if we had two variables 𝐴 and 𝐵, then one example input file could be specified as follows:

2
0:00
1:10

The first line indicates that there are 2 variables. The second line says that the state where 𝐴 and 𝐵 are both false is the most plausible, so it is the only state in level 0. The next line specifies the states in level 1. Any states not listed are put in level 2. A ranking over states specified in this manner gives us enough information to perform belief revision.

Manually specifying a complete ranking in this manner can be problematic, because it is time consuming and it is easy to make mistakes. As such, we also give the user the ability to experiment with revision simply by entering a belief state as a set of formulas through an input box in the main interface. For example, we could enter the beliefs by giving this list of formulas:

A&B
A|-B

To generate a ranking function from such a list, T-BEL finds all satisfying assignments. In the example given, the only satisfying assignment occurs when 𝐴 and 𝐵 are both true. By default, T-BEL then uses the Hamming distance from the set of satisfying assignments to create a full ranking. In other words, the default approach defines a ranking that corresponds to Dalal's revision operator [10]. However, T-BEL also provides a flexible mechanism for reading alternative rankings from a file input.
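A sketch of how the two epistemic-state specifications could be read into a ranking function; the file format follows the example above (one state per line), while the parsing code and the Hamming-based default are our own illustrative reconstruction.

def ranking_from_levels(lines):
    """Parse the level-file format: the first line is the number of
    variables; each later line is 'level:state' with the state written as
    a bit string. Unlisted states get the next level after the last one."""
    num_vars = int(lines[0])
    kappa, max_level = {}, 0
    for line in lines[1:]:
        level, state = line.split(':')
        kappa[state] = int(level)
        max_level = max(max_level, int(level))
    for i in range(2 ** num_vars):
        state = format(i, f'0{num_vars}b')
        kappa.setdefault(state, max_level + 1)
    return kappa

def ranking_from_models(model_states, num_vars):
    """Default ranking: kappa(s) is the minimum Hamming distance from s to a
    satisfying assignment, which corresponds to Dalal's operator."""
    def hamming(s, t):
        return sum(a != b for a, b in zip(s, t))
    return {format(i, f'0{num_vars}b'):
            min(hamming(format(i, f'0{num_vars}b'), m) for m in model_states)
            for i in range(2 ** num_vars)}

print(ranking_from_levels(['2', '0:00', '1:10']))
# {'00': 0, '10': 1, '01': 2, '11': 2}
print(ranking_from_models(['11'], 2))
# {'00': 2, '01': 1, '10': 1, '11': 0}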
5.4. Calculating the Result of Revision

T-BEL implements both naive revision and general revision; the user chooses the mechanism to be used in the menu in Figure 3.

If Naive Revision is selected, then the user needs to enter a threshold value. Following Proposition 8, this threshold value defines a trust-sensitive revision operator. This operator is used to calculate the result of belief revision when the Naive option is selected. The result of revision is displayed as a formula, capturing the minimal states in the new ranking. We note that the software can be used to perform more than one revision when the Hamming distance has been specified for the ranking. However, for file-based rankings, iterated revision is not possible.

We can also specify that we want to use general revision in the dropdown menu in Figure 3. In this case, if 𝜅 is the original ranking function, 𝑑 is the minimax distance and 𝜑 is the formula for revision, then we can define a new function:

𝜅𝜑𝑑(𝑠) = 𝜅(𝑠) + min{𝑑(𝑠, 𝑡) | 𝑡 ⊨ 𝜑}.

By normalizing this function, we define a new ranking function that represents the beliefs following revision. The result of belief revision is displayed as a formula. However, general revision can be iterated, because the full ranking is maintained after each revision.

5.5. Step by Step Example

Assume we want to work with the vocabulary {𝑎, 𝑏}, as well as past reports of (𝑎 ∨ 𝑏, 1) and (𝑎, 1). Assume further that we would like to start with the belief state (𝑎 ∧ 𝑏) and then revise by (𝑎 ∧ ¬𝑏) ∨ (¬𝑎 ∧ 𝑏). Using T-BEL, we can solve this problem through the following steps:

1. Enter the vocabulary 𝑎, 𝑏 and a default value of 5.
2. Enter the reports (𝑎|𝑏, 1) and (𝑎, 1), then click Add Reports.
3. Select Naive revision with threshold 3.
4. Enter the belief state 𝑎&𝑏 and the formula (𝑎&−𝑏)|(−𝑎&𝑏).
5. Click Revise.

The default value in step 1 should be set so that it is at least as high as the number of reports. However, beyond that constraint, it will not impact the results. After step 2, the values in the matrix representing the trust graph will be as follows:

      00   01   10   11
00    0    6    7    7
01    6    0    6    6
10    7    6    0    5
11    7    6    5    0

The revision panel following the example is in Figure 5, showing the input and the fact that the beliefs are unchanged after revision. It can easily be verified that this is correct.

Figure 5: Revision Output
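The matrix above can be checked directly with a few lines of standalone code (our own sketch, following Algorithm 1 with bit-string states, where '10' means 𝑎 is true and 𝑏 is false):

from itertools import combinations

states = ['00', '01', '10', '11']                 # values of a, b as bit strings
models = {'a|b': {'01', '10', '11'}, 'a': {'10', '11'}}
reports = [('a|b', 1), ('a', 1)]

w = {(s, t): (0 if s == t else 5) for s in states for t in states}  # default 5
for phi, i in reports:
    for s, t in combinations(states, 2):
        if (s in models[phi]) != (t in models[phi]):
            w[(s, t)] += 1 if i == 1 else -1
            w[(t, s)] = w[(s, t)]

for s in states:                                  # prints the matrix from Section 5.5
    print(s, [w[(s, t)] for t in states])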
5.6. Performance

The question of run time is a challenging one to address for any implemented belief revision system, due to the well-known complexity of revision [11]. The problem is even worse when we add trust graphs, which become very large as the vocabulary size increases.

The present implementation has made many implementation choices in order to optimize performance. For example, we represent a trust map internally as a hashmap of hashmaps, so the lookup time is very fast. Another place where we focus on efficiency is in the translation from formulas to belief states, where we use a DPLL solver to find satisfying assignments. However, the run time for T-BEL still becomes slow as the vocabulary size increases. It is a useful prototype for reasoning about small examples, and for demonstrating the utility of trust graphs. In future work, we will look to improve run time by integrating a competition-level ALLSAT solver for the hard calculations [12].

6. Discussion

6.1. Related Work

This work fits in the general tradition of formalisms that address notions of trust and credibility for belief revision. There are alternative approaches, based on non-prioritized and credibility-limited revision as well [13, 14, 15]. The notion of trust has been explored in the setting of Dynamic Epistemic Logic (DEL), by adding an explicit measure of trust to formulas [16]. Finally, since we are primarily concerned with trust based on expertise, the formalism presented here is also related to recent work on truth discovery [17].

But fundamentally, this work is really about building trust in a source based on the knowledge demonstrated in past reports; our goal is to develop a formal model of knowledge-based trust. To the best of our knowledge, this problem has not been explored previously in the context of formal belief change operators. However, it has been explored in some practical settings, such as the formulation of search engine results [18].

The software introduced here can be seen as an extension of the GenB system [19]. GenB is a general solver for revision with a limited capacity to capture trust; T-BEL is significantly more sophisticated when it comes to representing and reasoning about the dynamics of trust and belief.

6.2. Conclusion

In this paper, we have addressed the problem of building trust from past reports. We demonstrated that, in the context of OCFs, trust can be interpreted in two ways. First, if the scale used for the strength of belief is deemed to be independent of the distance metric, then we can use a trust ranking to define a family of naive revision operators for trust-sensitive revision. On the other hand, if strength of trust and strength of belief are considered to be comparable on the same scale, then we have shown how the two can be aggregated to define a new approach to trust-influenced belief revision.

We have also described a tool for solving belief change problems influenced by trust. The focus is on building trust from reports, and then performing belief revision. Our software provides a simple interface that can be used to build a trust graph iteratively; this graph is then used to adjust the behaviour of a formal belief change operator to account for trust. We suggest that this tool is an important step towards demonstrating the utility of belief change operators for solving practical problems with partially trusted information sources.

There are many directions for future research. Beyond expanding the formal theory, we are primarily interested in practical applications of this work. In future work, we intend to improve run time performance, apply the tool to concrete problems in the evaluation of web resources, and connect our approach to related work on learning with respect to trust.

References

[1] A. Hunter, Building trust for belief revision, in: Proceedings of the Pacific Rim Conference on Artificial Intelligence (PRICAI), 2021, pp. 543–555.
[2] C. E. Alchourrón, P. Gärdenfors, D. Makinson, On the logic of theory change: Partial meet functions for contraction and revision, Journal of Symbolic Logic 50 (1985) 510–530.
[3] W. Spohn, Ordinal conditional functions: A dynamic theory of epistemic states, in: W. Harper, B. Skyrms (Eds.), Causation in Decision, Belief Change, and Statistics, vol. II, Kluwer Academic Publishers, 1988, pp. 105–134.
[4] M. Carbone, M. Nielsen, V. Sassone, A formal model for trust in dynamic networks, in: International Conference on Software Engineering and Formal Methods, 2003, pp. 54–61.
[5] K. Krukow, M. Nielsen, Trust structures, International Journal of Information Security 6 (2007) 153–181.
[6] T. D. Huynh, N. R. Jennings, N. R. Shadbolt, An integrated trust and reputation model for open multi-agent systems, Autonomous Agents and Multi-Agent Systems 13 (2006) 119–154.
[7] A. Salehi-Abari, T. White, Towards con-resistant trust models for distributed agent systems, in: Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009, pp. 272–277.
[8] R. Booth, A. Hunter, Trust as a precursor to belief revision, Journal of Artificial Intelligence Research 61 (2018) 699–722.
[9] A. Hunter, R. Booth, Trust-sensitive belief revision, in: Q. Yang, M. J. Wooldridge (Eds.), Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, AAAI Press, 2015, pp. 3062–3068.
[10] M. Dalal, Investigations into a theory of knowledge base revision, in: Proceedings of the National Conference on Artificial Intelligence (AAAI), 1988, pp. 475–479.
[11] T. Eiter, G. Gottlob, On the complexity of propositional knowledge base revision, updates and counterfactuals, Artificial Intelligence 57 (1992) 227–270.
[12] T. Toda, T. Soh, Implementing efficient all solutions SAT solvers, ACM Journal of Experimental Algorithmics 21 (2016) 1–44.
[13] S. O. Hansson, E. L. Fermé, J. Cantwell, M. A. Falappa, Credibility limited revision, Journal of Symbolic Logic 66 (2001) 1581–1596.
[14] R. Booth, E. Fermé, S. Konieczny, R. P. Pérez, Credibility-limited revision operators in propositional logic, in: Principles of Knowledge Representation and Reasoning: Proceedings of the Thirteenth International Conference, KR 2012, Rome, Italy, June 10-14, 2012, 2012.
[15] G. Bonanno, Credible information, allowable information and belief revision - extended abstract, in: L. S. Moss (Ed.), Proceedings Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge, TARK 2019, Toulouse, France, 17-19 July 2019, volume 297 of EPTCS, 2019, pp. 82–90.
[16] F. Liu, E. Lorini, Reasoning about belief, evidence and trust in a multi-agent setting, in: PRIMA 2017: Principles and Practice of Multi-Agent Systems - 20th International Conference, Nice, France, October 30 - November 3, 2017, Proceedings, volume 10621 of Lecture Notes in Computer Science, Springer, 2017, pp. 71–89.
[17] J. Singleton, R. Booth, An axiomatic approach to truth discovery, in: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2020, pp. 2011–2013.
[18] X. Dong, E. Gabrilovich, K. Murphy, V. Dang, W. Horn, C. Lugaresi, S. Sun, W. Zhang, Knowledge-based trust: Estimating the trustworthiness of web sources, Proceedings of the VLDB Endowment 8 (2015).
[19] A. Hunter, E. Tsang, GenB: A general solver for AGM revision, in: Proceedings of the European Conference on Logics in Artificial Intelligence (JELIA), 2016, pp. 564–569.