==Conditional Syntax Splitting, Lexicographic Entailment and the Drowning Effect==
 

Paper

description: scientific paper published in CEUR-WS Volume 3197
id: Vol-3197/paper6
wikidataid: Q117341759
title: Conditional Syntax Splitting, Lexicographic Entailment and the Drowning Effect
pdfUrl: https://ceur-ws.org/Vol-3197/paper6.pdf
dblpUrl: https://dblp.org/rec/conf/nmr/HeyninckKM22
volume: Vol-3197
Conditional Syntax Splitting, Lexicographic Entailment and the Drowning Effect

Jesse Heyninck 1,2,3,*, Gabriele Kern-Isberner 4 and Thomas Meyer 2
1 Vrije Universiteit Brussel, Belgium
2 University of Cape Town and CAIR, South Africa
3 Open Universiteit Heerlen, the Netherlands
4 Technische Universität Dortmund

Abstract
Lexicographic inference [1] is a well-behaved and popular approach to reasoning with non-monotonic conditionals. In recent work we have shown that lexicographic inference satisfies syntax splitting, which means we can restrict our attention to parts of the belief base that share atoms with a given query. In this paper, we introduce the concept of conditional syntax splitting, inspired by the notion of conditional independence as known from probability theory. We show that lexicographic inference satisfies conditional syntax splitting, and connect conditional syntax splitting to several known properties from the literature on non-monotonic reasoning, including the drowning effect.

Keywords
Non-monotonic reasoning, lexicographic inference, defeasible reasoning, non-monotonic logic, syntax splitting

NMR 2022: 20th International Workshop on Non-Monotonic Reasoning, August 07–09, 2022, Haifa, Israel
* Corresponding author.
jesse.heyninck@ou.nl (J. Heyninck); gabriele.kern-isberner@cs.tu-dortmund.de (G. Kern-Isberner); tmeyer@cair.org.za (T. Meyer)
ORCID: 0000-0002-3825-4052 (J. Heyninck)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.
1. Introduction

Lexicographic inference [1] is a well-known and popular approach to reasoning with non-monotonic conditionals, which has been applied in description logics [2], probabilistic description logics [3] and richer preferential languages [4]. It is seen as a logic of very high quality, as it extends rational closure (also known as system Z) [5] and avoids the so-called drowning problem. This high quality seems to come at a cost, as reasoning on the basis of lexicographic inference is P^NP-complete, even when restricted to belief bases consisting of Horn-literal rules, i.e. rule bases where every rule's antecedent is a conjunction of atoms and every rule's consequent is a literal [6]. In previous work [7], we have shown that lexicographic inference satisfies syntax splitting [8]. Syntax splitting is a property of inference operators that requires that, for a belief base which can be split syntactically into two parts (i.e. there exist two sub-signatures such that every conditional in the belief base is built up entirely over one of the two sub-signatures), restricting attention to the corresponding sub-signature does not result in a loss or addition of inferences. In other words, syntax splitting ensures we can safely restrict our attention to parts of the belief base that share atoms with a given query, thus seriously lessening the computational strain for many concrete queries. However, this presupposed that parts of a conditional belief base are syntactically independent, meaning that no common atoms are allowed. This might be an overly strong requirement, as the two parts of the belief base might have common elements. Consider the following example:

Example 1. Usually, bikes are chain-driven (c|b), usually chain-driven bikes have multiple gears (g|c), and usually a bike frame consists of four pipes (f|b). The form of the frame is independent of whether a bike is chain-driven and how many gears it has. However, syntax splitting as defined in [8] does not allow us to restrict attention to {(f|b)} when we want to make inferences about the form of a bike frame, as the common atom b prevents us from splitting the belief base into two independent parts.

An intuitively related problem, which was somewhat surprisingly shown to be independent of syntax splitting in [7], is the so-called drowning problem. It consists in the fact that under some inductive inference relations, abnormal individuals do not inherit any properties. It is best illustrated using the canonical Tweety example:

Example 2 (The Drowning Problem). The drowning problem is illustrated by the following conditional belief base ∆ = {(f|b), (b|p), (¬f|p), (e|b)}, which represents the Tweety example, i.e. that birds typically fly, penguins are typically birds, and penguins typically don't fly, together with the additional conditional "birds typically have beaks". The drowning problem is constituted by the fact that some inductive inference operators, such as system Z, do not allow us to infer that penguins typically have beaks (p ̸|∼^Z_∆ e), i.e. the fact that penguins are abnormal when it comes to flying drowns inferences about penguins' beaks. It is well known that lexicographic inference does not suffer from the drowning problem.

The drowning problem seems to be related to syntax splitting. Intuitively, {(e|b)} is unrelated to the rest of the belief base, in the sense that having beaks has nothing to do with flying or having wings, as long as we know we are talking about birds. However, (unconditional) syntax splitting does not allow us to capture this kind of independence, since the atom b prohibits the belief base from being split into information about flying and wings on the one hand, and information about beaks on the other hand. It is exactly this kind of conditional independence between conditionals that we seek to formally capture and study in this paper. In more detail, the contributions of the paper are the following:

1. we introduce and study the notion of conditional splitting of a belief base, a property of conditional belief bases, and generalize the concept of syntax splitting, a property of inductive inference operators, to conditional syntax splitting, thus bringing the idea of conditional independence into the realm of inductive inference operators;
2. we show that lexicographic entailment satisfies conditional syntax splitting;
3. we argue that the drowning effect can be seen as a violation of conditional syntax splitting; and
4. we show how Lehmann's so-called desirable closure properties [1] can be derived from conditional syntax splitting.
Outline of this Paper: We first state all the necessary preliminaries in Section 2 on propositional logic (Section 2.1), reasoning with non-monotonic conditionals (Section 2.2), inductive inference (Section 2.3), System Z (Section 2.4) and lexicographic inference (Section 2.5). In Section 3 we define and study the concept of conditional syntax splitting. In Section 4, we show that lexicographic inference satisfies conditional syntax splitting. In Sections 5 and 6, we show how properties of inductive inference operators previously only discussed informally, namely the drowning effect (Section 5) and the properties introduced by Lehmann 1995 (Section 6), can be seen as special cases of conditional syntax splitting. Finally, we discuss related work in Section 7 and conclude in Section 8.

2. Preliminaries

In the following, we briefly recall some general preliminaries on propositional logic, and technical details on inductive inference.

2.1. Propositional Logic

For a set At of atoms let ℒ(At) be the corresponding propositional language constructed using the usual connectives ∧ (and), ∨ (or), ¬ (negation), → (material implication) and ↔ (material equivalence). A (classical) interpretation (also called possible world) ω for a propositional language ℒ(At) is a function ω : At → {⊤, ⊥}. Let Ω(At) denote the set of all interpretations for At. We simply write Ω if the set of atoms is implicitly given. An interpretation ω satisfies (or is a model of) an atom a ∈ At, denoted by ω |= a, if and only if ω(a) = ⊤. The satisfaction relation |= is extended to formulas as usual. As an abbreviation we sometimes identify an interpretation ω with its complete conjunction, i.e. if a_1, ..., a_n ∈ At are those atoms that are assigned ⊤ by ω and a_{n+1}, ..., a_m ∈ At are those atoms that are assigned ⊥ by ω, we identify ω by a_1 ... a_n ¬a_{n+1} ... ¬a_m (or any permutation of this). For X ⊆ ℒ(At) we also define ω |= X if and only if ω |= A for every A ∈ X. Define the set of models Mod(X) = {ω ∈ Ω(At) | ω |= X} for every formula or set of formulas X. A formula or set of formulas X_1 entails another formula or set of formulas X_2, denoted by X_1 |= X_2, if Mod(X_1) ⊆ Mod(X_2). Where θ ⊆ Σ and ω ∈ Ω(Σ), we denote by ω^θ the restriction of ω to θ, i.e. ω^θ is the interpretation over θ that agrees with ω on all atoms in θ. Where Σ_i, Σ_j ⊆ Σ, Ω(Σ_i) will also be denoted by Ω_i for any i ∈ ℕ, and likewise by Ω_{i,j} we denote Ω(Σ_i ∪ Σ_j) (for i, j ∈ ℕ). Likewise, for some X ⊆ ℒ(Σ_i), we define Mod_i(X) = {ω ∈ Ω_i | ω |= X}.
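To make these conventions concrete, the following Python sketch enumerates worlds over a signature, computes model sets and entailment, and restricts a world to a sub-signature (the operation ω^θ above). It only illustrates the definitions of this subsection; the helper names (worlds, models, entails, restrict) are ours and not taken from the paper.

    from itertools import product

    def worlds(atoms):
        """Enumerate all interpretations Omega(At) as dicts atom -> bool."""
        atoms = sorted(atoms)
        return [dict(zip(atoms, values)) for values in product([True, False], repeat=len(atoms))]

    def models(formula, atoms):
        """Mod(X) for a formula given as a predicate on worlds."""
        return [w for w in worlds(atoms) if formula(w)]

    def entails(x1, x2, atoms):
        """X1 |= X2 iff Mod(X1) is a subset of Mod(X2)."""
        return all(x2(w) for w in models(x1, atoms))

    def restrict(world, theta):
        """The restriction omega^theta of a world to the atoms in theta."""
        return {a: v for a, v in world.items() if a in theta}

    # Example: b -> f entails b -> (f or e) over the signature {b, f, e}.
    sigma = {"b", "f", "e"}
    bird_implies_fly = lambda w: (not w["b"]) or w["f"]
    weaker = lambda w: (not w["b"]) or w["f"] or w["e"]
    print(entails(bird_implies_fly, weaker, sigma))            # True
    print(restrict({"b": True, "f": False, "e": True}, {"b", "e"}))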
                                                                                 𝐴 |∼ Δ 𝐵.
2.2. Reasoning with Nonmonotonic Conditionals

Given a language ℒ, conditionals are objects of the form (B|A) where A, B ∈ ℒ. The set of all conditionals based on a language ℒ is defined as (ℒ|ℒ) = {(B|A) | A, B ∈ ℒ}. We follow the approach of [9], who considered conditionals as generalized indicator functions for possible worlds resp. propositional interpretations ω:

    (B|A)(ω) = 1  if ω |= A ∧ B,
               0  if ω |= A ∧ ¬B,
               u  if ω |= ¬A,                                   (1)

where u stands for unknown or indeterminate. In other words, a possible world ω verifies a conditional (B|A) iff it satisfies both antecedent and conclusion ((B|A)(ω) = 1); it falsifies, or violates, it iff it satisfies the antecedent but not the conclusion ((B|A)(ω) = 0); otherwise the conditional is not applicable, i.e. the interpretation does not satisfy the antecedent ((B|A)(ω) = u). We say that ω satisfies a conditional (B|A) iff it does not falsify it, i.e. iff ω satisfies its material counterpart A → B.

Given a total preorder (in short, TPO) ⪯ on possible worlds, representing relative plausibility, A ⪯ B iff ω ⪯ ω′ for some ω ∈ min_⪯(Mod(A)) and some ω′ ∈ min_⪯(Mod(B)). This allows for expressing the validity of defeasible inferences by stating that A |∼_⪯ B iff (A ∧ B) ≺ (A ∧ ¬B) [10]. As is usual, we denote ω ⪯ ω′ and ω′ ⪯ ω by ω ≈ ω′, and ω ⪯ ω′ and ω′ ̸⪯ ω by ω ≺ ω′ (and similarly for formulas). We can marginalize total preorders and even inference relations, i.e. restrict them to sublanguages, in a natural way: if Θ ⊆ Σ, then any TPO ⪯ on Ω(Σ) induces uniquely a marginalized TPO ⪯|_Θ on Ω(Θ) by setting

    ω_1^Θ ⪯|_Θ ω_2^Θ  iff  ω_1^Θ ⪯ ω_2^Θ.                       (2)

Note that on the right hand side of the iff condition above, ω_1^Θ and ω_2^Θ are considered as propositions in the superlanguage ℒ(Σ), hence ω_1^Θ ⪯ ω_2^Θ is well defined [11]. Similarly, any inference relation |∼ on ℒ(Σ) induces a marginalized inference relation |∼|_Θ on ℒ(Θ) by setting

    A |∼|_Θ B  iff  A |∼ B                                      (3)

for any A, B ∈ ℒ(Θ).

An obvious implementation of total preorders are ordinal conditional functions (OCFs), also called ranking functions, κ : Ω → ℕ ∪ {∞} with κ^{-1}(0) ≠ ∅ [12]. They express degrees of (im)plausibility of possible worlds and of propositional formulas A by setting κ(A) := min{κ(ω) | ω |= A}. A conditional (B|A) is accepted by κ iff A |∼_κ B iff κ(A ∧ B) < κ(A ∧ ¬B).
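The indicator-function reading of conditionals in equation (1) and the acceptance condition for OCFs can be phrased directly as a small Python sketch. The representation below (formulas as predicates on worlds, an OCF as a dictionary of ranks) and all function names are illustrative assumptions, not part of the paper.

    from itertools import product

    def worlds(atoms):
        atoms = sorted(atoms)
        return [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

    def evaluate(conditional, world):
        """(B|A)(omega): 1 (verified), 0 (falsified) or 'u' (not applicable), as in eq. (1)."""
        consequent, antecedent = conditional
        if not antecedent(world):
            return "u"
        return 1 if consequent(world) else 0

    def key(world):
        return frozenset(a for a, v in world.items() if v)

    def kappa_of(formula, kappa, atoms):
        """kappa(A) = min of kappa(omega) over the models of A (infinity if A has no model)."""
        ranks = [kappa[key(w)] for w in worlds(atoms) if formula(w)]
        return min(ranks) if ranks else float("inf")

    def accepts(kappa, conditional, atoms):
        """kappa accepts (B|A) iff kappa(A and B) < kappa(A and not B)."""
        consequent, antecedent = conditional
        a_and_b = lambda w: antecedent(w) and consequent(w)
        a_and_not_b = lambda w: antecedent(w) and not consequent(w)
        return kappa_of(a_and_b, kappa, atoms) < kappa_of(a_and_not_b, kappa, atoms)

    # Tiny usage over {b, f}: rank worlds with b and not f at 1, all others at 0.
    atoms = {"b", "f"}
    kappa = {key(w): (1 if (w["b"] and not w["f"]) else 0) for w in worlds(atoms)}
    bird_flies = (lambda w: w["f"], lambda w: w["b"])    # the conditional (f|b)
    print(evaluate(bird_flies, {"b": True, "f": False})) # 0: falsified
    print(accepts(kappa, bird_flies, atoms))             # True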




2.3. Inductive Inference Operators

In this paper, we will be interested in inference relations |∼_∆ parametrized by a conditional belief base ∆. In more detail, such inference relations are induced by ∆, in the sense that ∆ serves as a starting point for the inferences in |∼_∆. We call such operators inductive inference operators:

Definition 1 ([8]). An inductive inference operator (from conditional belief bases) is a mapping C that assigns to each conditional belief base ∆ ⊆ (ℒ|ℒ) an inference relation |∼_∆ on ℒ that satisfies the following basic requirement of direct inference:

DI: If ∆ is a conditional belief base and |∼_∆ is an inference relation that is induced by ∆, then (B|A) ∈ ∆ implies A |∼_∆ B.

Examples of inductive inference operators include system P [13], System Z ([5], see Section 2.4), lexicographic inference ([1], see Section 2.5) and c-representations [14].

As already indicated in the previous subsection, inference relations can be obtained on the basis of TPOs respectively OCFs:

Definition 2. A model-based inductive inference operator for total preorders (on Ω) is a mapping C^tpo that assigns to each conditional belief base ∆ a total preorder ⪯_∆ on Ω s.t. A |∼_{⪯_∆} B for every (B|A) ∈ ∆ (i.e. s.t. DI is ensured). A model-based inductive inference operator for OCFs (on Ω) is a mapping C^ocf that assigns to each conditional belief base ∆ an OCF κ_∆ on Ω s.t. ∆ is accepted by κ_∆ (i.e. s.t. DI is ensured).

Examples of inductive inference operators for OCFs are System Z ([5], see Sec. 2.4) and c-representations [14], whereas lexicographic inference ([1], see Sec. 2.5) is an example of an inductive inference operator for TPOs.

To define the property of syntax splitting [8], we assume a conditional belief base ∆ that can be split into subbases ∆_1, ∆_2 s.t. ∆_i ⊂ (ℒ_i|ℒ_i) with ℒ_i = ℒ(Σ_i) for i = 1, 2, s.t. Σ_1 ∩ Σ_2 = ∅ and Σ_1 ∪ Σ_2 = Σ, writing

    ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2

whenever this is the case.

Definition 3 (Independence (Ind), [8]). An inductive inference operator C satisfies (Ind) if for any ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 and for any A, B ∈ ℒ_i, C ∈ ℒ_j (i, j ∈ {1, 2}, j ≠ i),

    A |∼_∆ B  iff  AC |∼_∆ B.

Definition 4 (Relevance (Rel), [8]). An inductive inference operator C satisfies (Rel) if for any ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 and for any A, B ∈ ℒ_i (i ∈ {1, 2}),

    A |∼_∆ B  iff  A |∼_{∆_i} B.

Definition 5 (Syntax splitting (SynSplit), [8]). An inductive inference operator C satisfies (SynSplit) if it satisfies (Ind) and (Rel).

Thus, Ind requires that inferences from one sub-language are independent from formulas over the other sub-language, if the belief base splits over the respective sub-languages. In other words, information on the basis of one sub-language does not influence inferences made in the other sub-language. Rel, on the other hand, restricts the scope of inferences, by requiring that inferences in a sub-language can be made on the basis of the conditionals in a conditional belief base formulated on the basis of that sub-language. SynSplit combines these two properties.
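The purely syntactic side of such a split is mechanically checkable. The sketch below (in Python, with names of our own choosing) tests whether two candidate disjoint sub-signatures split a belief base in the sense above, and shows that the Tweety base of Example 2 cannot be split so as to separate the beak conditional, since the atom b is shared.

    def splits(delta_atoms, sigma1, sigma2):
        """delta_atoms: one set of atoms per conditional (B|A) of the belief base."""
        if sigma1 & sigma2:
            return False
        return all(atoms <= sigma1 or atoms <= sigma2 for atoms in delta_atoms)

    # The Tweety base of Example 2: {(f|b), (b|p), (not-f|p), (e|b)}, by atom sets.
    tweety = [{"f", "b"}, {"b", "p"}, {"f", "p"}, {"e", "b"}]
    # No split separates the beak conditional (e|b) from the flying part: b is shared.
    print(splits(tweety, {"f", "p", "b"}, {"e"}))   # False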
2.4. System Z

We present system Z, defined in [5], as follows. A conditional (B|A) is tolerated by a finite set of conditionals ∆ if there is a possible world ω with (B|A)(ω) = 1 and (B′|A′)(ω) ≠ 0 for all (B′|A′) ∈ ∆, i.e. ω verifies (B|A) and does not falsify any (other) conditional in ∆. The Z-partitioning (∆_0, ..., ∆_n) of ∆ is defined as:

- ∆_0 = {δ ∈ ∆ | ∆ tolerates δ};
- ∆_1, ..., ∆_n is the Z-partitioning of ∆ ∖ ∆_0.

For δ ∈ ∆ we define: Z_∆(δ) = i iff δ ∈ ∆_i and (∆_0, ..., ∆_n) is the Z-partitioning of ∆. Finally, the ranking function κ^Z_∆ is defined via κ^Z_∆(ω) = max{Z_∆(δ) | δ(ω) = 0, δ ∈ ∆} + 1, with max ∅ = −1. The resulting inductive inference operator C^ocf_{κ^Z_∆} is denoted by C^Z.

In the literature, system Z has also been called rational closure [15]. An inference relation |∼_∆ based on ∆ s.t. A |∼^Z_∆ B implies A |∼_∆ B is called RC-extending [16]. An RC-extending inference relation has also been called a refinement of System Z [17]. We call an inductive inference operator C RC-extending iff every C(∆) is RC-extending.

We now illustrate OCFs in general and System Z in particular with the well-known "Tweety the penguin" example.

Example 3. Let ∆ = {(f|b), (b|p), (¬f|p)} be a sub-base of the belief base used in Example 2. This conditional belief base has the following Z-partitioning: ∆_0 = {(f|b)} and ∆_1 = {(b|p), (¬f|p)}. This gives rise to the following κ^Z_∆-ordering over the worlds based on the signature {b, f, p}:

    ω          κ^Z_∆(ω)   |   ω            κ^Z_∆(ω)
    p b f         2       |   ¬p b f          0
    p b ¬f        1       |   ¬p b ¬f         1
    p ¬b f        2       |   ¬p ¬b f         0
    p ¬b ¬f       2       |   ¬p ¬b ¬f        0

As an example of a (non-)inference, observe that e.g. ⊤ |∼^Z_∆ ¬p and p ∧ f ̸|∼^Z_∆ b.
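The Z-partitioning, the ranking κ^Z_∆ and the resulting inferences can be computed directly from the definitions above. The following Python sketch does so for the belief bases of Examples 2 and 3; it reproduces the inferences noted above and the drowning of the beak conditional under system Z. The encoding of conditionals as pairs of predicates and all helper names are our own illustrative choices.

    from itertools import product

    def worlds(atoms):
        atoms = sorted(atoms)
        return [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

    def falsifies(w, cond):
        consequent, antecedent = cond
        return antecedent(w) and not consequent(w)

    def verifies(w, cond):
        consequent, antecedent = cond
        return antecedent(w) and consequent(w)

    def tolerates(delta, cond, atoms):
        return any(verifies(w, cond) and not any(falsifies(w, d) for d in delta)
                   for w in worlds(atoms))

    def z_partition(delta, atoms):
        partition, rest = [], list(delta)
        while rest:
            layer = [d for d in rest if tolerates(rest, d, atoms)]
            if not layer:                       # no Z-partition exists for this base
                raise ValueError("belief base has no Z-partition")
            partition.append(layer)
            rest = [d for d in rest if d not in layer]
        return partition

    def kappa_z(delta, atoms):
        partition = z_partition(delta, atoms)
        rank = {id(d): i for i, layer in enumerate(partition) for d in layer}
        def kappa(w):
            falsified = [rank[id(d)] for d in delta if falsifies(w, d)]
            return max(falsified) + 1 if falsified else 0
        return kappa

    def entails_z(delta, atoms, antecedent, consequent):
        kappa = kappa_z(delta, atoms)
        def k(formula):
            ranks = [kappa(w) for w in worlds(atoms) if formula(w)]
            return min(ranks) if ranks else float("inf")
        return k(lambda w: antecedent(w) and consequent(w)) < \
               k(lambda w: antecedent(w) and not consequent(w))

    b, f, p, e = (lambda w: w["b"]), (lambda w: w["f"]), (lambda w: w["p"]), (lambda w: w["e"])
    neg = lambda phi: (lambda w: not phi(w))

    # Example 2 base: birds fly, penguins are birds, penguins don't fly, birds have beaks.
    delta = [(f, b), (b, p), (neg(f), p), (e, b)]
    atoms = {"b", "f", "p", "e"}
    print(entails_z(delta, atoms, lambda w: True, neg(p)))  # True: top Z-entails not-p
    print(entails_z(delta, atoms, p, e))                    # False: drowning, p does not Z-entail e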
2.5. Lexicographic Entailment

We recall lexicographic inference as introduced by [1]. For some conditional belief base ∆, the order ⪯^lex_∆ is defined as follows. Given ω ∈ Ω and ∆′ ⊆ ∆, V(ω, ∆′) = |{(B|A) ∈ ∆′ | (B|A)(ω) = 0}|. Given a set of conditionals ∆ Z-partitioned in (∆_0, ..., ∆_n), the lexicographic vector for a world ω ∈ Ω is the vector lex(ω) = (V(ω, ∆_0), ..., V(ω, ∆_n)). Given two vectors (x_1, ..., x_n) and (y_1, ..., y_n), (x_1, ..., x_n) ⪯^lex (y_1, ..., y_n) iff the two vectors are equal or there is some j ⩽ n s.t. x_k = y_k for every k > j and x_j < y_j; that is, vectors are compared on the highest partition index at which they differ. Finally, ω ⪯^lex_∆ ω′ iff lex(ω) ⪯^lex lex(ω′). The resulting inductive inference operator C^tpo_{⪯^lex} will be denoted by C^lex to avoid clutter.

In [1], lexicographic inference was shown to be RC-extending (for finite conditional belief bases):

Proposition 1 ([1, Theorem 3]). For any A ∈ ℒ s.t. κ^Z_∆(A) is finite, A |∼^Z_∆ B implies A |∼^lex_∆ B.

Example 4 (Example 3 ctd.). For the Tweety belief base ∆ as in Example 3 we obtain the following lex(ω)-vectors:

    ω          lex(ω)   |   ω            lex(ω)
    p b f      (0,1)    |   ¬p b f       (0,0)
    p b ¬f     (1,0)    |   ¬p b ¬f      (1,0)
    p ¬b f     (0,2)    |   ¬p ¬b f      (0,0)
    p ¬b ¬f    (0,1)    |   ¬p ¬b ¬f     (0,0)

The lex-vectors are ordered as follows:

    (0,0) ≺^lex (1,0) ≺^lex (0,1) ≺^lex (0,2).

Observe that e.g. ⊤ |∼^lex_∆ ¬p (since lex(⊤ ∧ ¬p) = (0,0) ≺^lex lex(⊤ ∧ p) = (1,0)) and p ∧ f |∼^lex_∆ b.
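Analogously, lexicographic entailment can be computed by comparing lex-vectors, most significant (highest Z-layer) component first. The Python sketch below (again with illustrative names of our own, and assuming a consistent belief base so that a Z-partition exists) reproduces the inferences of Example 4 and shows that, in contrast to system Z, lexicographic inference still lets penguins inherit beaks from the base of Example 2.

    from itertools import product

    def worlds(atoms):
        atoms = sorted(atoms)
        return [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

    def falsifies(w, cond):
        consequent, antecedent = cond
        return antecedent(w) and not consequent(w)

    def verifies(w, cond):
        consequent, antecedent = cond
        return antecedent(w) and consequent(w)

    def tolerates(delta, cond, atoms):
        return any(verifies(w, cond) and not any(falsifies(w, d) for d in delta)
                   for w in worlds(atoms))

    def z_partition(delta, atoms):
        # Assumes a consistent belief base, i.e. every round tolerates at least one conditional.
        partition, rest = [], list(delta)
        while rest:
            layer = [d for d in rest if tolerates(rest, d, atoms)]
            partition.append(layer)
            rest = [d for d in rest if d not in layer]
        return partition

    def lex_vector(w, partition):
        """lex(omega) = (V(omega, Delta_0), ..., V(omega, Delta_n))."""
        return tuple(sum(1 for d in layer if falsifies(w, d)) for layer in partition)

    def lex_key(vector):
        """Sort key: the last components (highest Z-layers) are most significant."""
        return tuple(reversed(vector))

    def entails_lex(delta, atoms, antecedent, consequent):
        partition = z_partition(delta, atoms)
        def best(formula):
            vectors = [lex_vector(w, partition) for w in worlds(atoms) if formula(w)]
            return min((lex_key(v) for v in vectors), default=None)
        ab = best(lambda w: antecedent(w) and consequent(w))
        a_not_b = best(lambda w: antecedent(w) and not consequent(w))
        return a_not_b is None or (ab is not None and ab < a_not_b)

    b, f, p, e = (lambda w: w["b"]), (lambda w: w["f"]), (lambda w: w["p"]), (lambda w: w["e"])
    neg = lambda phi: (lambda w: not phi(w))

    tweety = [(f, b), (b, p), (neg(f), p)]                  # the base of Examples 3 and 4
    print(entails_lex(tweety, {"b", "f", "p"}, lambda w: True, neg(p)))      # True
    print(entails_lex(tweety, {"b", "f", "p"}, lambda w: p(w) and f(w), b))  # True

    full = [(f, b), (b, p), (neg(f), p), (e, b)]            # the base of Example 2 with beaks
    print(entails_lex(full, {"b", "f", "p", "e"}, p, e))    # True: no drowning under lex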
3. Conditional Syntax Splitting

We now introduce a conditional version of syntax splitting. A first central idea is the syntactic notion of conditional splitting, a property of belief bases.

Definition 6. We say a conditional belief base ∆ can be split into subbases ∆_1, ∆_2 conditional on a sub-alphabet Σ_3, if ∆_i ⊂ (ℒ(Σ_i ∪ Σ_3) | ℒ(Σ_i ∪ Σ_3)) for i = 1, 2 s.t. Σ_1, Σ_2 and Σ_3 are pairwise disjoint and Σ = Σ_1 ∪ Σ_2 ∪ Σ_3, writing:

    ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3.

Intuitively, a conditional belief base can be split into Σ_1 and Σ_2 conditional on Σ_3 if every conditional is built up from atoms in Σ_1 ∪ Σ_3 or atoms in Σ_2 ∪ Σ_3.

The above notion of conditional syntax splitting, however, is too strong, in the sense that it does not warrant satisfaction of conditional variants of relevance and independence (we will define them in formal detail below) for lexicographic inference. The underlying problem is that toleration might not be respected by conditional belief bases that conditionally split:

Example 5. Let ∆ = {(x|b), (¬x|a), (c|a ∧ b)}. Then

    ∆ = {(x|b), (¬x|a)} ∪_{{x},{c}} {(c|a ∧ b)} | {a, b}.

However, this notion of purely syntactical conditional independence is not reflected on the level of tolerance (and therefore entailment). Indeed, {(c|a ∧ b)} (trivially) tolerates itself, i.e. Z_{{(c|a∧b)}}((c|a ∧ b)) = 0, yet ∆ does not tolerate (c|a ∧ b), i.e. Z_∆((c|a ∧ b)) = 1.

This means that for system Z and lexicographic entailment, conditional relevance (here only introduced informally) is violated for this belief base. In more detail, even though ∆ = {(x|b), (¬x|a)} ∪_{{x},{c}} {(c|a ∧ b)} | {a, b}, we have e.g. ⊤ ̸|∼^lex_{{(c|a∧b)}} ¬(a ∧ b) whereas ⊤ |∼^lex_∆ ¬(a ∧ b) (and likewise for system Z).

What happens here is that (x|b) and (¬x|a) act as "constraints" on a and b being true together, which in turn is needed for (c|a ∧ b) to be tolerated. In other words, purely syntactic conditional splitting is not reflected on the semantic level (in contradistinction to unconditional splitting). We can exclude such cases by using the following weaker notion of safe conditional syntax splitting:

Definition 7. A conditional belief base ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3 can be safely split into subbases ∆_1, ∆_2 conditional on a sub-alphabet Σ_3, writing

    ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3,

if for every ω ∈ Ω(Σ_i ∪ Σ_3), there is a ω^j ∈ Ω(Σ_j) s.t. ω^j ω^3 ̸|= ⋁_{(F|E)∈∆_j} E ∧ ¬F (for i, j = 1, 2 and i ≠ j).

The notion of safe splitting is explained as follows: ∆ can be safely split into ∆_1 and ∆_2 conditional on Σ_3 if it can be split into ∆_1 and ∆_2 conditional on Σ_3 and, additionally, for every world ω^i ω^3 in the subsignature Σ_i ∪ Σ_3, we can find a world ω^j in the subsignature Σ_j (i, j = 1, 2 and j ≠ i) s.t. no conditional δ ∈ ∆_j is falsified by ω^i ω^j ω^3 (or, equivalently, by ω^j ω^3). We will show some more syntactically formulated conditions that ensure safe splitting below.

We argue here that safe splitting faithfully captures independencies of two conditional belief bases conditional on a sub-signature Σ_3. Indeed, safe splitting requires that (1) all conditionals are built up from the sub-signatures Σ_1 ∪ Σ_3 or Σ_2 ∪ Σ_3 (i.e. ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3), and (2) any information on Σ_i ∪ Σ_3 is compatible with ∆_j, i.e. no world ω^i ω^3 causes a conditional in ∆_j to be violated. In other words, toleration with respect to ∆_j is independent of ∆_i.

We now delineate some more syntactic conditions that ensure safe syntax splitting. These conditions are typically easier to check, and might reasonably be expected to hold for certain natural language scenarios. For example, if it holds that (1) ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3, (2) all antecedents and consequents (of conditionals in ∆) using elements of the common sub-alphabet Σ_3 are equivalent, and (3) all material versions of the conditional sub-base ∆_i are consistent with the set of consequents of the conditionals whose antecedent uses atoms in the common sub-alphabet Σ_3, then ∆ can be safely split into ∆_1 and ∆_2 conditional on Σ_3.

Proposition 2. Let a conditional belief base ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3 be given. If there is a C ∈ ℒ(Σ_3) s.t. for every conditional (B|A) ∈ ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3:

1. B ∈ ℒ(Σ_1) ∪ ℒ(Σ_2), or B ≡ C;
2. A ∈ ℒ(Σ_1) ∪ ℒ(Σ_2), or A ≡ C;
3. ⋀_{(G|H)∈∆_i} H → G ̸⊢ ⋁{¬F | (F|C′) ∈ ∆_i, C′ ≡ C} for i = 1, 2;¹

then ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3.

Proof. Consider some ω^3 ∈ Ω(Σ_3). If ω^3 ̸|= C we are done. Suppose therefore ω^3 |= C. With the third condition, there is an ω^2 ω′^3 ∈ Mod(⋀_{(G|H)∈∆_2} H → G) s.t. ω^2 ω′^3 |= F for every (F|C′) ∈ ∆_2 s.t. C′ ≡ C. Notice that, in view of the first two conditions, for any (B|A) ∈ ∆_2, either ω^2 ω′^3 |= A → B, or B ≡ C, or A ≡ C. Furthermore, in the case where A ≡ C, ω^2 ω′^3 |= B. Since B ∈ ℒ(Σ_2) or B ≡ C, also ω^2 ω^3 |= B. Altogether, ω^2 ω^3 |= ⋀_{(G|H)∈∆_2} H → G. Thus, ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3.

A simpler case of this condition is a belief base where all antecedents derive from the common alphabet Σ_3 and all consequents derive from either Σ_1 or Σ_2.

Notice that e.g. the conditional belief base from Example 2 has the form described in Proposition 2:

Example 6. Consider again ∆ from Example 2, and let Σ_1 = {f, p}, Σ_2 = {e} and Σ_3 = {b}. Observe that:

    ∆ = {(f|b), (b|p), (¬f|p)} ∪_{Σ_1,Σ_2} {(e|b)} | Σ_3.

Furthermore, the first two items in Proposition 2 are satisfied, as every conditional is either completely on the basis of the alphabet {f, p} or has b as an antecedent or a consequent. Finally, the last condition is satisfied as {b → e} ̸|= ¬e and {b → f, p → b, p → ¬f} ̸|= ¬f. We thus see that ∆ = {(f|b), (b|p), (¬f|p)} ∪^s_{Σ_1,Σ_2} {(e|b)} | Σ_3.

The bicycle example is also of this form:

Example 7. Consider again ∆ from Example 1. We see that:

    ∆ = {(c|b), (g|c)} ∪^s_{{g,c},{f}} {(f|b)} | {b}.

Remark 1. Note that a weaker prerequisite, such as taking only the first two conditions in Proposition 2, does not work. In more detail, consider requiring that there is a C ∈ ℒ(Σ_3) s.t. for every conditional (B|A) ∈ ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3:

1. B ∈ ℒ(Σ_1) ∪ ℒ(Σ_2);
2. A ∈ ℒ(Σ_1) ∪ ℒ(Σ_2), or A ≡ C.

In other words, these two conditions say that conditionals are either fully from the language based on either Σ_1 or Σ_2, or their antecedent is fully based on Σ_3, and there is only a single formula allowed to occur as such. However, this notion is not consistent with toleration. Consider ∆ = {(y|⊤), (¬y|a), (x|a)}. Then

    ∆ = {(y|⊤), (¬y|a)} ∪_{{y},{x}} {(x|a)} | {a}

and {(x|a)} tolerates itself (trivially), yet {(y|⊤), (¬y|a), (x|a)} does not tolerate (x|a). It is perhaps not surprising that such a purely syntactic condition is elusive.

¹ Or, equivalently, {H → G | (G|H) ∈ ∆_i} ∪ {F | (F|C′) ∈ ∆_i, C′ ≡ C} ̸⊢ ⊥.
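Definitions 6 and 7 can be checked mechanically for small belief bases such as those of Examples 6 and 7. The Python sketch below does this for literal conditionals; the representation of conditionals as pairs of (atom, polarity) literals and all helper names are our own assumptions, not notation from the paper.

    from itertools import product

    def worlds(atoms):
        atoms = sorted(atoms)
        return [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

    def holds(literal, world):
        atom, positive = literal
        return world[atom] is positive

    def falsifies(world, cond):
        consequent, antecedent = cond
        return holds(antecedent, world) and not holds(consequent, world)

    def atoms_of(cond):
        (c_atom, _), (a_atom, _) = cond
        return {c_atom, a_atom}

    def conditionally_splits(delta1, delta2, sigma1, sigma2, sigma3):
        """Definition 6: every conditional of Delta_i is over Sigma_i union Sigma_3."""
        disjoint = not (sigma1 & sigma2 or sigma1 & sigma3 or sigma2 & sigma3)
        return disjoint and \
            all(atoms_of(c) <= sigma1 | sigma3 for c in delta1) and \
            all(atoms_of(c) <= sigma2 | sigma3 for c in delta2)

    def safely_splits(delta1, delta2, sigma1, sigma2, sigma3):
        """Definition 7: additionally, every Sigma_3-world extends to a Sigma_j-world
        that falsifies no conditional of Delta_j (j = 1, 2)."""
        if not conditionally_splits(delta1, delta2, sigma1, sigma2, sigma3):
            return False
        for delta_j, sigma_j in ((delta1, sigma1), (delta2, sigma2)):
            for w3 in worlds(sigma3):
                if not any(not any(falsifies({**wj, **w3}, c) for c in delta_j)
                           for wj in worlds(sigma_j)):
                    return False
        return True

    lit = lambda a: (a, True)
    neg = lambda a: (a, False)
    # Example 6: Delta_1 = {(f|b), (b|p), (not-f|p)}, Delta_2 = {(e|b)}, Sigma_3 = {b}.
    delta1 = [(lit("f"), lit("b")), (lit("b"), lit("p")), (neg("f"), lit("p"))]
    delta2 = [(lit("e"), lit("b"))]
    print(safely_splits(delta1, delta2, {"f", "p"}, {"e"}, {"b"}))   # True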
Safe conditional splitting of a conditional belief base is consistent with toleration, in the sense that ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3 implies that toleration of a conditional (B|A) by ∆ is equivalent to toleration of (B|A) by the conditional sub-base ∆_i in which it occurs. This gives further evidence to the fact that safe conditional splitting adequately captures the notion of independence of sub-bases: toleration of a conditional is independent of a (conditionally) unrelated sub-base.

Proposition 3. Let a conditional belief base ∆ = ∆_1 ∪_{Σ_1,Σ_2} ∆_2 | Σ_3 be given. ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3 implies (for any i = 1, 2) that ∆_i tolerates (B|A) ∈ ∆_i iff ∆ tolerates (B|A).

Proof. Suppose ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3. Suppose ∆_i tolerates (B|A) ∈ ∆_i. Wlog let i = 1. This means there is an ω^1 ∈ Ω(Σ_1) and ω^3 ∈ Ω(Σ_3) s.t. ω^1 ω^3 |= A ∧ B and ω^1 ω^3 |= C → D for every (D|C) ∈ ∆_1. Since ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3, there is an ω^2 ∈ Ω(Σ_2) s.t. ω^1 ω^2 ω^3 ̸|= ⋁_{(F|E)∈∆_2} E ∧ ¬F, i.e. ω^1 ω^2 ω^3 |= E → F for every (F|E) ∈ ∆_2. Thus, ∆ tolerates (B|A). The other direction is immediate.

We now move to the formulation of conditional syntax splitting, a property of inductive inference relations that expresses that the independencies between sub-bases of conditionals, as encoded in safe splitting, are respected by an inductive inference relation.

Conditional independence (CInd) and safe conditional relevance (CRel) are defined analogously to (Ind) and (Rel), but now assuming that a conditional belief base can be safely split and taking into account that we have full information on the "conditional pivot" Σ_3:

Definition 8. An inductive inference operator C satisfies (CInd) if for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3, for any A, B ∈ ℒ(Σ_i), C ∈ ℒ(Σ_j) (for i, j ∈ {1, 2}, j ≠ i) and any complete conjunction D ∈ ℒ(Σ_3),

    AD |∼_∆ B  iff  ADC |∼_∆ B.

Thus, an inductive inference operator satisfies conditional independence if, for any ∆ that safely splits into ∆_1 and ∆_2 conditional on Σ_3, whenever we have all the necessary information about Σ_3, inferences from one sub-language are independent from formulas over the other sub-language.

Definition 9. An inductive inference operator C satisfies (CRel) if for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3, for any A, B ∈ ℒ(Σ_i) (for i ∈ {1, 2}) and any complete conjunction D ∈ ℒ(Σ_3),

    AD |∼_∆ B  iff  AD |∼_{∆_i} B.

Thus, CRel restricts the scope of inference by requiring that inferences in the sub-language Σ_1 ∪ Σ_3 can be made on the basis of the conditionals formulated over that sub-language.

Conditional syntax splitting (CSynSplit) combines the two properties (CInd) and (CRel):

Definition 10. An inductive inference operator C satisfies conditional syntax splitting (CSynSplit) if it satisfies (CInd) and (CRel).

We now proceed with the study of conditional syntax splitting. We first analyse the properties CInd and CRel for TPOs. We first notice that CInd and CRel for inductive inference operators for TPOs respectively for OCFs are equivalent to the following properties (for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3 and for A, B ∈ ℒ(Σ_i), complete conjunction D ∈ ℒ(Σ_3), C ∈ ℒ(Σ_j), i, j = 1, 2 and i ≠ j):

    CInd^tpo:  AD ⪯_∆ BD  iff  ACD ⪯_∆ BCD
    CRel^tpo:  AD ⪯_∆ BD  iff  AD ⪯_{∆_i} BD
    CInd^ocf:  κ_∆(AD) ⩽ κ_∆(BD)  iff  κ_∆(ACD) ⩽ κ_∆(BCD)
    CRel^ocf:  κ_∆(AD) ⩽ κ_∆(BD)  iff  κ_{∆_i}(AD) ⩽ κ_{∆_i}(BD)

We now connect CInd to the notion of conditional independence of TPOs as known from belief revision. For this, we need the following notion taken from [18]:

Definition 11 ([18]). Let ⪯ be a total preorder on Ω(Σ), and let Σ_1, Σ_2, Σ_3 be three (disjoint) subsignatures of Σ. Then Σ_1 and Σ_2 are independent conditional on Σ_3, in symbols Σ_1 ⫫_⪯ Σ_2 | Σ_3, if for all ω_1^1, ω_2^1 ∈ Ω(Σ_1), ω_1^2, ω_2^2 ∈ Ω(Σ_2) and ω^3 ∈ Ω(Σ_3) it holds that, for all i, j ∈ {1, 2} with i ≠ j,

    ω_1^i ω_1^j ω^3 ⪯ ω_2^i ω_1^j ω^3  iff  ω_1^i ω^3 ⪯ ω_2^i ω^3.        (4)

Independence of two subsignatures Σ_i and Σ_j conditional on Σ_3 means that, in the context of fixed information about Σ_3, information about Σ_j is irrelevant for the ordering of worlds based on Σ_i: ω_1^j can be "cancelled out".
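For small signatures, Definition 11 can be tested by brute force. The following Python sketch checks equation (4) for a total preorder given as a rank function on full worlds (lower rank meaning more plausible), comparing partial worlds as propositions via the minimal rank of their completions, as in Section 2.2. The function names are illustrative assumptions; by Propositions 4 and 11 below, plugging in a ranking that encodes ⪯^lex_∆ for a safely split base is expected to return True.

    from itertools import product

    def worlds(atoms):
        atoms = sorted(atoms)
        return [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

    def rank_of_partial(partial, rank, sigma):
        """Minimal rank over all completions of a partial world to the full signature."""
        rest = sigma - set(partial)
        return min(rank(dict(partial, **w)) for w in worlds(rest))

    def conditionally_independent(rank, sigma1, sigma2, sigma3):
        """Sigma_1 and Sigma_2 independent conditional on Sigma_3 w.r.t. the TPO induced
        by `rank`: fixing any Sigma_j-context never changes comparisons on Sigma_i."""
        sigma = sigma1 | sigma2 | sigma3
        for si, sj in ((sigma1, sigma2), (sigma2, sigma1)):
            for w3 in worlds(sigma3):
                for wi1, wi2 in product(worlds(si), repeat=2):
                    plain = (rank_of_partial({**wi1, **w3}, rank, sigma)
                             <= rank_of_partial({**wi2, **w3}, rank, sigma))
                    for wj in worlds(sj):
                        joint = (rank({**wi1, **wj, **w3}) <= rank({**wi2, **wj, **w3}))
                        if joint != plain:
                            return False
        return True

    # Tiny usage over {a, b, c}: ranking worlds by their number of true atoms makes
    # {a} and {b} independent conditional on {c}.
    count_rank = lambda w: sum(w.values())
    print(conditionally_independent(count_rank, {"a"}, {"b"}, {"c"}))   # True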

Proposition 4. An inductive inference operator for TPOs C^tpo: ∆ ↦→ ⪯_∆ on ℒ satisfies (CInd) iff for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3, it holds that Σ_1 ⫫_{⪯_∆} Σ_2 | Σ_3.

Proof. For the ⇒-direction, suppose that C^tpo satisfies (CInd) and ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3. Consider some ω_1^1, ω_2^1 ∈ Ω(Σ_1), ω^2 ∈ Ω(Σ_2) and ω^3 ∈ Ω(Σ_3), and suppose that ω_1^1 ω^3 ≺ ω_2^1 ω^3. Thus, ω_1^1 ω^3 ∨ ω_2^1 ω^3 |∼_⪯ ¬ω_2^1. Thus, with (CInd^tpo), ω_1^1 ω^2 ω^3 ∨ ω_2^1 ω^2 ω^3 |∼_⪯ ¬ω_2^1, and thus ω_1^1 ω^2 ω^3 ≺ ω_2^1 ω^2 ω^3. The other direction of equation (4) is analogous.

For the ⇐-direction, suppose Σ_1 ⫫_⪯ Σ_2 | Σ_3, and suppose ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3 and AD |∼_∆ B for some A, B ∈ ℒ(Σ_1) and some complete conjunction D ∈ ℒ(Σ_3). Notice that since D is a complete conjunction, there is a unique world ω^3 ∈ Ω(Σ_3) s.t. ω^3 |= D. Then ABD ≺ A¬BD. Consider now some arbitrary C ∈ ℒ(Σ_2). Notice that A¬BD ⪯ A¬BCD. Take some ω_2^1 ω_2^2 ω^3 ∈ min_⪯ Mod(A¬BCD). For any ω_1^1 ω_1^2 ω^3 ∈ min_⪯ Mod(ABD), ω_2^1 ω_1^2 ω^3 ∈ Mod(A¬BD) (as B ∈ ℒ(Σ_1)), and thus ω_1^1 ω_1^2 ω^3 ≺ ω_2^1 ω_1^2 ω^3 (since ABD ≺ A¬BD). With independence, ω_1^1 ω^3 ≺ ω_2^1 ω^3. Again with independence, ω_1^1 ω_2^2 ω^3 ≺ ω_2^1 ω_2^2 ω^3. Since ω_2^2 |= C, ω_1^1 ω_2^2 ω^3 ∈ Mod(ABCD), and thus there is some ω_3^1 ω_3^2 ω^3 ∈ min_⪯ Mod(ABCD) with ω_3^1 ω_3^2 ω^3 ≺ ω_2^1 ω_2^2 ω^3. Since ω_2^1 ω_2^2 ω^3 ∈ min_⪯ Mod(A¬BCD), we have established that ACD |∼_⪯ B.

Proposition 4 establishes a correspondence between the property CInd of inductive inference operators and the notion of conditional independence for TPOs, as already known from belief revision.

Proposition 5. An inductive inference operator for TPOs C^tpo: ∆ ↦→ ⪯_∆ on ℒ satisfies (CRel) iff for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3, it holds that ⪯_{∆_i} = ⪯_∆|_{Σ_i ∪ Σ_3}.

Proof. For the ⇒-direction, suppose that C^tpo satisfies (CRel^tpo) and consider some ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3. Suppose ω_1^1 ω_1^3 ≺_{∆_i} ω_2^1 ω_2^3. Then ω_1^1 ω_1^3 ∨ ω_2^1 ω_2^3 |∼_{∆_i} ¬(ω_2^1 ω_2^3). With (CRel), ω_1^1 ω_1^3 ∨ ω_2^1 ω_2^3 |∼_∆ ¬(ω_2^1 ω_2^3), and thus ω_1^1 ω_1^3 ≺_∆ ω_2^1 ω_2^3, i.e. ω_1^1 ω_1^3 ≺ ω_2^1 ω_2^3 also holds in the marginalization ⪯_∆|_{Σ_1 ∪ Σ_3}. The other direction is analogous.

For the ⇐-direction, suppose that ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3 and ⪯_{∆_i} = ⪯_∆|_{Σ_i ∪ Σ_3}. Suppose now A |∼_∆ B. Then A ∧ B ≺_∆ A ∧ ¬B and thus A ∧ B ≺_{∆_i} A ∧ ¬B, which implies A |∼_{∆_i} B.
We now analyze conditional syntax splitting for inductive inference operators for OCFs. Thanks to the close relationship between rankings and probabilities, there is a straightforward adaptation of conditional independence for OCFs [19, Chapter 7].

Definition 12. Let Σ_1 ∪̇ Σ_2 ∪̇ Σ_3 ⊆ Σ and let κ be an OCF. Σ_1 and Σ_2 are conditionally independent given Σ_3 with respect to κ, in symbols Σ_1 ⫫_κ Σ_2 | Σ_3, if for all ω^1 ∈ Ω(Σ_1), ω^2 ∈ Ω(Σ_2) and ω^3 ∈ Ω(Σ_3), κ(ω^1 | ω^2 ω^3) = κ(ω^1 | ω^3) holds.

As for probabilities, conditional independence for OCFs expresses that information on Σ_2 is redundant for Σ_1 if full information on Σ_3 is available and used.

Proposition 6. An inductive inference operator for OCFs C^ocf: ∆ ↦→ κ_∆ satisfies CInd iff for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3 we have Σ_1 ⫫_{κ_∆} Σ_2 | Σ_3.

Proof. We first recall the following lemma from [18]:

Lemma 7. Let Σ_1, Σ_2, Σ_3 be disjoint subsignatures of Σ and let κ be an OCF. Then Σ_1 ⫫_κ Σ_2 | Σ_3 iff for all ω^1 ∈ Ω(Σ_1), ω^2 ∈ Ω(Σ_2) and ω^3 ∈ Ω(Σ_3), we have κ(ω^1 ω^2 ω^3) = κ(ω^1 ω^3) + κ(ω^2 ω^3) − κ(ω^3).

We now show that: (†) for any A ∈ ℒ(Σ_i) and ω^3 ∈ Ω(Σ_3), κ(Aω^3) = min{κ(ω^i ω^3) | ω^i ω^j ω^3 |= Aω^3}. Wlog let i = 1 and j = 2. Observe that, by definition, κ(Aω^3) = min{κ(ω^1 ω^2 ω^3) | ω^1 ω^2 ω^3 |= Aω^3}. Consider some ω^1 ω^2 ω^3 ∈ Mod_{1,2,3}(Aω^3) s.t. κ(ω^1 ω^2 ω^3) = κ(Aω^3). With Lemma 7, κ(ω^1 ω^2 ω^3) = κ(ω^1 ω^3) + κ(ω^2 ω^3) − κ(ω^3). Suppose now towards a contradiction that κ(ω^1 ω^3) > κ(Aω^3), i.e. there is some ω_⋆^1 ω_⋆^2 ω^3 s.t. ω_⋆^1 ω^3 |= Aω^3 and κ(ω_⋆^1 ω^3) + κ(ω_⋆^2 ω^3) < κ(ω^1 ω^3) + κ(ω^2 ω^3). Suppose first that κ(ω_⋆^2 ω^3) > κ(ω^2 ω^3). Then κ(ω_⋆^1 ω^3) < κ(ω^1 ω^3) and thus κ(ω_⋆^1 ω^2 ω^3) < κ(ω^1 ω^2 ω^3), a contradiction. Suppose therefore that κ(ω_⋆^2 ω^3) ⩽ κ(ω^2 ω^3). Then we can derive that κ(ω_⋆^1 ω^2 ω^3) < κ(ω^1 ω^2 ω^3), a contradiction.

From (†), it follows immediately that κ(ABω^3) < κ(A¬Bω^3) iff κ(ABCω^3) < κ(A¬BCω^3) for any A, B ∈ ℒ(Σ_i), C ∈ ℒ(Σ_j) and ω^3 ∈ Ω(Σ_3) (for i, j = 1, 2 and i ≠ j).

Proposition 8. An inductive inference operator for OCFs C^ocf: ∆ ↦→ κ_∆ satisfies CRel iff for any ∆ = ∆_1 ∪^s_{Σ_1,Σ_2} ∆_2 | Σ_3, it holds that κ_{∆_i} = κ_∆|_{Σ_i ∪ Σ_3}.

Proof. Similar to the proof of Proposition 5.
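The additive characterization recalled in Lemma 7 above also lends itself to a brute-force test. The following Python sketch checks, for an OCF given as a function on full worlds, whether κ(ω^1 ω^2 ω^3) = κ(ω^1 ω^3) + κ(ω^2 ω^3) − κ(ω^3) holds for all worlds over three disjoint sub-signatures; ranks of partial worlds are taken as minima over completions. All names are illustrative assumptions.

    from itertools import product

    def worlds(atoms):
        atoms = sorted(atoms)
        return [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

    def kappa_partial(partial, kappa, sigma):
        """kappa of a partial world: minimum over its completions to the full signature."""
        rest = sigma - set(partial)
        return min(kappa(dict(partial, **w)) for w in worlds(rest))

    def ocf_conditionally_independent(kappa, sigma1, sigma2, sigma3):
        sigma = sigma1 | sigma2 | sigma3
        for w1, w2, w3 in product(worlds(sigma1), worlds(sigma2), worlds(sigma3)):
            joint = kappa({**w1, **w2, **w3})
            if joint != (kappa_partial({**w1, **w3}, kappa, sigma)
                         + kappa_partial({**w2, **w3}, kappa, sigma)
                         - kappa_partial(w3, kappa, sigma)):
                return False
        return True

    # Tiny usage: an additive ranking over {a, b, c} satisfies the characterization.
    additive = lambda w: int(w["a"]) + 2 * int(w["b"]) + int(w["c"])
    print(ocf_conditionally_independent(additive, {"a"}, {"b"}, {"c"}))   # True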
4. Lexicographic Inference Satisfies                                                             =    𝑉 (𝜔 1 𝜔 2 𝜔 3 , ∆𝑗 )
   Conditional Syntax Splitting                                                                  =    𝑉 (𝜔 1 𝜔 3 , ∆1𝑗 ) + 𝑉 (𝜔 2 𝜔 3 , ∆2𝑗 ) − 𝑉 (𝜔 3 , ∆1𝑗 )

In this section, we show that for any conditional belief base that                         and
safely splits conditionally, conditional syntax splitting is satisfied.
We first need to show some intermediate results.                                                      𝑉 (𝜔 ′1 𝜔 2 𝜔 3 , ∆𝑖 )
                                                                                                 =    𝑉 (𝜔 ′1 𝜔 3 , ∆1𝑖 ) + 𝑉 (𝜔 2 𝜔 3 , ∆2𝑖 ) − 𝑉 (𝜔 3 , ∆1𝑖 )
Fact 1. Where ∆ = ∆1 sΣ1 ,Σ2 ∆2 | Σ3 , 𝑖 ∈ {1, 2} and
                              ⋃︀
                                                                                                 <    𝑉 (𝜔 1 𝜔 2 𝜔 3 , ∆𝑖 )
(𝐵|𝐴) ∈ ∆𝑖 , 𝑍Δ ((𝐵|𝐴)) = 𝑍Δ𝑖 ((𝐵|𝐴))
                                                                                                 =    𝑉 (𝜔 1 𝜔 3 , ∆1𝑖 ) + 𝑉 (𝜔 2 𝜔 3 , ∆2𝑖 ) − 𝑉 (𝜔 3 , ∆1𝑖 )
Proof. Immediate from Proposition 3.
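As a quick illustration of Fact 1, consider the penguin base of Example 2 and its splitting from Example 6 (recalled in Example 8 below), counting Z-ranks from 0: the conditional (𝑒|𝑏) ∈ ∆2 is tolerated by all of ∆ as well as by ∆2, whereas (¬𝑓|𝑝) ∈ ∆1 only becomes tolerated after the first layer has been removed, both in ∆ and in ∆1. Hence

\[
Z_\Delta((e \mid b)) = Z_{\Delta_2}((e \mid b)) = 0, \qquad
Z_\Delta((\neg f \mid p)) = Z_{\Delta_1}((\neg f \mid p)) = 1.
\]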
The following Lemma shows that the components of the vectors lex(𝜔) can simply be combined by summation over disjoint sub-languages (taking double counting into account):

Lemma 9. Where ∆ = ∆1 ⋃︀sΣ1,Σ2 ∆2 | Σ3, for any 𝜔1𝜔2𝜔3 ∈ Ω and any 𝑙 ⩽ 𝑛, 𝑉(𝜔1𝜔2𝜔3, ∆𝑙) = 𝑉(𝜔1𝜔3, ∆1𝑙) + 𝑉(𝜔2𝜔3, ∆2𝑙) − 𝑉(𝜔3, ∆3𝑙).²

² Notice that it follows from Fact 1 that, given the Z-partition (∆1, . . . , ∆𝑛) of ∆ and Σ𝑖 ⊆ Σ, ∆𝑖𝑗 = ∆𝑗 ∩ (ℒ𝑖|ℒ𝑖) for any 0 ⩽ 𝑗 ⩽ 𝑛.

Lemma 10. Where ∆ = ∆1 ⋃︀sΣ1,Σ2 ∆2 | Σ3, for any 𝜑 ∈ ℒ(Σ1) and 𝜔3 ∈ Ω(Σ3), 𝜔 ∈ min⪯lex_∆(Mod(𝜔3 ∧ 𝜑)) iff 𝜔1 ∈ min⪯lex_∆1(Mod1,3(𝜔3 ∧ 𝜑)) and 𝜔2 ∈ min⪯lex_∆2(Mod2,3(𝜔3)).

Proof. For the ⇒-direction, suppose 𝜔 ∈ min⪯lex_∆(Mod(𝜔3 ∧ 𝜑)). Suppose now towards a contradiction that either (a) 𝜔2𝜔3 ∉ min⪯lex_∆2(Mod2,3(𝜔3)) or (b) 𝜔1 ∉ min⪯lex_∆1(Mod1,3(𝜔3 ∧ 𝜑)).

ad. (b) Suppose that 𝜔2𝜔3 ∈ min⪯lex_∆2(Mod2,3(𝜔3)) (the case where also 𝜔2𝜔3 ∉ min⪯lex_∆2(Mod2,3(𝜔3)) is similar). 𝜔1 ∉ min⪯lex_∆1(Mod1,3(𝜔3 ∧ 𝜑)) implies that there is some 𝜔′1𝜔3 ∈ Mod1,3(𝜔3 ∧ 𝜑) s.t. 𝜔′1𝜔3 ≺lex_∆1 𝜔1𝜔3, i.e. there is some 𝑖 ⩾ 0 s.t. for every 𝑗 > 𝑖, 𝑉(𝜔′1𝜔3, ∆1𝑗) = 𝑉(𝜔1𝜔3, ∆1𝑗) and 𝑉(𝜔′1𝜔3, ∆1𝑖) < 𝑉(𝜔1𝜔3, ∆1𝑖). With Lemma 9 this implies:

    𝑉(𝜔′1𝜔2𝜔3, ∆𝑗) = 𝑉(𝜔′1𝜔3, ∆1𝑗) + 𝑉(𝜔2𝜔3, ∆2𝑗) − 𝑉(𝜔3, ∆1𝑗)
                   = 𝑉(𝜔1𝜔3, ∆1𝑗) + 𝑉(𝜔2𝜔3, ∆2𝑗) − 𝑉(𝜔3, ∆1𝑗)
                   = 𝑉(𝜔1𝜔2𝜔3, ∆𝑗)

and

    𝑉(𝜔′1𝜔2𝜔3, ∆𝑖) = 𝑉(𝜔′1𝜔3, ∆1𝑖) + 𝑉(𝜔2𝜔3, ∆2𝑖) − 𝑉(𝜔3, ∆1𝑖)
                   < 𝑉(𝜔1𝜔3, ∆1𝑖) + 𝑉(𝜔2𝜔3, ∆2𝑖) − 𝑉(𝜔3, ∆1𝑖)
                   = 𝑉(𝜔1𝜔2𝜔3, ∆𝑖),

which (since 𝜔′1𝜔2𝜔3 ∈ Mod(𝜔3 ∧ 𝜑)) contradicts 𝜔 ∈ min⪯lex_∆(Mod(𝜔3 ∧ 𝜑)).

ad. (a) Similar.

The ⇐-direction is similar.
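As a concrete illustration of Lemma 10, take the penguin base and its splitting recalled in Example 8 below, with Σ1 = {𝑓, 𝑝}, Σ2 = {𝑒}, Σ3 = {𝑏}, and let 𝜑 = 𝑝 and 𝜔3 = 𝑏. A direct computation of the three lexicographic orderings gives

\[
\min_{\preceq^{\mathrm{lex}}_{\Delta}}\big(\mathrm{Mod}(b \wedge p)\big) = \{\, p \wedge b \wedge \neg f \wedge e \,\}, \qquad
\min_{\preceq^{\mathrm{lex}}_{\Delta_1}}\big(\mathrm{Mod}_{1,3}(b \wedge p)\big) = \{\, p \wedge b \wedge \neg f \,\}, \qquad
\min_{\preceq^{\mathrm{lex}}_{\Delta_2}}\big(\mathrm{Mod}_{2,3}(b)\big) = \{\, b \wedge e \,\},
\]

so the unique minimal world over the full signature is exactly the combination of the two component-wise minimal worlds, as the lemma states.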
Proposition 11. 𝐶lex satisfies CInd.

Proof. Suppose that ∆ = ∆1 ⋃︀sΣ1,Σ2 ∆2 | Σ3. We show that Σ1 and Σ2 are independent w.r.t. ⪯lex_∆ conditional on Σ3, i.e. for any 𝑖 ∈ {1, 2}, any 𝜔1𝑖, 𝜔2𝑖 ∈ Ω𝑖, any 𝜔3 ∈ Ω3 and 𝑗 ∈ {1, 2} with 𝑗 ≠ 𝑖: 𝜔1𝑖𝜔3 ⪯lex_∆ 𝜔2𝑖𝜔3 iff 𝜔1𝑖𝜔𝑗𝜔3 ⪯lex_∆ 𝜔2𝑖𝜔𝑗𝜔3 for all 𝜔𝑗 ∈ Ω𝑗. With Proposition 4 this is sufficient to show the proposition. For simplicity we let 𝑖 = 1 and 𝑗 = 2; the other case follows by symmetry.

For the ⇒-direction, suppose that 𝜔11𝜔3 ⪯lex_∆ 𝜔21𝜔3. We show that 𝜔11𝜔2𝜔3 ⪯lex_∆ 𝜔21𝜔2𝜔3 for all 𝜔2 ∈ Ω2. We first make the following observation, which follows in view of Lemma 10: for any 𝜔⋆2 ∈ min⪯lex_∆2(Mod2,3(𝜔3)) and 𝑖 = 1, 2:

    𝜔𝑖1𝜔3 ≈lex_∆ 𝜔𝑖1𝜔⋆2𝜔3.

Thus, for any 𝜔2 ∈ min⪯lex_∆2(Mod2,3(𝜔3)):

    𝜔11𝜔2𝜔3 ⪯lex_∆ 𝜔21𝜔2𝜔3.

This means (with Lemma 9) that there is some 𝑖 ⩾ 0 s.t. 𝑉(𝜔11𝜔3, ∆1𝑗) = 𝑉(𝜔21𝜔3, ∆1𝑗) for every 𝑗 > 𝑖 and 𝑉(𝜔11𝜔3, ∆1𝑖) ⩽ 𝑉(𝜔21𝜔3, ∆1𝑖). Thus, for any 𝜔2 ∈ Ω2, it holds that (for any 𝑗 > 𝑖):

    𝑉(𝜔11𝜔3, ∆1𝑗) + 𝑉(𝜔2𝜔3, ∆2𝑗) − 𝑉(𝜔3, ∆1𝑗)
    = 𝑉(𝜔21𝜔3, ∆1𝑗) + 𝑉(𝜔2𝜔3, ∆2𝑗) − 𝑉(𝜔3, ∆1𝑗)

and:

    𝑉(𝜔11𝜔3, ∆1𝑖) + 𝑉(𝜔2𝜔3, ∆2𝑖) − 𝑉(𝜔3, ∆1𝑖)
    ⩽ 𝑉(𝜔21𝜔3, ∆1𝑖) + 𝑉(𝜔2𝜔3, ∆2𝑖) − 𝑉(𝜔3, ∆1𝑖).

With Lemma 9 we have that 𝑉(𝜔𝑘1𝜔2𝜔3, ∆𝑙) = 𝑉(𝜔𝑘1𝜔3, ∆1𝑙) + 𝑉(𝜔2𝜔3, ∆2𝑙) − 𝑉(𝜔3, ∆3𝑙) for 𝑘 = 1, 2 and 𝑙 ⩽ 𝑛, and thus 𝜔11𝜔2𝜔3 ⪯lex_∆ 𝜔21𝜔2𝜔3 (for any 𝜔2 ∈ Ω2).

The ⇐-direction is similar.

Proposition 12. 𝐶lex satisfies Rel.

Proof. This follows immediately from Lemma 10. Indeed, we have that (for 𝐴, 𝐵 ∈ ℒ1 and a complete conjunction 𝐷 ∈ ℒ3): 𝐴𝐷 |∼lex_∆ 𝐵 iff for all 𝜔 ∈ min⪯lex_∆(𝐴𝐷), 𝜔1𝜔3 |= 𝐵. With Lemma 10, 𝜔 ∈ min⪯lex_∆(𝐴𝐷) iff 𝜔1 ∈ min⪯lex_∆1(𝐴𝐷). Since 𝐵 ∈ ℒ1, this implies that if 𝜔1𝜔3 ∈ min⪯lex_∆1(𝐴𝐷) then 𝜔1𝜔3 |= 𝐵.

From Propositions 11 and 12, we can immediately obtain the main result of this section:

Theorem 1. 𝐶lex satisfies SynSplit.
5. The Drowning Effect as Conditional Independence

As mentioned in the introduction, the drowning effect, illustrated by Example 2, is intuitively related to syntax splitting. In more detail, the drowning effect is constituted by the fact that according to some inductive inference operators (e.g. system 𝑍), exceptional subclasses (e.g. penguins) do not inherit any properties of the superclass (e.g. birds), even if these properties are unrelated to the reason for the subclass being exceptional (e.g. having beaks). To the best of our knowledge, discussion of the drowning effect in the literature has been restricted to informal discussions on the basis of examples such as the Tweety-example, but no generic formal description has been given.

In this paper, we have developed the necessary tools to talk about the drowning effect in a formally precise manner. Indeed, the first crucial notion is that of unrelatedness of propositions. This notion is formally captured by safe splitting into subbases (Definition 7): given a belief base ∆, a proposition 𝐴 is unrelated to a proposition 𝐶 iff ∆ can be safely split into subbases ∆1, ∆2 conditional on a sub-alphabet Σ3, i.e. ∆ = ∆1 ⋃︀sΣ1,Σ2 ∆2 | Σ3, with 𝐴 ∈ ℒ(Σ2) and 𝐶 ∈ ℒ(Σ1 ∪ Σ3). This means that the abstract situation of the drowning problem can be precisely described by conditional syntax splitting. We see that the drowning effect is nothing else than a violation of the postulate of conditional independence (CInd): if we know that a typical property 𝐵 of 𝐴𝐷-individuals (𝐴𝐷 |∼∆ 𝐵) is unrelated to an exceptional subclass 𝐶 of 𝐴𝐷, then we can also derive that 𝐴𝐷𝐶-individuals are typically 𝐵 (𝐴𝐷𝐶 |∼∆ 𝐵). We illustrate this with the Tweety-example:

Example 8 (Example 2 ctd.). We already established in Example 6 that ∆ = {(𝑓|𝑏), (𝑏|𝑝), (¬𝑓|𝑝)} ⋃︀s{𝑓,𝑝},{𝑒} {(𝑒|𝑏)} | {𝑏}. It is now not hard to see that any inductive inference operator C that satisfies (DI) and (CInd) avoids the drowning effect. In more detail, we have:

    𝑏 |∼∆ 𝑒          by DI                  (5)
    𝑏 ∧ 𝑝 |∼∆ 𝑒      by CInd and (5)        (6)

For any inductive inference operator that additionally satisfies Cut (i.e. from 𝐴 |∼ 𝐵 and 𝐴 ∧ 𝐵 |∼ 𝐶 derive 𝐴 |∼ 𝐶), a postulate that holds for any inductive inference operator based on TPOs [13], we obtain:

    𝑝 |∼∆ 𝑏          by DI                  (7)
    𝑝 |∼∆ 𝑒          by Cut, (6) and (7)    (8)

Summarizing, we can express our findings as follows:

Proposition 13. Any inductive inference operator that satisfies (CInd) does not show the drowning problem (for ∆ = {(𝑓|𝑏), (𝑏|𝑝), (¬𝑓|𝑝)} ⋃︀s{𝑓,𝑝},{𝑒} {(𝑒|𝑏)} | {𝑏}).

We note that the drowning effect has not been defined in the literature in a general sense. Above, we provided, to the best of our knowledge, the first general definition of the drowning problem. As we defined the drowning effect as a violation of the postulate of conditional independence, it is trivial that any inductive inference operator that satisfies (CInd) does not show the drowning problem interpreted in this general, formal sense.
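The chain (5)–(8) can also be checked mechanically for lexicographic inference itself. The following is a small self-contained sketch (ours, not taken from [1] or from an existing implementation) that computes lexicographic entailment in the standard way, via the Z-partition of the belief base and per-layer counts of violated conditionals with more specific layers taking precedence, and re-checks the inferences of Example 8 on the penguin base; the encoding of conditionals as pairs of Python predicates and all identifiers are our own choices:

from itertools import product

# A toy implementation of lexicographic entailment: conditionals (B|A) are
# encoded as pairs (A, B) of predicates over worlds, a world being a dict
# mapping atoms to truth values.

def worlds(atoms):
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def verifies(w, cond):               # w |= A and w |= B
    A, B = cond
    return A(w) and B(w)

def falsifies(w, cond):              # w |= A and w |= not B
    A, B = cond
    return A(w) and not B(w)

def z_partition(delta, ws):
    """Tolerance layers of delta: layer 0 contains the conditionals tolerated
    by the whole base, layer 1 those tolerated once layer 0 is removed, etc."""
    layers, rest = [], list(delta)
    while rest:
        tolerated = [c for c in rest
                     if any(verifies(w, c) and not any(falsifies(w, d) for d in rest)
                            for w in ws)]
        if not tolerated:
            raise ValueError("inconsistent conditional belief base")
        layers.append(tolerated)
        rest = [c for c in rest if c not in tolerated]
    return layers

def lex_vector(w, layers):
    # numbers of violated conditionals per layer, most specific layer first
    return tuple(sum(falsifies(w, c) for c in layer) for layer in reversed(layers))

def lex_infers(delta, atoms, antecedent, consequent):
    """A |~ B iff B holds in all lexicographically minimal models of A."""
    ws = worlds(atoms)
    layers = z_partition(delta, ws)
    models = [w for w in ws if antecedent(w)]
    best = min(lex_vector(w, layers) for w in models)
    return all(consequent(w) for w in models if lex_vector(w, layers) == best)

# Penguin base of Example 8: b = bird, p = penguin, f = flies, e = has a beak.
atoms = ["b", "p", "f", "e"]
delta = [
    (lambda w: w["b"], lambda w: w["f"]),        # (f|b)
    (lambda w: w["p"], lambda w: w["b"]),        # (b|p)
    (lambda w: w["p"], lambda w: not w["f"]),    # (¬f|p)
    (lambda w: w["b"], lambda w: w["e"]),        # (e|b)
]

print(lex_infers(delta, atoms, lambda w: w["b"], lambda w: w["e"]))              # (5): b |~ e
print(lex_infers(delta, atoms, lambda w: w["b"] and w["p"], lambda w: w["e"]))   # (6): b ∧ p |~ e
print(lex_infers(delta, atoms, lambda w: w["p"], lambda w: w["e"]))              # (8): p |~ e

Running this prints True three times, matching (5), (6) and (8); it is only a sanity check for this particular belief base, not a general decision procedure for the postulates.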
6. Lehmann's desirable closure properties

Lehmann [1] remarks that in addition to the properties encoded by rational closure, other properties for inductive inference operators might be desirable. In particular, he lists four properties: presumption of typicality, presumption of independence, priority of typicality and respect for specificity. All of these properties are only explained informally and illustrated using examples in [1], and to the best of our knowledge, no attempts to formalize or generalize these notions have been made. In this section, we show how the desired behaviour for all but one of the examples given by [1] can be straightforwardly derived by assuming that conditional inference relations satisfy (conditional) syntax splitting. We now describe the four properties from [1] and their relation with conditional syntax splitting:

Presumption of Typicality For a conditional belief base ∆ with (𝑥|𝑝) ∈ ∆, any inference relation |∼∆ that satisfies rational monotonicity (i.e. from 𝐴 |∼ 𝐵 and 𝐴 ̸|∼ ¬𝐶 derive 𝐴𝐶 |∼ 𝐵) yields 𝑝 ∧ 𝑞 |∼∆ 𝑥 or 𝑝 |∼∆ ¬𝑞. The presumption of typicality obliges us, “in absence of a convincing reason to accept the latter” [1], to derive 𝑝 ∧ 𝑞 |∼ 𝑥. Lehmann does not elaborate on what constitutes “a convincing reason”, but does state that for ∆1 = {(𝑥|𝑝)}, 𝑝 ∧ 𝑞 |∼∆1 𝑥 should hold. It is clear that this behaviour follows from (Ind):

Fact 2. Let an inductive inference operator satisfying Ind and ∆1 = {(𝑥|𝑝)} be given. Then 𝑝 ∧ 𝑞 |∼∆1 𝑥.

Proof. Let Σ1 = {𝑝, 𝑥} and Σ2 = {𝑞}. Then clearly ∆1 = ∆1 ⋃︀Σ1,Σ2 ∅ and thus with (Ind) we have: 𝑝 |∼∆1 𝑥 iff 𝑝 ∧ 𝑞 |∼∆1 𝑥. Since (𝑥|𝑝) ∈ ∆1, with (DI), 𝑝 |∼∆1 𝑥 and thus 𝑝 ∧ 𝑞 |∼∆1 𝑥.

Presumption of independence The presumption of independence states that “even if typicality is lost with respect to one consequent, we may still presume typicality with respect to another, unless there is a reason to the contrary” [1]. It is illustrated using the conditional belief base ∆2 = {(𝑥|𝑝), (¬𝑞|𝑝)}. Then the presumption of independence justifies us in deriving 𝑝 ∧ 𝑞 |∼∆2 𝑥.

Fact 3. Let an inductive inference operator satisfying CInd and ∆2 = {(𝑥|𝑝), (¬𝑞|𝑝)} be given. Then 𝑝 ∧ 𝑞 |∼∆2 𝑥.

Proof. It can easily be verified that ∆2 = {(𝑥|𝑝)} ⋃︀s{𝑥},{𝑞} {(¬𝑞|𝑝)} | {𝑝}. With DI, we have 𝑝 |∼∆2 𝑥. With CInd we have 𝑝 ∧ 𝑞 |∼∆2 𝑥.
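For instance, lexicographic inference, which satisfies CInd by Proposition 11, indeed yields this inference: both conditionals of ∆2 are tolerated by the whole of ∆2, so ⪯lex_∆2 simply compares the numbers of violated conditionals, and among the 𝑝 ∧ 𝑞-worlds

\[
p \wedge q \wedge x \ \text{violates only}\ (\neg q \mid p), \qquad
p \wedge q \wedge \neg x \ \text{violates both}\ (x \mid p)\ \text{and}\ (\neg q \mid p),
\]

so the preferred 𝑝 ∧ 𝑞-world satisfies 𝑥.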
Priority of typicality Priority of typicality gives, in situations where the presumption of typicality and the presumption of independence clash, priority to the former. It is illustrated using the conditional belief base ∆3 = {(𝑥|𝑝), (¬𝑥|𝑝 ∧ 𝑞)}. The presumption of typicality justifies us in deriving 𝑝 ∧ 𝑞 ∧ 𝑟 |∼∆3 ¬𝑥 (since no reason can be found for accepting 𝑝 ∧ 𝑞 |∼∆3 ¬𝑟), whereas we can derive both 𝑝 ∧ 𝑞 ∧ 𝑟 |∼∆3 ¬𝑥 and 𝑝 ∧ 𝑞 ∧ 𝑟 |∼∆3 𝑥 with the presumption of independence. Priority of typicality demands that priority is given to 𝑝 ∧ 𝑞 ∧ 𝑟 |∼∆3 ¬𝑥.

Fact 4. Let an inductive inference operator satisfying Ind and ∆3 = {(𝑥|𝑝), (¬𝑥|𝑝 ∧ 𝑞)} be given. Then 𝑝 ∧ 𝑞 ∧ 𝑟 |∼∆3 ¬𝑥.

Proof. Let Σ1 = {𝑝, 𝑞, 𝑥} and Σ2 = {𝑟}. Then clearly ∆3 = ∆3 ⋃︀Σ1,Σ2 ∅ and thus with (Ind) we have: 𝑝 ∧ 𝑞 |∼∆3 ¬𝑥 iff 𝑝 ∧ 𝑞 ∧ 𝑟 |∼∆3 ¬𝑥. Since (¬𝑥|𝑝 ∧ 𝑞) ∈ ∆3, with (DI), we have 𝑝 ∧ 𝑞 |∼∆3 ¬𝑥.
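Lexicographic inference again behaves as required here: the Z-partition of ∆3 places (𝑥|𝑝) in the lower layer and the more specific (¬𝑥|𝑝 ∧ 𝑞) in the higher one, and among the 𝑝 ∧ 𝑞 ∧ 𝑟-worlds

\[
p \wedge q \wedge r \wedge \neg x \ \text{violates only}\ (x \mid p), \qquad
p \wedge q \wedge r \wedge x \ \text{violates the higher-ranked}\ (\neg x \mid p \wedge q),
\]

so the lexicographically preferred 𝑝 ∧ 𝑞 ∧ 𝑟-worlds all satisfy ¬𝑥, in line with Fact 4.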
Lehmann [1] uses a second example to illustrate priority of typicality:

Example 9 (Example 5 in [1]). Let ∆4 = {(𝑥|𝑝), (𝑞|⊤), (¬𝑥|𝑞)}. Lehmann argues that here 𝑝 ∧ 𝑞 |∼∆4 𝑥 should hold: (𝑞|⊤) allows us to infer 𝑝 |∼∆4 𝑞 and thus 𝑝 is “defeasibly more specific than 𝑞”. Therefore, 𝑝 ∧ 𝑞 |∼∆4 𝑥 should hold instead of 𝑝 ∧ 𝑞 |∼∆4 ¬𝑥. However, we argue that this belief base and the resulting desirable inferences cannot be explained in terms of independence, as ∆4 cannot be safely split into {(𝑥|𝑝)} and {(𝑞|⊤), (¬𝑥|𝑞)}.

Respect for specificity The final property, respect for specificity, gives guidelines on how to decide when the two presumptions clash: in that case the inference based on the assertion with the more specific antecedent should be used to guide the inferential process. These properties are illustrated in [1] using ∆3 and ∆4 as well.

This section thus shows that all four properties proposed in [1] are subsumed by (conditional) syntax splitting. Hence, any inference relation satisfying conditional syntax splitting also satisfies these properties.

7. Related Work

The phenomenon of syntax splitting has been observed as early as 1980 in [20] under the name of “system independence”. The name syntax splitting was coined in [21], which studied it in the context of belief revision. Later, it was studied for other forms of belief revision in [22, 11], and for inductive inference operators in [8]. Our paper is a direct continuation of the work done in [7], where we have shown that lexicographic inference satisfies syntax splitting, and that the drowning effect is independent of syntax splitting. This work thus solves an important open question, namely whether the generalisation of syntax splitting as studied in [8, 7] can say something about the drowning effect.

Conditional independence for ranking functions has been studied in [12], for belief revision in [23], and for conditional belief revision in [18]. To the best of our knowledge, it has not been considered for inductive inference operators. We connect inductive inference operators with these works, as we show that the same conditions of conditional independence as studied in [12, 18] on the total preorders, respectively OCFs, underlying inductive inference operators (Definition 11) guarantee conditional syntax splitting.

In [16], the class of RC-extending inference relations is described. In future work, we plan to give a complete characterisation of the subclass of RC-extending inference relations satisfying syntax splitting.

8. Conclusion

The main contributions of this paper are the following: (1) we define the concept of conditional syntax splitting for inductive inference operators, thus bringing a notion of conditional independence between sub-signatures to the realm of inductive inference operators; (2) we show that lexicographic inference satisfies conditional syntax splitting; (3) we show how the drowning effect can be seen as a violation of conditional syntax splitting; and (4) we show how Lehmann's desirable properties can be derived from (conditional) syntax splitting.

There are several avenues for further work. Firstly, it will be interesting to investigate whether other inductive inference operators that satisfy (non-conditional) syntax splitting, such as c-representations [14, 8] and system W [24, 25], also satisfy conditional syntax splitting. Secondly, we want to develop algorithms for deciding whether and how a conditional belief base can be safely split, and investigate their computational complexity. Thirdly, we plan to apply our results to applications of lexicographic inference to (probabilistic) description logics, and take advantage of them for the development of efficient implementations of lexicographic inference.
Acknowledgments

The work of Jesse Heyninck was partially supported by Fonds Wetenschappelijk Onderzoek – Vlaanderen (project G0B2221N).

References

[1] D. Lehmann, Another perspective on default reasoning, Annals of Mathematics and Artificial Intelligence 15 (1995) 61–82.
[2] G. Casini, U. Straccia, Lexicographic closure for defeasible description logics, in: Proc. of Australasian Ontology Workshop, volume 969, 2012, pp. 28–39.
[3] T. Lukasiewicz, Expressive probabilistic description logics, Artificial Intelligence 172 (2008) 852–883.
[4] R. Booth, G. Casini, T. A. Meyer, I. J. Varzinczak, On the entailment problem for a logic of typicality, in: Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
[5] M. Goldszmidt, J. Pearl, Qualitative probabilities for default reasoning, belief revision, and causal modeling, Artificial Intelligence 84 (1996) 57–112.
[6] T. Eiter, T. Lukasiewicz, Default reasoning from conditional knowledge bases: Complexity and tractable cases, Artificial Intelligence 124 (2000) 169–241.
[7] J. Heyninck, G. Kern-Isberner, T. Meyer, Lexicographic entailment, syntax splitting and the drowning problem, in: Accepted for IJCAI 2022, 2022.
[8] G. Kern-Isberner, C. Beierle, G. Brewka, Syntax splitting = relevance + independence: New postulates for nonmonotonic reasoning from conditional belief bases, in: Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, volume 17, 2020, pp. 560–571.
[9] B. de Finetti, Theory of Probability (2 vols.), 1974.
[10] D. Makinson, General theory of cumulative inference, in: International Workshop on Non-Monotonic Reasoning (NMR), Springer, 1988, pp. 1–18.
[11] G. Kern-Isberner, G. Brewka, Strong syntax splitting for iterated belief revision, in: C. Sierra (Ed.), Proceedings International Joint Conference on Artificial Intelligence, IJCAI 2017, ijcai.org, 2017, pp. 1131–1137.
[12] W. Spohn, Ordinal conditional functions: A dynamic theory of epistemic states, in: Causation in Decision, Belief Change, and Statistics, Springer, 1988, pp. 105–134.
[13] S. Kraus, D. Lehmann, M. Magidor, Nonmonotonic reasoning, preferential models and cumulative logics, Artificial Intelligence 44 (1990) 167–207.
[14] G. Kern-Isberner, Handling conditionals adequately in uncertain reasoning and belief revision, Journal of Applied Non-Classical Logics 12 (2002) 215–237.
[15] D. Lehmann, M. Magidor, What does a conditional knowledge base entail?, Artificial Intelligence 55 (1992) 1–60.
[16] G. Casini, T. Meyer, I. Varzinczak, Taking defeasible entailment beyond rational closure, in: European Conference on Logics in Artificial Intelligence, Springer, 2019, pp. 182–197.
[17] M. Ritterskamp, G. Kern-Isberner, Preference-based default reasoning, in: FLAIRS Conference, 2008, pp. 672–677.
[18] G. Kern-Isberner, C. Beierle, J. Heyninck, Conditional independence for iterated belief revision, in: Accepted for IJCAI 2022, 2022.
[19] W. Spohn, The Laws of Belief: Ranking Theory and its Philosophical Applications, Oxford University Press, 2012.
[20] J. Shore, R. Johnson, Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy, IEEE Transactions on Information Theory IT-26 (1980) 26–37.
[21] R. Parikh, Beliefs, belief revision, and splitting languages, Logic, Language and Computation 2 (1999) 266–268.
[22] T. Aravanis, P. Peppas, M.-A. Williams, Full characterization of Parikh's relevance-sensitive axiom for belief revision, Journal of Artificial Intelligence Research 66 (2019) 765–792.
[23] M. J. Lynn, J. P. Delgrande, P. Peppas, Using conditional independence for belief revision, in: Proceedings AAAI-22, 2022.
[24] C. Komo, C. Beierle, Nonmonotonic reasoning from conditional knowledge bases with system W, Annals of Mathematics and Artificial Intelligence (2021) 1–38.
[25] J. Haldimann, C. Beierle, Inference with system W satisfies syntax splitting, arXiv preprint arXiv:2202.05511 (2022).