Vol-3197/paper7


Paper
description  scientific paper published in CEUR-WS Volume 3197
id  Vol-3197/paper7
wikidataid  Q117341748
title  Modelling Agents Roles in the Epistemic Logic L-DINF
pdfUrl  https://ceur-ws.org/Vol-3197/paper7.pdf
dblpUrl  https://dblp.org/rec/conf/nmr/Costantini0P22
volume  Vol-3197


Modelling Agents Roles in the Epistemic Logic L-DINF
Stefania Costantini¹,³, Andrea Formisano²,³,* and Valentina Pitoni¹

¹ Università di L'Aquila (DISIM), Via Vetoio, 1, L'Aquila, 67100, Italy
² Università di Udine (DMIF), Via delle Scienze, 206, Udine, 33100, Italy
³ INdAM — GNCS, Piazzale Aldo Moro, 5, Roma, 00185, Italy


Abstract

In this paper, we further advance a line of work aimed at formally modelling, via epistemic logic, (aspects of) the group dynamics of cooperative agents. In fact, we have previously proposed, and here extend, a particular logical framework (the Logic of "Inferable" L-DINF), where a group of cooperative agents can jointly perform actions, i.e., at least one agent of the group can perform the action, either with the approval of the group or on behalf of the group. In this paper, we introduce agents' roles within a group. We choose to model roles in terms of the actions that each agent is enabled by its group to perform. We extend the semantics and the proof of strong completeness of our logic, and we show the usefulness of the new extension via a significant example.

Keywords

Epistemic Logic, Multi Agent System, Cooperation and Roles



NMR 2022: 20th International Workshop on Non-Monotonic Reasoning, August 07–09, 2022, Haifa, Israel
* Corresponding author.
stefania.costantini@univaq.it (S. Costantini); andrea.formisano@uniud.it (A. Formisano); valentina.pitoni@univaq.it (V. Pitoni)
ORCID: 0000-0002-5686-6124 (S. Costantini); 0000-0002-6755-9314 (A. Formisano); 0000-0002-4245-4073 (V. Pitoni)

1. Introduction

This paper falls within a research effort whose overall objective is to devise a comprehensive framework based upon epistemic logic, so as to allow a designer to formalize and formally verify agents and Multi-Agent Systems (MAS). We have been particularly interested in modelling the capability to construct and execute joint plans within a group of agents. However, such a logical framework will really be useful only if it is immersed (either fully or in part) into a real agent-oriented programming language.¹ To this aim, we have all along taken into particular account the connection between theory and practice, so as to make our logic actually usable by a system's designers.

¹ Note that several agent-oriented programming languages and systems exist; since our approach is logic-based, we are interested in those of them which are based upon computational logic (cf., e.g., [1, 2, 3] for recent surveys on such languages), and thus endowed (at least in principle) with a logical semantics.

Cooperation among agents in a MAS allows agents to achieve better and faster results, and it is often the case that a group can fulfil objectives that are out of reach for a single agent. Often, each participating agent is not able to solve a whole problem or to reach an overall goal by itself, but can only cope with a small subproblem/subgoal for which it has the required competence. The overall result/goal is accomplished by means of cooperation with other agents. This is the motivation that led us to develop logics where agents belong to groups, and where it is possible for agents to reason about what their group can do, what they themselves are able to do or prefer to do (in terms of actions to perform), and which cost they are able to pay for the execution of a costly action, where, in case of insufficient budget, an agent can be supported by its group.

In this paper, we introduce roles that agents may assume within the group (concerning which actions they are both able and enabled to perform). I.e., within a group, an action can be performed only by agents which are allowed by the group to do so (supposedly, because they have the right competences).

This paper continues, in fact, a long-lasting line of work aimed at formally modelling via epistemic logic (aspects of) the group dynamics of cooperative agents via the Logic of "Inferable" L-DINF (first introduced in [4]). As mentioned, in past work we have taken into consideration actions' costs [5], and the preferences that each agent can have concerning the performance of each action [6].

The key assumption underlying our approach is that, although logic has proved a good tool to express the semantics underlying (aspects of) agent-oriented programming languages, in order to foster practical adoption there are, at least, the following requirements. (i) It is important to keep the complexity of the logic low enough to be practically manageable. (ii) It is important to ensure modularity, as it allows programmers to better organize the definition of the application at hand, and allows an agent system's definition to be more flexible and customizable. Notice, moreover, that modularity can be an advantage for explainability, in the sense of making the explanation itself modular. (iii) It is important not to overload the syntax, as a cumbersome syntax can discourage practical adoption.

So, our approach tries to join the rigour of logic with attention to practical aspects. Thus, we allow a designer




to define in a separate way, at the semantic level, which actions each agent is allowed to perform at each stage, with which degree of preference and, now, taking the agent's role within the group into account. So far, in fact, the specification of which actions an agent is allowed to perform was missing: in practical situations it will hardly be the case that every agent can perform every action, so we distinguish between being "able to perform" and being "allowed to perform" an action. This is in fact the new feature that we introduce here.

For an in-depth discussion of the relationship of the logic L-DINF with related work, the reader may refer to [5]. One may notice that the logic presented here has no explicit time. We tackled, however, the issue of time in previous work, discussed in [7, 8, 9].

The paper is organized as follows. In Sect. 2 we introduce the syntax and semantics of L-DINF, together with an axiomatization of the proposed logical system.² In Sect. 3 we discuss an example of application of the new logic. In Sect. 4 we present our definition of the canonical model of an L-DINF theory. Finally, in Sect. 5 we conclude.

² Note that the part on syntax is reported almost literally from previous work, as the enhancements presented here lead to modifications of the semantics only.

2. Logical framework

The logic L-DINF consists of a static component and a dynamic one. The former, called L-INF, is a logic of explicit beliefs and background knowledge. The dynamic component, called L-DINF, extends the static one with dynamic operators capturing the consequences of the agents' inferential actions on their explicit beliefs, as well as a dynamic operator capturing what an agent can conclude by performing inferential actions in its repertoire.

2.1. Syntax

Let Atm = {𝑝, 𝑞, . . .} be a countable set of atomic propositions. By Prop we denote the set of all propositional formulas, i.e., the set of all Boolean formulas built out of the set of atomic propositions Atm. A set Atm_𝐴 represents the physical actions that an agent can perform, including "active sensing" actions (e.g., "let's check whether it rains", "let's measure the temperature"). The language of L-DINF, denoted by ℒ_L-DINF, is defined as follows:

𝜙, 𝜓 ::= 𝑝 | ¬𝜙 | 𝜙 ∧ 𝜓 | B𝑖 𝜙 | K𝑖 𝜙 | do𝑖(𝜑𝐴) | do𝑖^𝑃(𝜑𝐴) | can_do𝑖(𝜑𝐴) | do𝐺(𝜑𝐴) | do𝐺^𝑃(𝜑𝐴) | can_do𝐺(𝜑𝐴) | intend𝑖(𝜑𝐴) | intend𝐺(𝜑𝐴) | exec𝑖(𝛼) | exec𝐺(𝛼) | [𝐺 : 𝛼] 𝜙 | pref_do𝑖(𝜑𝐴, 𝑑) | pref_do𝐺(𝑖, 𝜑𝐴)

𝛼 ::= ⊢(𝜙, 𝜓) | ∩(𝜙, 𝜓) | ↓(𝜙, 𝜓) | ⊣(𝜙, 𝜓)

where 𝑝 ranges over Atm, 𝑖 ∈ Agt, 𝜑𝐴 ∈ Atm_𝐴, and 𝑑 ∈ ℕ. (Other Boolean operators are defined from ¬ and ∧ in the standard manner. Moreover, for simplicity, whenever 𝐺 = {𝑖} we will write 𝑖 as subscript in place of {𝑖}.) The language of inferential actions of type 𝛼 is denoted by ℒ_ACT. The static part L-INF of L-DINF includes only those formulas not having sub-formulas of type 𝛼, namely, no inferential operations are admitted.
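For illustration only, the two-layer language just defined can be rendered as a small abstract syntax tree. The following sketch is ours, not part of the paper; class names are invented and only representative operators are shown:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:                 # p ∈ Atm, or a physical action name from Atm_A
    name: str

@dataclass(frozen=True)
class Not:
    phi: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Belief:               # B_i phi
    agent: str
    phi: "Formula"

@dataclass(frozen=True)
class Knows:                # K_i phi (S5 background knowledge)
    agent: str
    phi: "Formula"

@dataclass(frozen=True)
class MentalAction:         # alpha ::= ⊢, ∩, ↓, ⊣ applied to two formulas
    op: str                 # "derive" (⊢), "conj" (∩), "infer" (↓), "revise" (⊣)
    phi: "Formula"
    psi: "Formula"

@dataclass(frozen=True)
class Dyn:                  # [G : alpha] phi
    group: frozenset
    alpha: MentalAction
    phi: "Formula"

Formula = Union[Atom, Not, And, Belief, Knows, Dyn]

# Example: [ {1,2} : ↓(p, q) ] B_1 q
f = Dyn(frozenset({"1", "2"}),
        MentalAction("infer", Atom("p"), Atom("q")),
        Belief("1", Atom("q")))
```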
The expression intend𝑖(𝜑𝐴) indicates the intention of agent 𝑖 to perform the physical action 𝜑𝐴 in the sense of the BDI agent model [10]. This intention can be part of an agent's knowledge base from the beginning, or it can be derived later. We do not cope with the formalization of BDI, for which the reader may refer, e.g., to [11]. So, we treat intentions rather informally, assuming also that intend𝐺(𝜑𝐴) holds whenever all agents in group 𝐺 intend to perform the action 𝜑𝐴.

The formula do𝑖(𝜑𝐴) indicates actual execution of the action 𝜑𝐴 by the agent 𝑖. This fact is automatically recorded by the new belief do𝑖^𝑃(𝜑𝐴) (postfix "𝑃" standing for "past" action). By precise choice, do𝑖 and do𝑖^𝑃 (and similarly do𝐺 and do𝐺^𝑃) are not axiomatized. In fact, we assume they are realized in a way that is unknown at the logical level. Hence, the axiomatization concerns only the relationship between doing and being enabled to do.

The expressions can_do𝑖(𝜑𝐴) and pref_do𝑖(𝜑𝐴, 𝑑) are closely related to do𝑖(𝜑𝐴). In fact, can_do𝑖(𝜑𝐴) is to be seen as an enabling condition, indicating that agent 𝑖 is enabled to execute action 𝜑𝐴, while pref_do𝑖(𝜑𝐴, 𝑑) indicates the level 𝑑 of preference/willingness of agent 𝑖 to perform that action. pref_do𝐺(𝑖, 𝜑𝐴) indicates that agent 𝑖 exhibits the maximum level of preference for performing action 𝜑𝐴 among all members of its group 𝐺. Notice that, if a group of agents intends to perform an action 𝜑𝐴, this entails that all agents of the group intend to do 𝜑𝐴; the action will be enabled to be actually executed only if at least one agent 𝑖 ∈ 𝐺 can do it, i.e., it can derive can_do𝑖(𝜑𝐴).

Formulas of the form B𝑖 𝜙 represent beliefs of agent 𝑖, while those of the form K𝑖 𝜙 express background knowledge. Explicit beliefs, i.e., facts and rules acquired via perceptions during an agent's operation, are kept in the working memory of the agent. Unlike explicit beliefs, an agent's background knowledge is assumed to satisfy omniscience principles, such as closure under conjunction and known implication, closure under logical consequence, and introspection. In fact, K𝑖 is actually the well-known S5 modal operator often used to model/represent knowledge. The fact that background knowledge is closed under logical consequence is justified because we conceive it as a kind of stable, reliable knowledge base, or long-term memory. We assume the background knowledge to include: facts (formulas) known by the agent from the beginning, and facts the agent has later decided to store in its long-term memory (by means of some decision mechanism not treated here) after having processed them in its working memory.




We therefore assume background knowledge to be irrevocable, in the sense of being stable over time.

A formula of the form [𝐺 : 𝛼] 𝜙, with 𝐺 ∈ 2^Agt, and where 𝛼 must be an inferential action, states that "𝜙 holds after action 𝛼 has been performed by at least one of the agents in 𝐺, and all agents in 𝐺 have common knowledge about this fact".

Remark 1. If an inferential action is performed by an agent 𝑖 ∈ 𝐺, the other agents belonging to the same group 𝐺 have full visibility of this action and, therefore, as we suppose agents to be cooperative, it is as if they had performed the action themselves.

Borrowing from [12], we distinguish four types of inferential actions 𝛼, which allow us to capture some of the dynamic properties of explicit beliefs and background knowledge: ↓(𝜙, 𝜓), ∩(𝜙, 𝜓), ⊣(𝜙, 𝜓), and ⊢(𝜙, 𝜓). These actions characterize the basic operations of forming explicit beliefs via inference:

↓(𝜙, 𝜓): this inferential action infers 𝜓 from 𝜙 in case 𝜙 is believed and, according to the agent's background knowledge, 𝜓 is a logical consequence of 𝜙. If the execution succeeds, the agent starts believing 𝜓.

∩(𝜙, 𝜓): this action closes the explicit beliefs 𝜙 and 𝜓 under conjunction, i.e., 𝜙 ∧ 𝜓 is deduced from 𝜙 and 𝜓.

⊣(𝜙, 𝜓): this inferential action performs a simple form of "belief revision". It removes 𝜓 from the working memory in case 𝜙 is believed and, according to the agent's background knowledge, ¬𝜓 is a logical consequence of 𝜙. Both 𝜓 and 𝜙 are required to be ground atoms.

⊢(𝜙, 𝜓): this inferential action adds 𝜓 to the working memory in case 𝜙 is believed and, according to the agent's working memory, 𝜓 is a logical consequence of 𝜙. Notice that, unlike ∩(𝜙, 𝜓), this action operates directly on the working memory without retrieving anything from the background knowledge.
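As a purely operational reading of the four actions just listed (a minimal sketch under our own simplifying assumptions: beliefs are plain formula strings, and the entailment tests are supplied from outside), one could write:

```python
# Sketch: the four inferential actions as operations on a working memory
# (a set of formula strings). `bg_entails(phi, psi)` abstracts "psi is a
# logical consequence of phi according to background knowledge";
# `wm_entails` is the analogous test using working memory only.

def act_infer(wm, phi, psi, bg_entails):        # ↓(phi, psi)
    """Assert psi if phi is believed and psi follows via background knowledge."""
    if phi in wm and bg_entails(phi, psi):
        wm.add(psi)

def act_conj(wm, phi, psi):                     # ∩(phi, psi)
    """Close two explicit beliefs under conjunction."""
    if phi in wm and psi in wm:
        wm.add(f"({phi} ∧ {psi})")

def act_revise(wm, phi, psi, bg_entails):       # ⊣(phi, psi)
    """Remove the ground atom psi if phi is believed and ¬psi follows from phi."""
    if phi in wm and bg_entails(phi, f"¬{psi}"):
        wm.discard(psi)

def act_derive(wm, phi, psi, wm_entails):       # ⊢(phi, psi)
    """Assert psi if phi is believed and psi follows using working memory only."""
    if phi in wm and wm_entails(phi, psi):
        wm.add(psi)
```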
Formulas of the forms exec𝑖(𝛼) and exec𝐺(𝛼) express executability of inferential actions either by agent 𝑖, or by a group 𝐺 of agents (which is a consequence of any of the group members being able to execute the action). They are to be read as: "𝛼 is an inferential action that agent 𝑖 (resp. an agent in 𝐺) can perform".

Remark 2. In the mental actions ⊢(𝜙, 𝜓) and ↓(𝜙, 𝜓), the formula 𝜓 which is inferred and asserted as a new belief can be can_do𝑖(𝜑𝐴) or do𝑖(𝜑𝐴), which denote the possibility of execution or the actual execution of the physical action 𝜑𝐴. In fact, we assume that when do𝑖(𝜑𝐴) is inferred (from can_do𝑖(𝜑𝐴) and possibly other conditions) the action is actually executed, and the corresponding belief do𝑖^𝑃(𝜑𝐴) is asserted, possibly augmented with a time-stamp. Actions are supposed to succeed by default; in case of failure, a corresponding failure event will be perceived by the agent. The do𝑖^𝑃 beliefs constitute a history of the agent's operation, so they might be useful for the agent to reason about its own past behaviour and, importantly, they may be useful to provide explanations to human users.

Remark 3. Explainability in our approach can be directly obtained from proofs. Let us assume for simplicity that inferential actions can be represented in infix form as 𝜙ₙ OP 𝜙ₙ₊₁. Also, exec𝑖(𝛼) means that the mental action 𝛼 is executable by agent 𝑖 and is indeed executed. If, for instance, the user wants an explanation of why the physical action 𝜑𝐴 has been performed, the system can respond by exhibiting the proof that has led to 𝜑𝐴, put in the explicit form:

(exec𝑖(𝜙₁ OP₁ 𝜙₂) ∧ . . . ∧ exec𝑖(𝜙ₙ₋₁ OPₙ₋₁ 𝜙ₙ) ∧ exec𝑖(𝜙ₙ OPₙ can_do𝑖(𝜑𝐴)) ∧ intend𝑖(𝜑𝐴)) ⊢ do𝑖(𝜑𝐴)

where each OP𝑖 is one of the (mental) actions discussed above. The proof can possibly be translated into natural language, and declined either top-down or bottom-up.

As said in the Introduction, we model agents which, to execute an action, may have to pay a cost, so they must have a sufficient budget available. Our agents, moreover, are entitled to perform only those physical actions that they conclude they can do. In our approach, an action can be executed by a group of agents if at least one agent in the group can do it, and the group has the necessary budget available, sharing the cost according to some policy. Our agents being cooperative, among the agents that are able to do some physical action, one is selected (by any deterministic rule) among those which best prefer to perform that action. We assume that agents are aware of, and agree with, the cost-sharing policy.

We have not introduced costs and budgets, feasibility of actions, and willingness to perform them in the language for two reasons: to keep the complexity of the logic reasonable, and to make such features customizable in a modular way.³ So, as seen below, costs and budgets are coped with at the semantic level, which easily allows for modular modification, for instance to define modalities of cost sharing different from the one shown here, where group members share an action's cost in equal parts. For brevity we introduce a single budget function, and thus, implicitly, a single resource to be spent. Several budget functions, each one concerning a different resource, might be plainly defined.

³ We intend to use this logic in practice, to formalize memory in DALI agents, where DALI is a logic-based agent-oriented programming language [13, 14, 15]. So, computational effectiveness and modularity are crucial. Assuming that agents share the cost is reasonable when agents share resources, or cooperate towards a common goal, as discussed, e.g., in [16, 17].




2.2. Semantics

Definition 2.1 introduces the notion of an L-INF model, which is then used to introduce the semantics of the static fragment of the logic.

Notice that many relevant aspects of an agent's behaviour are specified in the definition of an L-INF model, including which mental and physical actions the agent can perform, what the cost of an action is, which budget the agent has available, and what the agent's degree of preference for performing each action is. This choice has the advantages of keeping the complexity of the logic under control, and of making these aspects modularly modifiable. In this paper, we introduce a new function 𝐻 that, for each agent 𝑖 belonging to a group, specifies the set of actions that the group enables the agent to perform; in this way, it specifies the role of 𝑖 within the group.

As before, let Agt be the set of agents.

Definition 2.1. A model is a tuple 𝑀 = (𝑊, 𝑁, ℛ, 𝐸, 𝐵, 𝐶, 𝐴, 𝐻, 𝑃, 𝑉) where:

∙ 𝑊 is a set of worlds (or situations);
∙ ℛ = {𝑅𝑖}𝑖∈Agt is a collection of equivalence relations on 𝑊: 𝑅𝑖 ⊆ 𝑊 × 𝑊 for each 𝑖 ∈ Agt;
∙ 𝑁 : Agt × 𝑊 −→ 2^{2^𝑊} is a neighborhood function such that, for each 𝑖 ∈ Agt, each 𝑤, 𝑣 ∈ 𝑊, and each 𝑋 ⊆ 𝑊, these conditions hold:
  (C1) if 𝑋 ∈ 𝑁(𝑖, 𝑤) then 𝑋 ⊆ {𝑣 ∈ 𝑊 | 𝑤𝑅𝑖𝑣},
  (C2) if 𝑤𝑅𝑖𝑣 then 𝑁(𝑖, 𝑤) = 𝑁(𝑖, 𝑣);
∙ 𝐸 : Agt × 𝑊 −→ 2^{ℒ_ACT} is an executability function for mental actions such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊, it holds that:
  (D1) if 𝑤𝑅𝑖𝑣 then 𝐸(𝑖, 𝑤) = 𝐸(𝑖, 𝑣);
∙ 𝐵 : Agt × 𝑊 −→ ℕ is a budget function such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊, the following holds:
  (E1) if 𝑤𝑅𝑖𝑣 then 𝐵(𝑖, 𝑤) = 𝐵(𝑖, 𝑣);
∙ 𝐶 : Agt × ℒ_ACT × 𝑊 −→ ℕ is a cost function such that, for each 𝑖 ∈ Agt, 𝛼 ∈ ℒ_ACT, and 𝑤, 𝑣 ∈ 𝑊, it holds that:
  (F1) if 𝑤𝑅𝑖𝑣 then 𝐶(𝑖, 𝛼, 𝑤) = 𝐶(𝑖, 𝛼, 𝑣);
∙ 𝐴 : Agt × 𝑊 −→ 2^{Atm_𝐴} is an executability function for physical actions such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊, it holds that:
  (G1) if 𝑤𝑅𝑖𝑣 then 𝐴(𝑖, 𝑤) = 𝐴(𝑖, 𝑣);
∙ 𝐻 : Agt × 𝑊 −→ 2^{Atm_𝐴} is an enabling function for physical actions such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊, it holds that:
  (G2) if 𝑤𝑅𝑖𝑣 then 𝐻(𝑖, 𝑤) = 𝐻(𝑖, 𝑣);
∙ 𝑃 : Agt × 𝑊 × Atm_𝐴 −→ ℕ is a preference function for physical actions 𝜑𝐴 such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊, it holds that:
  (H1) if 𝑤𝑅𝑖𝑣 then 𝑃(𝑖, 𝑤, 𝜑𝐴) = 𝑃(𝑖, 𝑣, 𝜑𝐴);
∙ 𝑉 : 𝑊 −→ 2^{Atm} is a valuation function.

To simplify the notation, let 𝑅𝑖(𝑤) denote the set {𝑣 ∈ 𝑊 | 𝑤𝑅𝑖𝑣}, for 𝑤 ∈ 𝑊. The set 𝑅𝑖(𝑤) identifies the situations that agent 𝑖 considers possible at world 𝑤. It is the epistemic state of agent 𝑖 at 𝑤. In cognitive terms, it can be conceived as the set of all situations that agent 𝑖 can retrieve from its long-term memory and reason about.

While 𝑅𝑖(𝑤) concerns background knowledge, 𝑁(𝑖, 𝑤) is the set of all facts that agent 𝑖 explicitly believes at world 𝑤, a fact being identified with a set of worlds. Hence, if 𝑋 ∈ 𝑁(𝑖, 𝑤), then agent 𝑖 has the fact 𝑋 under the focus of its attention and believes it. 𝑁(𝑖, 𝑤) is the explicit belief set of agent 𝑖 at world 𝑤.

The executability of inferential actions is determined by the function 𝐸. For an agent 𝑖, 𝐸(𝑖, 𝑤) is the set of inferential actions that agent 𝑖 can execute at world 𝑤. The value 𝐵(𝑖, 𝑤) is the budget the agent has available to perform inferential actions. Similarly, the value 𝐶(𝑖, 𝛼, 𝑤) is the cost to be paid by agent 𝑖 to execute the inferential action 𝛼 in the world 𝑤. The executability of physical actions is determined by the function 𝐴. For an agent 𝑖, 𝐴(𝑖, 𝑤) is the set of physical actions that agent 𝑖 can execute at world 𝑤. 𝐻(𝑖, 𝑤), instead, is the set of physical actions that agent 𝑖 is enabled by its group to perform. This means that 𝐻 defines the role of an agent in its group, via the actions that it is allowed to execute.

An agent's preference concerning the execution of physical actions is determined by the function 𝑃. For an agent 𝑖 and a physical action 𝜑𝐴, 𝑃(𝑖, 𝑤, 𝜑𝐴) is an integer value 𝑑 indicating the degree of willingness of agent 𝑖 to execute 𝜑𝐴 at world 𝑤.

Constraint (C1) imposes that agent 𝑖 can have explicitly in its mind only facts which are compatible with its current epistemic state. Moreover, according to constraint (C2), if a world 𝑣 is compatible with the epistemic state of agent 𝑖 at world 𝑤, then agent 𝑖 has the same explicit beliefs at 𝑤 and 𝑣. In other words, if two situations are equivalent as concerns background knowledge, then they cannot be distinguished through the explicit belief set. This aspect of the semantics could be extended in future work to allow agents to make plausible assumptions. Analogous properties are imposed by constraints (D1), (E1), and (F1). Namely, (D1) imposes that agent 𝑖 always knows which inferential actions it can perform and which it cannot. (E1) states that agent 𝑖 always knows the budget available in a world (potentially needed to perform actions). (F1) determines that agent 𝑖 always knows how much it costs to perform an inferential action. (G1) and (H1) determine that agent 𝑖 always knows which physical actions it can perform and which it cannot, and with which degree of willingness, while (G2) specifies that an agent also knows whether its group gives it the permission to execute a certain action or not, i.e., whether that action pertains to its role in the group.
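As an illustrative companion to Definition 2.1 (our own sketch; the names and the finite, dictionary-based encoding are assumptions, not the paper's), the model tuple can be rendered as follows:

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Set, Tuple

World, Agent, PhysAction, MentalAct = str, str, str, str

@dataclass
class LINFModel:
    W: Set[World]
    R: Dict[Agent, Set[Tuple[World, World]]]            # equivalence relations R_i
    N: Dict[Tuple[Agent, World], Set[FrozenSet[World]]] # neighborhood (explicit beliefs)
    E: Dict[Tuple[Agent, World], Set[MentalAct]]        # executability of mental actions
    B: Dict[Tuple[Agent, World], int]                   # budget
    C: Dict[Tuple[Agent, MentalAct, World], int]        # cost of mental actions
    A: Dict[Tuple[Agent, World], Set[PhysAction]]       # physical executability
    H: Dict[Tuple[Agent, World], Set[PhysAction]]       # enabling: the agent's role
    P: Dict[Tuple[Agent, World, PhysAction], int]       # preference degrees
    V: Dict[World, Set[str]]                            # valuation

    def respects_invariance(self) -> bool:
        """Check that N, E, B, A, H are constant along each R_i-class, i.e.,
        conditions (C2), (D1), (E1), (G1), (G2); the checks for C (F1) and
        P (H1) would be analogous."""
        return all(
            self.N[(i, w)] == self.N[(i, v)] and self.E[(i, w)] == self.E[(i, v)]
            and self.B[(i, w)] == self.B[(i, v)] and self.A[(i, w)] == self.A[(i, v)]
            and self.H[(i, w)] == self.H[(i, v)]
            for i, rel in self.R.items() for (w, v) in rel
        )
```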




Truth values of L-DINF formulas are inductively defined as follows. Given a model 𝑀 = (𝑊, 𝑁, ℛ, 𝐸, 𝐵, 𝐶, 𝐴, 𝐻, 𝑃, 𝑉), 𝑖 ∈ Agt, 𝐺 ⊆ Agt, 𝑤 ∈ 𝑊, and a formula 𝜙 ∈ ℒ_L-INF, we introduce the following shorthand notation:

‖𝜙‖^𝑀_{𝑖,𝑤} = {𝑣 ∈ 𝑊 : 𝑤𝑅𝑖𝑣 and 𝑀, 𝑣 |= 𝜙}

whenever 𝑀, 𝑣 |= 𝜙 is well-defined (see below). Then, we set:

(t1) 𝑀, 𝑤 |= 𝑝 iff 𝑝 ∈ 𝑉(𝑤)
(t2) 𝑀, 𝑤 |= exec𝑖(𝛼) iff 𝛼 ∈ 𝐸(𝑖, 𝑤)
(t3) 𝑀, 𝑤 |= exec𝐺(𝛼) iff ∃𝑖 ∈ 𝐺 with 𝛼 ∈ 𝐸(𝑖, 𝑤)
(t4) 𝑀, 𝑤 |= can_do𝑖(𝜑𝐴) iff 𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) ∩ 𝐻(𝑖, 𝑤)
(t5) 𝑀, 𝑤 |= can_do𝐺(𝜑𝐴) iff ∃𝑖 ∈ 𝐺 with 𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) ∩ 𝐻(𝑖, 𝑤)
(t6) 𝑀, 𝑤 |= pref_do𝑖(𝜑𝐴, 𝑑) iff 𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) and 𝑃(𝑖, 𝑤, 𝜑𝐴) = 𝑑
(t7) 𝑀, 𝑤 |= pref_do𝐺(𝑖, 𝜑𝐴) iff 𝑀, 𝑤 |= pref_do𝑖(𝜑𝐴, 𝑑) for 𝑑 = max{𝑃(𝑗, 𝑤, 𝜑𝐴) | 𝑗 ∈ 𝐺 ∧ 𝜑𝐴 ∈ 𝐴(𝑗, 𝑤) ∩ 𝐻(𝑗, 𝑤)}
(t8) 𝑀, 𝑤 |= ¬𝜙 iff 𝑀, 𝑤 ̸|= 𝜙
(t9) 𝑀, 𝑤 |= 𝜙 ∧ 𝜓 iff 𝑀, 𝑤 |= 𝜙 and 𝑀, 𝑤 |= 𝜓
(t10) 𝑀, 𝑤 |= B𝑖 𝜙 iff ‖𝜙‖^𝑀_{𝑖,𝑤} ∈ 𝑁(𝑖, 𝑤)
(t11) 𝑀, 𝑤 |= K𝑖 𝜙 iff 𝑀, 𝑣 |= 𝜙 for all 𝑣 ∈ 𝑅𝑖(𝑤)

As seen above, a physical action can be performed by a group of agents if at least one agent of the group can do it, and the level of preference for performing this action is set to the maximum among those of the agents enabled to do it. For any inferential action 𝛼 performed by any agent 𝑖, we set:

𝑀, 𝑤 |= [𝐺 : 𝛼]𝜙 iff 𝑀^[𝐺:𝛼], 𝑤 |= 𝜙

with 𝑀^[𝐺:𝛼] = ⟨𝑊, 𝑁^[𝐺:𝛼], ℛ, 𝐸, 𝐵^[𝐺:𝛼], 𝐶, 𝐴, 𝐻, 𝑃, 𝑉⟩. Such a model 𝑀^[𝐺:𝛼] represents the fact that the execution of an inferential action 𝛼 affects the sets of beliefs of agent 𝑖 and modifies the available budget. Such an operation can add new beliefs by direct perception, by means of one inference step, or as a conjunction of previous beliefs. Hence, when introducing new beliefs (i.e., performing mental actions), the neighborhood must be extended accordingly.

The following property, enabled_𝑤(𝐺, 𝛼) (for a world 𝑤, an action 𝛼, and a group of agents 𝐺), concerns a key aspect in the definition of the logic. Specifically, it states when an inferential action is enabled, i.e., under which conditions, and by which agent(s), an action may be performed:

enabled_𝑤(𝐺, 𝛼) : ∃𝑗 ∈ 𝐺 (𝛼 ∈ 𝐸(𝑗, 𝑤) ∧ 𝐶(𝑗, 𝛼, 𝑤)/|𝐺| ≤ min_{ℎ∈𝐺} 𝐵(ℎ, 𝑤)).

In this particular formulation (which is not fixed, but can be customized to the specific application domain), an action is enabled if at least one agent can perform it, and if the "payment" due by each agent, obtained by dividing the action's cost equally among all agents of the group, is within each agent's available budget. In case more than one agent in 𝐺 can execute an action, we implicitly assume the agent 𝑗 performing the action to be the one corresponding to the lowest possible cost. Namely, 𝑗 is such that 𝐶(𝑗, 𝛼, 𝑤) = min_{ℎ∈𝐺} 𝐶(ℎ, 𝛼, 𝑤). This definition reflects a parsimony criterion reasonably adoptable by cooperative agents sharing a crucial resource such as, e.g., energy or money. Other choices might be viable, so variations of this logic can easily be defined simply by devising some other enabling condition and, possibly, introducing differences in the neighborhood update. Notice that the definition of the enabling function basically specifies the "concrete responsibility" that agents take when contributing their own resources to actions' execution. Also, in case various resources are specified, different corresponding enabling functions might be defined.

Our contribution to modularity is that the functions 𝐴, 𝑃 and 𝐻, i.e., executability of physical actions, preference level of an agent for performing each action, and permission concerning which actions to actually perform, are not meant to be built-in. Rather, they can be defined via separate sub-theories, possibly specified using different logics or, in a practical approach, even via pieces of code. This approach can be extended to the function 𝐶: the cost of mental actions, instead of being fixed, may in principle vary, and be computed upon need.

2.3. Belief Update

In the logic defined so far, updating an agent's beliefs amounts to modifying the neighborhood of the present world. The updated neighborhood 𝑁^[𝐺:𝛼] resulting from the execution of a mental action 𝛼 is specified as follows.

∙ If 𝛼 is ↓(𝜓, 𝜒), then, for each 𝑖 ∈ Agt and 𝑤 ∈ 𝑊,
  𝑁^[𝐺:↓(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤) ∪ {‖𝜒‖^𝑀_{𝑖,𝑤}}
  if 𝑖 ∈ 𝐺 and enabled_𝑤(𝐺, ↓(𝜓, 𝜒)) and 𝑀, 𝑤 |= B𝑖 𝜓 ∧ K𝑖(𝜓 → 𝜒). Otherwise, the neighborhood does not change (i.e., 𝑁^[𝐺:↓(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤)).

∙ If 𝛼 is ∩(𝜓, 𝜒), then, for each 𝑖 ∈ Agt and 𝑤 ∈ 𝑊,
  𝑁^[𝐺:∩(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤) ∪ {‖𝜓 ∧ 𝜒‖^𝑀_{𝑖,𝑤}}
  if 𝑖 ∈ 𝐺 and enabled_𝑤(𝐺, ∩(𝜓, 𝜒)) and 𝑀, 𝑤 |= B𝑖 𝜓 ∧ B𝑖 𝜒. Otherwise, the neighborhood remains unchanged (i.e., 𝑁^[𝐺:∩(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤)).

∙ If 𝛼 is ⊣(𝜓, 𝜒), then, for each 𝑖 ∈ Agt and 𝑤 ∈ 𝑊,
  𝑁^[𝐺:⊣(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤) ∖ {‖𝜒‖^𝑀_{𝑖,𝑤}}
  if 𝑖 ∈ 𝐺 and enabled_𝑤(𝐺, ⊣(𝜓, 𝜒)) and 𝑀, 𝑤 |= B𝑖 𝜓 ∧ K𝑖(𝜓 → ¬𝜒). Otherwise, the neighborhood does not change (i.e., 𝑁^[𝐺:⊣(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤)).

∙ If 𝛼 is ⊢(𝜓, 𝜒), then, for each 𝑖 ∈ Agt and 𝑤 ∈ 𝑊,
  𝑁^[𝐺:⊢(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤) ∪ {‖𝜒‖^𝑀_{𝑖,𝑤}}
  if 𝑖 ∈ 𝐺 and enabled_𝑤(𝐺, ⊢(𝜓, 𝜒)) and 𝑀, 𝑤 |= B𝑖 𝜓 ∧ B𝑖(𝜓 → 𝜒). Otherwise, the neighborhood remains unchanged (i.e., 𝑁^[𝐺:⊢(𝜓,𝜒)](𝑖, 𝑤) = 𝑁(𝑖, 𝑤)).
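Continuing the sketch above (again ours, not the paper's: sat(v, 𝜙) is an assumed oracle deciding 𝑀, 𝑣 |= 𝜙 for static formulas), the enabling condition and the neighborhood update for ↓(𝜓, 𝜒) take roughly this shape:

```python
# Sketch of enabled_w(G, alpha) and of N^[G:↓(psi,chi)] on an LINFModel.

def enabled(model, G, alpha, w):
    """Some j in G can execute alpha, and an equal share of its cost fits
    within every group member's budget."""
    floor_budget = min(model.B[(h, w)] for h in G)
    return any(alpha in model.E[(j, w)]
               and model.C[(j, alpha, w)] / len(G) <= floor_budget
               for j in G)

def truth_set(model, i, w, chi, sat):
    """||chi||^M_{i,w}: the R_i-successors of w where chi holds."""
    return frozenset(v for (u, v) in model.R[i] if u == w and sat(v, chi))

def update_infer(model, G, w, alpha, psi, chi, sat):
    """Update for alpha = ↓(psi, chi): each i in G adds ||chi||_{i,w} to
    N(i, w) when the action is enabled and M, w |= B_i psi ∧ K_i(psi → chi);
    all other neighborhoods stay unchanged."""
    for i in G:
        if (enabled(model, G, alpha, w)
                and sat(w, ("B", i, psi))
                and sat(w, ("K", i, ("->", psi, chi)))):
            model.N[(i, w)] = set(model.N[(i, w)]) | {truth_set(model, i, w, chi, sat)}
```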




Notice that, after an inferential action 𝛼 has been performed by an agent 𝑗 ∈ 𝐺, all agents 𝑖 ∈ 𝐺 see the same update in the neighborhood. Conversely, for any agent ℎ ∉ 𝐺 the neighborhood remains unchanged (i.e., 𝑁^[𝐺:𝛼](ℎ, 𝑤) = 𝑁(ℎ, 𝑤)). However, even for agents in 𝐺, the neighborhood remains unchanged if the required preconditions, on explicit beliefs, knowledge, and budget, do not hold (and hence the action is not executed).

Notice also that we might devise variations of the logic by making different decisions about the neighborhood update, to implement, for instance, partial visibility within a group.

Since each agent in 𝐺 has to contribute to cover the costs of execution by consuming part of its available budget, an update of the budget function is needed. For an action 𝛼, we assume that 𝑗 ∈ 𝐺 executes 𝛼. Hence, for each 𝑖 ∈ Agt and each 𝑤 ∈ 𝑊, we set

𝐵^[𝐺:𝛼](𝑖, 𝑤) = 𝐵(𝑖, 𝑤) − 𝐶(𝑗, 𝛼, 𝑤)/|𝐺|,

if 𝑖 ∈ 𝐺 and enabled_𝑤(𝐺, 𝛼) and, depending on 𝛼:

𝑀, 𝑤 |= B𝑖 𝜓 ∧ K𝑖(𝜓 → 𝜒)    if 𝛼 is ↓(𝜓, 𝜒), or
𝑀, 𝑤 |= B𝑖 𝜓 ∧ B𝑖 𝜒           if 𝛼 is ∩(𝜓, 𝜒), or
𝑀, 𝑤 |= B𝑖 𝜓 ∧ K𝑖(𝜓 → ¬𝜒)   if 𝛼 is ⊣(𝜓, 𝜒), or
𝑀, 𝑤 |= B𝑖 𝜓 ∧ B𝑖(𝜓 → 𝜒)    if 𝛼 is ⊢(𝜓, 𝜒).

Otherwise, 𝐵^[𝐺:𝛼](𝑖, 𝑤) = 𝐵(𝑖, 𝑤), i.e., the budget is preserved.
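A matching sketch of the budget update (ours; it reuses enabled from the previous fragment and assumes the 𝛼-specific precondition check has already been performed; with natural-number budgets an integer division would be needed):

```python
def update_budget(model, G, alpha, w, preconditions_hold):
    """B^[G:alpha]: if the action is enabled and its preconditions hold,
    the executor j is the cheapest one and every member of G pays the
    equal share C(j, alpha, w)/|G|; otherwise budgets are preserved."""
    if not (enabled(model, G, alpha, w) and preconditions_hold):
        return
    j = min(G, key=lambda h: model.C[(h, alpha, w)])   # parsimony criterion
    share = model.C[(j, alpha, w)] / len(G)
    for i in G:
        model.B[(i, w)] -= share
```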
We write |=_L-DINF 𝜙 to denote that 𝑀, 𝑤 |= 𝜙 holds for all worlds 𝑤 of every model 𝑀.

We introduce below relevant consequences of our formalization. For lack of space we omit the proofs, which can be developed analogously to what was done in [5]. As a consequence of the previous definitions, for any set of agents 𝐺 and each agent 𝑖 ∈ 𝐺, we have the following:

|=_L-INF (K𝑖(𝜙 → 𝜓) ∧ B𝑖 𝜙) → [𝐺 : ↓(𝜙, 𝜓)] B𝑖 𝜓.
Namely, if agent 𝑖 has 𝜙 among its beliefs and K𝑖(𝜙 → 𝜓) in its background knowledge, then as a consequence of the action ↓(𝜙, 𝜓) the agent 𝑖 and its group 𝐺 start believing 𝜓.

|=_L-INF (K𝑖(𝜙 → ¬𝜓) ∧ B𝑖 𝜙) → [𝐺 : ⊣(𝜙, 𝜓)] ¬B𝑖 𝜓.
Namely, if agent 𝑖 has 𝜙 among its beliefs and K𝑖(𝜙 → ¬𝜓) in its background knowledge (for 𝜙, 𝜓 ground atoms), then, as a consequence of the action ⊣(𝜙, 𝜓), the agent 𝑖 and its group 𝐺 stop believing 𝜓.

|=_L-INF (B𝑖 𝜙 ∧ B𝑖 𝜓) → [𝐺 : ∩(𝜙, 𝜓)] B𝑖(𝜙 ∧ 𝜓).
Namely, if agent 𝑖 has 𝜙 and 𝜓 as beliefs, then as a consequence of the action ∩(𝜙, 𝜓) the agent 𝑖 and its group 𝐺 start believing 𝜙 ∧ 𝜓.

|=_L-INF (B𝑖(𝜙 → 𝜓) ∧ B𝑖 𝜙) → [𝐺 : ⊢(𝜙, 𝜓)] B𝑖 𝜓.
Namely, if agent 𝑖 has 𝜙 among its beliefs and B𝑖(𝜙 → 𝜓) in its working memory, then as a consequence of the action ⊢(𝜙, 𝜓) the agent 𝑖 and its group 𝐺 start believing 𝜓.
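For instance (our own instantiation, not taken from the paper), let 𝜙 = rains and 𝜓 = wet_road. The first consequence then reads: if agent 𝑖 believes rains and has K𝑖(rains → wet_road) in its background knowledge, then (K𝑖(rains → wet_road) ∧ B𝑖 rains) → [𝐺 : ↓(rains, wet_road)] B𝑖 wet_road, i.e., after the group performs the inferential action ↓(rains, wet_road), agent 𝑖 (and, by the common-visibility assumption of Remark 1, its whole group 𝐺) explicitly believes wet_road.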
2.4. Axiomatization

Below we introduce the axiomatization of our logic. The L-INF and L-DINF axioms and inference rules are the following, together with the usual axioms of propositional logic (where 𝐺 ⊆ Agt and 𝑖 ∈ Agt):

1. (K𝑖 𝜙 ∧ K𝑖(𝜙 → 𝜓)) → K𝑖 𝜓;
2. K𝑖 𝜙 → 𝜙;
3. ¬K𝑖(𝜙 ∧ ¬𝜙);
4. K𝑖 𝜙 → K𝑖 K𝑖 𝜙;
5. ¬K𝑖 𝜙 → K𝑖 ¬K𝑖 𝜙;
6. B𝑖 𝜙 ∧ K𝑖(𝜙 ↔ 𝜓) → B𝑖 𝜓;
7. B𝑖 𝜙 → K𝑖 B𝑖 𝜙;
8. 𝜙 / K𝑖 𝜙;
9. [𝐺 : 𝛼]𝑝 ↔ 𝑝;
10. [𝐺 : 𝛼]¬𝜙 ↔ ¬[𝐺 : 𝛼]𝜙;
11. exec𝐺(𝛼) → K𝑖(exec𝐺(𝛼));
12. [𝐺 : 𝛼](𝜙 ∧ 𝜓) ↔ [𝐺 : 𝛼]𝜙 ∧ [𝐺 : 𝛼]𝜓;
13. [𝐺 : 𝛼]K𝑖 𝜙 ↔ K𝑖([𝐺 : 𝛼]𝜙);
14. [𝐺 : ↓(𝜙, 𝜓)]B𝑖 𝜒 ↔ B𝑖([𝐺 : ↓(𝜙, 𝜓)]𝜒) ∨ ((B𝑖 𝜙 ∧ K𝑖(𝜙 → 𝜓)) ∧ K𝑖([𝐺 : ↓(𝜙, 𝜓)]𝜒 ↔ 𝜓));
15. [𝐺 : ∩(𝜙, 𝜓)]B𝑖 𝜒 ↔ B𝑖([𝐺 : ∩(𝜙, 𝜓)]𝜒) ∨ ((B𝑖 𝜙 ∧ B𝑖 𝜓) ∧ K𝑖([𝐺 : ∩(𝜙, 𝜓)]𝜒 ↔ (𝜙 ∧ 𝜓)));
16. [𝐺 : ⊢(𝜙, 𝜓)]B𝑖 𝜒 ↔ B𝑖([𝐺 : ⊢(𝜙, 𝜓)]𝜒) ∨ ((B𝑖 𝜙 ∧ B𝑖(𝜙 → 𝜓)) ∧ K𝑖([𝐺 : ⊢(𝜙, 𝜓)]𝜒 ↔ 𝜓));
17. [𝐺 : ⊣(𝜙, 𝜓)]¬B𝑖 𝜒 ↔ B𝑖([𝐺 : ⊣(𝜙, 𝜓)]𝜒) ∨ ((B𝑖 𝜙 ∧ K𝑖(𝜙 → ¬𝜓)) ∧ K𝑖([𝐺 : ⊣(𝜙, 𝜓)]𝜒 ↔ 𝜓));
18. intend𝐺(𝜑𝐴) ↔ ∀𝑖 ∈ 𝐺 intend𝑖(𝜑𝐴);
19. do𝐺(𝜑𝐴) → can_do𝐺(𝜑𝐴);
20. do𝑖(𝜑𝐴) → can_do𝑖(𝜑𝐴) ∧ pref_do𝐺(𝑖, 𝜑𝐴);
21. (𝜓 ↔ 𝜒) / (𝜙 ↔ 𝜙[𝜓/𝜒]).




We write L-DINF ⊢ 𝜙 to denote that 𝜙 is a theorem of L-DINF. It is easy to verify that the above axiomatization is sound for the class of L-INF models, namely, all axioms are valid and the inference rules preserve validity. In particular, soundness of axioms 14–17 immediately follows from the semantics of [𝐺 : 𝛼]𝜙, for each inferential action 𝛼, as previously defined.

Notice that, by abuse of notation, we have axiomatized the special predicates concerning intention and action enabling. Axioms 18–20 in fact concern physical actions, stating that: what is intended by a group of agents is intended by them all; and neither an agent nor a group of agents can do what it is not enabled to do. Such axioms are not enforced by the semantics, but are supposed to be enforced by a designer's/programmer's encoding of parts of an agent's behaviour. In fact, axiom 18 enforces agents in a group to be cooperative. Axioms 19 and 20 ensure that agents will attempt to perform actions only if their preconditions are satisfied, i.e., if they can do them.

We do not handle such properties in the semantics as done, e.g., in dynamic logic, because we want the agents' definition to be independent of the practical aspect, so we explicitly intend to introduce flexibility in the definition of such parts.

3. Problem Specification and Inference: An example

In this section, we propose an example to explain the usefulness of the new extension. For the sake of simplicity of illustration and of brevity, the example is in "skeletal" form.

Consider a group of four agents, who are the crew of an ambulance, including a driver, two nurses, and a medical doctor. The driver is the only one enabled to drive the ambulance. The nurses are enabled to perform a number of tasks, such as, e.g., administering a pain reliever, cleaning, disinfecting and bandaging a wound, or measuring vital signs. It is, however, the task of a doctor to make a diagnosis, to prescribe medications, to order, perform, and interpret diagnostic tests, and to perform complex medical procedures.

Imagine that the hospital received notice of a car accident with an injured person. Then, it will inform the group of the fact that a patient needs help (how exactly is not treated here, because this depends on how the multi-agent system is implemented, but a message exchange will presumably suffice). The group will reason, and devise the intention/goal K𝑖(intend𝐺(rescue_patient)).

Among the physical actions that agents in the group can perform are, for instance, the following:

diagnose_patient
administer_urgent_treatment
measure_vital_signs
pneumothorax_aspiration
local_anesthesia
bandage_wounds
drive_to_patient
drive_to_hospital

The group will now be required to perform a planning activity. Assume that, as a result of the planning phase, the knowledge base of each agent 𝑖 contains the following rule, which specifies how to reach the intended goal in terms of actions to perform and sub-goals to achieve:

K𝑖(intend𝐺(rescue_patient) → intend𝐺(drive_to_patient) ∧ intend𝐺(diagnose_patient) ∧ intend𝐺(stabilize_patient) ∧ intend𝐺(drive_to_hospital)).

By axiom 18 listed in the previous section, every agent will also have the specialized rule (for 𝑖 ≤ 4):

K𝑖(intend𝐺(rescue_patient) → intend𝑖(drive_to_patient) ∧ intend𝑖(diagnose_patient) ∧ intend𝑖(stabilize_patient) ∧ intend𝑖(drive_to_hospital)).

Then, the following is entailed for each of the agents:

K𝑖(intend𝑖(rescue_patient) → intend𝑖(drive_to_patient))
K𝑖(intend𝑖(rescue_patient) → intend𝑖(diagnose_patient))
K𝑖(intend𝑖(rescue_patient) → intend𝑖(stabilize_patient))
K𝑖(intend𝑖(rescue_patient) → intend𝑖(drive_to_hospital)).

While driving to the patient, and then back to the hospital, are actions, intend𝐺(stabilize_patient) is a goal.

Assume now that the knowledge base of each agent 𝑖 also contains the following general rules, stating that the group is available to perform each of the necessary actions. Which agent will in particular perform each action 𝜑𝐴? According to items (t4) and (t7) in the definition of truth values for L-DINF formulas, this agent will be chosen as the one which best prefers to perform this action, among those that can do it. Formally, in the present situation, pref_do𝐺(𝑖, 𝜑𝐴) identifies the agent 𝑖 in the group with the highest degree of preference for performing 𝜑𝐴, and can_do𝐺(𝜑𝐴) is true if there is some agent 𝑖 in the group which is able and allowed to perform 𝜑𝐴, i.e., 𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) ∧ 𝜑𝐴 ∈ 𝐻(𝑖, 𝑤).

K𝑖(intend𝐺(drive_to_patient) ∧ can_do𝐺(drive_to_patient) ∧ pref_do𝐺(𝑖, drive_to_patient) → do𝐺(drive_to_patient))




K𝑖(intend𝐺(diagnose_patient) ∧ can_do𝐺(diagnose_patient) ∧ pref_do𝐺(𝑖, diagnose_patient) → do𝐺(diagnose_patient))

K𝑖(intend𝐺(drive_to_hospital) ∧ can_do𝐺(drive_to_hospital) ∧ pref_do𝐺(𝑖, drive_to_hospital) → do𝐺(drive_to_hospital)).

As before, by axiom 18 such rules can be specialized to each single agent:

K𝑖(intend𝑖(drive_to_patient) ∧ can_do𝑖(drive_to_patient) ∧ pref_do𝑖(𝑖, drive_to_patient) → do𝑖(drive_to_patient))

K𝑖(intend𝑖(diagnose_patient) ∧ can_do𝑖(diagnose_patient) ∧ pref_do𝑖(𝑖, diagnose_patient) → do𝑖(diagnose_patient))

K𝑖(intend𝑖(drive_to_hospital) ∧ can_do𝑖(drive_to_hospital) ∧ pref_do𝑖(𝑖, drive_to_hospital) → do𝑖(drive_to_hospital))

So, for each action 𝜑𝐴 required by the plan, there will be some agent (let us assume for simplicity only one) for which do𝑖(𝜑𝐴) will be concluded. In our case, the agent driver will conclude do𝑖(drive_to_patient) and do𝑖(drive_to_hospital); the agent doctor will conclude do𝑖(stabilize_patient).

As previously stated, when an agent derives do𝑖(𝜑𝐴) for any physical action 𝜑𝐴, the action is supposed to have been performed via some kind of semantic attachment which links the agent to the external environment.

Since intend𝐺(stabilize_patient) is not an action but a sub-goal, the group will have to devise a plan to achieve it. This will imply sensing actions and forms of reasoning not shown here. Assume that the diagnosis has been pneumothorax, and that the patient also has some wounds which are bleeding. Upon completion of the planning phase, the knowledge base of each agent 𝑖 contains the following rule, which specifies how to reach the intended goal in terms of actions to perform:

K𝑖(intend𝐺(stabilize_patient) → intend𝐺(measure_vital_signs) ∧ intend𝐺(local_anesthesia) ∧ intend𝐺(bandage_wounds) ∧ intend𝐺(pneumothorax_aspiration)).

As before, these rules will be instantiated and elaborated by the single agents, and there will be some agent who will finally perform each action. Specifically, the doctor will be the one to perform pneumothorax aspiration, and the nurses (according to their competences and their preferences) will measure vital signs, administer local anesthesia, and bandage the wounds. The new function 𝐻, in a sensitive domain such as healthcare, guarantees that each procedure is administered by one who is capable of it (function 𝐴) but also enabled (function 𝐻), and so can take responsibility for the action.
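To make the selection mechanism concrete, here is a toy encoding of the crew at a single world (ours; agent names, ability/enabling sets, and preference values are invented for illustration):

```python
crew = {"driver", "nurse1", "nurse2", "doctor"}

A = {  # able to perform (function A)
    "driver": {"drive_to_patient", "drive_to_hospital"},
    "nurse1": {"measure_vital_signs", "local_anesthesia", "bandage_wounds",
               "pneumothorax_aspiration"},
    "nurse2": {"measure_vital_signs", "bandage_wounds"},
    "doctor": {"diagnose_patient", "pneumothorax_aspiration",
               "administer_urgent_treatment"},
}
H = {  # enabled by the group, i.e., the role (function H):
       # nurse1 is able to aspirate but not enabled to
    "driver": {"drive_to_patient", "drive_to_hospital"},
    "nurse1": {"measure_vital_signs", "local_anesthesia", "bandage_wounds"},
    "nurse2": {"measure_vital_signs", "bandage_wounds"},
    "doctor": {"diagnose_patient", "pneumothorax_aspiration",
               "administer_urgent_treatment"},
}
P = {("nurse1", "bandage_wounds"): 3, ("nurse2", "bandage_wounds"): 5}

def can_do_G(act):        # (t5): some agent is both able and allowed
    return any(act in A[i] & H[i] for i in crew)

def performer(act):       # (t7): the most willing among the able-and-allowed
    cands = [i for i in crew if act in A[i] & H[i]]
    return max(cands, key=lambda i: P.get((i, act), 0)) if cands else None

print(can_do_G("pneumothorax_aspiration"))  # True (the doctor)
print(performer("bandage_wounds"))          # nurse2 (preference 5 over 3)
```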
An interesting point concerns derogation, i.e., for instance, life-or-death situations where, unfortunately, no one who is enabled to perform some urgently needed action is available; in such situations, perhaps, anyone who is capable of performing this action might perform it. For instance, a nurse, in the absence of a doctor, might attempt urgent pneumothorax aspiration.

From this perspective, the semantics could be modified as follows:

(t4') 𝑀, 𝑤 |= able_do𝑖(𝜑𝐴) iff 𝜑𝐴 ∈ 𝐴(𝑖, 𝑤)
(t4'') 𝑀, 𝑤 |= enabled_do𝑖(𝜑𝐴) iff 𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) ∩ 𝐻(𝑖, 𝑤)
(t4-new) 𝑀, 𝑤 |= can_do𝑖(𝜑𝐴) iff (𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) ∩ 𝐻(𝑖, 𝑤)) ∨ (𝜑𝐴 ∈ 𝐴(𝑖, 𝑤) ∧ ¬∃𝑗 ∈ 𝐺 : 𝜑𝐴 ∈ 𝐴(𝑗, 𝑤) ∩ 𝐻(𝑗, 𝑤))
(t5-new) 𝑀, 𝑤 |= can_do𝐺(𝜑𝐴) iff ∃𝑖 ∈ 𝐺 s.t. 𝑀, 𝑤 |= can_do𝑖(𝜑𝐴)
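And a sketch of (t4-new), continuing the toy encoding above; it grants a merely able agent the permission only when nobody in the group is both able and enabled:

```python
def can_do_i(i, act, group):
    """(t4-new): i can do act if able and enabled, or able while no member
    of the group is both able and enabled (derogation case)."""
    if act in A[i] & H[i]:
        return True
    nobody_enabled = not any(act in A[j] & H[j] for j in group)
    return act in A[i] and nobody_enabled

# With the doctor absent, nurse1 (able but not enabled) may step in:
print(can_do_i("nurse1", "pneumothorax_aspiration", crew - {"doctor"}))  # True
```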
4. Canonical Model and Strong Completeness

In this section, we introduce the notion of the canonical model of our logic, and we outline the proof of strong completeness w.r.t. the proposed class of models (by means of a standard canonical-model argument). As before, let Agt be a set of agents.

Definition 4.1. A canonical L-INF model is a tuple 𝑀𝑐 = ⟨𝑊𝑐, 𝑁𝑐, ℛ𝑐, 𝐸𝑐, 𝐵𝑐, 𝐶𝑐, 𝐴𝑐, 𝐻𝑐, 𝑃𝑐, 𝑉𝑐⟩ where:

∙ 𝑊𝑐 is the set of all maximal consistent subsets of ℒ_L-INF;
∙ ℛ𝑐 = {𝑅𝑐,𝑖}𝑖∈Agt is a collection of equivalence relations on 𝑊𝑐 such that, for every 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, 𝑤𝑅𝑐,𝑖𝑣 if and only if (for all 𝜙, K𝑖 𝜙 ∈ 𝑤 implies 𝜙 ∈ 𝑣);
∙ For 𝑤 ∈ 𝑊𝑐 and 𝜙 ∈ ℒ_L-INF, let 𝐴𝜙(𝑖, 𝑤) = {𝑣 ∈ 𝑅𝑐,𝑖(𝑤) | 𝜙 ∈ 𝑣}. Then, we put 𝑁𝑐(𝑖, 𝑤) = {𝐴𝜙(𝑖, 𝑤) | B𝑖 𝜙 ∈ 𝑤};
∙ 𝐸𝑐 : Agt × 𝑊𝑐 −→ 2^{ℒ_ACT} is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐸𝑐(𝑖, 𝑤) = 𝐸𝑐(𝑖, 𝑣);
∙ 𝐵𝑐 : Agt × 𝑊𝑐 −→ ℕ is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐵𝑐(𝑖, 𝑤) = 𝐵𝑐(𝑖, 𝑣);
∙ 𝐶𝑐 : Agt × ℒ_ACT × 𝑊𝑐 −→ ℕ is such that, for each 𝑖 ∈ Agt, 𝛼 ∈ ℒ_ACT, and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐶𝑐(𝑖, 𝛼, 𝑤) = 𝐶𝑐(𝑖, 𝛼, 𝑣);
∙ 𝐴𝑐 : Agt × 𝑊𝑐 −→ 2^{Atm_𝐴} is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐴𝑐(𝑖, 𝑤) = 𝐴𝑐(𝑖, 𝑣);
∙ 𝐻𝑐 : Agt × 𝑊𝑐 −→ 2^{Atm_𝐴} is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐻𝑐(𝑖, 𝑤) = 𝐻𝑐(𝑖, 𝑣);
∙ 𝑃𝑐 : Agt × 𝑊𝑐 × Atm_𝐴 −→ ℕ is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝑃𝑐(𝑖, 𝑤, 𝜑𝐴) = 𝑃𝑐(𝑖, 𝑣, 𝜑𝐴);
∙ 𝑉𝑐 : 𝑊𝑐 −→ 2^{Atm} is such that 𝑉𝑐(𝑤) = Atm ∩ 𝑤.




In this section, we extend the semantics of our logic, and we outline the proof of strong completeness w.r.t. the proposed class of models (by means of a standard canonical-model argument). As before, let Agt be a set of agents.

Definition 4.1. A canonical L-INF model is a tuple 𝑀𝑐 = ⟨𝑊𝑐, 𝑁𝑐, ℛ𝑐, 𝐸𝑐, 𝐵𝑐, 𝐶𝑐, 𝐴𝑐, 𝐻𝑐, 𝑃𝑐, 𝑉𝑐⟩ where:

 ∙ 𝑊𝑐 is the set of all maximal consistent subsets of ℒL-INF;
 ∙ ℛ𝑐 = {𝑅𝑐,𝑖}𝑖∈Agt is a collection of equivalence relations on 𝑊𝑐 such that, for every 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, 𝑤𝑅𝑐,𝑖𝑣 if and only if, for all 𝜙, K𝑖𝜙 ∈ 𝑤 implies 𝜙 ∈ 𝑣;
 ∙ for 𝑤 ∈ 𝑊𝑐 and 𝜙 ∈ ℒL-INF, let 𝐴𝜙(𝑖, 𝑤) = {𝑣 ∈ 𝑅𝑐,𝑖(𝑤) | 𝜙 ∈ 𝑣}; then, we put 𝑁𝑐(𝑖, 𝑤) = {𝐴𝜙(𝑖, 𝑤) | B𝑖𝜙 ∈ 𝑤};
 ∙ 𝐸𝑐 : Agt × 𝑊𝑐 → 2^ℒACT is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐸𝑐(𝑖, 𝑤) = 𝐸𝑐(𝑖, 𝑣);
 ∙ 𝐵𝑐 : Agt × 𝑊𝑐 → ℕ is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐵𝑐(𝑖, 𝑤) = 𝐵𝑐(𝑖, 𝑣);
 ∙ 𝐶𝑐 : Agt × ℒACT × 𝑊𝑐 → ℕ is such that, for each 𝑖 ∈ Agt, 𝛼 ∈ ℒACT, and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐶𝑐(𝑖, 𝛼, 𝑤) = 𝐶𝑐(𝑖, 𝛼, 𝑣);
 ∙ 𝐴𝑐 : Agt × 𝑊𝑐 → 2^Atm𝐴 is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐴𝑐(𝑖, 𝑤) = 𝐴𝑐(𝑖, 𝑣);
 ∙ 𝐻𝑐 : Agt × 𝑊𝑐 → 2^Atm𝐴 is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝐻𝑐(𝑖, 𝑤) = 𝐻𝑐(𝑖, 𝑣);
 ∙ 𝑃𝑐 : Agt × 𝑊𝑐 × Atm𝐴 → ℕ is such that, for each 𝑖 ∈ Agt and 𝑤, 𝑣 ∈ 𝑊𝑐, if 𝑤𝑅𝑐,𝑖𝑣 then 𝑃𝑐(𝑖, 𝑤, 𝜑𝐴) = 𝑃𝑐(𝑖, 𝑣, 𝜑𝐴);
 ∙ 𝑉𝑐 : 𝑊𝑐 → 2^Atm is such that 𝑉𝑐(𝑤) = Atm ∩ 𝑤.
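Definition 4.1 requires, in particular, that 𝐸𝑐, 𝐵𝑐, 𝐶𝑐, 𝐴𝑐, 𝐻𝑐 and 𝑃𝑐 be invariant across 𝑅𝑐,𝑖-equivalent worlds. As a minimal illustration, on a finite toy structure (not the canonical model itself, whose worlds are infinitely many maximal consistent sets), the following Python sketch checks this invariance; all concrete values are made-up assumptions.

```python
# Sketch: checking the R_{c,i}-invariance conditions of Definition 4.1
# on a finite toy structure (illustrative values only).
from itertools import combinations

# R_{c,i} for one fixed agent i, given as a partition into equivalence classes
R_I = [{"w0", "w1"}, {"w2"}]

# Hypothetical world-indexed components for that same agent:
E = {"w0": {"bandage_wounds"}, "w1": {"bandage_wounds"}, "w2": set()}  # enabled actions
B = {"w0": 10, "w1": 10, "w2": 3}                                      # budget

def invariant(f, partition):
    """True iff f takes the same value on all worlds of each class."""
    return all(f[u] == f[v]
               for cls in partition
               for u, v in combinations(sorted(cls), 2))

assert invariant(E, R_I)   # mirrors the condition on E_c
assert invariant(B, R_I)   # mirrors the condition on B_c
print("invariance conditions hold on the toy structure")
```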
Note that, analogously to what was done before, 𝑅𝑐,𝑖(𝑤) denotes the set {𝑣 ∈ 𝑊𝑐 | 𝑤𝑅𝑐,𝑖𝑣}, for each 𝑖 ∈ Agt. It is easy to verify that 𝑀𝑐 is an L-INF model as defined in Def. 2.1, since it satisfies conditions (C1), (C2), (D1), (E1), (F1), (G1), (G2), (H1). Hence, it models the axioms and the inference rules 1–17 and 21 introduced before (while, as mentioned in Section 2.4, axioms 18–20 are assumed to be enforced by the specification of agents' behaviour). Consequently, the following properties hold too. Let 𝑤 ∈ 𝑊𝑐; then:

 ∙ given 𝜙 ∈ ℒL-INF, it holds that K𝑖𝜙 ∈ 𝑤 if and only if 𝜙 ∈ 𝑣 for all 𝑣 ∈ 𝑊𝑐 such that 𝑤𝑅𝑐,𝑖𝑣;
 ∙ for 𝜙 ∈ ℒL-INF, if B𝑖𝜙 ∈ 𝑤 and 𝑤𝑅𝑐,𝑖𝑣 then B𝑖𝜙 ∈ 𝑣.

Thus, 𝑅𝑐,𝑖-related worlds have the same knowledge and 𝑁𝑐-related worlds have the same beliefs; i.e., there can be 𝑅𝑐,𝑖-related worlds with different beliefs.
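The first of these properties can likewise be checked mechanically on a finite stand-in for the canonical model, where each world is simply the set of formulas it contains. The encoding below (plain strings for formulas, with K(𝜙) marking knowledge) is an illustrative assumption, not the paper's actual machinery.

```python
# Toy check of: K_i(phi) in w  iff  phi in every v with w R_{c,i} v.
w0 = {"p", "K(p)"}
w1 = {"p", "q", "K(p)"}
worlds = [w0, w1]
R = {0: [w0, w1], 1: [w0, w1]}   # one equivalence class {w0, w1}

def k_property(phi):
    return all((f"K({phi})" in worlds[k]) == all(phi in v for v in R[k])
               for k in range(len(worlds)))

print(k_property("p"), k_property("q"))   # True True
```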
By proceeding similarly to what was done in [12], we obtain the proof of strong completeness. For lack of space, we state the main theorems but omit lemmas and proofs, which we have however developed analogously to previous work [5].

Theorem 4.1. L-INF is strongly complete for the class of L-INF models.

Theorem 4.2. L-DINF is strongly complete for the class of L-INF models.
                                                                       (Eds.), Logics in Artificial Intelligence - 17th Eu-
                                                                       ropean Conference, JELIA 2021, Proceedings, vol-
5. Conclusions                                                         ume 12678 of LNCS, Springer, 2021, pp. 101–115.
                                                                       doi:10.1007/978-3-030-75775-5\_8.
In this paper, we discussed the last advances of a line of
                                                                   [6] S. Costantini, A. Formisano, V. Pitoni, An epis-
work concerning how to exploit a logical formulation for
                                                                       temic logic for modular development of multi-agent
a formal description of the cooperative activities of groups
                                                                       systems, in: N. Alechina, M. Baldoni, B. Logan
of agents. In past work, we had introduced beliefs about
                                                                       (Eds.), EMAS 2021, Revised Selected papers, vol-
physical actions concerning whether they could, are, or
                                                                       ume 13190 of LNCS, Springer, 2022, pp. 72–91.
have been executed, preferences in performing actions,
                                                                       doi:10.1007/978-3-030-97457-2_5.
single agent’s and group’s intentions. So far however, a
                                                                   [7] S. Costantini, A. Formisano, V. Pitoni, Timed mem-
limitation was missing about which actions an agent is
                                                                       ory in resource-bounded agents, in: C. Ghidini,
allowed to perform; in practical situations, in fact, it will
                                                                       B. Magnini, A. Passerini, P. Traverso (Eds.), AI*IA
hardly be the case that every agent can perform every
                                                                       2018 - Advances in Artificial Intelligence - XVI-
action.
                                                                       Ith International Conference of the Italian Associa-
   We have listed some useful properties of the extended
                                                                       tion for Artificial Intelligence, Proceedings, volume
logic, that we have indeed proved, mainly strong com-
                                                                       11298 of LNCS, Springer, 2018, pp. 15–29.
pleteness. Since the extension is small and modular, the
[8] S. Costantini, V. Pitoni, Memory management in
     resource-bounded agents, in: M. Alviano, G. Greco,
     F. Scarcello (Eds.), AI*IA 2019 - Advances in Artifi-
     cial Intelligence - XVIIIth International Conference
     of the Italian Association for Artificial Intelligence,
     volume 11946 of LNCS, Springer, 2019, pp. 46–58.
 [9] V. Pitoni, S. Costantini, A temporal module for
     logical frameworks, in: B. Bogaerts, E. Erdem,
     P. Fodor, A. Formisano, G. Ianni, D. Inclezan, G. Vi-
     dal, A. Villanueva, M. De Vos, F. Yang (Eds.),
     Proc. of ICLP 2019 (Tech. Comm.), volume 306
     of EPTCS, 2019, pp. 340–346.
[10] A. S. Rao, M. Georgeff, Modeling rational agents
     within a BDI architecture, in: Proc. of the Second
     Int. Conf. on Principles of Knowledge Representa-
     tion and Reasoning (KR’91), Morgan Kaufmann,
     1991, pp. 473–484.
[11] H. van Ditmarsch, J. Y. Halpern, W. van der Hoek, B. Kooi (Eds.), Handbook of Epistemic Logic, College Publications, 2015.
[12] P. Balbiani, D. Fernández-Duque, E. Lorini, A logical the-
     ory of belief dynamics for resource-bounded agents,
     in: Proceedings of the 2016 International Confer-
     ence on Autonomous Agents & Multiagent Systems,
     AAMAS 2016, ACM, 2016, pp. 644–652.
[13] S. Costantini, A. Tocchio, A logic programming
     language for multi-agent systems, in: S. Flesca,
     S. Greco, N. Leone, G. Ianni (Eds.), Proc. of JELIA-
     02, volume 2424 of LNAI, Springer, 2002, pp. 1–13.
     doi:10.1007/3-540-45757-7_1.
[14] S. Costantini, A. Tocchio, The DALI logic program-
     ming agent-oriented language, in: J. J. Alferes, J. A.
     Leite (Eds.), Proc. of JELIA-04, volume 3229 of
LNAI, Springer, 2004, pp. 685–688. doi:10.1007/978-3-540-30227-8_57.
[15] G. De Gasperis, S. Costantini, G. Nazzicone, DALI multi agent systems framework, doi:10.5281/zenodo.11042, DALI GitHub Software Repository, 2014. DALI: github.com/AAAI-DISIM-UnivAQ/DALI.
[16] S. Costantini, G. De Gasperis, Flexible goal-directed
     agents’ behavior via DALI mass and ASP modules,
     in: 2018 AAAI Spring Symposia, Stanford Univer-
     sity, Palo Alto, California, USA, March 26-28, 2018,
     AAAI Press, 2018.
[17] S. Costantini, G. De Gasperis, G. Nazzicone, DALI
     for cognitive robotics: Principles and prototype im-
     plementation, in: Y. Lierler, W. Taha (Eds.), Prac-
     tical Aspects of Declarative Languages - 19th Int.
     Symp. PADL 2017, Proceedings, volume 10137
of LNCS, Springer, 2017, pp. 152–162. doi:10.1007/978-3-319-51676-9_10.