id: Vol-3194/paper49
wikidataid: Q117344877
title: Investigating Binary Partition Power in Metric Query
pdfUrl: https://ceur-ws.org/Vol-3194/paper49.pdf
dblpUrl: https://dblp.org/rec/conf/sebd/0001DV22
volume: Vol-3194

Investigating Binary Partition Power in Metric Query
Richard Connor1 , Alan Dearle1 and Lucia Vadicamo2
1 School of Computer Science, University of St Andrews, St Andrews, KY16 9SS, Scotland
2 Institute of Information Science and Technologies, CNR, Pisa, Italy


Abstract
It is generally understood that, as dimensionality increases, the minimum cost of metric query tends
from 𝑂(log 𝑛) to 𝑂(𝑛) in both space and time, where 𝑛 is the size of the data set. With low
dimensionality, the former is easy to achieve; with very high dimensionality, the latter is inevitable.
   We previously described BitPart as a novel mechanism suitable for performing exact metric search in
“high(er)” dimensions. The essential tradeoff of BitPart is that its space cost is linear with respect to the
size of the data, but the actual space required for each object may be as small as log2 𝑛 bits, which allows
even very large data sets to be queried using only main memory. Potentially the time cost still scales
with 𝑂(log 𝑛). Together these attributes give exact search which outperforms indexing structures if
dimensionality is within a certain range.
   In this article, we reiterate the design of BitPart in this context. The novel contribution is an
in-depth examination of what the notion of “high(er)” means in practical terms. To do this we introduce
the notion of exclusion power, and show its application to some generated data sets across different
dimensions.

                                         Keywords
                                         similarity search, metric indexing, metric search, exclusion power




1. Introduction
Searching a database for objects that are most similar to a query object is a fundamental task in
many database application domains, for example data mining, knowledge discovery, information
extraction, and machine learning [1].
   In [2] we described BitPart as a mechanism suitable for performing exact metric search in
“high(er)” dimensions. While we did not define “high(er)”, the evidence in the paper shows
that it gives its best relative performance for spaces with dimensionality of between 10 and 30.
BitPart’s space cost is linear with respect to the size of the data, but the actual space required
for each object may be as small as log2 𝑛 bits, which allows representations of very large data sets
to fit in main memory. Potentially the time cost still scales with 𝑂(log 𝑛). Together these
attributes give exact search which outperforms indexing structures as dimensionality increases
within a certain range¹.


SEBD 2022: The 30th Italian Symposium on Advanced Database Systems, June 19-22, 2022, Tirrenia (PI), Italy
Email: rchc@st-andrews.ac.uk (R. Connor); al@st-andrews.ac.uk (A. Dearle); lucia.vadicamo@isti.cnr.it (L. Vadicamo)
ORCID: 0000-0003-4734-8103 (R. Connor); 0000-0002-1157-2421 (A. Dearle); 0000-0001-7182-7038 (L. Vadicamo)
                                       © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073



    ¹ Note that due to the curse of dimensionality [3] exact metric search rarely outperforms a sequential scan when
dimensionality exceeds 10 [4]. For high-dimensional data, approximate methods are usually employed, e.g. [5, 6].
   The BitPart mechanism uses a set of binary partitions defined over the finite data set, where
each partition is represented by a bitmap. If the total set of bitmaps is perfectly balanced and
perfectly orthogonal, then the probability of a selection of 𝑘 bitmaps being able to exclude an
arbitrary element of the data is 1 − 1/2^𝑘, and the expected number of non-excluded values becomes
very small if 𝑘 ≈ log2 𝑛. While we can come close to achieving this for low-dimensional data,
this becomes increasingly challenging as dimensionality increases.
   In [2] two issues are noted as requiring further investigation. The first is how to select
appropriate pivots in order to find orthogonal exclusion partitions. The second is the interaction
between partition balance and the efficacy of exclusion; this is the subject of this article.
   To do this we introduce the notion of exclusion power, and show its application to some
generated data sets across different dimensions. Exclusion power provides the ability to understand,
at a fundamental level, the exclusions that are possible in different datasets, to understand
when one approach will outperform another, and to tune the BitPart algorithm to optimally
accommodate data sets of different dimensions.
   The novel contribution of this article is the quantification of exclusion power, which can be
measured for a given binary partition structure with respect to a balancing factor 𝜏 . We observe
that, in low dimensions, partitions are best balanced evenly, approximately splitting the search
space into two equal parts. In higher dimensions however such balanced partitions perform
very badly, and highly skewed partitions have much greater power. We are not aware of any
previous work which attempts to quantify this effect, although it is well known within the
“folklore” of metric search researchers.


2. Formal Context
Formally we are interested in searching a (large) finite set of objects 𝑆 which is a subset of
an infinite set 𝑈 , where (𝑈, 𝑑) is a metric space [1]. The general requirement is to efficiently
find members of 𝑆 which are similar to an arbitrary member of 𝑈 given as a query, where
the distance function 𝑑 gives the only way by which any two objects may be compared. The
simplest type of similarity query is the range search query: for some threshold 𝑡, based on a
query 𝑞 ∈ 𝑈 , the solution set is 𝒬(𝑞, 𝑡) = {𝑠 ∈ 𝑆| 𝑑(𝑞, 𝑠) ≤ 𝑡}.
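As a baseline, the range query can be answered by a linear scan over 𝑆; a minimal sketch (the function name is ours and Euclidean distance stands in as an illustrative metric 𝑑):

```python
from math import dist  # Euclidean distance as an illustrative metric d


def range_search(S, q, t, d=dist):
    """Brute-force range query: Q(q, t) = all s in S with d(q, s) <= t."""
    return [s for s in S if d(q, s) <= t]


S = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
print(range_search(S, (0.0, 0.0), 1.5))  # → [(0.0, 0.0), (1.0, 0.0)]
```

Metric indexing, as described next, aims to answer the same query while computing far fewer distances than this scan.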
   The essence of metric search is to spend time pre-processing the finite set 𝑆 (i.e., indexing
it) so that solutions to queries can be efficiently calculated. In all cases distances between
members of 𝑆 and selected reference objects (pivots) are calculated during pre-processing. At
query time the relative distances between the query and the same pivot objects can be used to
make deductions about which data values may, or may not, be candidate solutions to the query.
   Many metric indexes employ binary partitions of the space 𝑈 , i.e., partitions of the form
𝒫 = {𝒫0 , 𝒫1 } where 𝒫0 ∪ 𝒫1 = 𝑈 and 𝒫0 ∩ 𝒫1 = ∅. In the context of metric search, binary
partitions are defined by partition rules that typically rely on computing the distance 𝑑 of the
data objects to a set of pivots. Notable examples include ball and sheet partitioning (see, e.g.,
Figure 1). A ball partition is defined by a pivot 𝑝 ∈ 𝑈 and a covering radius 𝜏 ∈ R+ , so that

                 𝒫0 = {𝑢 ∈ 𝑈 | 𝑑(𝑢, 𝑝) > 𝜏 }       𝒫1 = {𝑢 ∈ 𝑈 | 𝑑(𝑢, 𝑝) ≤ 𝜏 }

Figure 1: Examples of (a) a Ball Partitioning defined by a pivot 𝑝 ∈ 𝑈 and a covering radius 𝜏 ∈ R+,
and (b) a Sheet Partitioning associated to two pivots 𝑝1 , 𝑝0 ∈ 𝑈 .

A sheet partition is defined by two pivots 𝑝1 , 𝑝0 ∈ 𝑈 so that the boundary of the partition
regions is the hyperplane equidistant from the two reference objects:

         𝒫0 = {𝑢 ∈ 𝑈 | 𝑑(𝑢, 𝑝1 ) > 𝑑(𝑢, 𝑝0 )}        𝒫1 = {𝑢 ∈ 𝑈 | 𝑑(𝑢, 𝑝1 ) ≤ 𝑑(𝑢, 𝑝0 )}

Informally we refer to 𝒫0 as "outside" and 𝒫1 as "inside".
   Under certain circumstances it is possible to deduce that, for a given query, one of the
subsets of the partition cannot (or conversely must) contain solutions to the query. Typically,
the triangle inequality and the distances to the pivots are exploited to determine bounds on
distances between data objects (distance constraints), which are used at query time for excluding
certain regions from the searching process.
   The exclusion of a region of a partition occurs when the query ball does not intersect the
partition boundary. For a ball region, the condition for non-intersection of the query ball and
the boundary is
                                        |𝑑(𝑝, 𝑞) − 𝜏 | > 𝑡                                     (1)
For a sheet region, the non-intersection condition is

                                     |𝑑(𝑝1 , 𝑞) − 𝑑(𝑝0 , 𝑞)| > 2𝑡                                 (2)

If the respective non-intersection condition occurs, the implication is that any possible solution
to the query lies on one or other side of the partition boundary, and therefore either 𝒫0 or 𝒫1
may be excluded from the search for solutions.
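The non-intersection conditions (1) and (2) translate directly into code; a brief sketch (function names are ours, and each takes the relevant query-to-pivot distances as already-computed arguments):

```python
def ball_excludes(d_pq: float, tau: float, t: float) -> bool:
    """Eq. (1): the query ball does not intersect a ball boundary of
    radius tau, given the query-to-pivot distance d_pq."""
    return abs(d_pq - tau) > t


def sheet_excludes(d_p1q: float, d_p0q: float, t: float) -> bool:
    """Eq. (2): the query ball does not intersect the hyperplane
    equidistant from pivots p1 and p0."""
    return abs(d_p1q - d_p0q) > 2 * t


print(ball_excludes(1.0, 0.5, 0.3))   # → True: |1.0 - 0.5| = 0.5 > 0.3
print(sheet_excludes(1.0, 0.6, 0.3))  # → False: |1.0 - 0.6| = 0.4 <= 0.6
```

When either predicate holds, one side of the partition can be dropped from the search without computing any further distances.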


3. The BitPart Mechanism
The core of the technique is to define a large set of binary partitions over the original space;
the data is stored as a set of bitmaps according to containment with these regions. At query
time, the query is assessed against this containment information, but without reference to the
original data representation.
   For each region, one of three possible conditions may be determined: (a) the solution set
𝒬(𝑞, 𝑡) must be fully contained in the region, or (b) there is no intersection between the region
and the solution set, or (c) neither of these is the case. In either case (a) or (b), the containment
information stored may be useful with respect to solving the query, and is used as part of the
query computation. In case (c) however the containment information is of no value, and is not
accessed as a part of the computation. This approach maximises the effectiveness of memory
used for calculation against the original data representation: the amount of memory required
depends on the cardinality of the original data, but not on the size of its individual objects.

Point      𝒜  ℬ  𝒞  𝒟  ℰ  ℱ
𝑠1         1  1  0  1  1  0
𝑠2         1  0  0  0  1  1
𝑠3         0  0  0  0  0  0
𝑠4         0  0  0  1  0  1
𝑠5         0  1  0  1  0  0
𝑠6         1  1  0  1  0  0
𝒬(𝑞, 𝑡)    -  1  0  1  -  0

Figure 2: (a) A partitioning of a data space along with a query 𝑞 with threshold 𝑡; (b) the regions
containing the solution set; (c) the data representation according to containment of regions.

An Example. Figure 2 shows a simple example within the 2D plane, comprising four reference
objects 𝑝1 to 𝑝4 and a set of six regions defined by them. The regions are respectively: 𝒜1 , the
area to the left of the line between 𝑝1 and 𝑝2 ; ℬ1 , the area above the line between 𝑝1 and 𝑝3 ;
and the areas within the variously sized circles drawn around each 𝑝1 to 𝑝4 , labeled 𝒞1 , 𝒟1 , ℰ1
and ℱ1 respectively. Note that each regional boundary defines a binary partition of the total
space of the form 𝒫 = {𝒫1 , 𝒫0 }, where 𝒫0 = 𝑈 ∖ 𝒫1 . Thus in Figure 2, six binary partitions
(𝒜 = {𝒜1 , 𝒜0 }, ℬ = {ℬ1 , ℬ0 }, 𝒞 = {𝒞1 , 𝒞0 }, etc) are considered. For each partition 𝒫 we
generate a bitmap of 𝑛 bits, where 𝑛 is the number of data points, which stores the logical
containment information of each data object 𝑠𝑖 , that is 1 if 𝑠𝑖 ∈ 𝒫1 and 0 if 𝑠𝑖 ∉ 𝒫1 (Figure 2c).
Thus a bit being set (to 1) corresponds to that object being in the region, i.e. a member of 𝒫1 .
   Figure 2a also shows a range query 𝑞 drawn with a threshold 𝑡. It can be seen that all solutions
to this query must lie within the area highlighted in Figure 2b. The circle around the query
intersects with two regional boundaries (𝒜1 and ℰ1 ), and so no information is available with
respect to these; however it is completely contained within two of the defined regions (ℬ1 and
𝒟1 ), and fails to intersect with the final two (𝒞1 and ℱ1 ). Such containment and intersection is
derivable only from the measurement of distances between the query and the reference objects,
the definition of the regions, and the search radius. Here for example the possible solution area
shown is determined using only the four distance calculations 𝑑(𝑞, 𝑝1 ).. 𝑑(𝑞, 𝑝4 ). It can be seen
that the only possible solutions to the query are those which match on all non-intersecting fields;
in this case, the objects 𝑠1 , 𝑠5 and 𝑠6 depicted in Figure 2. This is equivalent to the Conjunctive
Normal Form (CNF) expression ℬ ∧ ¬𝒞 ∧ 𝒟 ∧ ¬ℱ. This expression covers the set of all possible
solutions to the query and hints at how the data can be stored using bit vectors.
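The worked example can be reproduced directly from the bitmaps of Figure 2c; a short sketch (the bitmaps are transcribed column-wise from the table, and the variable names are ours):

```python
# Partition bitmaps from Figure 2c, one bit per object s1..s6
objects = ["s1", "s2", "s3", "s4", "s5", "s6"]
bitmaps = {
    "A": [1, 1, 0, 0, 0, 1],
    "B": [1, 0, 0, 0, 1, 1],
    "C": [0, 0, 0, 0, 0, 0],
    "D": [1, 0, 0, 1, 1, 1],
    "E": [1, 1, 0, 0, 0, 0],
    "F": [0, 1, 0, 1, 0, 0],
}

# CNF filter for the query of Figure 2a: B and not C and D and not F
# (A and E give no information, as the query ball crosses their boundaries)
candidates = [o for i, o in enumerate(objects)
              if bitmaps["B"][i] and not bitmaps["C"][i]
              and bitmaps["D"][i] and not bitmaps["F"][i]]
print(candidates)  # → ['s1', 's5', 's6']
```

The surviving candidates match the objects identified in the text; only these need be checked against the original distance metric.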
�3.1. Data structures
Before describing the query algorithm, we describe the data structures used and how they are
initialised. A set of enumerated pivots 𝑝1 to 𝑝𝑚 is selected from 𝑆. We define a set of binary
partitions within 𝑈 , each of the form 𝒫 = {𝒫0 , 𝒫1 }, defined using the distance function 𝑑.
There may be many more regions than reference objects; for example a set of 𝑚 reference
objects defines at least 𝑚(𝑚 − 1)/2 sheet regions, plus 𝑚 ball regions.
   We now define the notion of an exclusion zone (EZ) as a containment map of 𝑆 based on a
given partition; this is the information we will use at query time to perform exclusions and
derive a candidate set of solutions. We impose an ordering on 𝑆, then for each 𝑠𝑖 map whether
it is a member of the region 𝒫1 or otherwise. This logical containment information is best
stored in a bitmap of 𝑛 bits, where 𝑛 = |𝑆|. One such exclusion zone is generated per partition.

3.2. Overview of the Query Algorithm
The query process comprises three distinct phases:
   Phase 1 in which the query is checked against the regional definitions, and the two types of
regions with non-intersecting boundaries are identified. Initially, the distance from the query 𝑞
to each pivot 𝑝𝑖 is measured. For each partition, it can be established if the query ball intersects
with the boundary of the partition. If there is no intersection, any solution to the query must lie
on one or other side of the partition’s boundary, so the exclusion zone is brought into the query
calculation in one of two sets depending on whether the query solutions are fully contained
within 𝒫1 or 𝒫0 . We name these sets of bitmaps 𝐵1 and 𝐵0 , which are carried forwards to the
computation in Phase 2.
   Phase 2 in which the regions identified are used to conduct a series of logical operations,
thus identifying a set of candidate solutions for the query.
   The second phase comprises the manipulation of the bitmaps deriving from the first phase
to identify a set of candidate solutions. This may be efficiently achieved by a series of bitwise
operations over these bitmaps². It can be seen now that any solution is guaranteed to be
identifiable from the bitmap deriving from the bitwise calculation:
                                   ( ⋀ _{𝑏∈𝐵1} 𝑏 ) ∧ ¬( ⋁ _{𝑏∈𝐵0} 𝑏 )                             (3)

   Phase 3 in which the candidate solutions are checked against the original data set to identify
the exact solution to the query. This requires a sequential scan of the single bitmap resulting
from Phase 2 and checking the data corresponding to any remaining set bits using the original
space and distance metric in order to produce an exact solution to the query.
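The three phases can be sketched end-to-end for the special case of single-pivot ball partitions, holding each exclusion-zone bitmap as a Python integer so that Equation (3) becomes native bitwise arithmetic. All names and the setup are ours, not the BitPart implementation, and for brevity the index is built inside the function rather than once at pre-processing time:

```python
from functools import reduce
from math import dist


def bitpart_query(S, pivots, taus, q, t, d=dist):
    """Sketch of a BitPart-style query over single-pivot ball partitions.
    Bitmaps are Python ints; bit i corresponds to S[i]."""
    n = len(S)
    # Pre-processing: one exclusion-zone bitmap per (pivot, tau) partition
    # (in the real mechanism this is done once, not per query)
    ez = []
    for p, tau in zip(pivots, taus):
        bits = 0
        for i, s in enumerate(S):
            if d(s, p) <= tau:          # s in P1 ("inside")
                bits |= 1 << i
        ez.append((p, tau, bits))
    # Phase 1: classify partitions using the non-intersection condition
    B1, B0 = [], []
    for p, tau, bits in ez:
        dq = d(q, p)
        if dq <= tau - t:               # solutions fully contained in P1
            B1.append(bits)
        elif dq > tau + t:              # solutions fully contained in P0
            B0.append(bits)
    # Phase 2: Eq. (3) — AND over B1 bitmaps, minus the OR over B0 bitmaps
    all_ones = (1 << n) - 1
    cand = reduce(lambda a, b: a & b, B1, all_ones) & ~reduce(
        lambda a, b: a | b, B0, 0)
    # Phase 3: check surviving candidates with the original metric
    return [S[i] for i in range(n) if (cand >> i) & 1 and d(q, S[i]) <= t]


S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0), (0.2, 0.1)]
print(bitpart_query(S, pivots=[(0.0, 0.0), (5.0, 5.0)], taus=[2.0, 2.0],
                    q=(0.1, 0.0), t=0.5))  # → [(0.0, 0.0), (0.2, 0.1)]
```

Only the candidates left set after Phase 2 incur a real distance computation in Phase 3.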




    ² On modern processors many of these bitwise operations can be performed in single instructions, each of
which may be performed in parallel.
4. Partition analysis
To unify our treatment of different kinds of binary partitions with their associated distance
constraints, we introduce the concept of a binary partition characterised by a single partition
function 𝑓 : 𝑈 → R and a balancing factor 𝜏 ∈ R with the following properties:

Property 1: 𝒫0 = {𝑠 ∈ 𝑈 |𝑓 (𝑠) > 𝜏 } and 𝒫1 = {𝑠 ∈ 𝑈 |𝑓 (𝑠) ≤ 𝜏 }

Property 2: 𝑑(𝑠, 𝑠′ ) ≥ |𝑓 (𝑠) − 𝑓 (𝑠′ )| for all 𝑠, 𝑠′ ∈ 𝑈 (the distance lower-bound property)

   Note that if 𝑓 (𝑠) = 𝜏 , then 𝑠 is on the partition boundary and by convention we include the
partition boundary in 𝒫1 . The function 𝑓 is used both for determining the regions 𝒫0 , 𝒫1 , and
for deriving the exclusion rules to be used at query time. By exploiting the distance lower-bound
provided by the function 𝑓 , we have that |𝑓 (𝑞) − 𝜏 | > 𝑡 is a sufficient condition to exclude one
of the two partitions from the search. Specifically,

    • if 𝑓 (𝑞) ≤ 𝜏 − 𝑡 then 𝒬(𝑞, 𝑡) ⊂ 𝒫1 and 𝒫0 can be excluded
    • if 𝑓 (𝑞) > 𝜏 + 𝑡 then 𝒬(𝑞, 𝑡) ⊂ 𝒫0 and 𝒫1 can be excluded

Note that different functions 𝑓 may be used to determine the same regions 𝒫0 , 𝒫1 but with
different distance lower-bounds, and thus different exclusion rules. For example, consider
the Euclidean space (R𝑛 , ℓ2 ), 𝜏 = 0, 𝑓1 (𝑠) = (𝑑(𝑠, 𝑝1 ) − 𝑑(𝑠, 𝑝0 ))/2 and
𝑓2 (𝑠) = (𝑑(𝑠, 𝑝1 )² − 𝑑(𝑠, 𝑝0 )²)/(2𝑑(𝑝1 , 𝑝0 )) for
two fixed pivots 𝑝1 and 𝑝0 . Both the functions 𝑓1 and 𝑓2 produce the same sheet partitioning,
i.e., 𝒫1 = {𝑠 ∈ R𝑛 |𝑑(𝑠, 𝑝1 ) ≤ 𝑑(𝑠, 𝑝0 )} and 𝒫0 = {𝑠 ∈ R𝑛 |𝑑(𝑠, 𝑝1 ) > 𝑑(𝑠, 𝑝0 )}. However, in
[7] we proved that the exclusion rule determined by 𝑓1 (Hyperbolic exclusion) is weaker than
the one determined by 𝑓2 (Hilbert exclusion), although the latter is valid only in the large class
of supermetric spaces [8].
   In the following we show how general ball partitioning and sheet partitioning can be described
within this formalism.

Ball Partitions A ball partition can be characterised by the function 𝑓 (𝑠) = 𝑑(𝑠, 𝑝) and
the constant 𝜏 . In this case 𝒫1 = {𝑠 ∈ 𝑈 |𝑓 (𝑠) = 𝑑(𝑠, 𝑝) ≤ 𝜏 }, and 𝒫0 = {𝑠 ∈ 𝑈 |𝑓 (𝑠) =
𝑑(𝑠, 𝑝) > 𝜏 }, so Property 1 is clear. Property 2 follows from the triangle inequality:

                𝑑(𝑠, 𝑝) ≤ 𝑑(𝑠, 𝑠′ ) + 𝑑(𝑠′ , 𝑝),   and 𝑑(𝑠′ , 𝑝) ≤ 𝑑(𝑠′ , 𝑠) + 𝑑(𝑠, 𝑝)

which give that 𝑑(𝑠, 𝑠′ ) ≥ |𝑑(𝑠, 𝑝) − 𝑑(𝑠′ , 𝑝)| = |𝑓 (𝑠) − 𝑓 (𝑠′ )| for any 𝑠, 𝑠′ ∈ 𝑈 .
   The balance of the partition is adjusted by selecting different values for 𝜏 ; we refer to these
as different balances.

Sheet Partitions Sheet partitions are defined with respect to two reference points 𝑝1 and 𝑝0 .
Usually, a simple binary partition is defined according to whichever of 𝑝1 and 𝑝0 is the nearer
value with respect to the metric 𝑑. However it has been noted [9, 10] that such partitions may
also be subject to a balancing factor 𝛼, so that

   𝒫1 = {𝑠 ∈ 𝑈 | 𝑑(𝑠, 𝑝1 ) ≤ 𝑑(𝑠, 𝑝0 ) + 𝛼}      𝒫0 = {𝑠 ∈ 𝑈 | 𝑑(𝑠, 𝑝1 ) > 𝑑(𝑠, 𝑝0 ) + 𝛼}
�  In this case, we can use 𝑓 (𝑠) = (𝑑(𝑠, 𝑝1 ) − 𝑑(𝑠, 𝑝0 ))/2 which gives property 1 above for the
constant 𝜏 = 𝛼/2. Moreover from the triangle inequality of the space we have that

                𝑑(𝑠, 𝑝1 ) ≤ 𝑑(𝑠, 𝑠′ ) + 𝑑(𝑠′ , 𝑝1 )     and 𝑑(𝑠′ , 𝑝0 ) ≤ 𝑑(𝑠, 𝑠′ ) + 𝑑(𝑠, 𝑝0 )

for any 𝑠, 𝑠′ ∈ 𝑈 , which gives 𝑑(𝑠, 𝑠′ ) ≥ (𝑑(𝑠, 𝑝1 ) − 𝑑(𝑠, 𝑝0 ) − 𝑑(𝑠′ , 𝑝1 ) + 𝑑(𝑠′ , 𝑝0 ))/2 = 𝑓 (𝑠) − 𝑓 (𝑠′ ). By
symmetry, we also have 𝑑(𝑠, 𝑠′ ) ≥ 𝑓 (𝑠′ ) − 𝑓 (𝑠), therefore 𝑑(𝑠, 𝑠′ ) ≥ |𝑓 (𝑠) − 𝑓 (𝑠′ )|.
  Like ball partitions, the balance of the partition is affected by selecting different values for 𝜏 .
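Both characterisations can be checked numerically: the lower-bound Property 2 must hold for every pair of points under the ball function 𝑓(𝑠) = 𝑑(𝑠, 𝑝) and the sheet function 𝑓(𝑠) = (𝑑(𝑠, 𝑝1) − 𝑑(𝑠, 𝑝0))/2. A small sketch over random Euclidean points (the setup is our illustrative assumption):

```python
import random
from math import dist

random.seed(42)
pts = [tuple(random.random() for _ in range(5)) for _ in range(100)]
p, p1, p0 = pts[0], pts[1], pts[2]


def f_ball(s):   # ball partition: f(s) = d(s, p)
    return dist(s, p)


def f_sheet(s):  # sheet partition: f(s) = (d(s, p1) - d(s, p0)) / 2
    return (dist(s, p1) - dist(s, p0)) / 2


# Property 2: d(s, s') >= |f(s) - f(s')| for every pair of points
for f in (f_ball, f_sheet):
    assert all(dist(s, s2) >= abs(f(s) - f(s2)) - 1e-12
               for s in pts for s2 in pts)
print("Property 2 holds for both partition functions")
```

The small tolerance only absorbs floating-point rounding; the inequality itself is exact by the triangle-inequality arguments above.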

4.1. Exclusion power
The exclusion power of a partition gives a quantification of the effectiveness of that partition
with respect to a given metric search task. It is clear that this is likely to be quite different
depending on the partition type, the selection of reference objects, and the chosen value 𝜏 .
   The meaning of exclusion power is based on the probability, for arbitrarily selected objects
𝑞 and 𝑠, and a distance 𝑡, of being able to demonstrate that 𝑑(𝑞, 𝑠) > 𝑡. As we will show, for
low dimensional spaces the maximum exclusion power of any partition structure is likely to
coincide with a value of 𝜏 which causes the partition to bisect the finite search space 𝑆, i.e.
|𝒫0 | ≈ |𝒫1 |. However as we will see, in higher dimensional spaces this may be a very bad
choice.
   For a range query 𝒬(𝑞, 𝑡) we define the exclusion power of the partition 𝒫 = {𝒫0 , 𝒫1 } as
the probability of excluding one generic element 𝑠 on the basis only of the data partition to
which it belongs:

              𝑃 (𝑠 ∈ 𝒫0 ) · 𝑃 (𝒬(𝑞, 𝑡) ∩ 𝒫0 = ∅) + 𝑃 (𝑠 ∈ 𝒫1 ) · 𝑃 (𝒬(𝑞, 𝑡) ∩ 𝒫1 = ∅)                            (4)
                     = 𝑃 (𝑠 ∈ 𝒫0 ) · 𝑃 (𝒬(𝑞, 𝑡) ⊂ 𝒫1 ) + 𝑃 (𝑠 ∈ 𝒫1 ) · 𝑃 (𝒬(𝑞, 𝑡) ⊂ 𝒫0 )                         (5)

   The exact calculation of the probabilities 𝑃 (𝒬(𝑞, 𝑡) ⊂ 𝒫𝑖 ), for 𝑖 = 0 or 1, is not straightfor-
ward for a general binary partition in a metric space, however, if we know that the partition
can be characterised by a function 𝑓 and a balancing factor 𝜏 , we can estimate a lower-bound
of these probabilities³ and thus obtain an approximation of the exclusion power as

                 𝑃 (𝑓 (𝑠) > 𝜏 ) · 𝑃 (𝑓 (𝑞) ≤ 𝜏 − 𝑡) + 𝑃 (𝑓 (𝑠) ≤ 𝜏 ) · 𝑃 (𝑓 (𝑞) > 𝜏 + 𝑡)                         (6)

  Let CDF(𝑥) be the cumulative distribution function of 𝑓 (𝑠) for 𝑠 ∈ 𝑆; then the exclusion
power can be expressed as

                 𝑔(𝜏, 𝑡) = (1 − CDF(𝜏 )) · CDF(𝜏 − 𝑡) + CDF(𝜏 ) · (1 − CDF(𝜏 + 𝑡))                               (7)

where 𝜏 is the partition balancing factor and 𝑡 is the query distance threshold.
   Estimates of exclusion power can be made with respect to a representative sample of the data
set, and indeed of the query set if that is different⁴. A large enough sample will give a good
estimate of the probability of an individual datum falling within each of the subsets.
     ³ In general, 𝑓 (𝑞) ≤ 𝜏 − 𝑡 ⇒ 𝒬(𝑞, 𝑡) ⊂ 𝒫1 , therefore 𝑃 (𝑓 (𝑞) ≤ 𝜏 − 𝑡) ≤ 𝑃 (𝒬(𝑞, 𝑡) ⊂ 𝒫1 ). Similarly, 𝑓 (𝑞) >
𝜏 + 𝑡 ⇒ 𝒬(𝑞, 𝑡) ⊂ 𝒫0 , and thus 𝑃 (𝑓 (𝑞) > 𝜏 + 𝑡) ≤ 𝑃 (𝒬(𝑞, 𝑡) ⊂ 𝒫0 ). The equality holds in some cases, e.g., for
ball partitioning, where it can be proved that 𝑓 (𝑞) ≤ 𝜏 − 𝑡 ⇔ 𝒬(𝑞, 𝑡) ⊂ 𝒫1 and 𝑓 (𝑞) > 𝜏 + 𝑡 ⇔ 𝒬(𝑞, 𝑡) ⊂ 𝒫0 .
     ⁴ We assume from here on that the distribution of data and queries is the same.
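Equation (7) can be estimated directly from a witness sample via the empirical CDF of 𝑓. A sketch (the setup, a ball partition over uniformly generated 5-dimensional points, and all names are our illustrative assumptions):

```python
import random
from bisect import bisect_right
from math import dist


def power_curve(sample_f, t, taus):
    """Estimate g(tau, t) = (1 - CDF(tau)) * CDF(tau - t)
                          + CDF(tau) * (1 - CDF(tau + t))   (Eq. 7)
    using the empirical CDF of sample_f = [f(s) for s in a witness sample]."""
    xs = sorted(sample_f)
    n = len(xs)
    cdf = lambda x: bisect_right(xs, x) / n
    return [(1 - cdf(tau)) * cdf(tau - t) + cdf(tau) * (1 - cdf(tau + t))
            for tau in taus]


random.seed(0)
pts = [tuple(random.random() for _ in range(5)) for _ in range(5000)]
p = pts[0]                           # a randomly selected pivot
fs = [dist(s, p) for s in pts[1:]]   # f(s) = d(s, p) for a ball partition
taus = [i / 100 for i in range(20, 160)]
powers = power_curve(fs, t=0.19, taus=taus)
best_tau = taus[max(range(len(powers)), key=powers.__getitem__)]
print(f"tau with maximum estimated power on this sample: {best_tau:.2f}")
```

Plotting `powers` against `taus` yields exactly the power graphs constructed in the next section.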
5. Power graphs
To understand the effect of different values of 𝜏 an exclusion power graph may be constructed
which is plotted across the range of 𝜏 for a fixed value of 𝑡. This allows the optimum value of 𝜏
to be deduced for a range query with threshold 𝑡.
   We illustrate the construction of power graphs for 5-dimensional data in Section 5.1 and for
20-dimensional data in Section 5.2.

5.1. Example: single-pivot ball partitioning over 5 dimensional data

Figure 3: Typical inter-point and query threshold distances from 5-dimensional Euclidean data:
(a) distance from ball centre; (b) distance of a query from its 5NN.


   Figure 3 shows histograms of two sets of sampled distances from a randomly generated
five-dimensional Euclidean space. The left-hand side of the figure shows the distribution of
5,000 witness points in terms of distance from a randomly selected pivot point, 𝑝. The right-hand
side shows the distribution of the 5NN (fifth nearest-neighbour) distances for a set of 1,000
queries; this distance representing a query threshold necessary to return a small proportion of
the data, and thus useful values for 𝑡.
   Based on this information, we will pick example values of 𝜏̂ = 0.89 and 𝑡̂ = 0.19, being the
median values in the distributions of 𝜏 and 𝑡 values, respectively. We can now consider the
exclusion power of these values applied to the partition thus formed, centered around 𝑝.
   The left-hand side of Figure 4 shows the cumulative distribution function (CDF) with
the X-axis values of 𝜏̂, 𝜏̂ − 𝑡̂, and 𝜏̂ + 𝑡̂ highlighted. Given that 𝜏̂ has been selected as the median
distance from 𝑝, the value of the CDF at 𝜏̂ is 0.5. Therefore if an exclusion occurs, half of the
data set will be excluded, and its exclusion power is: CDF(𝜏̂ − 𝑡̂) · 0.5 + (1 − CDF(𝜏̂ + 𝑡̂)) · 0.5.
The more general form of this equation (i.e., Equation 7) hints at how the exclusion power can
be displayed for a single partition, as shown on the right-hand side of Figure 4. Here, for a
fixed value of 𝑡 (again 0.19), the power of the partition is calculated over the whole range of
distances that have been measured between the witness points and the point 𝑝. As can be seen,
the maximum power is achieved when the partition is split at the median distance, i.e. 𝜏̂ = 0.89
in this example. With this partition, and this value of 𝜏 , the probability of excluding an arbitrary
datum for an arbitrary query is around 0.21.
Figure 4: CDF and power graph for 5-dimensional data: (a) CDF of 𝑓 (𝑠) = 𝑑(𝑠, 𝑝) varying 𝑠;
(b) exclusion power graph over 𝜏 (fixing 𝑡 = 𝑡̂). The exclusion power for 𝜏 = 𝜏̂ is 0.23.


5.2. Example: single-pivot ball partitioning over 20 dimensional data

Figure 5: Typical inter-point and query threshold distances from 20-dimensional data:
(a) distance from ball centre; (b) distance of a query from its 5NN.


   The left-hand side of Figure 5 shows a histogram of 5,000 distances measured with respect to
a randomly selected element 𝑝 of a 20-dimensional uniformly generated Euclidean space. The
right-hand side shows a histogram (for 1,000 queries selected from the same set) of the distances
to the 5th-nearest neighbour with respect to this same set of 5,000 values. Figure 6 shows the
same information in the form of cumulative distribution functions. The left-hand side of
Figure 6 shows the CDF superimposed with a line corresponding to the median value
of 𝜏. The value of 𝑡̂ for this data is 1.07.
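The construction of these empirical distributions can be sketched as follows. This is a hypothetical illustration, not the authors' code: the seed, the sample sizes, and the use of unit-hypercube data are assumptions, so the printed medians will only approximate the values reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 5000, 20
data = rng.random((n, dim))              # uniformly generated 20D Euclidean space
pivot = data[0]                          # a randomly selected element p

# Panel (a): distances of all points from the pivot
pivot_dists = np.linalg.norm(data - pivot, axis=1)

# Panel (b): distance of each query to its 5th-nearest neighbour;
# queries are drawn from the same set, so the sorted distance at
# index 0 is the query itself and index 5 is the 5th neighbour
queries = data[rng.choice(n, size=1000, replace=False)]
nn5 = np.array([np.sort(np.linalg.norm(data - q, axis=1))[5] for q in queries])

print(f"median pivot distance: {np.median(pivot_dists):.2f}")
print(f"median 5NN distance (an estimate of t-hat): {np.median(nn5):.2f}")
```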
   Finally, although we have no space here to provide in-depth analysis, Figure 7 shows exclusion
power graphs using sheet partitions over both 5D and 20D spaces. It can be seen that the graph
patterns are quite similar.
 (a) CDF of 𝑓(𝑠) = 𝑑(𝑠, 𝑝) varying 𝑠    (b) Exclusion power graph over 𝜏 (fixing 𝑡 = 𝑡̂). The exclusion power for 𝜏 = 𝜏̂ is 0.
Figure 6: CDF and Power Graph for 20-dimensional data




                         (a) A sheet partition in 5D space     (b) A sheet partition of 20 dimensions
Figure 7: Exclusion Power graphs for sheet partitions in 5 and 20 dimensional Euclidean space


5.3. Use of Exclusion Power analysis
Figures 4 and 6 demonstrate the exclusion power in 5 and 20 dimensions respectively. In the
5-dimensional case we can observe that setting a 𝜏̂ value of 0.89 (close to the mean of the
distribution of measured distances) will work extremely well: with such a value the exclusion
power is maximised. By contrast, choosing the mean in the case of 20D data will work very badly;
as can be seen in Figure 6, such a value is unlikely to result in any successful exclusions. These
diagrams explain the behaviour, observed by many researchers in metric search, that
unbalanced indexing structures often work better for higher-dimensional data.
   The establishment of a unified partition interface using 𝑓 and 𝜏 allows similar graphs to be
produced for sheet partitions (e.g., Figure 7). The mechanistic value of these graphs is that the
value or values of 𝜏 corresponding to the maximum power can be automatically extracted on
a per-partition basis, and used in an implementation of BitPart. In the case of low-dimensional
data a single offset can be used for each ball, whereas for high(er)-dimensional data two offsets
can be used. Furthermore, as described earlier, a particular implementation can use as few or as
many exclusion zones as necessary, as may be predicted by the power value at the optimum
value.
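The per-partition extraction of optimal offsets can be sketched as below. This is a hypothetical helper, not part of the published BitPart code, and the sampled power curve is synthetic; it simply picks the highest local maxima of a power-over-𝜏 curve, returning one offset for a unimodal (low-dimensional) curve and two for a twin-peaked (higher-dimensional) one.

```python
import numpy as np

def best_offsets(taus, power, n_offsets=1):
    """Return the n_offsets values of tau at the highest local maxima
    of a sampled exclusion-power curve (hypothetical helper)."""
    power = np.asarray(power, dtype=float)
    padded = np.concatenate(([-np.inf], power, [-np.inf]))
    # indices that are at least as high as both neighbours
    peaks = np.where((padded[1:-1] >= padded[:-2]) &
                     (padded[1:-1] >= padded[2:]))[0]
    best = peaks[np.argsort(power[peaks])[::-1][:n_offsets]]
    return sorted(float(taus[i]) for i in best)

taus = np.linspace(0.5, 1.5, 101)
# a synthetic twin-peaked curve, as observed for ball partitions in 20D
power = np.exp(-(taus - 0.8) ** 2 / 0.005) + np.exp(-(taus - 1.2) ** 2 / 0.005)
print(best_offsets(taus, power, n_offsets=2))   # two offsets, one per peak
```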
  In Section 6 these offsets are applied to a BitPart implementation to demonstrate their effect.


6. Experimental Validation using BitPart




                   (a) 5 dimensional space                                 (b) 20 dimensional space
Figure 8: Empirical inverse residual-distance counts per query for 5D and 20D data for different values of 𝜏 . These
charts plot both ball and sheet partitions, which have different ranges of 𝜏 ; we have therefore
normalised these within the offset axis. The Y axis shows the inverse of the number of residual distance
computations, i.e. those not excluded by use of the BitPart mechanism (higher is better).


   In order to validate the above results we ran experiments against a BitPart implementation5
using uniformly generated data in 5 and 20 dimensions with Euclidean distance. Each data set
contained one million points, and 1,000 queries were executed. In all experiments fixed threshold
distances 𝑡 were used, set to values corresponding to one-millionth of the data volume.
In order to measure exclusion power, the 1,000 queries were run 25 times for differing values of
𝜏 for both sheet and ball partitions. The 5D data uses 20 pivot points and the 20D data 100.
   In [2] we described how a selection of 𝑚 reference points allows for 𝑚(𝑚 − 1)/2 sheet and 𝑚 ball
partitions to be produced. After the above analysis, we used this number for the 5D space, but
in the 20D space we generated twice that number, corresponding to the twin peaks in the power
graphs. Note that this does not require any extra distance calculations at query time, as the
number of reference points is not increased. Furthermore, there is not necessarily any increase
in memory usage, as partitions are typically stored in secondary memory and only fetched
when the query ball does not intersect with the partition, which can be determined by evaluating
only the 𝑓 function. For each experiment a suitable range was calculated for 𝜏 for both
balls and sheets, ensuring that for all partitions both 𝒫0 and 𝒫1 are non-empty, and the whole
calculation was executed with different values of 𝜏 across this range.
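This accounting can be made concrete with a small sketch; the helper name is ours, not from [2].

```python
from math import comb

def partition_counts(m, offsets_per_partition=1):
    """m reference points yield m ball partitions plus comb(m, 2) sheet
    partitions; each may be reused with several tau offsets
    (hypothetical accounting helper)."""
    return (m + comb(m, 2)) * offsets_per_partition

print(partition_counts(20))        # 5D setup, one offset:  20 + 190 = 210
print(partition_counts(100, 2))    # 20D setup, two offsets per twin-peaked
                                   # power graph: 2 * (100 + 4950) = 10100
```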
   Results are shown in Figure 8. In order to visualize these results, the number of residual (i.e.
Phase 3) distance calculations per query was measured; this number gives a good proxy for the
value of the exclusion mechanism. The plots show the inverse of these counts. Peaks similar to
those shown in the power graphs in Figures 4 and 6 can be seen in these experimental results.
   5
       The experimental code may be found at https://bitbucket.org/richardconnor/metricbitblaster.git
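The ball-partition exclusion underlying these residual counts can be sketched as follows. This is a simplified illustration only: sheet partitions and the bit-packing of BitPart are omitted, and the data size, number of pivots, and the single 𝑡 and 𝜏 values are arbitrary choices of ours, not the paper's parameters. A partition excludes when the query ball lies wholly inside or wholly outside the ball of radius 𝜏, which by the triangle inequality never discards a true result.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim, t = 20000, 20, 0.3         # threshold t is illustrative, not the paper's
data = rng.random((n, dim))
pivots = data[rng.choice(n, size=10, replace=False)]
tau = 1.8                           # one ball offset per pivot, also illustrative

# One bit per (object, partition): is the object inside the ball of radius tau?
bits = np.linalg.norm(data[:, None, :] - pivots[None, :, :], axis=2) < tau

def residual_candidates(q):
    """Objects surviving all ball exclusions; only these need a
    Phase 3 distance computation."""
    dq = np.linalg.norm(pivots - q, axis=1)
    alive = np.ones(n, dtype=bool)
    for j, d in enumerate(dq):
        if d < tau - t:             # query ball wholly inside: exclude outside objects
            alive &= bits[:, j]
        elif d > tau + t:           # query ball wholly outside: exclude inside objects
            alive &= ~bits[:, j]
    return alive

alive = residual_candidates(data[0])
print(f"residual distance computations: {alive.sum()} of {n}")
```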
7. Conclusions
In this paper, we have reiterated the design of the BitPart indexing and query mechanisms.
The novel contribution of this paper is an exposition of how dimensionality affects the efficacy
of this and other metric search mechanisms. We have demonstrated how the distribution of
distances changes with the dimensionality of the data and how in turn this affects the ability
to perform effective exclusions and thus efficient queries. In particular we have explored the
notion of balance and how that affects exclusion. To do this we have introduced the notion of
exclusion power, and shown its application to some Euclidean data sets in different dimensions.
   This analysis gives fundamental insights into what data can be queried effectively and
what cannot. Furthermore it demonstrates how index and search algorithms can be tuned to
compensate for data sets of arbitrary dimensions.
   We have demonstrated that the phenomena explored in mathematical terms are manifested
in synthetically generated data sets and queries over them using an implementation of BitPart.
We believe that the results in this paper may be applied to other metric search and indexing
mechanisms. Our future plans include investigating these phenomena in real-life data sets and
exploring the outstanding issue of partition orthogonality.


References
 [1] P. Zezula, G. Amato, V. Dohnal, M. Batko, Similarity search: the metric space approach,
     volume 32, Springer Science & Business Media, 2006.
 [2] A. Dearle, R. Connor, BitPart: Exact metric search in high(er) dimensions, Information
     Systems 95 (2021) 101493.
 [3] V. Pestov, Indexability, concentration, and VC theory, Journal of Discrete Algorithms 13
     (2012) 2–18.
 [4] R. Weber, H.-J. Schek, S. Blott, A quantitative analysis and performance study for similarity-
     search methods in high-dimensional spaces, in: Proc. of 24th VLDB, volume 98, Morgan
     Kaufmann, 1998, pp. 194–205.
 [5] G. Amato, C. Gennaro, P. Savino, MI-File: Using inverted files for scalable approximate
     similarity search, Multimedia Tools and Applications (2014) 1333–1362.
 [6] D. Novak, M. Batko, P. Zezula, Metric index: An efficient and scalable solution for precise
     and approximate similarity search, Information Systems 36 (2011) 721–733.
 [7] R. Connor, F. A. Cardillo, L. Vadicamo, F. Rabitti, Hilbert exclusion: improved metric
     search through finite isometric embeddings, ACM Transactions on Information Systems
     (TOIS) 35 (2016) 1–27.
 [8] R. Connor, L. Vadicamo, F. A. Cardillo, F. Rabitti, Supermetric search, Information Systems
     80 (2019) 108–123.
 [9] R. Connor, Reference point hyperplane trees, in: International Conference on Similarity
     Search and Applications, Springer, 2016, pp. 65–78.
[10] J. Lokoč, T. Skopal, On applications of parameterized hyperplane partitioning, in: Proceed-
     ings of the Third International Conference on SImilarity Search and APplications, 2010,
     pp. 131–132.