Introduction

A few weeks ago I unsubscribed from the BFO discuss mailing list. I’ve been reading and posting there since March 2007; in that time I’ve managed to send 492 mail messages, which surprises even me. As a mailing list, BFO discuss is a slightly bruising experience: it’s a bit like a bar fight; one person swings a punch and everyone just piles in. I joined the mailing list because BFO has become something of a force within the bio-ontology community and I wanted to help make sure it was fit for purpose; however, I have to admit that I have been as guilty of reaching for the nearest available pool cue as the next ontologist. Not the best side of me, but there you have it.

During my time on the mailing list, I have learnt a lot about BFO and the realist philosophy that, in theory, underpins it. Actually, BFO is not at all bad; for me, though, realism is largely without merit. One of the main difficulties with realism is that it carries with it the idea that, by thinking very hard, you can come up with a “representation of reality”. I think that this is mistaken. As scientists, we should be wary of thinking too much; our role, whenever possible, is to think just enough to get us to the start of the next experiment. This doesn’t seem to happen with BFO; in the time that I have been on the mailing list, BFO itself has changed very little; the constant feedback and iteration to accommodate new knowledge and experience is largely not happening. I have qualms with many parts of BFO (for example, I have discussed the issues with the Realizable Entity hierarchy). However, for me, the worst outcomes of the philosophical approach have come from not considering the advanced models that physics has produced to explain the experimental data that we see. I give four examples.


Length in Space

BFO makes a very high-level split between Independent and Dependent Continuants. A continuant is something that persists over time, but which exists in full for this entire time: my computer or me, for instance, as opposed to a process, not all of which exists at any point in time. The distinction between an independent and dependent continuant depends on whether the entity exists on its own; for my height, a dependent continuant, to exist, I also have to exist. Once I cease to exist, so does my height. This seems okay, but in tying physical dimensions to an independent continuant, BFO has made a fundamental error: how do we express the length of a Spatial Region? Length is a dependent continuant and, so, there must be an independent continuant in which it inheres. Unfortunately, Spatial Region is not an independent continuant itself.

There are solutions, of course; we can think of another relation, other than inheres, to link Spatial Region and Length. But we still need an Independent Continuant to exist that this length inheres in. Another possibility is to describe the length of a spatial region as the length of an Independent Continuant that could exist in it. But it is easy to think of Spatial Regions in which no Independent Continuant can exist (for example, the Spatial Region 1m longer than the longest object in the universe). BFO would be modelling the world backwards; physics uses a coordinate system and places objects within it; this approach would use objects to define the coordinate system.
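Physics sidesteps this problem entirely: a coordinate system assigns lengths to regions directly, with no occupying object required. A minimal sketch of the idea (the class name and numbers are mine, not BFO's):

```python
# A one-dimensional spatial region modelled as a coordinate interval.
# Its length is a property of the coordinates themselves, not of any
# independent continuant that happens to occupy it.
from dataclasses import dataclass


@dataclass
class SpatialRegion1D:
    start: float  # metres
    end: float    # metres

    def length(self) -> float:
        return self.end - self.start


# A region longer than any physical object still has a perfectly
# well-defined length, even though nothing could inhere in it.
empty_region = SpatialRegion1D(0.0, 9.46e15)  # roughly one light year
print(empty_region.length())
```

The point of the sketch is that the length never mentions an occupant: the coordinate system comes first, and objects are placed within it.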

Currently, this problem seems to have been acknowledged by some of the authors of BFO; however, no solution has been offered. If BFO had started from the mathematical models of physics, it seems likely to me that we would not be in this position.


Change in Process

BFO suggests that Occurrents (such as a process) can have properties, in a similar way that independent continuants can have qualities: I have a length; a process may have a duration. However, BFO suggests that the properties of an Occurrent cannot change; rather, there must be a new Occurrent.

Again, this makes little sense, and ignores very simple physical examples. Consider, for example, a car first travelling at 10 m s⁻¹, then at 20 m s⁻¹, and consider the process of motion. BFO would have us model this as three processes: the car moving at 10 m s⁻¹, the car moving at 20 m s⁻¹, and a single motion process of which the other two are parts.

For a simple example, this style of modelling may work. However, consider the earth travelling around the sun. The problem is that the motion is continually changing: the earth’s velocity changes infinitesimally toward the sun, so it is always accelerating. Worse, the acceleration also changes infinitesimally, as the earth’s location relative to the sun changes. So, to model this in BFO, we need an infinite number of processes (for both the motion and the acceleration). We could argue that while the velocity and acceleration change constantly, the angular velocity and speed of the earth are constant, so why not model the process in these terms? Unfortunately, even this is not true; the earth moves in an ellipse, not a circle, even if it is very close to a circle. So the angular velocity and speed also change continually.

The physics of this is, as I have said, straightforward: the earth’s position, velocity and acceleration can each be expressed as (nearly) two sine waves, one along each axis.
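To make the point concrete, here is a rough numerical sketch, using a circular-orbit approximation with illustrative round numbers rather than precise astronomical values. The velocity and acceleration vectors differ at every instant, so any finite list of processes with fixed properties must miss something:

```python
import math

# Circular-orbit approximation of the earth's motion. Parameters are
# illustrative round numbers, not precise astronomical values.
R = 1.5e11           # orbital radius in metres (roughly 1 AU)
T = 3.15e7           # orbital period in seconds (roughly 1 year)
omega = 2 * math.pi / T  # angular velocity in radians per second


def position(t):
    """Position at time t: two sinusoids, one along each axis."""
    return (R * math.cos(omega * t), R * math.sin(omega * t))


def velocity(t):
    """Velocity: the time derivative of position, also sinusoidal."""
    return (-R * omega * math.sin(omega * t), R * omega * math.cos(omega * t))


def acceleration(t):
    """Acceleration: always directed back toward the sun at the origin."""
    return (-R * omega ** 2 * math.cos(omega * t),
            -R * omega ** 2 * math.sin(omega * t))


# The velocity vector at two instants only one second apart already
# differs, even though the speed (its magnitude) stays the same:
print(velocity(0.0))
print(velocity(1.0))
```

In the circular approximation the speed is constant while the velocity vector rotates; for the real, elliptical orbit even the speed changes, which is exactly the problem described above.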


Rate of Change

In order to get to the subtleties in a clearer fashion, we remind you of a joke which you surely must have heard. At the point where a lady in a car is caught by a cop, the cop comes up to her and says, “Lady, you were going 60 miles an hour!” She says, “That’s impossible, sir, I was travelling only seven minutes. It is ridiculous – how can I go 60 miles an hour when I wasn’t going an hour?”

— Richard Feynman

In a short, recent thread, there was discussion of those qualities that need a period of time to have meaning; the examples given included velocity and acceleration. But does this make any sense? It is certainly the case, as the Feynman quote shows, that the definition of velocity is not obvious. But it is also a long-solved problem. Feynman’s story shows that it can be very hard to describe exactly what you mean when talking about velocity; it is for this reason that physics uses mathematical notation, where we can be precise. Velocity is \(dr/dt\); acceleration is \(d^{2}r/dt^{2}\). As I have said, these examples do not stand alone; the same applies to many other qualities, including those where change is not over time.
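Feynman's resolution is, of course, the derivative: the "60 miles an hour" is the limit of average velocity over ever-shorter intervals, and it is perfectly well defined at a single instant. A quick numerical illustration, with a made-up trajectory chosen only for simplicity:

```python
# Instantaneous velocity as the limit of average velocity over
# shrinking intervals, for the trajectory r(t) = 5 t^2, whose
# derivative is dr/dt = 10 t.
def r(t):
    return 5.0 * t ** 2


t0 = 3.0
for dt in (1.0, 0.1, 0.001, 1e-6):
    avg = (r(t0 + dt) - r(t0)) / dt
    print(f"dt = {dt:g}: average velocity = {avg:.6f}")

# The averages converge to the instantaneous velocity 10 * 3 = 30:
# no "period of time" is needed once we take the limit.
```

No quality here "needs a period of time to have meaning"; the derivative assigns a value at each instant.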

In short, it makes little sense to create distinctions in our physical model of the world that physics does not make. We are creating work for ourselves and confusion for everyone else.


Absolute Space

BFO distinguishes between Sites and SpatialRegions; the idea is to distinguish between bits of space in general, and holes — the lumen of the gut, for instance. This seems reasonable at first sight. However, this is being done by suggesting that a Site is relative to an IndependentContinuant while SpatialRegions are absolute.

In short, over 100 years after Michelson-Morley, BFO has reinvented absolute space. The justification for this is that, according to one of the authors, without absolute space, problems arise. The problems haven’t been described in detail but apparently involve things moving through space or changing shape.

BFO is put forward as a “realist” ontology — that is, it models the key entities as they exist in reality. And the reality is this: there is no evidence that absolute space exists and, indeed, very strong evidence that it does not. It is also hard to see how dropping it could cause problems; Einstein removed absolute space from the model that physics uses a century ago. Admittedly, this produces some really weird and counter-intuitive results, but only when two objects are moving rapidly with respect to each other; relativity does not cause any problems that are not necessary to describe the world. In practice, for “everyday” physics, the upshot is that you just define (or assume) a frame of reference; there is normally an obvious one, but any frame will do, and the results will come out the same.
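The "any frame will do" point is easy to demonstrate at everyday speeds with a Galilean transformation; the numbers below are arbitrary:

```python
# Two cars moving along a line, with positions measured in two
# different inertial frames. Frame B moves at velocity u relative
# to frame A; at everyday speeds the Galilean transformation applies.
u = 12.0  # m/s, relative velocity of frame B w.r.t. frame A (arbitrary)


def to_frame_b(x, t):
    """Galilean transformation: x' = x - u t (time is shared)."""
    return x - u * t


# Positions of the two cars at t = 5 s, as measured in frame A:
t = 5.0
car1_a, car2_a = 100.0, 160.0

# The separation between the cars comes out the same in both frames,
# because the transformation shifts both positions equally:
sep_a = car2_a - car1_a
sep_b = to_frame_b(car2_a, t) - to_frame_b(car1_a, t)
print(sep_a, sep_b)
```

Relative quantities are frame-independent, which is why physics needs no privileged, absolute space to get consistent answers.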

My post on this produced some interesting replies. Bjoern Peters straightforwardly agreed. Alan Ruttenberg suggested that I was arguing space doesn’t exist; while Barry Smith argued that having this (false!) distinction in BFO is necessary for practical reasons.

At which point, I unsubscribed.


Conclusions

I am not arguing here that BFO is totally broken or has no purpose. To some extent, I am yet to be convinced that having any upper ontology helps with ontology building: arguing against, they are hard to understand and often result in a top-down design which ends in philosophical arguments and analysis paralysis; arguing for, they provide some basic structure or a design pattern, which can ease the task of starting to build an ontology, or to understand someone else’s. I am unsure yet whether they help with (computational) interoperability; by analogy to software, design patterns are good for the developer but do not provide any more guarantees. In general, though, I work on the basis that the use of a common framework seems a sensible idea; it is something we should try until we have enough data to make a more coherent decision. BFO provides one such basic framework; and, in general, it’s okay so long as we do not take it too seriously. We should be willing to ignore it when it fails.

However, realism has much less going for it. It is based on the conceit that we should look at reality; within a scientific context, this means experimental data. The statement that science should use experimental data, though, is a truism; it cannot, therefore, itself define a methodology.

In practice, however, BFO has been built leaning on 2000 years of philosophy; and here lies the mistake. We should acknowledge our limitations as ontologists; we have nothing at all to add to a physical model of the universe, as the physicists have already built one. All we need to do is represent their model; we should not be looking at the experimental data, because someone else has already done that for us. The problems described here are all avoided by the simple mathematical model that physics uses — four dimensions, or real number lines, at 90 degrees to each other, and the use of calculus to describe change.
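For concreteness, the model in question is just the standard one from elementary mechanics, in standard notation (nothing BFO-specific):

```latex
% Space-time as four real number lines at right angles; an object's
% history is a curve through it, and change is differentiation.
\[
  (x, y, z, t) \in \mathbb{R}^{4}, \qquad
  \mathbf{r} : t \mapsto (x(t), y(t), z(t)),
\]
\[
  \mathbf{v} = \frac{d\mathbf{r}}{dt}, \qquad
  \mathbf{a} = \frac{d^{2}\mathbf{r}}{dt^{2}}.
\]
```

Lengths fall out of the coordinates, change falls out of the derivatives, and no absolute frame is required.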

In BFO, we see an attempt to consider the key entities as they exist in reality; and the bottom line here is that, at least for these few classes, BFO has done a bad job of it. It has misunderstood lengths and space, developed a process model that is unmanageable, and made distinctions that are known to be wrong. Biology is built on top of the other sciences, and it will not benefit the cause of bio-ontologies if we ignore them. Worse, biologists attempting to use BFO will find it hard to apply models which are demonstrably wrong; what criteria can we apply to distinguish SpatialRegions and Sites, when physics tells us that these criteria do not and cannot exist? Finally, as ontologists, we should accept our limitations and the limitations of the technology; we should not attempt to re-represent knowledge which has already been modelled in more appropriate ways.

We should be experimenting and testing more than we are thinking; we should be embracing change when we are wrong. We should be leaning on 200 years of physics and biology, not 2000 years of philosophy.

4 Comments

  1. Allyson Lister says:

    Great post, Phil. It’s a shame you won’t be on the BFO mailing list anymore – you were always an important voice there IMHO. However, I can understand your reasons. :)

  2. Matthias Samwald says:

    Interesting post, and I can relate to it to some degree. I got interested in foundational ontologies quite soon after I got interested in RDF/OWL, because they seemed to be a basic ingredient for making ontologies actually interoperable by limiting the hundreds of possible choices of describing the world to just a few choices.

    I started out with DOLCE, because it seemed well-done and available in OWL at that time. Then came BFO, which got more buy-in from the biomedical ontology community, so I played around with BFO a lot. I also became an avid reader of BFO discuss, which I found to be intellectually stimulating and inspiring at that time. The discussions about formal ontology really helped me to get a clearer understanding of phenomena in the world, how they can be described, communicated and represented in computer systems. I also think that the realist approach is a practical one. The people around BFO unremittingly replied to (often redundant) discussion points with great detail and should be thanked for their great helpfulness in that regard.

    However, I always had the feeling that, when using BFO faithfully, one step forward was always followed by one step backwards. Using a realist ontology, I expected that different BFO-based ontologies from different sources could just be merged with an elegant ‘click’, and everything would fit. But this ideal turned out to be wrong, because each ontology could model the same thing at different granularities, and that would make alignment much more difficult than expected. Also, the difficulty of representing change-in-time and the uncertainty of how to represent quantitative data turned out to be major sources of frustration.

    Like you, I had the impression that the discussions were engaging, but not leading to any substantial change. All this finally made me lose interest in the mailing list and foundational ontologies in general.

    – Matthias

  3. Phil Lord says:

    I’d tend to agree with you, that the use of a more formal approach can be
    useful, and help to reduce the morass of decisions that have to be made;
    having clear English (or other language!) descriptions, supporting and
    supported by logical definitions are both important. Of course, this is not
    specific to BFO nor is a realist approach required to take advantage of it.

    Granularity is, indeed, also a worry. Upper ontologies, I think, work best
    when considered as design patterns; they provide a framework in to which the
    concepts required for a given ontology can be fitted. But it’s not the case
    that different ontologies will represent the same entities in the same part of
    this design pattern. Granularity is given by BFO as one reason or explanation
    for this; when crossing granularity boundaries, you can’t be sure the same
    entity will fit in the same place. Unfortunately, this multi-level,
    multi-granularity approach is where biology and biological ontologies need to
    go, if we are to support systems biology for instance.

    Ultimately, though, I think I disagree with you that realism as defined by the
    BFO folks is a practical approach; it tends to lead to long discussions about
    what really is real. As well as being time consuming for little gain, it also
    leads to the confusion that, for instance, we see in the information artifact
    ontology, where numerals are described instead of numbers, and conclusion
    textual entities instead of, well, conclusions.

    I still tend to think that upper ontologies can be useful, just so long as you
    do not expect too much from them, and so long as you constantly weigh up the
    benefit to be gained from following their strictures against the costs of
    doing so. I think that the jury is still out on this one; most of the
    OBOFoundry ontologies were designed without BFO or realism, so we lack the
    experimental evidence to make this judgement. My suspicion is that the costs
    are going to be far higher than many expect.

  4. dosumis says:

    Excellent post. I share some of your worries about the BFO. I still have a (perhaps naive) attachment to the realist approach though.

    http://ontogeek.wordpress.com/2010/04/24/realism-really/

    Would be interested to hear your take on my position outlined here.
