With a previous release of lentic (http://www.russet.org.uk/blog/3035), I got a couple of suggestions. One was a complaint that it was hard to get going, because lentic lacks documentation. This is a bit unfortunate. Lentic actually did have documentation, but it was hidden away as comments in the source code; this was deliberate, since I wanted lentic to enable literate programming, and it uses itself to document itself.

Now, this makes perfect sense, but there is a problem: the end-user form of the documentation needs generating from the source code. This is true in general in Emacs land, although the standard form is texinfo. The usual solution to this problem is to generate the documentation during the packaging process.

This should work, but it does not. Or, rather, it does not work in all cases. For an archive such as Marmalade, it is entirely possible, but for MELPA it fails. The problem is that MELPA works directly from a git checkout, and my documentation, of course, is not source but generated. Now, MELPA has support for generating info from texinfo, but my source is Emacs Lisp, and I need to use Lentic to generate a readable form.

One solution would, of course, be not to use MELPA. Nic Ferrier recently argued on the emacs-devel mailing list that this approach is fundamentally broken — a package is something that the developers should generate and publish, as with Java or Clojure. He makes a good point, and one that I think is correct. Moving to Marmalade would solve this problem; after Nic’s work it is largely stable, so this was definitely an option.

However, I like MELPA (although I have only used it since MELPA-stable came out). It is nice that it uses what I am doing anyway (tagging, pushing, and so forth), and I like the download stats. So I talked with the MELPA folks but, entirely reasonably, they did not want to add specific support to MELPA for this, nor support for, for example, downloading the source from somewhere else.

Other possibilities did suggest themselves. I could just check in my documentation, but my documentation depends on my source, so pretty much every commit would also require a documentation commit. Not nice. I thought about adding the documentation as an independent package; then my documentation commits would be in a repo with nothing else, but this hassles the user, even if it auto-installs, and I would need different packages for MELPA and Marmalade. So, I was left with no good solution.

At the same time as all of this, I was working on the documentation, generating Org files from my lisp documentation, then converting them to info. This sort of worked, but not nicely. A significant problem was that something in the toolchain did not like having multiple sections with the same names, and I have a lot of these (“Commentary”, “Header”, “Code”). I have not yet tracked down whether this is a problem with Org’s texinfo export, texinfo itself or info, nor am I sure it would be worth the effort.

Instead, I decided to try HTML output. This worked quite nicely; I use a single Org driver file (called lenticular.org) which imports all the generated org files. I also found org-info, which I had not seen before — this is Javascript which gives an Info-like experience — next, previous, occur, search and so on. It is imperfect, but pretty usable, and gives quite a nice documentation experience. It is also possible to view the output in EWW, although there is no Info-like paging there.

Dropping info has one other big advantage — my toolchain for generating the documentation is now entirely within Emacs. So, my source code is now enough, because lentic can generate its own documentation on demand after installation. The first time the user requests the documentation, either in EWW or a browser, lentic generates org files from its own source and then the HTML (http://homepages.cs.ncl.ac.uk/phillip.lord/lentic/lenticular.html). The only limitation is that this forces the requirement for a recent Emacs version, since the org-mode exporter framework has only recently been updated; unfortunate, but acceptable for a 0.x release.
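
To give a flavour of this, the HTML half of the pipeline is just Emacs’ own org exporter. Here is a minimal sketch; the helper name is hypothetical, and it assumes lentic has already generated the org file from the installed source:

(require 'ox-html)

;; Hypothetical helper, not lentic's real entry point: export an
;; already-generated org file to HTML and view the result in EWW.
(defun my/lentic-view-doc (org-file)
  "Export ORG-FILE to HTML and open the result in EWW."
  (with-current-buffer (find-file-noselect org-file)
    ;; `org-html-export-to-html' returns the output file name.
    (eww-open-file (org-html-export-to-html))))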

Not all problems disappear. Because my documentation fits into the Emacs Lisp commenting standards, it is not ideally structured for reading as a manual. For instance, the headers of all the lentic Emacs Lisp files are included. I would also like to extend org-info so that I can switch the “Code” sections on and off (embedded literate sources are useful, but not for everyone). This will need some work on lentic, and probably also on the org-mode HTML exporter.

But, then, neither is it that far off. It is good enough, and a considerable advance on the previous situation. Perhaps, too, it demonstrates a future for Emacs documentation in general, with Info replaced by HTML.

The new release (http://www.russet.org.uk/blog/3047) of Lentic is now available on Marmalade and MELPA, complete with documentation available from the menu. Please feel free to try both lentic and its documentation system out now.

Lentic is an Emacs mode which supports multiple views over the same text. This can be used for a form of literate programming. It has specific support for Clojure, which it can combine with LaTeX, Asciidoc or Org-mode.

By default, two lentic buffers share content but are otherwise independent. Therefore, you can have two buffers open, each showing the content in different modes; to switch modes, you simply switch buffers. The content, location of point, and view are shared.

However, lentic also allows a bi-directional transformation between lentic buffers — the buffers can have different but related text. This allows, for example, one buffer to contain an Emacs Lisp file, while the other contains the same text but with “;;” comment characters removed, leaving the content in org-mode, enabling a form of literate Emacs Lisp programming with no change to either org-mode or Emacs Lisp. Ready-made transformations are also available for Clojure, LaTeX and Asciidoc.

Lentic is both configurable and extensible, using the EIEIO object system.

Lentic was previously known as Linked-Buffers.

The 0.7 release adds an integrated documentation system, support for Haskell and LaTeX literate programming and, best of all, a ROT-13 transformation.

Available on MELPA-stable, MELPA and Marmalade https://github.com/phillord/lentic

As on a previous, happier, occasion, I ask for my readers’ indulgence for this personal post. This was my reading, delivered today, 29th January, 2015.

My father once said that he could not understand how people found so much to say at funerals; at his, he said, his life story would be over in just a few lines. Perhaps, here, I will prove him right.

My father used to describe himself as a blessed man — he meant this in a very simple way, which was that he was a happy man, contented with his life and his family. My father did not want for things, for physical possessions, but he did want to look after the things that he had. He loved to make and build things, to devise simple solutions to little problems; his garage has shelves and cupboards that he crafted like a fitted kitchen. He was happiest in his home, in the environment that he had helped to build.

When we were children, my father would take my brother and me on long cycle rides through the countryside around Worcester on many weekends. In my memory, these rides went on for a long time, though I was young and they cannot have been that far; but they were an enormously exciting adventure. They left me with a love of cycling that I retain to this day. In reality, of course, it was just my dad finding a creative way of getting us out of the house while my mum made dinner.

My father loved to talk with people, to find out how they were. After Sean, my son, was born, my dad spent many hours talking with my Italian father-in-law: first, they waved their hands around; then, after I showed them how, they used my computer to translate, passing it backward and forward between them. The lack of a common language was never going to get in the way of my dad having a good conversation.

When I was young, I did not think of him as a good father, just as my dad; I thought every dad was like that. Seeing him with my son made me appreciate him all the more; the sweet stupidity of an eighty-year-old man, crawling on all fours with his grandson, saying, “shall I make some silly noises, then?” He loved Sean beyond measure.

He had a deep consideration for others. When I told him how I had not been able to return this year at Christmas for his unexpected operation, he said that he would have been devastated if I had, and that he hoped that he had not spoiled the holiday with his illness.

My dad’s life story may not be one of great deeds or big adventures, but these things miss everything that was important about him. He was a loving man, a kind man and a generous man. But above all, he was a gentle man. His loss leaves a hole in my, and our, lives that cannot be filled; but when my, and our, time comes, if family and friends can say the same thing about us, then we, too, will have led blessed lives.

William Henry Lord

Bill Lord

Dad

Thank you

Like many developers, I often edit both code and the documentation describing that code. There are many systems for doing this; they can be split into code-centric systems, like Javadoc, which allow commenting of code, and document-centric systems, like Markdown, which allow interspersing code in documentation. Both have the fundamental problem that, by focusing on one task, they offer a poor environment for the other.

In this article, I introduce my solution which I call lenticular text. In essence, this provides two syntactic views over the same piece of text. One view is code-centric, one document-centric but neither has primacy over the other, and the author can edit either as they choose, even viewing both at the same time. I have now implemented a useful version of lenticular text for Emacs, but the idea is not specific to a particular editor and should work in general. Lenticular text provides a new approach to editing text and code simultaneously.


Quick Links

  • The lentic package which implements lenticular text
  • A screen cast showing the lenticular source of lentic.el
  • A screen cast showing the use of lentic.el in a literate workflow

Lentic.el is available on MELPA(-stable) and Marmalade.


The need for Literate Programming

The idea of mixed documentation and code goes back a long way; probably the best known proponent has been Donald Knuth in the form of literate programming. The idea is that a single source document is written containing both the code and the documentation source; these two forms are then tangled to produce the real source code that is then compiled into a runnable application or readable documentation.

The key point from literate programming is the lack of primacy between either the documentation or the code; the two are both important, and can be viewed one beside the other. Tools like Javadoc are sometimes seen as examples of literate programming, but this is not really true. Javadoc is really nothing more than a docstring from Lisp (or Perl or Python), but using comments instead of strings. While Javadoc is a lovely tool (and one of the nicest parts of Java in my opinion) it is a code-centric view. It is possible to write long explanations in Javadoc, but people rarely do.

There are some good examples of projects using literate programming in the wild, including the mathematical software Axiom or even a book on graphics. And there are a few languages which explicitly support it: TeX and LaTeX perhaps unsurprisingly do, and Haskell has two syntaxes: one which explicitly marks comments (like most languages) and the other which explicitly marks code.

These examples though are really the exception that proves the rule. In reality, literate programming has never really taken off. My belief is that one reason for this is the editing environments; it is this that I consider next.


Editing in Modes

When non-programmers hear about a text-editor they normally think of limited tools like Notepad; an application which does essentially nothing other than record what you type. For the programmer, though, a text-editor is a far cry from this. Whether the programmer prefers a small and sleek tool like Vim, the Swiss-Army knife that is Emacs, or a specialized IDE like Eclipse, these tools do a lot for their users.

The key feature of all modern editors is that they are syntax-aware; while they offer some general functions for just changing text, mostly they understand at least some part of the structure of that text. At a minimum, this allows syntax highlighting (I guess there are still a few programmers who claim that colours just confuse them, but I haven’t met one for years). In general, though, it also allows intelligent indentation, linting, error-checking, compiling, interaction with a REPL, debugging and so on.

Of course, this would result in massive complexity if all of these functions were available all of the time. So, to control this, most text editors are modal: only the tools relevant to the current syntax are available at any time. Tools for code are present when editing code, tools for documentation when editing documentation.

Now, this presents a problem for literate programming, because it is not clear which syntax the editor should support when syntaxes are mixed. In general, most editors deal with mixed syntax by ignoring one syntax, or at most treating it conservatively. So, for example, AucTeX (the Emacs mode for LaTeX) supports embedded code snippets: in this case, the code is not syntax highlighted (i.e. the syntax is ignored) and the indentation function preserves any existing indentation (i.e. AucTeX does not indent code correctly, but at least does not break existing indentation). We could expand our editor to support both syntaxes at once, and there are examples of this. For instance, AucTeX allows both the code and the documentation to be edited with its DocTeX support — this is slightly cheating, though, as both the code and the documentation are in the same language. Or, alternatively, Emacs org-mode will syntax highlight code snippets, and can extract them from the document-centric view, edit them, and then drop them back again.

The problem here is that supporting both syntaxes at once is difficult to do, particularly in a text editor, which must also deal with partly written text that may not always follow a grammar.


Literate Workflows

As well as the editor, most of the tools of the programmer are syntax aware also. Something has to read the code, compile the documentation and display the results. For a literate document, there are two approaches to dealing with this. The first is the original approach, which is to use a tool which splits the combined source form into the source for the code and the documentation. This has the most flexibility but it causes problems: if your code fails to compile, the line numbers in any errors come out in the wrong place. And source-level debuggers will generally work on the generated source code, rather than the real source.

The second approach is to use a single source form that all the tools can interpret. This works with Haskell, for instance: code is explicitly marked up, and the Haskell compiler ignores the rest as comments. This is fine for Haskell, of course, but not all languages support this. In my case, I wanted a literate form of Clojure and, while I tried hard, it is just not possible to add one in any sensible way (http://www.russet.org.uk/blog/2979).

A final approach is to embed documentation in comments; Javadoc takes this approach. Or, for a freer form of documentation, there are tools like Marginalia. As a general approach it seems fine at first sight, although I had some specific issues with Marginalia (chiefly, that it is Markdown-specific and a relatively poor environment for writing documentation). But in use, it leads right back to the problem of modal editing: Emacs’ Clojure mode does not support it, so, for example, refilling a marked-up list ignores the markup and the list items get forced into a single paragraph.


Lenticular Text

My new approach is to use lenticular text. This is named after lenticular printing, which produces images that change depending on the angle at which they are viewed. It combines approaches from all three of these literate workflows. We take a single piece of semantics and give it two different syntactic views. Consider this “Hello World” literate code snippet written in LaTeX and Clojure (missing the LaTeX preamble for simplicity).

This prints "hello world".
\begin{code}
(println "hello world")
\end{code}

Now, if this were Haskell, we could stop, as the code environment is one of those that the compiler recognises. But Clojure does not recognise it; we cannot validly load this file into Clojure. And as it is not real Clojure, any syntax-aware editor is unlikely to do sensible things with it. We could use a code-centric view instead.

;; This prints "hello world".
;; \begin{code}
(println "hello world")
;; \end{code}

This is now valid Clojure, but now the LaTeX breaks. Actually, with a little provocation, it can be made valid LaTeX as well (http://www.russet.org.uk/blog/2979), but my toolchain does not fully understand LaTeX (nothing does, other than LaTeX itself, I suspect), so again we are left with the editor not doing sensible things.

These two syntaxes, though, are very similar, and there is a defined relationship between the two of them (comment or uncomment every line that is not inside a code environment). It is clearly possible to transform one into the other with a simple syntactic transformation. We could do this with a batch processing tool, but this would defeat the purpose, because one form or the other would maintain its primacy; as an author, I need to be able to interact with both. So, the conclusion is simple: we need to build the transformation into the editor, and we need it to be bi-directional, so that I can edit either form as I choose.
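
To make the transformation concrete, here is a minimal, non-incremental sketch in Emacs Lisp of the document-to-code direction. This is illustrative only, not lentic’s actual implementation: it comments every line except those between \begin{code} and \end{code}, commenting the markers themselves.

(defun my/doc-to-code (doc)
  "Return the code-centric form of the document-centric string DOC.
Lines outside a code environment are commented out; the markers
\\begin{code} and \\end{code} are themselves commented."
  (let ((in-code nil))
    (mapconcat
     (lambda (line)
       (cond
        ((string-match-p "^\\\\begin{code}" line)
         (setq in-code t)
         (concat ";; " line))
        ((string-match-p "^\\\\end{code}" line)
         (setq in-code nil)
         (concat ";; " line))
        (in-code line)
        (t (concat ";; " line))))
     (split-string doc "\n")
     "\n")))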

This would give a totally different experience. I could edit the code-centric form when editing code blocks, and the document-centric form when editing comments. In the code blocks, the editor should work in a code-centric mode: auto-indentation, code evaluation, completion and so on. In the comment blocks, with a document-centric view, I should be able to reflow paragraphs, add sections, or cross-reference.

But does it work in practice? The proof of the pudding is in the programming.


Lenticular Text For Emacs

I now have a complete implementation of lenticular text for Emacs, available as the lentic package (http://github.com/phillord/lentic). The implementation took some thinking about and was a little tricky, but it is not enormously complex. In total, the core of the library is about 1000 lines long, with another 1000 lines of auxiliary code for user interaction.

My first thought was to use a model-view-controller architecture, which is the classic mechanism for maintaining two or more views over the same data. Unfortunately, the core Emacs data structure for representing text (the buffer) is defined in the C core of Emacs and is not easy to replace. Indirect buffers, an Emacs feature which lentic can mimic, for instance, are implemented directly in the core. Secondly, MVC does not really make sense here: with MVC, the views can only be as complex as the model, and I did not want lentic to be (heavily) syntax-aware, so that it can work with any syntax. So, we do not have much in the way of a data model beyond a set of characters.

Lentic instead uses a change percolation model. Emacs provides hooks that run before and after every change; lentic listens to these in one buffer and percolates the changes to the other. My first implementation of lentic (then called linked-buffers) simply copied the entire contents of a buffer and then applied a transform (adding or removing comments, for instance). This is easy to implement, but inefficient, so it scaled only to 300-400 lines of code before it became laggy. The current implementation is incremental and percolates just the parts which have changed. Adding a new transformation requires implementing two functions — one which “clones” changes and one which “converts” a location in one buffer to the corresponding location in the other.
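
In outline, that first whole-buffer version can be expressed in a few lines. The names here are hypothetical (reusing the my/doc-to-code transform sketched earlier), and lentic’s real implementation is incremental and considerably more careful:

(defvar-local my/lentic-partner nil
  "The buffer into which changes in this buffer percolate.")

(defun my/percolate-change (_start _stop _length)
  "Rebuild the partner buffer from the current buffer's contents."
  (when (buffer-live-p my/lentic-partner)
    (let ((text (buffer-string)))
      (with-current-buffer my/lentic-partner
        ;; Suppress change hooks so percolation does not recurse.
        (let ((inhibit-modification-hooks t))
          (delete-region (point-min) (point-max))
          (insert (my/doc-to-code text)))))))

;; Emacs runs the functions on this hook after every buffer change.
(add-hook 'after-change-functions #'my/percolate-change nil t)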

Re-implementing this for other editors would not be too hard. Many of the core algorithms could be shared — while they are not complex in the abstract, incremental updates involve quite a few (syntax-dependent) edge cases. Similarly, I went through several iterations of the configuration options, which specify how a particular file should be transformed. This started off being declarative but ended up as it should in a lisp: with a function.
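
In lentic, this function-based configuration is attached per file. From memory (so treat the exact names as illustrative rather than authoritative), a file-local variable names the init function which builds the configuration object for that file:

;; At the bottom of a literate Clojure/LaTeX file; `lentic-init' names
;; the function that constructs the configuration (names illustrative).
;;
;; Local Variables:
;; lentic-init: lentic-clojure-latex-init
;; End: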

I now have transformations between lisp (Clojure and Emacs Lisp) and LaTeX, org-mode and Asciidoc. These share much of the same logic, as they all demarcate code blocks and comment blocks with a start-of-line comment. This is essentially the same syntax that Haskell’s literate programming mode provides. The most complex transformation that I have attempted is between org-mode and Emacs Lisp; the complexity here comes because Emacs Lisp comments have a syntax which somewhat overlaps with org-mode, and I wanted the two to work together rather than duplicate each other. For all of these implementations, my old netbook can cope with files 2000-3000 lines long before any lag becomes noticeable. Lentic is fast enough for normal, everyday usage.


Experience

The experience of implementing lentic has been an interesting one. In use, the system works pretty much as I hoped it would. As an author, I can switch freely between document- and code-centric views, and all of the commands work as I would expect. When I have a big screen, I can view both the document- and code-centric views at the same time, side-by-side, “switching” only my eyes. I can now evaluate and unit-test the code in my Tawny-OWL (http://www.russet.org.uk/blog/3030) manual.

By building the transform into the editor, I get immediate feedback on both parts of my literate code. And my lenticular text is saved to file in both forms; as a result, no other tools have to change. Clojure gets a genuine Clojure file. LaTeX gets a genuine LaTeX file. And for the author, the two forms are (nearly) equivalent. We only care which is the “real” file at two points: when starting the two lentic views, and when versioning, as we only want to commit one form. The decision as to which form is “primary” becomes a minor issue. The lentic source code, for example, is stored as an Emacs Lisp file which can transform into an Org-mode file, so that it works with MELPA (as well as for bootstrap reasons). The manual I am working on is versioned in the LaTeX form but transforms into Clojure, so that people cloning the repository will get the document first; I use Emacs in batch mode to generate the Clojure during unit testing.

There are some issues remaining. Implementation-wise, undo does not behave quite as I want, since it is sensitive to switching views. And, from a usability point of view, I find that I sometimes try capabilities from the code-centric view in the document-centric one, or vice versa.

Despite these issues, in practice the experience is pretty much what I hoped: I generally work in one form or the other, but often switch rapidly backward and forward between the two while checking. I find it to be a rich and natural form of interaction. I think others will too.


Conclusions

I doubt that tooling is the only difficulty with literate programming, but the lack of a rich editing environment is certainly a significant one. Lenticular text addresses this problem. So far I have used lenticular text to document the source code of lentic.el itself, and to edit the source of a manual. For myself, I will now turn to what it was intended for in the first place: literate ontologies, allowing me to produce a rich description of a domain in both a human and a computational language.

The notion of lenticular text is more general, though, and the implementation is not that hard. It can be used in any system where a mixed syntax is required. This is a common problem for programmers; lenticular text offers a usable solution.

Less than one month after the release of Tawny-OWL 1.2.0 (http://www.russet.org.uk/blog/3018), I am pleased to announce the 1.3.0 release. This is a much smaller release than 1.2.0, but provides two useful changes.

First, I have added support for axiom annotations, as documented more extensively in a previous post (http://www.russet.org.uk/blog/3028). While I expect these are still a minority requirement, they are used heavily by some people, and so they need supporting.

Second, I have reworked the support for patterns, through the addition of three functions. p allows writing classes with optionality. So, for instance, consider this code:

(p o/owl-class
   o partition-name
   :comment comment
   :super super)

comment and super are optional here; they can be nil. In this case, the p function removes the nil values from the owl-class call. Moreover, if, as in this case, an entire frame has only nil values, the frame is removed altogether. p returns the result of this call as a Clojure record containing the entity itself and the name that was used to create that entity. In a nice piece of serendipity, the support that I have added for annotations (http://www.russet.org.uk/blog/3028) also allows direct use of this record later in the pattern. So, for example, this form uses partition, which is the return value from above.

(map
 #(p o/owl-class o
     %
     :comment comment
     :super partition)
 values)

The reason for this record, though, is that it makes it relatively easy to build patterns in both function and macro form. Macros are never trivial, but all this one does is turn a set of symbols into their string equivalents.

(defmacro defpartition
  "As value-partition but accepts symbols instead of string and
takes the ontology as a frame rather than first argument."
  [partition-name partition-values & options]
  (tawny.pattern/pattern-generator
   'tawny.pattern/value-partition
   (list* (name partition-name)
          `(tawny.util/quote-word ~@partition-values)
          options)))

The practical upshot of this is that I can define a value partition using a macro like this:

(defpartition Hydrophobicity
  [Hydrophobic Hydrophillic]
  :comment "Part of the Hydrophobicity value partition"
  :super PhysicoChemicalProperty
  :domain AminoAcid)

As well as generating the OWL API objects, this also binds the relevant vars, both those visible here (Hydrophobicity) and those generated (hasHydrophobicity).

Taken together, these are two important additions to Tawny-OWL. We can provide better provenance with axiom annotations and, with patterns, we can lift the level of abstraction at which we build our ontologies, which was one of the original motivations for Tawny-OWL in the first place.

Tawny-OWL 1.3.0 is now available on Clojars.

Since the early development of Tawny-OWL, an easy-to-use syntax has been a specific objective (http://www.russet.org.uk/blog/2214), as has hiding some of the complexity of the OWL API. The intention has always been for Tawny-OWL to be an ontology developer tool first and a programmatic library second, and keeping this in mind is part of the reason that, I believe, it fulfils these objectives.

Unfortunately, the other part of the reason is that Tawny-OWL does hide functionality that is available in the OWL API. Or, more strictly, does not uncover it: Tawny-OWL is implemented in Clojure, and what is possible in Java is also possible in Clojure.

One of the key decisions was to hide the support that the OWL API provides for certain forms of annotation, in particular annotations on axioms. Of course, Tawny-OWL allows you to add annotations to entities; this is used to enable labels and comments on any entity. But axiom annotations allow the description of the relationships between entities. So, for example, as well as attaching comments to two classes, it is also possible to attach a comment to the sub/superclass relationship between the two.

The main reason that Tawny-OWL did not support these natively is that it takes an entity-centric view of OWL. So, if we consider this statement:

(defclass A
   :super B
   :label "A")

We are describing the entity A primarily. In fact, this statement translates into two axioms, as we can see in the OWL/XML representation:

<SubClassOf>
    <Class IRI="#A"/>
    <Class IRI="#B"/>
</SubClassOf>
<AnnotationAssertion>
    <AnnotationProperty abbreviatedIRI="rdfs:label"/>
    <IRI>#A</IRI>
    <Literal xml:lang="en"
       datatypeIRI="http://www.w3.org/1999/02/22-rdf-syntax-ns#PlainLiteral">A</Literal>
</AnnotationAssertion>

The defclass statement above returns the entity (actually the var, as it is a def form, but the var contains the entity), rather than the axioms. If I wished to return the axioms, as there are several, I would need a list or, more probably, a data structure from which I could extract the axiom I wanted. This would, however, complicate life considerably. For instance, B would now refer to this data structure, which would need unpicking for its use here. Worse, the OWL API works by mutation, so the axioms in B might reflect only some of the axioms referring to B.

Of course, there is a way around this, which is to dip down into the OWL API and create the axiom directly. As far as I can tell, annotations need to be added at the time the axiom is created (though it is probably possible to do it later as well). This example comes from my recasting of the OWL Primer ontology.

(add-axiom
 (.getOWLSubClassOfAxiom (owl-data-factory)
  Man Person #{(owl-comment "States that every man is a person")}))

This works well, but the syntax is not nice; we need to make a direct call to the OWL API, and we are not even using the add-subclass function. This did not bother me overly, as it was not something that I thought would be needed often.

Unfortunately, it is something that the Gene Ontology people do often, including, for instance, annotating labels with the source of knowledge for those labels. If I am to support them, I need an attractive syntax that fits with the current Tawny-OWL syntax. After a couple of attempts, I decided on this:

(defclass A
  :super (annotate B
           (owl-comment "A is a kind of B")))

The axioms in Tawny-OWL are syntactically implicit, describing the :super relationship between A and B, so I cannot address them directly. But attaching an annotation to B in this way is unambiguous. Compare the two statements below, which it might otherwise be mistaken for: in the first, we annotate A with a comment (the most common thing to do); in the second, we inline an annotation of B itself (which would probably be better not inlined!).

(defclass A
  :super B
  :annotation (owl-comment "A is an interesting entity"))

(defclass A
  :super (owl-class B
            :annotation
               (owl-comment "B is an interesting entity")))

This also extends naturally to other axioms, including annotation labels.

(defclass A
  :annotation (annotate (label "A")
                 (owl-comment "According to me")))

The implementation of this took me several attempts, including some fairly painful and ultimately unsuccessful macros. In the end, I found a much simpler solution. annotate returns a Clojure record which contains both the entity — (label "A") or B in these examples — and the annotations — always an owl-comment here, but potentially anything. This record is passed through the Tawny-OWL function call stack in place of the raw entity, until the appropriate axiom is created. I then unpick this object with two calls to protocol methods — as-entity and as-annotations — like so:

(.getOWLAnnotationAssertionAxiom
    (owl-data-factory)
    (as-iri named-entity)
    ^OWLAnnotation (as-entity annotation)
    (as-annotations annotation))

The protocol implementations are trivial.

(defrecord Annotated [entity annotations]
  Entityable
  (as-entity [this] entity)
  Annotatable
  (as-annotations [this] annotations))

These allow me to avoid checking for an Annotated object when creating an axiom. In most cases, I will not have one of these, but a normal OWLObject — or a Long, String or even a Keyword for property characteristics. So I extend the protocols to cover these cases also, with even more trivial implementations.

(extend-type
    Object
  Entityable
  (as-entity [entity] entity))

(extend-type
    Object
  Annotatable
  (as-annotations [entity]
    #{}))

Finally, the annotate function broadcasts, as do many other functions in Tawny-OWL, so it is possible to annotate several axioms at once. So, for example, here we explicitly use a list:

(defclass A
   :super (annotate [B C D]
             (owl-comment "All of these are supers")))

Or, implicitly, with an existential restriction function that itself uses broadcasting:

(defclass A
   :super (annotate (owl-some r B C D)
             (owl-comment "All of these are existentials")))

While the syntax is slightly more complex than most of Tawny-OWL, it is a considerable improvement on dropping down to the OWL API layer beneath; and, ultimately, this form of annotation is a more complex usage of OWL.
