The goal of the phonetics ontology is a very practical one: to allow a computational system to interpret a transcription in IPA in a manner useful to a linguist. It is based almost entirely upon the work of Peter Ladefoged and Ian Maddieson in The Sounds of the World's Languages. We apologize in advance for any misinterpretations of this work we may have fallen prey to!
At the moment, the phonetics ontology is composed entirely of a graph of classes: no instances or slots have been defined for it. (Though most of you will know this, by "graph" we mean a hierarchical structure in which a node can be dominated by more than one mother.) The "leaves" of the ontology are Unicode IPA symbols (except in those few cases where no symbol yet exists); each leaf is treated as a representation of the class of sounds instantiated by language-specific segments. On this view, instances exist only within a language, not in the ontology.
The ontology is composed of two major nodes: Transcription, which contains a subgraph of classes defining the symbols which are included in the IPA, and Articulation, which contains a subgraph of classes of articulatory features. Both Transcription and Articulation dominate every IPA symbol, so symbols can be interpreted either in transcriptional terms or in articulatory terms.
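The dual dominance described above can be sketched in code. The following is a minimal illustration (in Python, with hypothetical class names; the actual ontology is not implemented this way) of how a single leaf symbol can be dominated by both major nodes, so that it is interpretable either transcriptionally or articulatorily:

```python
class OntologyNode:
    """Base class for every node in the ontology graph."""

class Transcription(OntologyNode):
    """Subgraph of classes defining the symbols included in the IPA."""

class Articulation(OntologyNode):
    """Subgraph of classes of articulatory features."""

# Hypothetical intermediate classes on each path down to a leaf.
class Consonant(Transcription):
    pass

class VoicedBilabialStop(Articulation):
    pass

class IPA_b(Consonant, VoicedBilabialStop):
    """Leaf node: the Unicode IPA symbol 'b'.

    Because it inherits from classes under both major nodes, it has
    more than one mother, making the hierarchy a graph rather than
    a tree.
    """
    symbol = "b"

# The leaf is reachable from both Transcription and Articulation:
assert issubclass(IPA_b, Transcription)
assert issubclass(IPA_b, Articulation)
```

Python's multiple inheritance happens to mirror the "more than one mother" property directly; an ontology editor such as Protégé represents the same structure with multiple superclass links.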
Some questions to consider on the phonetics ontology:
If you find the html version presented here difficult to understand, you might try looking at this version. It has the advantage that you can open and close nodes, which makes the basic structure of the ontology easier to visualize. Note that you will need a recent version of Sun Java to view it; if the tree does not appear when you go to the page, you can download Java here.