Saturday 20 August 2011

SASSY - The First Cut


In the previous round of development we demonstrated that it was possible
to generate a document from the content of an ontology. Now it's time to
build an initial version of SASSY with a useful amount of functionality.
The project for this will be SASSY itself.


Requirements

The first step is to document the requirements. Eventually we will do this
using SASSY, but this time we will have to fall back to using a word processor.
The main requirements are:

  • The system shall allow the user to maintain a set of reference ontologies.
  • The system shall allow the user to maintain a set of project-specific
    ontologies.
  • The system shall allow the user to generate documents based on the reference
    ontologies.
  • The system shall allow the user to generate documents based on the project's
    ontologies.

Architecture

The system will use Protege to create the ontologies and save them to an
RDF/XML database. We access this database using the Java OWLAPI library,
and connect to that library from a C++ program using ICE. To keep things
simple, the C++ program is a command-line program that reads commands from
its standard input and performs the corresponding actions. A small Qt GUI
provides a graphical interface: it runs the command-line program when it
starts and passes the user's commands to it. The GUI will have a tool bar
for launching Protege, a document viewer, the ontology viewer we built
previously, and Firefox for viewing help documentation.
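In outline, the C++ side reduces to a read-and-dispatch loop over standard
input. A minimal sketch, with placeholder command names rather than the
final SASSY command set:

    #include <iostream>
    #include <string>

    // Sketch of the command loop: read one command per line from standard
    // input and dispatch it. The command names are hypothetical.
    int main()
    {
        std::string line;
        while (std::getline(std::cin, line))
        {
            if (line == "quit")
                break;
            else if (line == "requirements")
                std::cout << "generating requirements document\n";
            else
                std::cout << "unknown command: " << line << "\n";
        }
        return 0;
    }

The GUI then only needs to write lines like these to the program's
standard input.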


In the final design we will have components for logging and tracing the
code, as well as administering the server processes. For this increment
we will not bother with this functionality.


User Interface Design

The Qt-based GUI will have a tool bar, menus, and a set of tabs, one for
each major action: Quality Attributes, Requirements, Data Dictionary
and Architecture. Other tabs will be added later for tracing and Configuration
Management. Each tab will allow the user to set parameters for the particular
action. Eventually we will get these actions from a configuration file and
the ontologies, but for this increment we will just hard-code them.
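A minimal sketch of that shell, using Qt's QTabWidget for the tabs and
QProcess to run the command-line program (the program name sassy-cli and
the command written to it are placeholders):

    #include <QApplication>
    #include <QMainWindow>
    #include <QProcess>
    #include <QTabWidget>

    // Sketch of the GUI shell: one tab per action, a tool bar, and a
    // QProcess that runs the command-line backend.
    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        QMainWindow window;

        QTabWidget *tabs = new QTabWidget;
        tabs->addTab(new QWidget, "Quality Attributes");
        tabs->addTab(new QWidget, "Requirements");
        tabs->addTab(new QWidget, "Data Dictionary");
        tabs->addTab(new QWidget, "Architecture");
        window.setCentralWidget(tabs);

        // Tool bar; actions for Protege, the viewers and Firefox go here.
        window.addToolBar("Tools");

        QProcess backend;
        backend.start("sassy-cli");
        backend.waitForStarted();
        backend.write("requirements\n");  // forward a user command

        window.show();
        return app.exec();
    }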


Design

The ICE interface specification, for this cut, will have a separate module for
each action. Later we will refactor the common components into a base module.


The Java code will have a server class that allows clients to connect using
the ICE protocol. For now we will just use a hard-coded port for the comms.
Each ICE module results in a collection of Java classes that are automatically
generated from the ICE interface definition. We also need to create the server
implementation classes that use the OWLAPI to access the ontology databases.
It seemed prudent to keep the generated code separate from the hand-built code,
and fortunately Eclipse is able to handle building in such a structure.


The C++ code for the ICE interface is accessed through an abstract class. This
keeps the implementation out of the remainder of the code, which does not even
need to include the ICE-related header files.
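The shape of that abstraction, sketched with hypothetical names (the real
interface will have one such class per action module):

    #include <memory>
    #include <string>

    // Abstract interface seen by the rest of the C++ code. No ICE headers
    // are needed here; they appear only in the implementation file.
    class OntologyService
    {
    public:
        virtual ~OntologyService() {}
        virtual std::string describeClass(const std::string &name) = 0;
    };

    // Factory, defined in the one .cpp file that includes the ICE headers.
    // The implementation there obtains a proxy in the usual ICE way, e.g.
    // communicator->stringToProxy("Service:default -p 10000"), using the
    // hard-coded port mentioned above.
    std::unique_ptr<OntologyService> makeOntologyService();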


The core set of classes manages the process, including interpreting the commands
from the GUI and running the filter programs that convert the LaTeX into PDF.
This core includes some base classes from which action-specific classes are
derived. These include document and diagram modelling classes, which build an
internal representation of the document, and a formatting class, which converts
the model into LaTeX.
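A much-simplified sketch of the model and the formatter, with illustrative
names rather than the real classes:

    #include <iostream>
    #include <string>
    #include <vector>

    // Sketch of the internal document model: a document is a list of
    // sections, each holding paragraphs.
    struct Section
    {
        std::string title;
        std::vector<std::string> paragraphs;
    };

    struct Document
    {
        std::string title;
        std::vector<Section> sections;
    };

    // Formatting class, reduced to a function: walk the model and emit
    // LaTeX to an output stream.
    void formatLatex(const Document &doc, std::ostream &out)
    {
        out << "\\documentclass{article}\n"
            << "\\begin{document}\n"
            << "\\title{" << doc.title << "}\n\\maketitle\n";
        for (const Section &s : doc.sections)
        {
            out << "\\section{" << s.title << "}\n";
            for (const std::string &p : s.paragraphs)
                out << p << "\n\n";
        }
        out << "\\end{document}\n";
    }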


Another set of classes handles the details of documents, sections, paragraphs
and diagrams. I added the ability to use hyperlinks in the PDF, which meant
that I had to design a mechanism for assigning attributes to the text. For
now it only handles the hyperlink references and targets, but it could be
extended to other attributes such as colour, underlining, italic fonts,
and so on.
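A sketch of how such attributed text could map onto hyperref's \hyperlink
and \hypertarget commands (the type and names are illustrative):

    #include <iostream>
    #include <string>

    // A run of text carries an optional attribute; the formatter maps each
    // attribute to the corresponding hyperref LaTeX markup.
    struct TextRun
    {
        enum Attribute { Plain, LinkRef, LinkTarget } attr;
        std::string label;  // hyperlink label, when attr is not Plain
        std::string text;
    };

    void emitRun(const TextRun &run, std::ostream &out)
    {
        switch (run.attr)
        {
        case TextRun::LinkRef:
            out << "\\hyperlink{" << run.label << "}{" << run.text << "}";
            break;
        case TextRun::LinkTarget:
            out << "\\hypertarget{" << run.label << "}{" << run.text << "}";
            break;
        default:
            out << run.text;
        }
    }

Extending it to colour or fonts would then just mean adding attribute
values and the matching LaTeX markup.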


Development

For this first rough cut the focus is on getting something working. Each action
was developed without much thought to the others, apart from finding bits of
code that could be copied. The result will need a fair amount of refactoring
and cleaning up before it's used as the basis for subsequent increments.


The Quality Attribute section was the simplest, since it reports on an
ontology that will only ever be used as a reference. Hence the code can have
some knowledge of the ontology hard-coded into it. The focus was on developing
the best techniques for getting and formatting the data. A final step was to
add some diagrams.


The Requirements section was a little more complex, in that the ontology
would be mostly unknown. It also used relationships between data items, which
required some additional research to find the best way to access the data.


The Data Dictionary section built further on creating an ontology-dependent
document, and added cross references in the form of hyperlinks in the PDF.


Finally, the Architecture section could build on all these techniques to
create an initial architecture document. For this cut I restricted it to just
the conceptual view and the logical view of components. It became evident
that creating nice diagrams was still going to be complex, so this will need
to be one focus of the next increment.


Conclusion

It became clear during the architecture development that building the
document was too dependent on the content of the ontology. For the simpler
documents this would be OK, since we could control what relationships would
be used. However, for a more general ontology this starts to become
problematic. It also became evident that some way to generate new views of
the data would be necessary. A little research turned up SPARQL-DL, which
should allow us to build a system where new views can be added through
SASSY's configuration.


The next increment will tidy up some of the rough edges, such as diagram
scaling and orientation, and refactor the code so that it is a bit more
maintainable.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.