Sunday, 14 July 2013

Development Architecture

This chapter describes the environment in which the development, testing and deployment occur.

In the worst case the developers are allowed to modify the production code, scripts or configuration directly. This is generally considered undesirable, but what is the preferred arrangement?

5 Levels

The proposal here is for a five-level configuration environment in which changes progress from development, through testing, to deployment into the production environment.

Level 1

This level is for the software developers to create new software or to create patches for existing software. It needs to be easy for the programmers themselves to configure, since they may need to try various configuration alternatives in rapid succession.

Programmers should run their unit tests in the level 1 environment.

Level 2

The second level is for the integration of the software created in level 1. The only development at this level is for the build system and similar configuration elements. The software should be built on this level so that there is confidence that the correct supporting libraries are being used (and not something that is under development by some programmer).

Programmers should run tests to confirm that the software builds and runs as specified.

Development software, such as compilers, should only be accessible from levels 1 and 2.

Level 3

The third level is for the formal testing of the software. The package created in level 2 should be installed in a controlled manner and then subjected to the testing specified for the system.

Level 4

The fourth level is for confirming that the new software will run in the production environment. Hence this level should be as close as possible to production in both configuration and size.

The software should be installed by the same system administrators responsible for the production system.

Level 4 is also the correct place for acceptance testing by the client to confirm that what has been delivered is what they are expecting.

Level 5

The last level is for the production system. Software should only be installed here after it has passed all tests and the client has agreed that it is OK to install.

Versions

It is typical for there to be more than one version of the software in existence at any one time. It would be wasteful to put the programming and testing teams on hold while the acceptance testing was being done, for example.

In a large project you may have several development teams working on different enhancements to the system at the same time as the test team is performing their test work, and the client might have a production version running while they are acceptance testing the next release.

It is also possible that there will be multiple versions of the software in production at the same time. A large distributed environment, for example, may not be able to promote all sites to the next production version simultaneously (or they may consider it unwise to do so).

If there are multiple clients for the software, some may be constrained to old versions for various reasons, both economic and technical.

Increments and Releases

Modern software development practices emphasise the need to work on small, well-defined and focused developments, rather than huge multi-year projects. Typically an increment will take a few weeks to go through the analysis, design and programming phases.

On the other hand, the client will find that performing acceptance testing is expensive and time-consuming, and not something that can be undertaken frequently.

To reconcile these two competing requirements we can keep most increments in the Level 1 through Level 3 environments, and only promote to Level 4 when the client is ready for the next round of acceptance testing.

This can be done by nominating a special increment to be a Release of the software.

Platforms

Sometimes it is necessary to support software on multiple platforms. These can range from different versions of Windows, to support across a variety of operating systems such as Windows, Linux, HPUX, Solaris, AIX, BSD and MacOS, along with the multiple versions of each.

Support for different hardware is also becoming a much more common requirement as the ARM architecture is rapidly gaining prominence.

Finally, support for different user interfaces has recently become a hot topic. It might be necessary to support a conventional desktop with keyboard and mouse, a touch screen device such as a touch enabled laptop or tablet, and perhaps even a smart phone. One should also not overlook the usefulness of a simple command line interface when it comes to remotely administering a computer.

Configurations

The software might be required to operate in a wide variety of different configurations, ranging from a simple stand-alone computer, to a large network within an organisation, to a "cloud enabled" distributed system.

Different clients may also use different supporting software, such as databases, GUI libraries, and so on. These can be just different versions of, for example, Java, through to entirely different products.

Virtual Machines

If we multiply our five-level deployment model by the number of supported versions of the software we already get quite an unmanageable number. Multiply that again by the number of potential platforms and configurations and it becomes obvious that no one could have every combination ready to develop and test on. For example, five levels across three supported versions, four platforms and two configurations is already 120 distinct environments.

The answer is, of course, to set up a collection of virtual machines with the most common combinations, and have some others that can be adapted to the more unusual arrangements.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Sunday, 18 March 2012

Windows Future

As architects we need to have some understanding of what the future
might hold for the personal computing market. These devices form an
important component in most IT systems, so it's probably useful to
look at where they might be headed in the next few years.


Microsoft's Windows operating system has formed the basis for personal
computing for nearly two decades, so this will be our focus. While
Apple have recently become the most valuable IT company, they insist on
remaining hardware focussed, which ultimately means that they are limited
to about 10% of the total market, purely because they are physically
unable to manufacture enough product by themselves. Microsoft products,
however, can be installed on any computer, so they have the potential
to be 10 times larger than Apple in terms of installed operating systems.


For this article I will define the term PC to refer to a machine that
has a keyboard and mouse. They are the laptop and desktop machines
that we are all familiar with. The term "tablet" will refer to a
machine that can be easily used without a keyboard or mouse, usually
because it has a touch sensitive screen.


The sales of PCs have grown almost exponentially for the last ten years
but are now showing signs that they will level off at about the 350 million
per year mark. Some analyses have shown a slight loss in market share
by Windows over the last few years. Combining these two pieces of
information, it looks likely that if Microsoft remained with just
PCs then it would have to report a decline in sales. This would be
a major embarrassment for the management team at Microsoft.


Microsoft have, no doubt, been aware of this potential problem for some
time, and have been endeavouring to find some new source of income.
They tried a subscription system for Windows, but this largely failed
when Vista was delivered late. They are trying "Software as a
Service" (SaaS), but it's likely that this is just eating into their
sales of Office. The products where SaaS works well are email and search,
but they have a very strong competitor in the form of Google in
those markets.


Now Apple have shown them a new possibility - the App Store. The ability
to get a cut of every software sale would give Microsoft the income
stream they have wanted for so long.


The Apple tablet also demonstrated that it could be sold with a simple
smartphone style of interface, rather than the desktop style that
Microsoft have been trying, unsuccessfully, for the last decade.
While tablet sales are quite small in comparison to overall PC
sales, they are on a rapid growth trajectory that might be enough
to keep the total sales figures growing for a few more years.


The Metro interface, developed for the Windows smartphone, has been
adapted for the tablet. I will look later at why it was also pushed
onto the PC.


The tablet needs to run on ARM since Intel have not been able to create
a sufficiently low power x86 device. This meant that there was no need
to be backward compatible - this machine could start with a clean slate.


The Windows tablet would need something to differentiate it from the
Android machines, and the most obvious solution was to include Office.
It would appear that the Office team was either unable or unwilling to
create a touch version, hence we have the rather awkward Metro-Desktop
switching which annoys the Windows-8 reviewers.


Microsoft almost certainly see the Android tablet as their number one
problem. Microsoft do not like it when something other than Windows
starts to get any sort of market share. We saw this with netbooks, the
first of which ran Linux, and did so quite well. MS responded by
bringing XP back so that there was a lightweight Windows OS to use
on these small machines and, as many suspect, by coercing the OEMs
into dropping Linux, probably by threatening to remove their OEM
discounts for Windows on their other machines.


Now we have Android tablets illustrating that you do not need Windows
to do computing. Microsoft is currently unable to coerce the OEMs
since they do not have an ARM version of Windows. This will change
with Windows-8.


I expect that Microsoft will use the X-Box strategy when they introduce
the Windows ARM tablets. The machine will most likely be sold at below
cost with the App store recouping the lost income. The use of UEFI to
lock down the machine so that no other operating system can be installed
supports this view. (You would not want to sell at a loss only to have
Ubuntu installed on it and not get any App store sales.)


Once the Windows ARM tablets become available I expect that they will
be priced below the corresponding Android equivalents. I guess that
Microsoft will hope that the lower price will make up for the lack of
available applications.


A price war among the tablets will spell the end for the laptop computer.
Laptops will be squeezed from the bottom by tablets and from the top by the
ultrabook. This is sad for students who just need a cheap machine to get
work done. Prior to doing this analysis I thought that the ultrabook was
unlikely to succeed, but with the laptop being squeezed out I see that
it does have a future.


Will this work?

It is impossible to predict what the market will do.


It must be remembered that the tablet is not the most convenient
form factor. It's OK for things like recipes in the kitchen, browsing
the web from the coffee table or reading in bed, but for actually doing
any work it's too big to be a phone and too small for the desktop.
The current rapid rise in sales might not last beyond those who
can afford it as a secondary device.


If Windows ARM tablets get ignored by the consumer then Microsoft
might do what they did with the Zune and the Kin - drop them
and go back to what they know - the PC. I am not so sure that this
would be their reaction since I think they can see too much potential
gain. I think they would advertise and price cut until the Android
tablets are overwhelmed.


If it does work and Android tablets follow Linux netbooks into
oblivion and Microsoft gets its new income stream via the App store,
what happens next? What can we expect from Windows-9?


Microsoft will want to extend the App store to the PC. It not only
provides an income stream, but also allows them to screen out
malicious programs and viruses. I would expect that we will see
some API changes in Windows-9 designed to force developers to
Metro and the App store and away from the open desktop. This trend
will continue for subsequent releases of Windows, since there is
so much to gain and so little to lose.


The PC's Future

With Microsoft moving to a Metro world, what does that mean for
application developers? Basically, the open desktop will become
deprecated. We will need to either build applications with a web
interface or pay Microsoft to distribute an app through their
store. Those dedicated business applications that you built using
.Net and other Windows APIs will eventually have to be rebuilt
for this new paradigm.


Many businesses are just now upgrading from XP to Windows-7, and
with the preview of 8 getting some quite hostile reviews, perhaps
many more will bring their upgrades forward so that 7 can be used
for the next few years.


Another factor to watch is the smartphone. Recently Ubuntu was
ported to an Android phone. This allows its user to use the device
as a phone while on the move and plug it into a proper keyboard
and display when in the office. You get the same applications
with all your data in both scenarios. There is enough power
in a modern smartphone for it to run Linux well enough for office
use.


For docking to really take off, though, Google will need to define a
standard interconnect and protocol between the smartphone and
the desktop peripherals. Then an office could fit out with those
peripherals without worrying about which brand or model of phone
was going to be used, and developers could write their software
to switch between the desktop and portable modes.


Conclusion

Windows-8 on the PC probably won't get much traction. Microsoft will get
their usual sales to consumers as they upgrade their hardware, but
businesses will stay with Windows-7. On the tablet there will be a
fight, possibly quite a nasty one, until it displaces Android.


Windows-9 will attempt to push the Metro/App-Store model onto the
PC. Businesses will be forced to migrate their applications but
might well respond by moving to docked smartphones and web
applications.


While Microsoft has the resources to weather the storm that
Windows-8 brings, it is by no means certain that they will make
the correct decisions to return to a profitable path for
Windows-9.


Addendum - Jan 2014

I guess I underestimated the greed of Microsoft. They seem to have been misled into thinking that just because everyone uses Windows it must actually be popular, and on that basis priced their tablet devices to be just barely competitive with Apple, rather than going after Android.

Given the extra resources that Windows needs above Android, it will never be able to compete on price alone - the bill of materials is just too much larger. Additionally, the other OEMs are having to pay for Windows licenses, making them even less competitive.

In the meantime Android has just about conquered the tablet market, with Apple falling steadily further behind. The prices for Android tablets have fallen to under $100 and are continuing to fall to places that Microsoft can't follow.

My prediction now is that Microsoft will make a second attempt with the Surface line, but if it fails again, they will withdraw back to the rapidly shrinking PC market, with perhaps continued support for smartphones via Nokia.

This could mean the end of the "Devices and Services" model - the incoming CEO will have no attachment to that idea - and a focus on the business market. Businesses can expect to see a steady rise in the cost of using Windows as Microsoft try to keep the profits coming in a shrinking market.

There is a danger that Microsoft will be unable to manage its own downsizing and will collapse in some way.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Wednesday, 7 March 2012

SASSY Increment #5

This increment involved a lot of work on the ontologies and some tweaks to
the interpreter.


Interpreter

The interpreter underwent a few updates as usage threw up a few issues. The
initial version required the programmer to put in the offsets, as numbers, for
the jumps, conditionals and loops. This got tedious very quickly. I enhanced
the language to use labels for the jump destinations, and this seemed satisfactory
for a while. However, on larger scripts managing the labels started to get
tedious as well. I then modified the language so that all jumps were always over
a single statement, which allowed the parser to remain a single-pass design and
removed the need for labels. This is a much cleaner looking language.


A final enhancement was to allow forward references. To retain the single-pass
design, a forward reference is patched when the real function is processed.


: aFunction noop ; # forward reference stub - patched later

: usingFunction
    aFunction      # compiles as a call to the stub for now
;

: aFunction        # real definition - the stub now calls this
    usingFunction
;


When the parser finds a function that has already been defined it assumes the
previous instance was a forward reference and replaces the first operator
with a call to the real function.
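
A minimal C++ sketch of this patching follows. The names and the byte code representation are assumptions for illustration (a call is taken to be simply the callee's index in the code array, matching the threaded-code design described in the Increment #4 post), not the actual SASSY parser:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Parser {
    std::vector<int> code;                       // the byte code array
    std::map<std::string, std::size_t> entries;  // function name -> entry index

    // Called once a complete definition of 'name' has been compiled,
    // with its first operator at index 'entry' in the code array.
    void defined(const std::string& name, std::size_t entry) {
        auto it = entries.find(name);
        if (it != entries.end()) {
            // The name already exists, so the earlier definition is assumed
            // to be a forward-reference stub: overwrite its first operator
            // (the noop) with a call to the real function.
            code[it->second] = static_cast<int>(entry);
        }
        entries[name] = entry;  // later references compile against the real code
    }
};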


There is getting to be a need for some documentation on writing these scripts.


Ontologies

A lot of the work for this increment consisted of developing examples for the
various views that have been identified. For each view it was necessary to
define the ontology classes and properties. Then individuals were added for
the SASSY architecture and a script was written to produce the documentation.
Frequently it became obvious quite quickly that the class design was wrong and
it was necessary to begin again.


The following views have been defined so far - I can think of a few more
that would be useful for large projects.

Requirements
    This view lists the requirements for the system with a cross reference to the tactics used to implement them.
Tactics
    This view relates each tactic to the responsibilities that the system must implement.
Concept Modules
    This view describes the system components at the conceptual level. It focuses on assigning responsibilities to modules.
Methodology
    This view describes the development process. It is a reference view.
Implementation Modules
    This view shows the components of the system, such as specific programs, libraries and databases.
Interfaces
    This view describes the interfaces between components of the system.
Data Flow
    This view describes the flow of data through the system.
Use Cases
    A view which shows the sequence of component activations for each use case.
Quality Attribute Scenarios
    This view shows the test scenarios that are used to validate the architecture.
Task View
    This view enumerates the development tasks that are undertaken in each increment of the development process.
Team View
    This view shows the responsibilities of the team members of the project.
Execution Modules
    This view shows the allocation of processes and threads to physical equipment.
Computer View
    A view showing what is running on each machine.
License View
    This view shows the licenses applicable to the components of the system.
Network View
    This view shows the network connections between components of the system, and any external systems.

Planning

It is probably time to reconsider the plans for SASSY. During the development
of the ontologies it became clear that the actual development and the architecture
defined in the ontology were beginning to drift apart. In addition, the release
notes for Protege indicate that the next version, 4.2, will include a client-server
architecture and support for a relational database - both things we need
if SASSY is to support teams of architects working on large systems.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Saturday, 28 January 2012

Copyrights

When developing a software system the issue of copyrights can have a significant
effect on the solution. You will need to get licenses for the proprietary
components of the system, and you will need to arrange for the users of the system
to also get the appropriate licenses. I have seen discussions during the
architectural design phase where the licensing costs actually drove the design
of the system: the best solution, technically, was put aside in favour of an
alternative that would be cheaper to license.


Perhaps it's time that the whole question of copyright was revisited, to examine
whether it is relevant to the modern world. I will look at a variety of media,
including books, clothing, music, film, and software.


Books

Copyrights were introduced to control the publishers of books. In return for
not printing anything the sovereign did not approve of, publishers gained
exclusive rights to certain titles. The current copyright laws can be traced back
to the English "Copyright Act of 1709", also known as the "Statute of Anne".
Quoting from Wikipedia, the central plank of the statute is a social quid pro
quo; to encourage "learned men to compose and write useful books" the
statute guaranteed the finite right to print and reprint those works. It
established a pragmatic bargain involving authors, the booksellers and the
public.


The role of the book publisher has been primarily the conversion of the author's
manuscript into a typeset format which can be easily printed in large numbers
and distributed to book stores. Given that we now have computers that allow an
author to produce high quality originals, and the internet that allows the near
instant distribution of those works directly to the readers, it is hard to make
much of a case for the continued role of book publishers. The remaining problem
is how to reward the author: how do we encourage "learned men to compose and
write useful books"? I will address this later.


Clothing

The fashion industry has never been under the umbrella of copyright protection.
However, few could argue that it has not been vital and innovative despite this
lack of protection. Even though designs are widely copied, and copies are
available to be bought within hours of the originals appearing on the catwalks,
there seems to be no pressure to bring this under any form of copy prevention.
The leading fashion houses are some of the most profitable and well known
businesses on Earth - all without the backing of copyrights.


Music

Prior to the invention of the gramophone, the music publishing industry roughly
paralleled the book publishing industry. The gramophone record was interesting
in that it could be sold to the public, but there was no means for anyone to
make a useful copy.


When radio came along, the focus moved to using performance rights to get
payments from the radio stations. (It could be argued that the
music publishers made more from the playing of their products than the radio
stations did, but nevertheless the radio stations were made to pay.) The aggressive
policing of performance rights is why you are banned from singing "Happy Birthday"
when you hold a party at McDonalds.


The tape and cassette recorders of the 1960s and 1970s, which most people used
to make copies of their fragile records so that the music could be played
on portable devices, were barely tolerated by the music industry, but there was
not much that it could do about this at the time.


The compact disc and the personal computer were the game changers for the music
industry. It was now possible for the average person to make a perfect copy of the
original. Not just a noisy, poor quality copy - a perfect copy. Add in the
internet and it is now possible for nearly everyone to have a copy of a musical
work within minutes of it being performed, and at near perfect quality.


The copyright laws which were originally designed to control publishing
businesses were now aimed at the individual consumer. Punishments that were
designed to prevent one publisher from copying a single work of another were
suddenly applied to individuals who made their collections available for
others to copy.


Film

The film industry is a little different from the others. While a book is normally
written by a single person, and music, software and clothing by small teams, a film
requires dozens to hundreds of people to make. However, film is different in
another very important respect - the industry has the cinemas to display its works.
These days most films make most of their income in the cinemas on the first
weekend.


Given that modern cinemas get their movies in digital form, and that the
cinemas are under a business contract with the film distributors, it could be
argued that films remain in private control even after they are screened. It
is not until the movie is published on DVD that it is exposed to the world in
which copyright applies. (Of course, publishing a copy on the internet before
it hits the cinemas is likely to be a case where the full force of the law can
be applied.)


Software

Most software, perhaps 90% of it, is developed "in-house" to satisfy the needs
of the company for which it is being written. The remaining 10% or so is software
for sale: office programs like Word, Excel and Photoshop, and entertainment
software such as World of Warcraft. The software industry has been quite
successful at limiting copying without resorting to suing its customers for
copyright infringement.


Rewards

The clothing and software industries seem to be doing fine without resorting to
copyrights. The film industry would probably survive quite nicely if it only
got the income from the cinemas. This leaves books and music, and perhaps DVD
movies, needing some new business model.


If we dropped copyright protection a work would have two states: it would
either be private, under contractual control and accessible only to the
developers of the work, or it would be public, where it could be copied about
indefinitely. Obviously once a work is public it has almost no intrinsic value,
and trying to extract value from it would seem futile. The value of the work
during its private phase is also rather low - in fact it is a liability, since
effort has been expended on its creation for no reward.


The trick then is to persuade the public to make some contribution in return for
the work making the transition from private to public. This is a one-time
opportunity to make money. It removes the rather silly effect of copyrights
where someone can make a popular work and live off the income from it forever
- a situation which defeats the original intention of copyrights as an
encouragement.


One possibility is to have governments pay for the work. This might not be
sensible for pop music, but for school text books or works by the national
orchestra it might well be appropriate.


Another possibility is a system of patronage, where wealthy organisations or
people pay for the work to be released. This was the main means of support for
authors before the modern era. For example a university might pay for the works
of some authors.


A third possibility is an auction system. A part of the work is made available,
along with reviews of it. The work would be placed in a vault until some desired
amount was contributed by members of the public. When the amount was reached each
contributor would get a copy that they could then do with as they please.
Acting as agents for such a system would seem to be the logical thing for
the current distributors and publishers.


Conclusion

The desire to share stuff is one of humanity's deepest emotions. It has been
one of our defining characteristics since we climbed out of those proverbial trees.
When something we have can be shared immediately and at almost no cost, the
desire to do so is almost overwhelming.


To attempt to make such sharing illegal is even dumber than trying to
stop people from drinking alcohol. It can only be attempted with the most
draconian laws and the removal of citizen rights that make our modern civilisations
tolerable.


Technology has passed the publishers by. It is time for them to find something
better to do.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Sunday, 22 January 2012

SASSY - Increment #4

This increment has not gone well. I started off with trialling SPARQL-DL
as a query language for the OWL database. At first it seemed OK - it could run
some of the simple queries that paralleled the examples in its source code.


Getting the results out involved adding in a JSON parser. The only C++ one I
could find wanted to do everything using wide characters, which added to the
complexity somewhat.


When I tried some of the more complex queries that will be needed for this
project things started to go awry. I contacted the author and he confirmed
that it was not capable of doing the queries I wanted. It could handle
relationships between individuals, but not those that involve
querying the relationships between classes.


Further research has not turned up any likely candidates for this problem,
so it appears that I will have to work up my own solution.


At about this time there were some significant changes to my personal
life, employment and so on, which were a bit distracting. I also invested
in a new computer with a bit more performance, and setting up all my
favourite software on the new machine was a significant distraction. Then
Christmas came along to further delay things. Anyway, I hope to be able to put a
lot more time and effort into this project over the next few months.


The break gave me time to reflect on the way forward. My decision was to build
a small interpreted language that could be used to query the ontology database
and construct the document. I chose a threaded interpreted language, similar to
Forth, as the basis, since they are very easy to implement.


The interpreter deals with a range of objects, such as integers, strings,
and several complex structures used to return results from the database, plus
arrays of these objects. I wrapped them up and accessed them using a smart
pointer object so that they would be automatically managed and could be
placed onto a data stack.
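
A rough sketch of this arrangement, using std::shared_ptr in place of the hand-built smart pointer and with invented class names:

#include <memory>
#include <string>
#include <vector>

struct Object {                        // common base for everything on the stack
    virtual ~Object() = default;
};

struct IntObject    : Object { long value = 0; };
struct StringObject : Object { std::string value; };
struct ArrayObject  : Object { std::vector<std::shared_ptr<Object>> items; };

using ObjectPtr = std::shared_ptr<Object>;   // reference counted, so objects
using DataStack = std::vector<ObjectPtr>;    // are freed automatically when the
                                             // last stack cell drops them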


The interpreter uses three stacks, a return address stack for subroutine calls,
a data stack for manipulating the results of the database queries, and an
object stack on which the document is constructed.


The runtime interprets a byte code array which consists of either indexes into
an array of function objects, or the locations of subroutines within the byte
code array. The function objects are responsible for manipulating the stacks and
for making calls to the ontology database. The only other entries in the byte
code array are integer parameters used for jumps or for indexing into the data
stack to load string constants.
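
To make the description concrete, here is a minimal sketch of such an inner interpreter. It is illustrative only and makes assumptions the post does not state: opcode 0 means return, cells smaller than the function table size are primitive indexes, subroutine addresses lie beyond the table, and the inline integer parameters mentioned above are omitted.

#include <cstddef>
#include <functional>
#include <vector>

void run(const std::vector<int>& code,
         const std::vector<std::function<void()>>& ops,  // function objects
         std::size_t pc)                                  // start address
{
    std::vector<std::size_t> returnStack;                 // return addresses
    while (true) {
        const int cell = code[pc++];
        if (cell == 0) {                                  // assumed: 0 = return
            if (returnStack.empty()) return;              // top level finished
            pc = returnStack.back();
            returnStack.pop_back();
        } else if (cell < static_cast<int>(ops.size())) {
            ops[cell]();                                  // primitive operation
        } else {
            returnStack.push_back(pc);                    // threaded call: save
            pc = static_cast<std::size_t>(cell);          // return, enter sub
        }
    }
}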


A very simple, single-pass parser is used to convert a textual version of the
program into the contents of the code array (and the string constants in the
data stack). This was mainly introduced because it was too tedious to hand
calculate the targets for jump statements.
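
The usual single-pass technique, and presumably something like what this parser does, is to emit a placeholder for each jump target and patch it once the end of the construct is known. A minimal sketch with invented names:

#include <cstddef>
#include <vector>

// Emit a jump whose target is not yet known; return the cell to patch.
std::size_t emitJump(std::vector<int>& code, int jumpOp) {
    code.push_back(jumpOp);
    code.push_back(0);                 // placeholder target parameter
    return code.size() - 1;
}

// Once the body has been compiled, point the placeholder past it.
void patchJump(std::vector<int>& code, std::size_t placeholder) {
    code[placeholder] = static_cast<int>(code.size());
}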


Using this language I have been able to reproduce the architecture document, so
I am confident in using this as the basis for the remainder of the project.


The next phase will involve a significant overhaul of the ontology database for
the architecture. During my break I have been reading the "Description Logics
Handbook" and have been slowly gaining a better understanding of this subject.
The net result is that a lot of classes in the ontology need to be converted to
being individuals - so a lot of rewriting will be necessary.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Friday, 16 September 2011

SASSY - Increment #3

The aim of this round of development is to tidy up some of the rough edges
from the first round. The first cut was done with the aim of quickly getting
something working on which we could base the subsequent development. In the
rush it was easier to copy and paste similar code, and there were a few
bugs in the way diagrams were displayed.


To organise the work I used the Mantis bug tracker program and noted down
all the things that needed attention. Again, for this increment the aim
was not to extend the functionality, but rather to focus on the quality.


The following bugs were addressed:

Empty boxes in diagrams
    The diagram building code got additional checks to ensure the diagrams were OK.
Diagrams the wrong size
    The temporary file name given to the diagram's eps text turned out not to be unique. This resulted in some diagrams being scaled twice, and others not at all. A better file naming algorithm (sketched after this list) cured the problem.
Hyperlinks flowed into the margins
    The LaTeX processor in dvips was not able to handle hyper-links adequately. I found and installed a newer package, called dvipdfm, that could correctly break a hyper-link.
Table of Figures
    The architecture document has a lot of diagrams, and it really needed a table of figures to appear after the table of contents. A few changes to the boiler-plate had that included easily.
Formatting Issues
    Some paragraphs needed to be indented, so I constructed a LaTeX command to provide the required formatting. Also fixed the boiler-plate for things like the revision history and copyright notice. Finally I added a new SASSY logo.
Quality Attributes
    The Requirements document was listing the quality requirements correctly, but these were worded in a way that referenced the corresponding quality attribute, which was not being shown. Some modifications to the code soon fixed that issue, and now the requirements are quite readable.
Requirement Priorities
    I had put in some support for handling the priority of a requirement, but it was unused. This meant that you could not separate the mandatory from the important or the "nice to have" requirements. I added a classification to each requirement, and added the code to display this information in the generated requirements document.
Code Refactoring
    Common code was separated out, parameterised where necessary, and placed into the appropriate parent classes.
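
As a concrete illustration of the file naming fix flagged above, something along these lines would guarantee uniqueness by combining the process id with a per-process counter. This is an assumed reconstruction, not the actual SASSY code:

#include <atomic>
#include <sstream>
#include <string>
#include <unistd.h>    // getpid()

std::string uniqueEpsName(const std::string& dir) {
    static std::atomic<unsigned> counter{0};
    std::ostringstream name;                 // e.g. /tmp/diagram-1234-7.eps
    name << dir << "/diagram-" << getpid() << "-" << counter++ << ".eps";
    return name.str();
}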

Conclusion

The programs now form a solid basis for the rest of the project. We have
something on which we can build, and which will usually be in a working
state if we need to show it off.


My main concern is that the code to generate the documents is too
dependent on the structure of the ontologies. This is something we will
address in the next increment, where the aim is to incorporate SPARQL-DL
into the design.



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Saturday, 20 August 2011

SASSY - The First Cut


In the previous round of development we demonstrated that it was possible
to generate a document from the content of an ontology. Now it's time to build
an initial version of SASSY with a useful amount of functionality. The project
for this will be SASSY itself.


Requirements

The first step is to document the requirements. Eventually we will do this
using SASSY, but this time we will have to fall back to using a word processor.
The main requirements are:

  • The system shall enable the user to maintain a set of reference ontologies.
  • The system shall allow the user to maintain a set of project specific
    ontologies.
  • The system shall allow the user to generate documents based on the reference
    ontologies.
  • The system shall allow the user to generate documents based on the project's
    ontologies.

Architecture

The system will use Protege to create the ontologies and save them to an
RDF/XML database. We access this database using the Java OWLAPI library,
and connect to the library using ICE from a C++ program. To keep things
simple, that C++ program is a command line program that accepts commands
on its input to perform various actions. A small GUI program, using Qt,
provides a graphical interface that runs the command line program when it
starts and passes the user commands to it. The GUI will have a tool bar
for launching Protege, a document viewer, the ontology viewer we built
previously, and Firefox for viewing help documentation.


In the final design we will have components for logging and tracing the
code, as well as administering the server processes. For this increment
we will not bother with this functionality.


User Interface Design

The Qt based GUI will have a tool bar and menus and a set of tabs with one
for each major action: Quality Attributes, Requirements, Data Dictionary
and Architecture. Other tabs will be added later for tracing and Configuration
Management. Each tab will allow the user to set parameters for the particular
action. Eventually we will get these actions from a configuration file and
the ontologies, but for this increment we will just hard code it.


Design

The ICE interface specification, for this cut, will have a separate module for
each action. Later we will re-factor the common components into a base module.


The Java code will have a server class that allows clients to connect using
the ICE protocol. For now we will just use a hard coded port for the comms.
Each ICE module results in a collection of Java classes that are automatically
generated from the ICE interface definition. We also need to create the server
implementation classes that use the OWLAPI to access the ontology databases.
It seemed prudent to keep the generated code separate from the hand built stuff,
and fortunately Eclipse is able to handle building in such a structure.


The C++ code for the ICE interface is accessed through an abstract class. This
keeps the implementation out of the remainder of the code, which does not even
need to include the ICE related header files.
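
In outline, the arrangement might look something like this (the names are illustrative, not the actual SASSY interface):

#include <memory>
#include <string>

// Abstract interface seen by the rest of the C++ code. No ICE header is
// needed here; the ICE types appear only in the source file that
// implements this class.
class OntologyService {
public:
    virtual ~OntologyService() = default;
    virtual std::string runQuery(const std::string& query) = 0;
};

// Factory function, implemented in the ICE-specific source file.
std::unique_ptr<OntologyService> connectOntologyService(int port);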


The core set of classes manages the process, including interpreting the commands
from the GUI and running the filter programs that convert the LaTeX into PDF.
This core includes some base classes from which action specific classes are
derived. These include document and diagram modelling classes, which build an
internal representation of the document, and a formatting class, which converts
the model into LaTeX.


Another set of classes handles the details of documents, sections, paragraphs
and diagrams. I added the ability to use hyper-links in the PDF, which meant
that I had to design a mechanism for assigning attributes to the
text. For now it only handles the hyper-link references and targets, but it
could be extended to other attributes such as colour, underlining, italics,
fonts, etc.
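
One plausible shape for such a mechanism is sketched below; the names are assumptions, and the real design may differ:

#include <string>
#include <vector>

enum class Attribute { HyperlinkRef, HyperlinkTarget };  // could grow to
                                                         // colour, italics...
struct TextSpan {
    std::string text;                  // the run of text the attributes cover
    std::vector<Attribute> attributes;
    std::string detail;                // e.g. the link anchor name
};

using Paragraph = std::vector<TextSpan>;  // the formatter walks these spans
                                          // and emits the matching LaTeX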


Development

For this first rough cut the focus is to get something working. Each action
was developed without much thought for the others, apart from finding bits of
code that could be copied. The result will need a fair amount of re-factoring
and cleaning up before it's used as the basis for subsequent increments.


The Quality Attribute section was the simplest since it is reporting an
ontology that will just be used as a reference. Hence the code can have some
knowledge of the ontology hard coded into it. The focus was on developing
the best techniques for getting and formatting the data. A final step was to
add some diagrams.


The Requirements section was a little more complex in that the ontology
would mostly be unknown. It also used data relationships which required some
additional research to find the best way to access the data.


The Data Dictionary section built further on creating an ontology dependent
document, and added cross references in the form of hyperlinks to the PDF.


Finally the Architecture section could build on all these techniques to
create an initial architecture document. For this cut I restricted it to just
the conceptual view and the logical view of components. It became evident
that creating nice diagrams was still going to be complex, so this will need
to be one focus for the next increment.


Conclusion

It became clear during the architecture development that building the
document was too dependent on the content of the ontology. For the simpler
documents this would be OK, since we could control what relationships would
be used. However, for a more general ontology this starts to become
problematic. Also, it became evident that some way to generate new views of
the data would be necessary. A little research found SPARQL-DL, which should
allow us to build a system where new views can be added through the
configuration for SASSY.


The next increment will tidy up some of the rough edges, such as diagram
scaling and orientation, and re-factor the code so that it is a bit more
maintainable.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.