Thursday 30 June 2011

SASSY – Analysis, Part 7

Architecture Analysis


Once a design has emerged it needs to be analysed to determine how well it will meet the requirements. The analysis usually takes a risk-based approach, since a complete formal analysis would be too time-consuming.

The usual approach is as follows:
  • Each quality scenario is examined in turn. For each one a rating of its importance is given – low, medium or high. The stakeholders will need to discuss and agree on these ratings.
  • Then each scenario is examined to see how it will perform under the proposed design. A risk level (low, medium or high) is assigned, indicating how difficult the requirement is likely to be to meet.
  • The important, high-risk scenarios become candidates for alternative tactics or designs.
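The triage above can be sketched in a few lines. The scenario names, ratings and the selection threshold here are all illustrative assumptions, not part of any prescribed method:

```python
# Hypothetical sketch of the risk-based triage described above.
# Scenario names and ratings are invented for illustration.

RATINGS = {"low": 0, "medium": 1, "high": 2}

def candidates_for_redesign(scenarios):
    """Return scenarios whose combined importance and risk are high enough
    to warrant looking at alternative tactics or designs."""
    return [s for s in scenarios
            if RATINGS[s["importance"]] + RATINGS[s["risk"]] >= 3]

scenarios = [
    {"name": "peak-load response time", "importance": "high", "risk": "high"},
    {"name": "nightly batch window",    "importance": "medium", "risk": "low"},
    {"name": "failover within 30s",     "importance": "high", "risk": "medium"},
]

for s in candidates_for_redesign(scenarios):
    print(s["name"])
```

The threshold is a tuning choice; the stakeholders would agree on where to draw the line, just as they agree on the ratings themselves.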

However, once we have the architecture in a machine usable form we have some additional possibilities. We can perform various measurements on the interconnections between the components to get metrics for the dependency relationships.



In his thesis on a Software Architecture Analysis Tool Johan Muskens suggested the following measurements:
  • The number of components that call a service, or from which a service is called, in the context of a scenario that is part of a specific use-case. The main thought behind this metric is that related functionality spread over the design is bad for maintainability and reusability.
  • The number of use-cases that contain a scenario in which a service of a specific component is called, or in which that component calls a service. The main thought behind this metric is that cohesion within a component should be high.
  • The number of called services of a component. Tasks should be distributed over the design as equally as possible; when this number is high it can indicate that the component is a possible bottleneck for scalability, and that the dependency of other components on it is high.
  • The number of service calls made by a specific component across all scenarios. The main thought behind this metric is that dependencies are bad for reusability and maintainability.
  • The number of different services called by a specific component across all scenarios. The main thought behind this metric is that dependencies are bad for reusability and maintainability.
  • The number of provided services of a specific component. The main thought behind this metric is that a well balanced distribution of functionality over the design is good for extendibility and maintainability.
  • The number of components of which a service is called. The main thought behind this metric is that dependence on many different components is bad for reusability, extendibility and maintainability.
  • The number of components that call any service of a specific component. The main thought behind this metric is that a large number of components depending on a specific component is bad for extendibility and maintainability.
  • The average number of state transitions per service for a component. The main thought behind this metric is that complexity of components and services is bad for maintainability and extendibility.
  • The maximum number of subsets of services of a component such that the sets of use-cases using the services of each subset are disjoint. This gives an indication of the cohesion between the services provided by a component.
  • The depth to which service calls are nested within a scenario. This gives an indication of the scenario's complexity; if the depth is too high this is bad for understandability, and therefore also bad for maintainability and adaptability.


The absolute values of these measurements are not the important criterion. However, values that are abnormal may indicate some problem with the design.
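Several of the metrics above reduce to simple counts over a call graph. A minimal sketch, assuming the architecture has been flattened into (caller, provider, service) records; all the names are invented:

```python
# Illustrative call records: (calling component, providing component, service).
calls = [
    ("ui", "orders", "place_order"),
    ("ui", "orders", "cancel_order"),
    ("orders", "billing", "charge"),
    ("orders", "stock", "reserve"),
    ("reports", "orders", "list_orders"),
]

def fan_out(component):
    """Number of distinct components whose services this component calls."""
    return len({callee for caller, callee, _ in calls if caller == component})

def fan_in(component):
    """Number of distinct components that call any service of this component."""
    return len({caller for caller, callee, _ in calls if callee == component})

print(fan_out("orders"), fan_in("orders"))
```

With the whole architecture in a knowledge base, each of Muskens' metrics would be a query of roughly this shape over the stored relationships.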

If we add attributes to the components then we can also measure things like performance and portability for the overall design. For example, components could have an attribute that indicated which platforms they were suitable for. This would then feed into the execution architecture to control how the components would be assigned to platforms. A performance attribute for each component could be used to assign components to CPU cores to get the best cost versus performance balance.
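As a rough illustration of such an attribute, assuming a simple "platforms" set per component (the component names are invented):

```python
# Hypothetical sketch: filtering component-to-platform assignments by a
# "platforms" attribute, as suggested above. All names are illustrative.

components = {
    "parser": {"platforms": {"linux", "windows"}},
    "gui":    {"platforms": {"windows"}},
    "daemon": {"platforms": {"linux"}},
}

def deployable_on(platform):
    """Components whose attributes permit deployment on the given platform."""
    return sorted(name for name, attrs in components.items()
                  if platform in attrs["platforms"])

print(deployable_on("linux"))
```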

It might be possible, eventually, to automate the analysis phase sufficiently to enable it to be used to automatically refine the architecture. Techniques such as genetic algorithms might be able to generate and test various alternative designs and perhaps come up with some new design patterns. For the really large systems that we are contemplating this might be the only way to do it.

This is the completion of the description of what a software architecture is. Next we will look at ontologies as these will form the basis for SASSY.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Wednesday 29 June 2011

SASSY – Analysis, Part 6

Views and Viewpoints


The proposed design is documented in a series of Views, each from the viewpoint of one or more stakeholders.

It is important to keep the number of viewpoints in each diagram or document to a minimum, otherwise there will be confusion over what the document is trying to express. This means that for a large system there can be a multitude of documents, which can become a hindrance to the understanding of the system in its own right.

In their paper, Malan and Bredemeyer [M&B05] describe six types of architectural views – a behavioural and a structural view for each of the conceptual, logical, and execution architectures. They go on to suggest the following should form the minimum set:
  • Reference Specification. The full set of architecture drivers, views, and supplements such as the architecture decision matrix and issues list, provides your reference specification.
  • Management Overview. For management, you would want to create a high-level overview, including vision, business drivers, Architecture Diagram (Conceptual) and rationale linking business strategy to technical strategy.
  • Component Documents. For each component owner, you would ideally want to provide a system-level view (Logical Architecture Diagram), the Component Specification for the component and Interface Specifications for all of its provided interfaces, as well as the Collaboration Diagrams that feature the component in question.

For SASSY the knowledge base forms the reference specification. The management overview seems to be a collection containing the original project vision statement and the preliminary analysis, in addition to a view showing the conceptual architecture. The component documents outlined above would be a good start, but you will also want other views, such as a network view and a database view, that have a system wide perspective.

There are a lot of different views that can be created for a complex system. Various researchers have attempted to classify these views into view models, such as the 4+1 architectural view model. Despite these models, the views for any particular system should be carefully selected according to the specific requirements of the project.

The core ontology for SASSY will need to have a collection of views defined along with a relationship that allows the components of the design to be associated with one or more views. We might also include view models so that the user can quickly select a collection of views that will have good coverage of the design.
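The component-to-view relationship might be sketched as follows; the view and component names are hypothetical:

```python
# Illustrative sketch of associating design components with views.
# View and component names are invented for the example.

view_members = {
    "network view":  {"load balancer", "web tier", "db tier"},
    "database view": {"db tier", "reporting db"},
}

def views_for(component):
    """All views in which the given component appears."""
    return sorted(v for v, members in view_members.items() if component in members)

print(views_for("db tier"))
```

In the real ontology this would be a relationship between component and view individuals, but the query shape is the same.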


References

M&B05: Ruth Malan and Dana Bredemeyer, Software Architecture: Central Concerns, Key Decisions, 2005

Tuesday 28 June 2011

SASSY – Analysis, Part 5

Design Patterns


Once the tactics for achieving the desired quality and functionality have been selected the next step is to choose the design patterns that can best implement those tactics. Note that we are not interested in design patterns that deal with algorithms and data structures (such as the observer pattern) but rather we are looking for structural design patterns such as a client-server design.

The SASSY Ontology will need to include a catalogue of architectural design patterns and provide a means to include them in the design for a specific system. High level patterns such as ETL, SOA, Publish/Subscribe, Client/Server, Message Exchange, Peer-to-Peer and Data Warehouse will need to be included. More structural patterns such as Layered Architecture, Pipe and Filter and Event Driven Architecture will also need to be included.

A quick review of the literature has not turned up a very large number of architectural patterns, so this ontology is likely to be easy to implement and manage.

Products


We will also be looking at various existing products that can be used to implement the selected design. For example a message based design might select IBM's WebSphere MQ message queuing product.

Selecting a COTS product may have some side effects. In their book, the authors of [WAL02] repeatedly stress that the examination of a COTS product may influence the requirements for the project. When users see what a product is capable of, they may want their system to also have that capability. This is to be expected. They also describe cases where the software exhibited behaviour that was not desired because, for example, some functionality was not correctly disabled.

The options available will present different flexibility choices. A commercial product will constrain the remainder of the system to its interfaces. An open source product, if used unchanged, would be the same, but there is the possibility of making modifications to it. The most flexible alternative is to build your own, but that flexibility comes at the cost of doing the development, and the corresponding time delay. A possible option is to start with COTS, then migrate to OSS or an in-house development in a later release of the system, once the initial time-to-market pressure has eased.

The number of products available is extremely large. While building an ontology of available products would be extremely valuable it will have to be placed beyond the scope of this project for now.


Monday 27 June 2011

SASSY – Analysis, Part 4

Decomposition


One tactic that is common to any large project is to divide it into more manageable pieces. Malan and Bredemeyer describe how the complexity of a large project can be addressed by the architecture:
Intellectual Intractability
The complexity is inherent in the system being built, and may arise from broad scope or sheer size, novelty, dependencies, technologies employed, etc.

Software architecture should make the system more understandable and intellectually manageable—by providing abstractions that hide unnecessary detail, providing unifying and simplifying concepts, decomposing the system, etc.

Management Intractability
The complexity lies in the organization and processes employed in building the system, and may arise from the size of the project (number of people involved in all aspects of building the system), dependencies in the project, use of outsourcing, geographically distributed teams, etc.

Software architecture should make the development of the system easier to manage—by enhancing communication, providing better work partitioning with decreased and/or more manageable dependencies, etc.


My experience with very large projects leads me to the conclusion that the first partitions should be on political lines. This has the advantage that it also divides the stakeholders which can be a significant step towards getting a solution. All that has to happen is for the interfaces to be defined and each subproject can be worked upon independently.

This should be repeated recursively until tractable projects emerge.

A variation on the political division idea is to also include a “technical” partition in addition to the functional partitions. This is made responsible for the common aspects of the system. It can be an easy sell to the stakeholders as it reduces their individual costs, being a shared component. You can then move functions in or out of the technical sub-project as best suits the design. The stakeholders will then also gain an appreciation of the cost of having a special component, and may decide that the cheaper alternative is to modify the business process instead.

In their Attribute Driven Design technique Bass, Clements and Kazman describe the following steps for designing the architecture:
  1. Choose the module to decompose. The module to start with is usually the whole system. All required inputs for this module should be available (constraints, functional requirements, quality requirements).
  2. Refine the module according to these steps:
    1. Choose the architectural drivers from the set of concrete quality scenarios and functional requirements. This step determines what is important for this decomposition.
    2. Choose an architectural pattern that satisfies the architectural drivers. Create (or select) the patterns based on the tactics that can be used to achieve the drivers. Identify child modules required to implement the tactics.
    3. Instantiate modules and allocate functionality from the use cases and represent using multiple views.
    4. Define interfaces to the child modules. The decomposition provides modules and constraints on the types of module interactions. Document this information in the interface document for each module.
    5. Verify and refine use cases and quality scenarios and make them constraints for the child modules. This step verifies that nothing important was forgotten and prepares the child modules for further decomposition or implementation.
  3. Repeat the steps above for every module that needs further decomposition.
An ontology with its subclassing capability seems like a good fit for decomposing a system into its component modules. If SASSY fulfils its requirements, step 2c above will be significantly automated, since the point of the project is to generate the various views of the system.
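The recursive loop of the ADD steps above can be sketched as follows. The driver, pattern, and decomposition functions stand in for human design decisions and are purely illustrative:

```python
# Minimal sketch of the recursive Attribute Driven Design loop outlined
# above. The callbacks are stand-ins for the human (or future automated)
# design decisions, not real SASSY APIs.

def refine(module, drivers_for, pattern_for, children_of, is_tractable):
    """Recursively decompose a module until every leaf is tractable."""
    if is_tractable(module):
        return {"module": module, "children": []}
    drivers = drivers_for(module)             # step 2a: pick drivers
    pattern = pattern_for(drivers)            # step 2b: pick a pattern
    children = children_of(module, pattern)   # steps 2c-2e: instantiate modules
    return {"module": module, "pattern": pattern,
            "children": [refine(c, drivers_for, pattern_for,
                                children_of, is_tractable) for c in children]}

# A toy run: decompose "whole system" once into a client-server pair.
tree = refine(
    "whole system",
    drivers_for=lambda m: ["scalability"],
    pattern_for=lambda d: "client-server",
    children_of=lambda m, p: ["client", "server"] if m == "whole system" else [],
    is_tractable=lambda m: m != "whole system",
)
print([c["module"] for c in tree["children"]])
```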


Saturday 25 June 2011

SASSY – Analysis, Part 3

Requirements


The requirements for a system come in three categories: Functional, Environmental, and Quality.

The functional requirements describe what the system is required to do.

The environmental requirements are usually restrictions that the system must conform to, such as the operating system that must be used, or that the software will be for medical use.

The quality requirements describe those other aspects of the system, such as performance, safety, usability and maintainability.

Tactics


Each component of a system is assigned a set of responsibilities – the things that it is accountable for.

For the functional requirements these normally map directly to a component’s responsibilities. That is, a particular component will be responsible for a particular system function.

For the quality requirements, however, there is no single component that is directly responsible. No component could be entirely responsible for the performance of the system. A tactic is what maps these requirements to a set of responsibilities, which can then be assigned to components. For example a scalability requirement might be met by using a client-server design – the tactic. We can then map the consequential responsibilities to components; in this case by putting a service in one component and a client in another, and perhaps a name lookup service in a third.

Thus the design process is one of selecting a set of tactics that give the best response to each of the quality requirements. Of course there will be competition since using one tactic might compromise the use of another – it is hard to have a system that has both good security and good usability, for example.
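A tactic catalogue of this kind might be sketched as a simple mapping from quality attribute to tactic to responsibilities. The entries below are illustrative examples, not a real catalogue:

```python
# Hypothetical tactic catalogue: quality attribute -> tactic -> the
# responsibilities that choosing the tactic introduces. All entries are
# invented for illustration.

TACTICS = {
    "scalability": {
        "client-server": ["service component", "client component",
                          "name lookup service"],
    },
    "availability": {
        "active redundancy": ["primary replica", "standby replica",
                              "failure detector"],
    },
}

def responsibilities(quality, tactic):
    """Responsibilities introduced by choosing a tactic for a quality."""
    return TACTICS[quality][tactic]

print(responsibilities("scalability", "client-server"))
```

The design step is then to pick one tactic per quality requirement and assign the resulting responsibilities to components, watching for conflicts between tactics.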

In their introduction to tactics Bass, Clements and Kazman define a tactic as a design decision that influences the control of a quality attribute response. They also define an architectural strategy as the collection of tactics, and an architectural pattern as something which packages one or more tactics.

The software architecture ontology that will form the foundations for SASSY should include a collection of well known tactics and their relationships to the quality attributes.

Alternative Requirements


While considering some potentially very large systems it became apparent that the requirements were not always single valued. For example, building a system to support an entire organisation might have an environmental requirement that the solution must use Microsoft products where possible. However we can easily imagine a similar project that had a requirement for open source software where possible. The conceptual architecture for these two alternatives is likely to be almost identical, with the differences appearing in the logical and execution architectures.

Similarly the same project – a system to support an entire organisation – might be required to support a small office, a one-man company, or a large government department. When you look at the functionality, a lot of the requirements are identical but the quality requirement of scale has various alternatives. Again the conceptual design is likely to be quite consistent across all scales, but the logical and execution architectures will diverge.

It would seem then that the design of SASSY should include the capability of creating a suite of alternative designs based on these varying requirements.

It is somewhat amazing that we have gone from systems that are gigantic in scope to suites of such systems!


Friday 24 June 2011

SASSY – Analysis Part 2

Quality Attributes


A fundamental driver of modern software architecture development is the set of quality attributes that the system is required to have. Factors such as performance, security, safety, usability and maintainability are often much more important in the design process than the functional requirements.

These quality requirements are so important that SASSY should provide some support for their acquisition.

The core ontology for SASSY will be the software architecture knowledge base. An important part of that ontology will be a collection of quality attributes. The system should allow its user to assign an importance to each attribute for the specific project and thus include the quality requirements in the project's ontology.

Quality Frameworks


Several researchers in this field have developed classifications of the quality attributes according to how they see the relationships between them. One such classification has been adopted as an ISO standard.

FURPS
A model developed by Hewlett-Packard that partitions the quality attributes into Functionality, Usability, Reliability, Performance, and Supportability.
Boehm
A model that partitions the quality attributes into Utility, Portability, and Maintainability. Utility is further divided into Reliability, Usability, and Efficiency.
McCall
Another model that partitions the quality attributes in a manner similar to the Boehm model, dividing them into operations, transitions, and revisions.
Dromey
This model partitions the quality attributes into Correctness (Functionality and Reliability), Internal (Maintainability and Performance), Context (Portability and Reusability) and Descriptive.
ISO-9126
This model partitions the quality attributes into Functionality, Reliability, Maintainability, Usability, Portability and Efficiency.

We should allow the user to select whichever classification they need, or use what might be called the Ontological classification of all known terms.
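Such a selectable catalogue might start as nothing more than a lookup table; the entries below simply restate some of the frameworks listed above:

```python
# Illustrative lookup table of quality frameworks, so a user could pick a
# classification to start from. Only a subset of the models is shown.

FRAMEWORKS = {
    "FURPS":    ["Functionality", "Usability", "Reliability",
                 "Performance", "Supportability"],
    "Boehm":    ["Utility", "Portability", "Maintainability"],
    "ISO-9126": ["Functionality", "Reliability", "Maintainability",
                 "Usability", "Portability", "Efficiency"],
}

def top_level_attributes(framework):
    """Top-level quality attributes for a chosen framework."""
    return FRAMEWORKS[framework]

print(top_level_attributes("ISO-9126"))
```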

Quality Attribute Scenarios


It is all very well to specify that the system must achieve some quality requirement, but it is of no real use if there is no way to determine if the requirement is being met. In their book, Software Architecture in Practice, Bass, Clements and Kazman describe a set of Quality Attribute Scenarios that can be used to test the quality of a system.

Each scenario consists of six parts:
  • The source of the stimulus: some entity that generates a stimulating event;
  • The stimulus: some event that must be considered when it arrives at the system;
  • The environment in which the stimulus is processed, such as a running instance of the system, or a system under some stress;
  • The artefact that is stimulated, which may be the entire system or some part of it;
  • The response that the system makes after the arrival of the stimulus; and
  • Some measurement that can determine how the system responded.
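The six parts above can be captured as a simple record; the sample values here are invented for illustration:

```python
from dataclasses import dataclass

# The six-part quality attribute scenario from Bass, Clements and Kazman,
# sketched as a record. The sample values are hypothetical.

@dataclass
class QualityScenario:
    source: str       # entity generating the stimulus
    stimulus: str     # event arriving at the system
    environment: str  # conditions under which it arrives
    artefact: str     # part of the system stimulated
    response: str     # what the system does
    measure: str      # how the response is judged

latency = QualityScenario(
    source="external user",
    stimulus="submits a search query",
    environment="normal operation",
    artefact="query service",
    response="results returned",
    measure="95th percentile latency under 2 seconds",
)
print(latency.artefact)
```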

While some of these are quite obvious, such as how long the system takes to respond to an incoming event, others are a bit more problematic. For example it is quite reasonable to have a quality requirement that states that the system must be easy to maintain. However, the system itself is unlikely to include facilities for self modification. This sort of requirement forces us to expand the scope of what we mean by “the system” to include not only a set of running executables, but also the infrastructure required to maintain it – the maintenance team becomes part of the system.

SASSY will need to include the ability to set up these scenarios for all the important quality requirements.


Wednesday 22 June 2011

SASSY – Analysis, Part 1

If we are going to build a system for doing software architecture we had better get a very good grip on what a software architecture actually is.

Scope


Garlan and Shaw describe software architecture as the design problems that go beyond the selection of algorithms and data structures:

“Structural issues include gross organization and global control
structure; protocols for communication, synchronization, and data
access; assignment of functionality to design elements; physical
distribution; composition of design elements; scaling and
performance; and selection among design alternatives.”

A common theme in the discussion of the nature of software architecture is that it covers those design issues that encompass more than one component module of the system.

Malan and Bredemeyer describe the central concerns of a software architect as being the setting of system wide priorities; system decomposition and composition; system wide properties, especially those that cut across multiple components; how well the system fits its context; and the overall integrity of the system.

Structure


Malan and Bredemeyer describe an architectural framework as having three main layers:
  • A meta-architecture with a focus on the high level decisions that will strongly influence the structure of the system, rule certain structural choices out, and guide selection decisions and tradeoffs among others;
  • An architecture with a focus on decomposition and allocation of responsibilities, interface design and assignment to processes and threads; and
  • A set of guidelines and policies which guide the engineers creating designs that maintain the integrity of the system.
The architecture layer is further divided into three sub-layers:
  • A conceptual architecture with a focus on identification of components and allocation of responsibilities to components;
  • A logical architecture with a focus on design of component interactions, connection mechanisms and protocols, interface design and specification, and providing contextual information for component users; and
  • An execution architecture with a focus on the assignment of the runtime component instances to processes, threads and address spaces; how they communicate and coordinate and how physical resources are allocated to them.

Process


At its core the software architecture process entails taking the requirements and preliminary analysis and developing a set of documents and diagrams describing the system to be constructed.

The functional requirements, and perhaps more importantly the quality requirements are first examined in order to develop a set of tactics that will best satisfy those requirements. The software architecture discipline has a large body of well known tactics that can be combined to produce the approach for the specific problem.

The tactics are then used to select appropriate COTS products and to develop a set of design patterns.

The project is divided into sub-projects which can each be separately developed. This is important because very few very large projects ever complete successfully; it is necessary to subdivide them into manageable chunks of a size at which the success rate is more satisfactory.

The overall design is then subjected to an analysis to determine if it is a viable solution.

The interfaces between these components are then fully documented so that each sub-project can get under way as soon as possible.

A set of scenarios is developed that will form the basis of the test cases for the system. These tests are designed to ensure that not only are the functional requirements met, but also the quality requirements.


Sunday 19 June 2011

The SASSY Project - Introduction

The Super Sized Software page introduced the idea of the very large project. At
the end I lamented the lack of any decent support for building really large
systems. The Software Architecture Support System (or SASSY) is a project I have
undertaken to try and address this problem.

The basic idea is to build a knowledge database containing all the information
concerned with the architecture of the system. By having a single repository
we avoid the issues of having things get out of synch. We can then generate
all the architectural documentation from that repository.

My plan is to do the initial development as a personal project, and if it starts
to look promising, migrate it to Source Forge and open it up for others to assist.
Such projects will only take off if there is something solid to start
with - hence the initial private project.

While SASSY is not likely to become a hugely complex project, I thought that it
might be interesting to use itself as its first project. This way I only have
to design the architecture for one system, not two.

The Vision Statement

The goal is to produce a software system that assists with the task of creating
an architecture for a software system.

The current process involves writing text documents, typically using a word
processor, and trying to embed into them a set of diagrams that describe the
proposed design at a high level. The results are generally not very satisfactory
for a variety of reasons.

Creating diagrams for inclusion into word processing documents is not easy.
Some word processors are quite poor at handling diagrams. There are serious
limits to the complexity that can be described in such a diagram. For a complex
system the diagram will either be too cluttered to be readable or such a high
level view that it is essentially meaningless.

Creating diagrams is often manually intensive. Even with diagramming editors
that have the appropriate symbols and an understanding of what the symbols
represent it is still necessary to place the symbols and label them. This
takes a considerable amount of tedious effort. The result is that diagrams can
quickly become out of date as the design evolves but there is not time to
revisit the diagrams. There is also a tendency to combine several aspects of
the design into the one diagram. This can make the diagrams confusing and
ambiguous.

The use of word processing documents to describe the system also has its
problems. When writing a document it is usual to target a specific audience
- perhaps the developers, perhaps the stakeholders, for example. It is very
difficult to write a single document for the use of a variety of audiences.
It will have too much detail that is not relevant for most of its readers,
and conversely each reader will have difficulty finding the information that
they need.

The rather obvious answer is to create numerous documents and diagrams each
tailored to a specific aspect of the design. However this brings with it
another problem, namely, how to keep a large collection of objects consistent
while the design goes through its inevitable evolution?

The current process for developing a software architecture does not scale very
well. For very large projects it becomes necessary to use a significant number
of 3rd party products (unless you have an extraordinarily large budget). This
range of products can easily exceed the knowledge of any single architect.
Moving to a team of architects presents its own problems since the
project will no longer have a single visionary to guide it.

The objective of this project is to solve these problems. The approach is to
capture the design, the software architecture, in an ontology or knowledge
database. Being in a single repository it can be kept consistent, and in fact
the tools available for checking ontologies can be used to identify any
inconsistencies. We will then develop a set of tools that can generate the
documentation and diagrams for each aspect or view that is required. A single
repository, with multi-user access, can allow multiple architects to
collaborate on the design process by enabling each one to easily see the
information that is relevant to the part of the project that they are working on.

Software will then extract the data from the knowledge base and feed them
into programs that will generate the text and diagrams. A user interface
component will coordinate the process. The output format should be PDF so
that there is less temptation to edit the output documents rather than the
source knowledge base.
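That pipeline might be sketched as follows, with the knowledge base reduced to a dictionary and a "view" rendered as plain text. Everything here is a hypothetical stand-in for the real ontology tooling:

```python
# Hypothetical sketch of generating per-view documents from a single
# repository, as described above. Repository contents are invented.

repository = {
    "components": {
        "parser": {"views": ["logical"], "responsibility": "parse input"},
        "scheduler": {"views": ["logical", "execution"],
                      "responsibility": "dispatch tasks"},
    },
}

def render_view(view):
    """Produce a plain-text document for one view from the repository."""
    lines = [f"View: {view}"]
    for name, info in sorted(repository["components"].items()):
        if view in info["views"]:
            lines.append(f"  {name}: {info['responsibility']}")
    return "\n".join(lines)

print(render_view("execution"))
```

Because every document is regenerated from the one repository, a change to a component's entry flows into every view that mentions it, which is the whole point of the single-repository approach.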

Next time we will look at the preliminary analysis for the SASSY project.


Thursday 16 June 2011

Development Process

We have looked at the model and the methodology. Now we bring it all
together into a development process.

The process is what controls the development. We need to control the quality
of the system, the changes that can be made to the system, and the progress
in developing the system. The fourth pillar for managing a project is
knowledge distribution.

Change Control

The fundamental tool for managing change is the version control system. I am
amazed at how often I find projects that do not even have this.

However, change control goes beyond what version control systems can provide.
You must be able to constrain the development to what is necessary to satisfy
the requirements. How you do this will depend on the project - it could be
nothing more than some guidelines in a document - or it might be a system that
prevents access to the code base without explicit authorisation, teamed with
a gate keeper that only allows changes to be merged in once they have been
thoroughly tested and reviewed.

A comprehensive test suite can be of great benefit in managing changes. One of
the impediments to applying a change is not knowing if it will break some
existing component. If you can run the test suite after applying the change
you can tell immediately if it has introduced a defect. The test suite thus
gives you the freedom to attempt to introduce more complex changes.


As well as changes to the code base, you will also need to manage changes to
the requirements. Once the users see the product they will start to
request changes. This may be because they can see that some of the COTS products
have more capability than they originally expected, or because the product allows
for new capabilities for the business.

The incremental development model allows us to manage the changes to requirements
in a controlled manner. At the start of each increment we should schedule a
short analysis task to consider how any new requirements might influence the
design. These requirements can then be scheduled into future increments, allowing
the product to evolve to meet the business needs.

In order to keep things under control you will need to keep a register of
change requests where you track how each one is progressing through the
development cycle. I find that an increment plan which describes what is going
to be done in future development cycles (and any dependencies) is a useful
device for explaining the long term development strategy.
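A change request register need not be elaborate; even a small script over structured records will do. The sketch below is purely illustrative - the stage names, fields, and requests are invented, not part of any standard:

```python
from dataclasses import dataclass

# Stages a change request moves through; the names are illustrative only.
STAGES = ["raised", "analysed", "scheduled", "implemented", "tested", "closed"]

@dataclass
class ChangeRequest:
    cr_id: int
    title: str
    increment: str = "unscheduled"  # which increment it is planned for
    stage: str = "raised"

    def advance(self) -> None:
        """Move the request to the next stage of the development cycle."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

register = [
    ChangeRequest(1, "Export reports as PDF", increment="increment 3"),
    ChangeRequest(2, "Handle the new COTS message format"),
]
register[0].advance()  # raised -> analysed

# A one-line summary of where everything sits in the cycle.
summary = {s: sum(1 for cr in register if cr.stage == s) for s in STAGES}
print(summary)
```

Even a summary this crude is enough to spot requests that have stalled between stages.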

Quality Control

The quality of a system has many aspects. The most important is functionality -
the system needs to do what it was required to do - but there are many, many
others. In the literature I have found over 130 terms used to describe some form
of quality attribute.

Quality Measurement
The first step to controlling something is to be able to measure it. There are
a few ways to measure the quality of a software system:

Code reviews can be a source of data for measuring the quality of the system.
A simple count of the noted defects (perhaps grouped by severity) will give a
guide to how well things are going.

Test harnesses are another source of quality data. The unit tests should provide
a report detailing the number of tests run, and the number of failures found.
Of course these are mostly aimed at the functionality of the system.

In order to test the non-functional quality requirements there needs to be a
set of tests designed to evaluate how well they are satisfied. For some
requirements this can be as simple as measuring the resource footprint, but
others will require more sophisticated tests, and some are so subjective as
to be close to untestable.
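The two sources of data above reduce to a handful of numbers worth tracking from increment to increment. A minimal sketch, with invented review findings and test figures:

```python
from collections import Counter

# Review findings as (component, severity) pairs; the data is invented.
review_defects = [
    ("parser", "high"), ("parser", "low"),
    ("ui", "medium"), ("ui", "low"),
    ("db", "low"),
]

# Defect counts grouped by severity, as suggested for review data.
by_severity = Counter(severity for _, severity in review_defects)
print(dict(by_severity))

# A unit-test report reduced to the figures worth tracking over time.
tests_run, failures = 412, 7
failure_rate = failures / tests_run
print(f"{failures}/{tests_run} tests failing ({failure_rate:.1%})")
```

Plotting these figures over successive increments gives the trend, which is usually more informative than any single reading.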


Actions
The spiral development model enables us to get the quality of the system under
our control. By deliberately starting with a lower quality product we can still
deliver much of the required functionality early in the process. We can then
make decisions about the best way of improving the quality of the product
and apply them, one by one, until the desired quality performance is met.

Progress Control

The aim of progress control is to deliver a system that satisfies the requirements
within the budget allocated to the project.

The first step is to determine what the budget actually is, and that involves
estimating the size of the project, based on little more than the initial
analysis and perhaps a feasibility study. Obviously, as the development unfolds
it gets easier to develop an accurate estimate, but this is often long after
any funding arrangements have been made.

Measuring the progress is the usual project management task of plotting the tasks
in a project management tool (such as MS Project) and keeping track of how much of
each one has been completed. Finer measurements, such as counting lines of code
can be used, but might be compromised if they are susceptible to tampering.
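Weighting each task's completion by its estimate gives a single progress figure of the kind a project management tool reports. A minimal sketch (the tasks and figures are invented):

```python
# Tasks as (name, estimated_days, fraction_complete); figures are invented.
tasks = [
    ("analysis", 10, 1.0),
    ("design", 15, 1.0),
    ("implementation", 40, 0.5),
    ("testing", 20, 0.0),
]

total = sum(est for _, est, _ in tasks)
done = sum(est * frac for _, est, frac in tasks)
progress = done / total
print(f"Project {progress:.0%} complete")  # prints "Project 53% complete"
```

The weakness, of course, is that the fraction-complete figures are self-reported, which is exactly the tampering risk noted above.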

The iterative aspects of the modern development models can be an issue with
these project management tools. You need to add extra tasks for supporting
the subsequent tasks - for example there will need to be two implementation
tasks, one to create the code, and a second to fix the bugs found by the
test team.

Controlling progress is primarily achieved by assigning a suitable number of
developers to the project. Additional control can be had by using the incremental
model to defer some functionality, or the spiral model to defer some of the
quality, so that something useful gets delivered as early as possible.

Knowledge Management

This is an addendum to the original article.
On a recent project it occurred to me that there was a fourth element to the
development process, namely the distribution of knowledge about the system.
A quick way to ruin a project is to make one or more of the team members
indispensable because only they know some important aspects of how it all works.

In the examples that I have seen over the years it is not the fine details
of the algorithms that cause the problems, but rather the overall flow of data through
the processing that causes the most difficulty for new members of a team. Once they
have that big picture they can zero in on almost any issue. Conversely, without
that overview they are left utterly confused and reliant on the experienced team
members to point them at the likely cause of a bug. My preference is for a set of
diagrams showing the static relationships, a set showing the flow of data (including
processes, files and database tables) and a set of collaboration diagrams for most
use cases (preferably generated from the running code).

I once heard about a company (Japanese if I recall) that had a policy of firing
anyone who had become indispensable. This is perhaps a little too extreme, but not
by much, because if that person were to leave then your project might be in
difficulty.

Some mechanisms for knowledge distribution include collaborative working, code and
design reviews and regular team meetings to discuss deep technical issues. The
documentation for the system needs to be kept current and useful.


Tuesday 7 June 2011

Development Tasks, Part 4

This is the fourth part of a series describing the development tasks for a software
project. See Software Development Process and Development Methodology for the
context of this topic.

Implementation

Fill in the details of each method, and add test cases to the test harness to
exercise any that are non-trivial.

The coding should be guided by a set of coding standards. You should include
error handling from the first cut of the code so you can tell if it is working
correctly. My usual approach is to start with a simple error message, giving
the file name and line number, written to the console, and then add more
sophisticated error handling in subsequent rounds of development.
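In Python, that first-cut style falls out of the standard logging module, which can stamp each message with the file name and line number automatically. The function below is hypothetical, just to show the shape:

```python
import logging
import sys

# First cut: every error goes to the console with file name and line number.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(filename)s:%(lineno)d: %(levelname)s: %(message)s",
)
log = logging.getLogger(__name__)

def load_config(path: str) -> dict:
    """Hypothetical routine illustrating the first-cut error style."""
    try:
        with open(path) as fh:
            text = fh.read()  # real parsing would go here
    except OSError as exc:
        log.error("cannot open %s: %s", path, exc)
        return {}
    return {"raw": text}
```

A later round of development can swap the console handler for something more sophisticated without touching any of the call sites.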

The code should be reviewed by other team members. This provides a mechanism to
spread the knowledge of how the system works across a broader cross section of
the team.


Documentation

Typical documentation includes on-line help (including tool-tips), user
manuals, and installation guides. You may also need to provide administration
guides if the system is complex.

The design documentation and programmer's guides should also be included in
the document set if the system is to be able to be extended by others.

The project documentation should be kept for reference by the maintenance team.


Use Case Test

We need to be able to demonstrate that the classes have been built correctly.


Interface and Application Integration

The aim is to join the separately developed components into a single unified
system.

Combine the application and user interface classes.

Create shell scripts to mediate between executables and set their environments
and parameters. Retest the scenarios using the UI as a driver instead of
the test cases. Use the test plan as a guide to walk through the user transactions
that establish the test pre-conditions, then test the scenarios.
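A mediating script need only set up the environment and pass the parameters through to the executable. The sketch below uses Python rather than a shell purely for illustration; the configuration variables are invented, and the Python interpreter stands in for the real executable:

```python
import os
import subprocess
import sys

def run_wrapped(exe: str, args: list[str]) -> int:
    """Run an executable with its environment and parameters set up."""
    env = dict(os.environ)
    env["APP_CONFIG"] = "/etc/sassy/app.conf"  # invented settings
    env["APP_LOG_LEVEL"] = "debug"
    return subprocess.run([exe] + args, env=env).returncode

# Demonstrate with the Python interpreter standing in for the real executable:
# the child process checks that the environment was passed through.
status = run_wrapped(
    sys.executable,
    ["-c", "import os; assert os.environ['APP_LOG_LEVEL'] == 'debug'"],
)
print("exit status:", status)
```

Keeping the environment set-up in one wrapper means the UI, the test harness, and the operators all launch the executable the same way.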



Generalization

The aim is to improve the design using a bit of hindsight.

Examine the class design in the light of implementation to find improved
structures. Review future increments to find potentially reusable classes
and separate out the reusable parts into new abstract classes. Look for
standard design patterns.

It is possible to spend too much time in this phase, finessing the code
without making significant improvements. It is also possible to overdo the
generalization, making the code harder to understand.


System Integration Testing

Many systems need to interface to other systems. These interfaces need to be
tested. This testing requires a different approach as it usually involves
other organisations.

Liaise with the system administrators of the other systems to organise the
test. Develop the procedures necessary for the systems to interconnect. Put the
remote systems into test mode and pass test data across the connections. Use
the test harness logging to verify that the systems communicate correctly.



Packaging

Once construction has finished, the system has to be packaged so that it can be
easily installed on the client’s machines.

The build script will include a mode that builds the delivery package. Once built
and tested, the files are copied onto the distribution media. Include a script
that will install the package onto the client’s system, possibly replacing the
previous version, and upgrading the client’s data files as necessary. The data
upgrade should be non-destructive so that the client can back out the upgrade
if they so wish.
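A non-destructive data upgrade can be as simple as copying the old data aside before migrating it, so that backing out is just a restore. The sketch below assumes a JSON data file with a schema version field; the file layout and function names are invented:

```python
import json
import shutil
from pathlib import Path

def upgrade_data(path: Path) -> None:
    """Upgrade a (hypothetical) JSON data file, keeping a backout copy."""
    backup = path.with_suffix(path.suffix + ".bak")
    shutil.copy2(path, backup)           # non-destructive: old data kept
    data = json.loads(path.read_text())
    data["schema_version"] = 2           # the migration itself goes here
    path.write_text(json.dumps(data))

def back_out(path: Path) -> None:
    """Restore the previous data if the client backs out the upgrade."""
    backup = path.with_suffix(path.suffix + ".bak")
    path.write_text(backup.read_text())
```

The same pattern scales up to database migrations: take the backup first, and keep it until the client accepts the new version.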

Acceptance Testing

The client needs to assure themselves that they are getting what they paid
for.

You will need to support their testing by providing an environment that
allows the testing to be performed and the results to be gathered into
a useful format.

You will also need to provide some guidance on what tests to perform. While
the client should design the tests based on their requirements, it is the
development team that really understands what the product can do.


Post Implementation Review

The aim is to identify anything that could have been done better.
A combination of one-on-one interviews and team meetings should be used
to elicit ideas for improvement of the development process and the
product. The use of blogs and wikis by the team members should also be
encouraged as a means of proposing improvements to the process.
