Thursday 30 June 2011

SASSY – Analysis, Part 7

Architecture Analysis


Once a design has emerged, it needs to be analysed to determine how well it will meet the requirements. The analysis is usually risk-based, since a complete formal analysis would be too time-consuming.

The usual approach is as follows:
  • Each quality scenario is examined in turn and rated for importance: low, medium or high. The stakeholders will need to discuss and agree on these ratings.
  • Each scenario is then examined to see how it will perform under the proposed design. A risk level (low, medium or high) is assigned, indicating how difficult it is likely to be to meet the requirement.
  • The high-importance, high-risk scenarios become candidates for alternative tactics or designs.
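The triage above can be sketched in a few lines. This is only an illustration of the idea; the scenario names, ratings and threshold are hypothetical examples, not part of any particular method.

```python
# Map the low/medium/high ratings to numbers so we can compare them.
RATING = {"low": 1, "medium": 2, "high": 3}

# Hypothetical quality scenarios: (description, importance, risk).
scenarios = [
    ("recover from node failure within 30 seconds", "high", "high"),
    ("add a new report type in one day", "medium", "low"),
    ("handle 10x traffic at peak", "high", "medium"),
]

def candidates(scenarios, threshold=3):
    """Flag scenarios whose importance and risk are both at or above
    the threshold; these warrant alternative tactics or designs."""
    return [
        name
        for name, importance, risk in scenarios
        if RATING[importance] >= threshold and RATING[risk] >= threshold
    ]

print(candidates(scenarios))
```

With the sample data, only the high-importance, high-risk scenario is flagged for redesign.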

However, once we have the architecture in a machine usable form we have some additional possibilities. We can perform various measurements on the interconnections between the components to get metrics for the dependency relationships.



In his thesis on a Software Architecture Analysis Tool, Johan Muskens suggested the following measurements:
  • The components that call a service, or from which a service is called, in the context of a scenario that is part of a specific use-case. The main thought behind this metric is that related functionality spread over the design is bad for maintainability and reusability.
  • The number of use-cases that contain a scenario in which a service of a specific component is called, or in which that component calls a service. The main thought behind this metric is that cohesion within a component should be high.
  • Tasks should be distributed over the design as evenly as possible. When the number of called services of a component is high, this can indicate that the component is a possible bottleneck for scalability. It also indicates that other components depend heavily on that component.
  • The number of service calls of a specific component across all scenarios. The main thought behind this metric is that dependencies are bad for reusability and maintainability.
  • The number of different services called by a specific component across all scenarios. The main thought behind this metric is that dependencies are bad for reusability and maintainability.
  • The number of provided services of a specific component. The main thought behind this metric is that a well-balanced distribution of functionality over the design is good for extendibility and maintainability.
  • The number of components from which a service is called. The main thought behind this metric is that dependence on many different components is bad for reusability, extendibility and maintainability.
  • The number of components that call any service of a specific component. The main thought behind this metric is that a large number of components depending on a specific component is bad for extendibility and maintainability.
  • The average number of state transitions per service for a component. The main thought behind this metric is that complexity of components and services is bad for maintainability and extendibility.
  • The maximum number of subsets of services of a component such that the sets of use-cases using services of the subsets are disjoint. This gives an indication of the cohesion between the services provided by a component.
  • The depth of nesting of service calls within a scenario, which indicates the scenario's complexity. If the depth is too high, this is bad for understandability, and therefore also bad for maintainability and adaptability.
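Several of the metrics above reduce to counting edges in the component call graph. As a minimal sketch, the following computes a fan-out metric (different services a component calls) and a fan-in metric (components that call it) from a list of observed service calls; the component names and call records are hypothetical.

```python
from collections import defaultdict

# Hypothetical service calls observed across scenarios:
# (caller component, callee component, service name).
calls = [
    ("OrderUI", "OrderService", "placeOrder"),
    ("OrderUI", "Catalog", "lookupItem"),
    ("OrderService", "Billing", "charge"),
    ("ReportGen", "Billing", "listCharges"),
]

fan_out = defaultdict(set)  # component -> (callee, service) pairs it calls
fan_in = defaultdict(set)   # component -> components that call it

for caller, callee, service in calls:
    fan_out[caller].add((callee, service))
    fan_in[callee].add(caller)

print(len(fan_out["OrderUI"]))  # different services OrderUI depends on
print(len(fan_in["Billing"]))   # components depending on Billing
```

With this sample data, OrderUI calls two different services and two components depend on Billing; the same graph supports the other call-count metrics with minor changes to what is accumulated.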


The absolute values of these measurements are not the important criterion. However, values that are abnormal may indicate a problem with the design.
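One simple way to spot abnormal values is to flag components whose metric lies well away from the mean. This is only a sketch of that idea; the component names, fan-in values and 1.5-standard-deviation cutoff are assumptions for illustration.

```python
from statistics import mean, stdev

# Hypothetical fan-in counts (components depending on each component).
fan_in = {"Billing": 14, "Catalog": 3, "OrderUI": 0, "Auth": 4, "Search": 2}

def outliers(metric, k=1.5):
    """Return components whose value is more than k standard
    deviations from the mean of the metric."""
    m, s = mean(metric.values()), stdev(metric.values())
    return [name for name, v in metric.items() if abs(v - m) > k * s]

print(outliers(fan_in))
```

Here only Billing is flagged: many components depend on it, which the metrics above suggest is bad for extendibility and maintainability.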

If we add attributes to the components, then we can also measure things like performance and portability for the overall design. For example, a component could have an attribute indicating which platforms it is suitable for. This would then feed into the execution architecture to control how the components are assigned to platforms. A performance attribute for each component could be used to assign components to CPU cores to get the best cost-versus-performance balance.
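The platform-assignment idea can be sketched as a simple greedy placement: each component declares the platforms it supports and a load estimate, and we place it on the first compatible platform with spare capacity. All component names, loads and capacities here are hypothetical, and a real execution architecture would use a more sophisticated allocator.

```python
# Hypothetical components with platform and performance attributes.
components = {
    "OrderService": {"platforms": ["linux", "windows"], "load": 3},
    "Billing": {"platforms": ["linux"], "load": 5},
    "ReportGen": {"platforms": ["windows"], "load": 2},
}

platforms = {"linux": {"capacity": 10}, "windows": {"capacity": 6}}

def assign(components, platforms):
    """Greedily place each component on the first compatible
    platform that still has enough spare capacity."""
    remaining = {p: spec["capacity"] for p, spec in platforms.items()}
    placement = {}
    for name, spec in components.items():
        for p in spec["platforms"]:
            if remaining[p] >= spec["load"]:
                placement[name] = p
                remaining[p] -= spec["load"]
                break
    return placement

print(assign(components, platforms))
```

With this data, OrderService and Billing land on linux and ReportGen on windows; swapping the load attribute for a measured performance figure gives the cost-versus-performance balancing mentioned above.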

It might eventually be possible to automate the analysis phase sufficiently for it to be used to refine the architecture automatically. Techniques such as genetic algorithms might be able to generate and test alternative designs, and perhaps come up with some new design patterns. For the really large systems that we are contemplating, this might be the only way to do it.

This completes the description of what a software architecture is. Next we will look at ontologies, as these will form the basis for SASSY.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
