ISO 9126 - International standard for evaluating software products
In 1991, the ISO published its first international consensus on the terminology
for the quality characteristics of software product evaluation (ISO 9126:1991).
From 2001 to 2004, the ISO published an expanded version, containing both the ISO
quality models and inventories of proposed measures for these models. The standard
is divided into four parts, which address, respectively, the quality model, external
metrics, internal metrics, and quality-in-use metrics:
- Quality models - ISO 9126-1.
- External metrics - ISO TR 9126-2.
- Internal metrics - ISO TR 9126-3.
- Quality in use metrics - ISO TR 9126-4.
Internal metrics are those that do not rely on software execution (static measures),
while external metrics apply to running software. Ideally, internal quality determines
external quality, which in turn determines quality in use.
The quality model established in the first part of the standard, ISO 9126-1, classifies
software quality into a structured set of characteristics and sub-characteristics as follows:
- Functionality - A set of attributes that bear on the existence of a set of functions
and their specified properties. The functions are those that satisfy stated or implied
needs. Sub-characteristics: Suitability, Accuracy, Interoperability, Compliance, Security;
- Reliability - A set of attributes that bear on the capability of software to maintain
its level of performance under stated conditions for a stated period of time.
Sub-characteristics: Maturity, Recoverability, Fault Tolerance;
- Usability - A set of attributes that bear on the effort needed for use, and on the
individual assessment of such use, by a stated or implied set of users.
Sub-characteristics: Learnability, Understandability, Operability;
- Efficiency - A set of attributes that bear on the relationship between the level
of performance of the software and the amount of resources used, under stated conditions.
Sub-characteristics: Time Behavior, Resource Behavior;
- Maintainability - A set of attributes that bear on the effort needed to make specified
modifications. Sub-characteristics: Stability, Analyzability, Changeability, Testability;
- Portability - A set of attributes that bear on the ability of software to be transferred
from one environment to another. Sub-characteristics: Installability, Conformance,
Replaceability, and Adaptability.
The Conformance (Compliance) sub-characteristic applies to all six characteristics,
covering adherence to standards, conventions, or legislation relevant to each
characteristic; examples are conformance to legislation concerning Usability
or Reliability.
Each quality sub-characteristic (such as Adaptability) is further divided into attributes.
An attribute is an entity that can be verified or measured in the software product.
Attributes are not defined in the standard, as they vary between different software
products.
ISO 9126 distinguishes between a defect and a nonconformity: a defect belongs to
the application space, being the nonfulfilment of intended-usage requirements,
whereas a nonconformity is defined over the application specification space,
being the nonfulfilment of specified requirements.
Complexity, Reengineering and Testing
Hundreds of software complexity measures are in common use, ranging from the simplest,
such as source lines of code, to more intricate ones, such as the number of variable
definition/usage associations. It is essential to use a low-complexity subset of
these measures for implementation. One of the most important criteria for metric
selection is uniformity of usage; the key idea here, recurring throughout the
literature, is open reengineering. The reason open systems are so popular
for commercial software applications stems from the fact that the user is guaranteed
a certain level of interoperability: the applications work together in a common
framework, and software systems can be ported across different hardware platforms
with minimal effort. Complexity measurement using metrics is a primary requirement,
but open reengineering extends to other modeling techniques such as flow graphs,
structure charts, and structure-based testing.
Common complexity measures such as the Halstead Software Science metrics
are a significant step up in value. Halstead measures were introduced in 1977 and
have been used and experimented with extensively since then; they are among the
oldest measures of program complexity. By counting the total and unique operators
and operands in a program, measures are derived for evaluating program size,
programming effort, and the estimated number of defects. Halstead metrics are, in
fact, independent of source code format, so they measure intrinsic attributes of
software systems. Several authors consider Halstead metrics somewhat controversial,
especially in terms of the psychological theory behind them, but they have been
used productively on many projects. Their main weakness, however, is that the
mathematical formulas of the main Halstead metrics are largely disconnected from
the measured code, so there is no strong prescriptive component. One can identify
code as potentially unpredictable, but the Halstead theory says little about how
to test it, whether it is testable, or how to improve it if improvement proves
necessary. Despite these limitations, Halstead Software Science metrics are very
helpful and constructive for identifying computationally intensive code with many
dense formulas, which represents a possible source of inaccuracy or errors that
other complexity procedures are likely to miss. Their properties are well known,
and they have been shown to be a very strong component of the Maintainability Index
Technique for measuring maintainability.
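As an illustration, the core Halstead measures can be derived from operator and
operand counts alone. The following sketch assumes the operator/operand classification
has already been done (real tools extract the tokens with a language-specific parser);
the defect divisor of 3000 follows Halstead's original formulation.

```python
import math

def halstead(operators, operands):
    """Compute core Halstead Software Science measures from token lists."""
    n1, n2 = len(set(operators)), len(set(operands))  # unique operators/operands
    N1, N2 = len(operators), len(operands)            # total occurrences
    n = n1 + n2                # program vocabulary
    N = N1 + N2                # program length
    V = N * math.log2(n)       # volume (size of the implementation)
    D = (n1 / 2) * (N2 / n2)   # difficulty
    E = D * V                  # programming effort
    B = V / 3000               # estimated number of delivered defects
    return {"vocabulary": n, "length": N, "volume": V,
            "difficulty": D, "effort": E, "defects": B}

# Tokens of the statement: x = a + b * a
metrics = halstead(operators=["=", "+", "*"], operands=["x", "a", "b", "a"])
```

Because only token counts enter the formulas, the result is indeed independent of
source code formatting, as the text notes.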
The McCabe Cyclomatic Complexity Measure is very flexible and extensively
used for evaluating the complexity of software systems, mostly existing ones. It
measures the number of linearly independent paths through a program module. McCabe
complexity is one of the more widely accepted software metrics; it is intended to
be independent of language and language format. The complexity number is generally
considered to provide a stronger measure of a program's structural complexity than
the previously used counting of lines of code. It is widely used as the foundation
of software complexity tools and may be considered a broad measure of soundness
and confidence for a software system. This complexity measure is based purely on
the code's decision structure, which makes the method uniformly applicable across
projects and languages and completely insensitive to cosmetic changes in code.
Many studies have reported its correlation with errors in software code, so it is
used to predict reliability. More significantly, experimental studies have shown
that the risk of errors rises for functions with cyclomatic complexity over 15,
so this value can be considered a validated threshold for reliability screening.
If a function has a cyclomatic number of 15, there are at least 15 (but probably
more) execution paths through it, and more than 15 paths are hard to identify and
test. Functions containing one selection statement with many branches are an exception.
This assessment can also be performed step by step during development and can even
be estimated from a detailed design. For a given software module, cyclomatic
complexity can easily be calculated manually by counting the decision constructs
in the code. This approach allows continuous control to be built up during project
development, so that unreliable code is caught early, at the unit development stage.
A reasonable upper limit for the cyclomatic number of a file is 100. Using automated
tools, one can verify code compliance at any stage of project development. McCabe's
cyclomatic complexity measure also gives precise testing rules: the most complex
functions, being the most error-prone code, should be considered first in order
to receive the required testing.
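The manual counting approach described above can be sketched as follows. This is
a deliberately simplified approximation: production tools build the module's
control-flow graph, whereas this sketch merely counts branch keywords and boolean
connectives in Python-like source text, and the token list is an assumption chosen
for illustration.

```python
import re

# Simplified decision-point tokens for Python-like code (an assumption;
# real tools derive decisions from the parsed control-flow graph).
DECISION_TOKENS = re.compile(r"\b(if|elif|for|while|case|and|or)\b")

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's V(G) as 1 + number of decision points."""
    return 1 + len(DECISION_TOKENS.findall(source))

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for d in range(2, x):
        if x % d == 0 and x > 1:
            return "composite"
    return "prime-ish"
"""
```

Here `classify` has five decision points (`if`, `elif`, `for`, `if`, `and`), giving
a cyclomatic number of 6, well under the threshold of 15 discussed above.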
One of the most successful measurement concepts used for quantifying productivity
levels is function point metrics. Software measurement based on function point
(FP) techniques reflects the user's view of a system's functionality and expresses
size as functionality. One unit (the function point) represents the amount of
information processing that a module offers the user; in principle, the unit is
independent of the way the information processing is carried out. The concept was
introduced in the mid-1970s, when IBM commissioned engineer Allan J. Albrecht and
his colleagues to explore software measurement and metrics. IBM was motivated by
the growing impact of software quality within the company, combined with the
difficulties and obvious limitations of the previously ubiquitous lines-of-code
metrics.
Function point data serves two purposes. The first is as an estimation variable
used mainly to evaluate the size of each software module; the second is as baseline
metrics, collected from older projects developed by the same team and used together
with the estimation variables to devise cost and effort projections.
Function points are categorized into five groups: outputs, inquiries, inputs, files,
and interfaces. Basically, the approach identifies and counts the unique function
types:
- external inputs (e.g., file names)
- external outputs (e.g., reports, messages)
- queries (interactive inputs needing a response)
- external files or interfaces (files shared with other software systems)
- internal files (invisible outside the system)
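A minimal sketch of an unadjusted function point count over these five function
types, using the classic IFPUG average weights (4, 5, 4, 10, 7). Real counts
additionally classify each function as low/average/high complexity and apply a
value adjustment factor, steps omitted here; the example counts are invented.

```python
# Average IFPUG weights per function type (low/high variants omitted).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum (number of functions of each type) * (average weight)."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical small system: 3 inputs, 2 outputs, 1 inquiry,
# 1 internal file, 1 external interface.
ufp = unadjusted_function_points({
    "external_inputs": 3,
    "external_outputs": 2,
    "external_inquiries": 1,
    "internal_files": 1,
    "external_interfaces": 1,
})
```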
Function point metrics spread among many companies because they provided substantial
benefits to their users. The first benefit of function point metrics is that they
give the software industry a substantial ability to carry out economic studies of
developed products [05, 09, 10, 24]. These metrics have become the standard for
studying topics associated with software, including but not limited to:
- Outsource contracts
- Quality baseline and benchmarks
- Process improvement economics
- Litigation analysis
- Productivity baseline and benchmarks
Function points are powerful metrics, but using them successfully is not trivial:
accurate function point counting requires good training. A key feature of function
point metrics is that they can measure economic productivity and the defect volumes
found in software requirements, design, and user documentation, as well as coding
defects.
References
1. Abran, A., Al-Qutaish, R.E., Desharnais, J.M., Habra, N., An Information Model
for Software Quality Measurement with ISO Standards, In: SWEDC-REK, International
Conference on Software Development, Reykjavik, Iceland, University of Iceland,
2005, pp. 104-116.
2. Abu Talib, M., Ormandjieva, O., Abran, A., Buglione, L., Scenario-Based
Black-Box Testing in COSMIC-FFP, In: Software Measurement European Forum
- SMEF 2005, Rome, Italy, 2005, pp. 173-182.
3. Abu Talib, M., Abran, A., Ormandjieva, O., COSMIC-FFP & Functional Complexity
(FC) Measures: A Study of their Scales, Units and Scale Types, In: Proceedings
of the 15th International Workshop on Software Measurement - IWSM'2005, Montreal,
Canada, Shaker-Verlag, 2005, pp. 209-225.
4. Anton, A.I., and Potts, C., Functional Paleontology: System Evolution as the
User Sees It, In: Proceedings of the 23rd International Conference on Software
Engineering, ICSE01, Toronto, 12-19 May 2001, pp. 421-430.
5. Al-Qutaish, R.E., Abran, A., An Analysis of the Design and Definitions of Halstead's
Metrics, In: Proceedings of the 15th International Workshop on Software Measurement
- IWSM'2005, Montreal, Canada, Shaker-Verlag, 2005, pp. 337-352.
6. Azuma, M., SQuaRE: The Next Generation of ISO/IEC 9126 and 14598, International
Standards Series on Software Product Quality, In: Proceedings of the European
Software Control and Metrics Conference (ESCOM), 2-4 April 2001, London,
UK, pp. 337-346.
7. Homer, S., and Selman, A.L., Computability and Complexity Theory, Springer
Verlag, New York, 2001, ISBN: 0387-95055-9.
8. ISO, 1991, ISO/IEC IS 9126, Software Product Evaluation - Quality Characteristics
and Guidelines for Their Use, Geneva, International Organization for Standardization.
9. ISO, 2001, ISO/IEC 9126-1, Software Engineering - Product Quality - Part 1:
Quality Model, Geneva, International Organization for Standardization.
10. ISO, 2003, ISO/IEC TR 9126-2, Software Engineering - Product Quality - Part
2: External Metrics, Geneva, International Organization for Standardization.
11. ISO, 2003, ISO/IEC TR 9126-3, Software Engineering - Product Quality - Part
3: Internal Metrics, Geneva, International Organization for Standardization.
12. ISO, 2004, ISO/IEC TR 9126-4, Software Engineering - Product Quality - Part
4: Quality in Use Metrics, Geneva, International Organization for Standardization.
13. ISO, 2004, ISO/IEC FCD 25000, Software Engineering - Software Product Quality
Requirements and Evaluation (SQuaRE) - Guide to SQuaRE, Geneva, International
Organization for Standardization.
14. ISO, 2004, ISO/IEC FCD 25020, Software and System Engineering - Software Product
Quality Requirements and Evaluation (SQuaRE) - Measurement Reference Model and Guide,
Geneva, International Organization for Standardization, January 24, 2005.
15. ISO, 2004, ISO/IEC PDTR 25021, Software and System Engineering - Software Product
Quality Requirements and Evaluation (SQuaRE) - Measurement Primitives, Geneva,
International Organization for Standardization.
16. Halstead, M.H., Elements of Software Science, Operating, and Programming Systems
Series Volume 7, New York, NY: Elsevier, 1977.
17. Lopez Martin, M.-A., Habra, N., Abran, A., A Structured Analysis of the McCabe
Cyclomatic Complexity Measure, In: Proceedings of the 14th International Workshop
on Software Measurement (IWSM2004), Berlin, Germany, Shaker Verlag, 2004.
18. McCabe, T., A Complexity Measure, In: IEEE Transactions on Software Engineering,
Vol. SE-2, No. 4, December 1976, pp. 308-320.
19. SC7, 2004, ISO/IEC FCD 25000, Software Engineering - Software Product Quality
Requirements and Evaluation (SQuaRE) - Guide to SQuaRE, ISO/IEC JTC1/SC7
WG6, January 1, 2004, 2971.
20. Suryn, W., Abran, A., and April, A., ISO/IEC SQuaRE: The Second Generation of
Standards for Software Product Quality, In: The 7th IASTED International Conference
on Software Engineering and Applications, California, USA, 2003.
21. Tran-Cao, D., Levesque, G., Abran, A., From Measurement of Software Functional
Size to Measurement of Complexity, In: ICSM 2002, Montreal, Canada, 2002, pp. 11-22.
22. Tran-Cao, D., Abran, A., and Levesque, G., Functional Complexity Measurement,
In: Proceedings of the International Workshop on Software Measurement (IWSM'01),
Montreal, Quebec, Canada, August 28-29, 2001, pp. 173-181.
23. Tran-Cao, D., Levesque, G., and Meunier, J.-G., Software Functional Complexity
Measurement with the Task Complexity Approach, In: Proceedings of the International
Conference RIVF'04, Hanoi, Vietnam, February 2-5, 2004, pp. 77-85.
24. Garmus, D., Herron, D., Function Point Analysis - Measurement Practices for
Successful Software Projects, Addison-Wesley, 6th Printing, December 2004,
ISBN: 0201699443.
25. Bruegge, B., Dutoit, A.H., Object-Oriented Software Engineering - Using UML,
Patterns, and Java, Pearson Prentice Hall, 2004, ISBN: 0-13-191179-1.
26. Boehm, B.W., Abts, C., Brown, A.W., Chulani, S., Clark, B., Horowitz, E.,
Madachy, R., Reifer, D., and Steece, B., Software Cost Estimation with COCOMO II,
Prentice Hall PTR, 2000, ISBN: 0-13-026692-2.
27. McCabe, T.J., and Watson, A.H., McCabe and Associates, Inc., Software Complexity,
December 1994, http://www.stsc.hill.af.mil/crosstalk/1994/12/xt94d12b.asp.
28. VanDoren, E., Maintainability Index Technique for Measuring Program Maintainability,
March 2004, http://www.sei.cmu.edu/str/descriptions/mitmpm.html.
29. VanDoren, E., Cyclomatic Complexity, July 2000, http://www.sei.cmu.edu/str/descriptions/cyclomatic.html#989041.