ISSN 1842-4562
Member of DOAJ

JAQM Volume 2, Issue 1 - March 30, 2007

Reliability and Quality Control - Practice and Experience


Optimal Redundancy Allocation for Information Management Systems (p 1)

This paper investigates reliability allocation to specific sets of software applications (AST), aiming to minimize development and implementation costs by using the Rome Research Laboratory methodology and by complying with the cost-minimization conditions triggered by the introduction of redundancies.

Specific Aspects of Financial and Accountancy Software Reliability (p 17)
Marian Pompiliu CRISTESCU

The current trend in the software industry is to design and develop more reliable software, even if this initially requires higher costs to reach the desired level of reliability. It has been found that for software containing a large number of components, financial and accounting software included, the actions taken to increase reliability in the operational stage induce a high level of costs.

Capability Maturity Model Integration (p 31)

In this paper we present an introduction to, and an overview of, the CMMI process model, which emerged from the need to address generic, company-wide organizational issues in a wider range of activity domains, providing a flexible framework that allows further "plug-in" extensions to be added.

Software Quality Verification Through Empirical Testing (p 38)

The empirical approach is characterized by the partial quality of its elements, the absence of systematic behavior in the process, and the idea of random attempts at exercising program behavior. Empirical testing methods are applied both to the program viewed as a black box and to its source code. Software testing at the source level aims to raise the tree-like coverage associated with the code. Indicators are known for quantifying the test methods, for measuring their efficiency on programs through an empirical approach, and for measuring the program quality level.

A Model for Evaluating the Software Reliability Level (p 61)
Gheorghe NOSCA, Adriean PARLOG

The use of COTS components is one of the characteristics of today's software production. This paper proposes a generic model for evaluating a software reliability level. The model can also be used to evaluate the level of any quality characteristic.

Evaluating the Effects of the Optimization on the Quality of Distributed Applications (p 70)

In this paper, we present the characteristic features of distributed applications. We also enumerate the ways of optimizing them, the factors that influence the quality of distributed applications, and the way they are affected by the optimization processes. Moreover, we enumerate the quality characteristics of distributed applications and a series of quality evaluation indicators for various structures of distributed applications.

Internet Databases in Quality Information Improvement (p 83)

Even though many important companies are reluctant to deploy their databases on the Internet, being too concerned about security, we would like to demonstrate that they should not worry too much about it, but rather try to provide real-time information to management, boards, and people who travel on company business.

Software Reliability from the Functional Testing Perspective (p 89)

The metrics proposed in this paper give a methodological framework in the field of functional testing for software programs. The probability of failure for software depends on the number of residual defects; obviously, the detection of these depends very much on the test data used, which should conform to an operational profile.

On Measuring Software Complexity (p 98)

In this paper we study one internal measure of software products, namely software complexity. We present a method for determining software complexity proposed in the literature, and we try to validate the method empirically using 10 small programs (the first five written in Pascal and the last five in C++). The results obtained are intuitively correct: higher values of average structural complexity and total complexity are obtained for the programs that "look" more complex than the others, not only in terms of program length but also in terms of the structures they contain.

Modeling the Audit in IT Distributed Applications (p 109)
Victor-Valeriu PATRICIU, Calin Marin VADUVA, Octavian MORARIU, Marius VANCA, Olivian Daniel TOFAN

The aim of this paper is to define the requirements for auditing and to propose a solution for implementing them in a software system. The paper starts from the description of the audit requirements, continues with a presentation of original concepts in the field, and ends with the practical approach to implementing the solution in a real software system.

Performance Criteria for Software Metrics Model Refinement (p 118)

In this article, the refinement process is presented with respect to model list building using model generators. Performance criteria for built models are used to order the model lists according to the needs of modelling. Models are classified by means of performance and complexity. An aggregated indicator based on the two factors is proposed and analysed in model list ordering. A software structure for model refinement is presented. A case study shows practical aspects of using the aggregated indicator in model refinement.

Performance Analysis of Parallel Algorithms (p 129)
Felician ALECU

Grid computing offers unlimited opportunities in both business and technical terms. The main reason for parallelizing a sequential program is to make it run faster. The first criterion to consider when evaluating the performance of a parallel program is the speedup, which expresses how many times faster a parallel program runs than the corresponding sequential one solving the same problem. When a parallel program runs on a real parallel system, there is overhead coming from processor load imbalance and from the communication time needed for exchanging data between processors and for synchronization. This is why the execution time of the program will be greater than the theoretical value.
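The speedup and per-processor efficiency described in the abstract can be sketched as follows; the timings and processor count are invented for illustration, not taken from the paper.

```python
def speedup(t_sequential: float, t_parallel: float) -> float:
    """How many times faster the parallel program runs than the sequential one."""
    return t_sequential / t_parallel

def efficiency(t_sequential: float, t_parallel: float, processors: int) -> float:
    """Speedup per processor; 1.0 would mean zero parallel overhead."""
    return speedup(t_sequential, t_parallel) / processors

# Ideal speedup on 4 processors would be 4.0; load imbalance and
# communication overhead push the measured figure lower.
s = speedup(t_sequential=120.0, t_parallel=40.0)   # 3.0
e = efficiency(120.0, 40.0, processors=4)          # 0.75
```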

Experimental Design Techniques Applied to the Study of Oxygen Consumption in a Fermenter (p 135)

The dependence of the volumetric rate of oxygen consumption on the variables agitation and air flow was studied in a laboratory-scale fermenter. A 2^2 factorial experimental design, with four axial points and four replicates of the central point, was used. The coefficients for the two-variable quadratic model were calculated. The fit of the model to the empirical data was studied using a lack-of-fit test. Response surface methodology was used to maximize the dependent variable, the volumetric rate of oxygen consumption. The response surface obtained showed an absolute maximum on the domain's boundary, consistent with theoretical considerations indicating that a relative maximum inside the domain was impossible. This approach allowed us to derive a model that predicts the volumetric rate of oxygen consumption in a standard laboratory fermenter as a function of agitation and air flow, i.e., the two variables selected.
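The experimental plan described above (a 2^2 factorial design augmented with four axial points and four center replicates, i.e. a central composite design) can be sketched in coded units; the axial distance alpha = sqrt(2) is a common rotatable choice assumed here, not a value stated in the abstract.

```python
from itertools import product

alpha = 2 ** 0.5  # assumed rotatable axial distance, in coded units

factorial = [p for p in product([-1.0, 1.0], repeat=2)]       # 4 corner runs
axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
center = [(0.0, 0.0)] * 4                                     # 4 replicates

design = factorial + axial + center  # one (agitation, air flow) pair per run
print(len(design))  # 12 runs in total
```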

Aspects on Statistical Approach of Population Homogeneity (p 142)
Alexandru ISAIC-MANIU, Viorel Gh. VODA

The authors emphasize the manner in which this statistical indicator, the coefficient of variation (v), can support inference on measurable characteristics generated by technological processes. The interest lies in so-called SPC (Statistical Process Control); the main result obtained is the following: if the coefficient of variation is known, then the statistical distribution of the capability index is an ALPHA-type distribution (Druzinin). The authors also highlight some links between (v) and Taguchi's quality loss function.

Finding GPS Coordinates on a Map Using PDA (p 150)
Nicolae Iulian ENESCU

The purpose is to define algorithms for determining GPS coordinates on a PDA. The C++ and C# programming languages were used. A software application was developed that runs on PDAs with the Windows Mobile 2003 OS, acquires GPS information, and implements algorithms for establishing the coordinates on a map represented as a TIF image. The coordinate precision is below 10 meters.

Interactive Methods Used in Graduate Programs (p 171)

Any professional act will lead to a significant change. How can one make students understand "managing change" as a consequence or as an intended objective? "DECISION IN CASCADE", a management computational game for the education of university master students and junior executives, simulates five economic functions (research and development; production; purchases and sales; personnel; finance and accounting) of five to nine companies operating on a free market.

The Contribution of Labour and Capital to Romania's and Moldova's Economic Growth (p 179)

In the present research we have used the Cobb-Douglas production function in its classical form to analyze Romania's and Moldova's economic growth in relation to the intensity of capital and labour use as determinants of the level and structure of production and GDP.
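The classical Cobb-Douglas form referred to above is Y = A * K^alpha * L^beta, with output Y driven by capital K and labour L. The parameter values below are illustrative assumptions, not estimates from the paper.

```python
def cobb_douglas(A: float, K: float, L: float, alpha: float, beta: float) -> float:
    """Output as a function of capital K and labour L."""
    return A * K ** alpha * L ** beta

# With constant returns to scale (alpha + beta = 1), doubling both
# inputs exactly doubles output.
y1 = cobb_douglas(A=1.5, K=100.0, L=200.0, alpha=0.3, beta=0.7)
y2 = cobb_douglas(A=1.5, K=200.0, L=400.0, alpha=0.3, beta=0.7)
print(round(y2 / y1, 6))  # 2.0
```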

PhD Thesis Review (p 186)
Gheorghe NOSCA

PhD Thesis Review on
by Adrian COSTEA