Building protection systems against the threat of denial of access to information
Since one of the main tasks of an information system is to provide its users with the required information (data, control actions, etc.) in a timely manner, the threat of denial of access to information can also be regarded as a threat of denial of service, or a threat of failure of functioning. A failure of the information system can be caused by:
targeted actions of intruders;
errors in the software;
equipment failure.
In practice it is often impossible to separate these causes of failure. For this reason, the concept of reliability is introduced.
Reliability is the property of an object to preserve over time the values of all parameters characterizing its ability to perform the required functions in the specified modes and conditions of use, maintenance, repair, storage and transportation.
When assessing the reliability of an information system, it does not matter whether failures are caused by the actions of an attacker or by development errors; what matters is how, and to what extent, they are countered.
It is advisable to evaluate the reliability of hardware and of software separately, since the approaches to determining reliability differ.
The assessment of equipment reliability is based on the following approach.
The reliability of any device, or of the system as a whole, is estimated as the product of the probability of failure-free operation P(t) and the availability factor K_A: P0(t) = P(t)·K_A.
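For example, if the probability of failure-free operation over the period of interest is P(t) = 0.95 and the availability factor is K_A = 0.99, then P0(t) = 0.95 · 0.99 ≈ 0.94.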
If reliability is used as one of the measures of system efficiency, its optimal value is the one at which the total cost of operation is minimal. The optimal value of the reliability indicator can be estimated graphically.
In some cases, the task of achieving maximum reliability at fixed costs or other fixed conditions is solved.
There are both theoretical (calculation) methods and practical methods for determining reliability. It is on the basis of such calculations that practical measures are developed to improve the reliability of individual elements and of the system as a whole.
At the initial design stage, simple models or elementary reliability calculations based on the assumption that individual elements fail independently are most often used.
Among the theoretical methods of reliability calculation, element-by-element calculation is the most widely used. In this case, the functional dependencies and parameters characterizing the reliability of an individual element can be expressed by the following formulas: the failure density f(t) = dq(t)/dt = −dP(t)/dt; the failure rate λ(t) = (1/P(t))·dq(t)/dt = −(1/P(t))·dP(t)/dt; the mean time to failure t_av = ∫₀^∞ t·f(t) dt, where P(t) is the probability of failure-free operation of the element and q(t) is the probability of its failure.
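As an illustration, the sketch below evaluates these characteristics numerically under an assumed exponential reliability model P(t) = exp(−λt); the failure rate value and time grid are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative sketch: reliability characteristics of a single element,
# assuming the exponential model P(t) = exp(-lambda * t).
lam = 1e-4                                # assumed failure rate, per hour
t = np.linspace(0.0, 1e5, 200_001)        # time grid, hours

P = np.exp(-lam * t)                      # probability of failure-free operation
f = -np.gradient(P, t)                    # failure density f(t) = dq/dt = -dP/dt
lam_t = f / P                             # failure rate lambda(t) = f(t) / P(t)

# Mean time to failure: integral of t * f(t) dt; for this model it
# should come out close to 1 / lambda.
t_mean = np.trapz(t * f, t)

print(f"lambda(t) is approximately constant: {lam_t[1]:.2e} per hour")
print(f"mean time to failure ~ {t_mean:.0f} h (theory: {1 / lam:.0f} h)")
```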
These formulas are applicable to systems with any number of elements and arbitrary relationships between them.
The probability of failure-free operation of the system is a function of the probabilities of failure-free operation of the elements included in it: Pc = f[P1(t), P2(t), ..., Pn(t)].
The form of this relationship depends on how the individual elements are connected. In particular, the probability of failure-free operation, or reliability function, of a system consisting of n arbitrarily connected elements can be expressed as a polynomial in the element reliabilities: Pc = A1·p1 + A2·p2 + ... + Ak·pk.
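For example, for a system that remains operable as long as at least two of its three identical elements with reliability p remain operable (a 2-out-of-3 scheme), this polynomial is Pc = 3p² − 2p³.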
If the individual elements affect the operability of the system independently and the failure of any single element leads to the failure of the entire system, the structural reliability diagram is drawn as a series connection of elements. In this case, the probability of failure-free operation of the system is the product of the probabilities of failure-free operation of the elements: Pc(t) = P1(t)·P2(t)· ... ·Pn(t).
If the elements influence one another, the structural reliability diagram becomes parallel or mixed.
If the failure of an element does not lead to a system failure, this element is connected in parallel in the structural reliability diagram; when calculating the reliability of the system, the probabilities of failure of the parallel elements are multiplied and the resulting product is subtracted from one: Pc(t) = 1 − (1 − P1(t))·(1 − P2(t))· ... ·(1 − Pn(t)).
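A minimal sketch of these two structures, assuming the element reliabilities are simply given as a list of probabilities (the numbers are illustrative only):

```python
from math import prod

def series_reliability(p):
    """Series structure: the system fails if any one element fails."""
    return prod(p)

def parallel_reliability(p):
    """Parallel structure: the system fails only if every element fails."""
    return 1.0 - prod(1.0 - pi for pi in p)

# Illustrative element reliabilities
elements = [0.99, 0.95, 0.90]
print(series_reliability(elements))    # ~0.846
print(parallel_reliability(elements))  # ~0.99995
```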
It is not always convenient to characterize the reliability of elements by the probability of failure-free operation, since for short operating times the values of Pi(t) are close to one. In this case it is better to use the failure rate, which characterizes the probability density of failure of a single element. It is determined by the number of failures ni observed per unit time Δt, divided by the number N of elements of the same type that are operating properly at that moment: λi = ni/(N·Δt).
The probability of failure-free operation is related to the failure rate by the relation P(t) = exp(−∫₀ᵗ λ(t) dt).
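For a constant failure rate this reduces to P(t) = exp(−λt). The sketch below first estimates λ from failure counts, as described above, and then evaluates P(t); the counts are invented for illustration.

```python
from math import exp

# Assumed observation data (illustrative numbers, not from the text):
# 4 failures observed among 1000 identical elements over 500 hours.
n_failures = 4
n_elements = 1000
delta_t = 500.0                             # hours

lam = n_failures / (n_elements * delta_t)   # failure rate, per hour

# With a constant failure rate, P(t) = exp(-lam * t)
for t in (1_000, 10_000, 100_000):
    print(f"P({t} h) = {exp(-lam * t):.4f}")
```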
The dependence of the failure rate on operating time is usually divided into three sections. The first section, with an elevated failure rate, corresponds to the period in which failures are caused mainly by hidden defects introduced during design, by violations of the manufacturing technology of the system, or by difficulties in mastering its operation. The system operates under normal conditions for the longest time in section II; it is this period of operation that is taken into account when calculating reliability during design. Section III is again a period of increased failure rate, caused by wear and ageing of the equipment.
An analysis of the operation of numerous technical devices has shown that the simpler they are, the more reliable they are.
When protecting an information system from the threat of failure, it is usually assumed that the reliability of the hardware components is high enough for their contribution to the overall unreliability to be neglected. This is because computer equipment becomes obsolete much faster than it wears out physically and is, as a rule, replaced before it fails.
Thus, the reliability of the functioning of an information system is largely influenced by the reliability of the functioning of the software that is part of it.
Despite the apparent similarity of the reliability definitions for hardware and software, the latter in fact has fundamental differences:
in most cases, a program does not fail at random;
errors made in the software during its creation depend on the development technology, the organization of the work and the qualifications of the developers;
errors are not a function of time;
the cause of a failure is the particular combination of input data that has arisen at the moment of failure.
There are two main approaches to ensuring the protection of software from the threat of failure of functioning:
ensuring fault tolerance of software;
fault prevention.
Fault tolerance assumes that residual software errors are detected during program execution and countered through software, information and time redundancy.
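As a minimal illustration of two of these redundancy mechanisms, the sketch below combines software redundancy (majority voting over independently written versions of the same computation) with time redundancy (re-running the vote a few times); the function names, version implementations and retry count are hypothetical and serve only to illustrate the idea.

```python
from collections import Counter
from typing import Callable, Sequence

def vote(versions: Sequence[Callable[[float], float]], x: float, retries: int = 3) -> float:
    """Run every version and return the majority result; re-run the whole
    vote (time redundancy) if no majority emerges."""
    for _ in range(retries):
        results = []
        for version in versions:
            try:
                results.append(round(version(x), 6))
            except Exception:
                continue                      # a failed version loses its vote
        if results:
            value, count = Counter(results).most_common(1)[0]
            if count > len(versions) // 2:
                return value
    raise RuntimeError("no majority result obtained")

# Hypothetical independently developed versions of the same computation
versions = [lambda x: x * x, lambda x: x ** 2, lambda x: pow(x, 2)]
print(vote(versions, 3.0))    # 9.0
```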
Fault prevention involves analyzing the nature of the errors that occur at different phases of software development and the causes of their occurrence.