By definition, an automation system works in Real Time. This notion has various interpretations, so it must be defined.
The definition of Real Time, as per DIN 44300, is:
A computer system operates in real-time mode when the programs for processing any incoming data are permanently active, so that the processing results are available within a defined time period. The data may arrive sporadically or periodically.
Therefore the system must always be able to react to each event within a predefined time interval, consistent with the specifics of the automation object, regardless of:
- information volume
- intensity and dynamics of changes in the objects
- all other factors
This time interval, the system's "reaction time", is defined as the interval from the moment an event occurs to the moment it is recognized, processed and assigned a time tag (with 10 ms accuracy). In other words, the system must be able to acquire and process information in time, i.e. while it is still available and valid.
The criteria defined above are also called "hard" criteria.
The definition of Real Time presumes that the reaction time differs for different automation objects. It varies from 1 ms in the power industry to a few minutes for other types of automation objects. Examples of functions that must satisfy the "hard" criteria are those for acquisition and processing of technological information from the object. In this case the information always arrives as discrete impulses, i.e. it is the impulse that carries the information, after which the sending device returns to its normal state. Thus, if the "hard" criteria are not met, loss of information is inevitable, and this amounts to a system failure.
There are also "soft" criteria for Real Time, where the reaction (response) time is defined statistically: the reaction time is an average value of the time limit within which the processing has to be carried out.
Such criteria apply to visualization, data recording, archiving, etc. Here the display time may be within a few seconds without affecting the system's efficiency. The same holds for the archiving and recording functions, since object data arrives with its own time tag.
When building the system, all structures that must comply with the "hard" criteria are separated from those that comply with the "soft" criteria. The former require software and hardware that meet the Real Time requirements, while the latter can use standard PC configurations.
The main contradiction inside automation systems is the one between the specifics of the automation object and the requirement that it be monitored and controlled in Real Time.
The automation object and its substations and devices are characterized by a large information volume, which enters the system sporadically and with great intensity.
Furthermore, the non-deterministic behavior of the communication environment and the system's complexity make it harder to meet the time requirements.
On the other hand, the system must always fulfil all Real Time criteria, even in the worst-case scenario.
The main principles used to build a Real Time system are:
A multi-level hierarchical and modular structure of the system, where the main system levels are:
- "process" level – where the acquisition and processing of information from the technological devices/objects is performed. The data arrives in parallel from digital or analog I/Os, or serially from Intelligent Electronic Devices (IEDs) over various networks, media and communication protocols.
- "object" level – supports the communication between the system's objects and the Control Centers, which can be situated on different system levels – local, regional and/or central.
- "system" level – supports the communication inside the Control Centers, and between them and various external systems.
Various system structures are installed on each hierarchical level; they function independently, work in parallel and communicate with each other through well-defined methods and protocols.
Autonomous structures and subsystems are formed which work in parallel and independently and preserve the functionality of the whole system in case of failure: if something breaks, functionality is reduced, but the system continues to work.
This approach creates structural redundancy through duplicated structures and functions.
By choosing the size and nature of this redundancy, autonomous and parallel structures are built, which makes it possible to scale the system and to increase its reliability.
Decentralization of functions
The system functions are distributed along the whole system hierarchy, with the aim of placing each function at the lowest possible system level, close to the information source.
The criterion for placing a function on a given system level is that the information available on that level is sufficient for the function's execution.
The system is built on the basis of "parallel processing" and an Event-Driven Architecture (EDA).
The system's complexity is realized primarily by its program system.
It consists of various universal programmable modules, where the criteria for the functions implemented inside a module are its functional completeness and the minimization of the data flows between modules.
After installation, the modules become program processes located in different system structures/computers. The program architecture is built on "parallel processes" and is organized as an Event-Driven Architecture.
All processes are resident in memory, but they are activated only sporadically, when the corresponding process/function needs to be executed.
The activation is triggered by internal or external events, caused by changes in the process information or by intermediate results produced by various data processing.
This approach is the alternative to executing all processes periodically, one after another, regardless of whether their activation is actually necessary.
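The event-driven activation described above can be sketched as a dispatcher that sleeps until an event arrives and then wakes only the relevant function. This is an illustrative minimal model, not the system's actual software; the event kinds and handler names are assumptions.

```python
# Minimal sketch of event-driven process activation: the dispatcher blocks
# until an event occurs, so idle functions consume no processing time.
# Event kinds ("signal", "measure") and handlers are illustrative only.
import queue
import threading

event_bus = queue.Queue()  # events arrive sporadically

def on_signal_change(payload):
    return f"signal processed: {payload}"

def on_measure_change(payload):
    return f"measure processed: {payload}"

HANDLERS = {"signal": on_signal_change, "measure": on_measure_change}
results = []

def dispatcher():
    # Blocks on the queue -- no periodic polling of inactive processes.
    while True:
        kind, payload = event_bus.get()
        if kind == "stop":
            break
        results.append(HANDLERS[kind](payload))

worker = threading.Thread(target=dispatcher)
worker.start()
event_bus.put(("signal", "breaker 12 tripped"))
event_bus.put(("measure", "bus voltage 118.4 kV"))
event_bus.put(("stop", None))
worker.join()
print(results)
```

The contrast with the periodic alternative is that here the CPU is used only when an event actually occurs.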
Minimizing and structuring the information flow
Within a given time span, the changed process information is only a part of the whole information volume. This part may be very small (e.g. a single signal or measure), but it may also comprise large process changes provoked by certain technological situations.
All process changes are relevant to the system. Therefore the functions for data acquisition and transmission (to the higher system levels) are primarily designed to identify the changes, time-tag them at their source, and send them with high priority to the respective subscribers.
Unchanged information is also transmitted periodically to keep the database up to date, but with lower priority.
The definition of a "signal change" is clear: it denotes a change in the state of a certain device.
A "measure change", on the other hand, is more complex. A device changes its measured values continuously, and this could easily overload the communication channels and processors.
To solve this, a small threshold value Δ (delta) is set for each measure, depending on its importance and its influence on the technological process.
The measured value is transmitted only when its change exceeds its Δ threshold.
The change can be calculated not only as the absolute difference between two measurements within a small time interval, but also by more complex mathematical dependencies.
This selection and filtering of process data is a way to regulate the intensity of the information flow and thus to use the communication environment and computational resources effectively.
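The Δ-threshold filtering can be illustrated with a simple dead-band filter; the readings and threshold value below are invented for the example and are not taken from any concrete system.

```python
# Dead-band ("delta") filtering sketch: a measure is forwarded only when it
# deviates from the last transmitted value by more than its Δ threshold.
# The sample values and delta=1.0 are illustrative assumptions.

def deadband_filter(samples, delta):
    """Yield only the samples whose change exceeds delta."""
    last_sent = None
    for value in samples:
        if last_sent is None or abs(value - last_sent) > delta:
            last_sent = value  # remember the last transmitted value
            yield value

# A noisy voltage measurement; only significant changes pass the filter.
readings = [110.0, 110.1, 110.05, 111.5, 111.6, 113.2]
transmitted = list(deadband_filter(readings, delta=1.0))
print(transmitted)  # [110.0, 111.5, 113.2]
```

Note that the comparison is against the last *transmitted* value, not the previous sample, so a slow drift is still reported once it accumulates beyond Δ.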
The communication software is based on parallel and independent communication processes
Communication is very important for the quality of the automation system (dynamic behavior, reliability, etc.). Data exchange takes place on all system levels.
Such systems deal with large volumes of process data and internally generated data, which occur sporadically and with varying intensity.
This information has to be completely acquired, verified, processed and transmitted to the appropriate system structures.
Therefore the communication subsystem is built on parallel and independent communication processes supporting physical or logical "point-to-point" connections.
The object's RTU communicates with one or more subscribers over separate connections and the corresponding Master/Slave or Client/Server pairs.
From the object's whole information volume, only the data relevant to the given subscriber is extracted and transmitted.
Thus the initial information flow splits into many parallel and independent communication processes, each of which holds its own communication stack with individually adjustable settings.
Disturbances or a connection loss would otherwise cause information loss during the downtime. Therefore each communication channel has its own data structures, queues, etc. to save the missing data and resend it after the communication is reestablished.
The communication environments, technologies, protocols, etc. differ widely; they are usually considered as given, and the system adapts to them.
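The per-channel store-and-forward behavior described above can be sketched as follows; the class and its attributes are hypothetical names for the example, and the `delivered` list stands in for the remote subscriber.

```python
# Per-channel store-and-forward sketch: each communication process keeps its
# own queue, so data accumulated during a connection loss is resent in order
# once the link is reestablished. Names here are illustrative assumptions.
from collections import deque

class CommChannel:
    def __init__(self, name):
        self.name = name
        self.connected = True
        self.backlog = deque()   # data saved while the link is down
        self.delivered = []      # stands in for the remote subscriber

    def send(self, item):
        if self.connected:
            self.delivered.append(item)
        else:
            self.backlog.append(item)  # save for later resend

    def reconnect(self):
        self.connected = True
        while self.backlog:            # resend the missing data in order
            self.delivered.append(self.backlog.popleft())

ch = CommChannel("RTU-7 -> regional center")
ch.send("event 1")
ch.connected = False      # link failure
ch.send("event 2")
ch.send("event 3")
ch.reconnect()
print(ch.delivered)  # ['event 1', 'event 2', 'event 3']
```

Because every channel owns its queue, a failure on one connection never blocks or reorders the traffic of the others.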
Data management is done by means of:
- an off-line database
- a Real Time Data Base – RTDB
- archive databases
They form the core of the system's data management and contain the information models describing the statics, the dynamics, the principles of operation and the functional and technological connections of the automation object.
The databases are adapted to a concrete automation object through configuration.
The off-line database is a standard, universal relational database that provides static information (coefficients, parameters, images, etc.) about the automation object. It plays a configuration role and provides the initial information for the Real Time Data Base – RTDB.
The RTDB stores the dynamic information generated by the objects, as well as the internal information generated by the system's software. It also embeds the functional and technological connections among the objects and devices.
The RTDB is distributed throughout the whole system hierarchy and supplies the system software with the required data.
The archive database collects, processes, formats, filters and outputs stored information about process data changes, alarms, operator actions, etc., and supplies it to the appropriate users.
The implementation of the above principles is carried out by:
- a suitable Real Time Operating System – RTOS
- the development of a Real Time Data Base
- the development of an appropriate application software structure
Choice of an Operating System – OS suitable for Real Time work
The system software is based on a Real Time Operating System – RTOS.
The dynamic properties of the computer, and hence of the Real Time system, are defined by the software design and the capabilities of the operating system.
To achieve the required short reaction time under a highly varying information volume and dynamics, and under the heavy computer load caused by the high processing effort, the program system has to be organized in "parallel" processes activated by sporadically occurring external and internal events, i.e. the operating system has to support parallel processes and an Event-Driven Architecture – EDA.
The RTOS has system tools that support:
- synchronization and coordination of software modules and/or processes in the frame of the available computational resources
- data and event exchange between program modules, computers and subsystems
Data management – Real Time Data Base
Such a database creates the Real Time working environment. It runs under an operating system that supports Real Time.
The Real Time database is used:
- to store dynamical information. The database decouples the data acquisition processes from their further processing and transferring around the system’s hierarchy.
- to provide plausibility checks of the incoming data
- based on the detected events, to activate internal functions that process the "changed" objects, as well as additional functions that process the functionally and technologically associated objects
- to provide information to the SCADA, Human Machine Interface (HMI), Historical Information System (HIS) and other servers
The main function of the Real Time database is to support the input and processing of a large information volume within a short time period, i.e. the time for which the information is valid and available.
When a process data change occurs, it is first validated and saved; then the Real Time Data Base executes a sequence of encapsulated functions and manipulations over all other objects in the information model that are technologically or functionally connected with the unit that generated the change, until the event is fully processed.
The properties that characterize the dynamics and speed of the Real Time Data Base are:
- it is based on information models fully describing the statics and dynamics of the technological connections among the objects
- the information model is static and is always loaded in memory
- all connections among the objects inside the Real Time Data Base are created during the design of the model, i.e. off-line.
After an event occurs, all subsequent processing related to the source element, as well as to the other elements directly or indirectly connected with it, is carried out sporadically at the moment of occurrence; thus the consistency of the database is always secured.
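The propagation of an event through the connected objects of the information model can be sketched as a traversal of a static link table built off-line; the object names and links below are invented for illustration and do not describe a real model.

```python
# Sketch of event propagation in an RTDB-like information model: when an
# object changes, every functionally or technologically connected object is
# reprocessed as well. The model and its objects are illustrative only.

# Static information model: links are created off-line and kept in memory,
# so no connections need to be computed at run time.
LINKS = {
    "breaker_12": ["feeder_3", "busbar_A"],
    "feeder_3": ["load_calc"],
    "busbar_A": [],
    "load_calc": [],
}

def propagate(source, visited=None):
    """Return every object that must be reprocessed after `source` changes."""
    if visited is None:
        visited = []
    visited.append(source)  # process the changed object itself first
    for neighbor in LINKS[source]:
        if neighbor not in visited:  # follow direct and indirect links once
            propagate(neighbor, visited)
    return visited

affected = propagate("breaker_12")
print(affected)  # ['breaker_12', 'feeder_3', 'load_calc', 'busbar_A']
```

Because the links are fixed at design time and resident in memory, the full set of affected objects is found without any run-time lookups outside the model, which is what keeps the sporadic processing fast and the database consistent.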