ForSa@OC-Trust
Overview
ForSa@OC-Trust has developed formal methods, software engineering approaches, and algorithms for trust-critical Organic Computing systems, with a focus on ensuring functional correctness and safety. The work includes formal modeling and analysis of trust-critical systems as well as methods for monitoring systems at runtime. It was also investigated how the construction of trust-critical Organic Computing systems can be standardized and simplified, and how measures to ensure trustworthiness can be integrated into the development process. Finally, algorithms that incorporate trust values and whose correctness can be demonstrated were developed and applied in trust-critical scenarios. An application platform from the energy sector, the Trusted Energy Grid, and the "Autonomous Virtual Power Plants" application built on it served to evaluate and demonstrate the developed techniques.
Description
Systems that organize themselves, dynamically change their structure at runtime, and operate in environments whose development is unpredictable cannot be constructed and analyzed with classical software engineering methods. In particular, if a system is mission-critical, i.e. must not fail or cause damage under any circumstances, its functionality and operational reliability, and thus its trustworthiness, must be guaranteed. Particular challenges are the uncertainty about the environment and the complex, changing structure of the systems. One domain in which such systems could be used is energy management, in particular the decentralized control of energy supply.
To make these challenges manageable, ForSa@OC-Trust has developed methods that combine formal aspects, software engineering, and algorithms for trust-critical Organic Computing systems. Three concrete goals were pursued:
1. Control of emergent behavior through correctness and safety considerations at runtime and design time
The system functionality of trust-critical OC systems results from the interactions of a large number of elements. Trust aspects play an important role in these interactions, especially between unknown partners. Added to this are the great heterogeneity of the systems and their openness. It is therefore very difficult to perform correctness and safety analyses for the entire system at design time.
Alternatively, the correctness of the system can be ensured with the aid of runtime verification. The basis for this is the specification of a soft behavior corridor that defines the conditions under which the system functions correctly and allows relationships between these conditions to be specified. In addition, targeted countermeasures can be linked to violations of these conditions.
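As a purely illustrative Python sketch, such a corridor could be monitored at runtime by treating each condition as a predicate over the observed system state and linking it to a countermeasure that fires on violation. All names, the example condition, and the threshold logic are assumptions for illustration, not the project's actual framework.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

State = Dict[str, Any]

@dataclass
class Condition:
    """One condition of the behavior corridor (hypothetical structure)."""
    name: str
    holds: Callable[[State], bool]            # predicate over the observed system state
    countermeasure: Callable[[State], None]   # action triggered when the condition is violated

class CorridorMonitor:
    """Checks all corridor conditions against the current state and
    triggers the associated countermeasure for every violated one."""

    def __init__(self, conditions: List[Condition]):
        self.conditions = conditions

    def check(self, state: State) -> List[str]:
        violated = []
        for cond in self.conditions:
            if not cond.holds(state):
                violated.append(cond.name)
                cond.countermeasure(state)
        return violated

# Hypothetical condition: the gap between demand and supply must stay within the reserve.
monitor = CorridorMonitor([
    Condition(
        name="residual_load_bounded",
        holds=lambda s: abs(s["demand"] - s["supply"]) <= s["reserve"],
        countermeasure=lambda s: print("violation: activating reserve plants"),
    )
])
print(monitor.check({"demand": 120.0, "supply": 100.0, "reserve": 10.0}))
# -> prints the countermeasure message and ['residual_load_bounded']
```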
As with functional correctness, safety analyses are difficult to perform in advance in an open, heterogeneous, dynamic environment. Research was therefore conducted on dynamic safety models that can be created and analyzed at runtime.
The interaction between agents in Organic Computing systems leads to implicit or explicit subsystems or hierarchies within which the agents are encapsulated (systems of systems). In many cases, the goals of the individual agents do not coincide with the goal of the subsystem to which they belong, let alone with that of the overall system. ForSa@OC-Trust therefore investigated the interrelations between goals at different system levels and the conditions defined for the individual levels.
2. Methods for trust-critical Organic Computing systems
To ensure trustworthiness in trust-critical Organic Computing systems, trust-building measures must already be introduced at the design stage and used sensibly. Since existing software development methods are either not applicable or meet these requirements only insufficiently, methods were developed that enable the systematic construction of trust-critical Organic Computing systems.
In this context, reference architectures for the application areas developed in the case studies were defined, and patterns for handling trust values and for mechanisms that increase reliability and data security were created. On this basis, a guideline was defined that describes how these artifacts must be used within existing software development methods in order to achieve effective trustworthiness. Taking into account formal aspects and the hierarchical organization of the systems under consideration, this results in a guideline for the construction of trust-critical Organic Computing systems.
3. Dealing with uncertainty in algorithms
In many cases, optimization processes have to rely on predictions about the future development of a system. These predictions are usually inaccurate and can therefore lead to poor or useless results, which is a serious problem especially in safety-critical systems. Moreover, the error can vary from forecast to forecast. If system components included in an optimization fail, their tasks must be taken over by other components in order to keep the system stable. Likewise, new components can join at runtime, so that the system has a different structure than was assumed when the optimization was performed.
To deal with these problems, ForSa@OC-Trust developed methods that quantify uncertainty by means of confidence values and incorporate these values into optimization and structuring algorithms.
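A minimal sketch of this idea, with purely hypothetical numbers: a point forecast is blended with a conservative fallback value according to the confidence attached to it, and the adjusted value is what the optimizer plans with. The function and its parameters are illustrative, not the project's actual formula.

```python
def adjusted_forecast(prediction: float, confidence: float, fallback: float) -> float:
    """Blend a point forecast with a conservative fallback value according to the
    confidence (in [0, 1]) attached to the prediction. Purely illustrative."""
    return confidence * prediction + (1.0 - confidence) * fallback

# A plant predicts 50 MW; its past forecast errors yield a confidence of 0.8,
# so the optimizer plans with a value closer to the plant's guaranteed 30 MW.
print(adjusted_forecast(50.0, 0.8, 30.0))  # -> 46.0
```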
Autonomous Virtual Power Plants
Power generation and consumption in the electricity grid must always be in balance. To achieve this goal, however, uncertainties caused by fluctuating energy consumption and stochastic energy producers must be taken into account. Stochastic energy generators are problematic in that they have limited controllability and their output is difficult to predict because it depends either on the weather (e.g. in the case of wind or solar power plants) or on consumer behavior (e.g. in the case of household CHP plants). In addition, the number of consumers and energy producers, especially stochastic energy producers, is increasing rapidly.
Autonomous Virtual Power Plants (AVPPs) (see Anders et al., 2010) are one approach to dealing with this increasing uncertainty and complexity. The power plant landscape is subdivided into networks of power plants (AVPPs). An AVPP is a self-organizing, self-adaptive and self-optimizing network of different power plants that independently provides the required amount of electricity. To enable proactive, forward-planning control, the individual power plants provide power forecasts, i.e. predictions of how much energy they will or can deliver in the near future. Consumers likewise forecast their probable load. With this information, an AVPP can create schedules for the controllable power plants. If forecasts turn out to be wrong, a reactive mechanism in the power plants adjusts the schedules accordingly. The creation of power plant schedules is an optimization problem with a very large search space, which is greatly reduced by subdividing the power plant landscape into AVPPs.
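The following toy sketch illustrates the scheduling step for a single time interval within one AVPP. It uses a simple greedy dispatch instead of the constraint-based optimization described above, and all plant names, limits, and demand values are made up.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Plant:
    name: str
    p_min: float  # minimum output when running (MW)
    p_max: float  # maximum output (MW)

def dispatch(plants: List[Plant], demand: float) -> Dict[str, float]:
    """Toy single-interval dispatch: fill the controllable plants greedily up to
    their maximum until the forecast residual demand of the AVPP is covered."""
    schedule: Dict[str, float] = {}
    remaining = demand
    for plant in plants:
        if remaining <= 0:
            schedule[plant.name] = 0.0
            continue
        output = min(plant.p_max, max(plant.p_min, remaining))
        schedule[plant.name] = output
        remaining -= output
    return schedule

avpp = [Plant("biogas", 5.0, 40.0), Plant("hydro", 10.0, 60.0), Plant("gas", 20.0, 100.0)]
print(dispatch(avpp, 120.0))  # -> {'biogas': 40.0, 'hydro': 60.0, 'gas': 20.0}
```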
AVPPs organize themselves when changes to the structure become necessary, e.g. when an AVPP permanently produces too little electricity and has to buy electricity from other AVPPs. To achieve a distribution of power plants into AVPPs that meets the requirements, the formation of AVPPs depends on various factors, including the energy mix, the mix of trustworthy and untrustworthy power plants, and the load to be covered.
To deal with uncertainty in energy systems, AVPPs take the trustworthiness of power plants into account. One important aspect is the credibility of generators: generators are considered credible if they actually deliver their predicted output. The reliability of generators also plays an important role; it indicates how often a power plant fails and drops off the grid. The trustworthiness of power plants is used in AVPPs at runtime in two ways: first in the formation of AVPPs, where a good mix of trustworthy and untrustworthy power plants should be achieved for each AVPP, and second in the creation of power plant schedules, where untrustworthy power plants should, as far as possible, contribute less to electricity production than trustworthy ones. Trust also plays a role at other system levels, e.g. between AVPPs in energy trading (does another AVPP always deliver as much energy as promised?). The facets of functional correctness and safety can additionally be examined using a priori analysis and runtime analysis. The application platform and the "Autonomous Virtual Power Plants" application based on it were implemented in a simulation environment (see figure).
[Figure: the application platform and the "Autonomous Virtual Power Plants" application running in the simulation environment]
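As an illustration of how such trust values could be derived and used, the sketch below combines credibility (delivered vs. promised output) and reliability (uptime) into a single value that could then scale a plant's planned share. The weighting and all numbers are assumptions, not the project's actual metric.

```python
from typing import List

def credibility(promised: List[float], delivered: List[float]) -> float:
    """Fraction of the promised output that was actually delivered (capped at 1)."""
    total_promised = sum(promised)
    return min(sum(delivered) / total_promised, 1.0) if total_promised > 0 else 1.0

def reliability(hours_online: float, hours_total: float) -> float:
    """Share of the time the plant was actually on the grid."""
    return hours_online / hours_total if hours_total > 0 else 0.0

def trust(promised: List[float], delivered: List[float],
          hours_online: float, hours_total: float, w_cred: float = 0.5) -> float:
    """Weighted combination of credibility and reliability (weights are assumptions)."""
    return (w_cred * credibility(promised, delivered)
            + (1.0 - w_cred) * reliability(hours_online, hours_total))

# A plant that delivered 90 of 100 promised MWh and was online 95% of the time:
t = trust([100.0], [90.0], 95.0, 100.0)
print(round(t, 3))        # -> 0.925
planned_share = t * 40.0  # e.g. scale a 40 MW slot by the plant's trust value
```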
In addition to AVPPs, other applications in the energy sector benefit from the use of confidence values. For example, it is difficult for individual electricity consumers to offer load reductions on the energy market because only relatively large quantities of electricity are traded there. For this reason, companies, public authorities and other consumers join together in static consumer associations, which can then offer load reductions at a certain point in time, e.g. when the ovens in the canteens are switched on at noon. Consumers whose energy requirements cannot be planned in advance are, however, excluded from such rigid arrangements. It would often make sense for such consumers to autonomously find other consumers with whom they could join forces at short notice in order to enter the market together. Since only short-term relationships exist in such cases, it must be ensured that the partners are trustworthy and that the promised quantity of electricity is actually requested for the promised period. Such dynamic consumer associations (see Ruiz et al., 2009), too, are only made possible by the inclusion of trust.
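A hypothetical sketch of how such a dynamic consumer association might be formed: only sufficiently trustworthy consumers are selected, by descending flexible load, until the minimum tradable lot size is reached. The threshold, lot size, and consumer data are invented for illustration and do not reflect the project's actual mechanism.

```python
from typing import Dict, List, Tuple

def form_association(candidates: Dict[str, Tuple[float, float]],
                     min_lot_mw: float, min_trust: float) -> List[str]:
    """Greedy sketch: pick sufficiently trustworthy consumers by descending flexible
    load until the minimum tradable lot size is reached (empty list if impossible).
    candidates maps consumer id -> (flexible load in MW, trust value)."""
    trusted = [(cid, load) for cid, (load, t) in candidates.items() if t >= min_trust]
    trusted.sort(key=lambda entry: entry[1], reverse=True)
    chosen: List[str] = []
    total = 0.0
    for cid, load in trusted:
        if total >= min_lot_mw:
            break
        chosen.append(cid)
        total += load
    return chosen if total >= min_lot_mw else []

print(form_association(
    {"canteen": (0.4, 0.9), "cold_store": (0.8, 0.6), "pump": (0.5, 0.95)},
    min_lot_mw=0.8, min_trust=0.8,
))  # -> ['pump', 'canteen']
```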
The two applications outlined formed the basis for the abstract concepts of the Trusted Energy Grid. The most important concepts and relationships in energy systems were identified and incorporated into the reference architecture. The reference architecture was implemented as an application platform, and the Autonomous Virtual Power Plants application was realized on top of it in a simulation environment.
Team
Institute for Software & Systems Engineering
The Institute for Software & Systems Engineering (ISSE), directed by Prof. Dr. Wolfgang Reif, is a scientific institution within the Faculty of Applied Computer Science of the University of Augsburg. In research, the institute conducts both fundamental and application-oriented research in all areas of software and systems engineering. In teaching, it contributes to the further development of the relevant course offerings of the faculty and the university.