
Model Based Manufacturing: Outstanding research issues

| Primary/Lead Authors    |                                                                             |
| Identification          |                                                                             |
| Purpose                 | Review research issues, basis for future projects                           |
| Intended Audience       | All interested parties, both inside and outside CAPE.NET formal membership  |
| Outline Content/Summary |                                                                             |

Revision History

| Rev. No. | Date   | Who       | Description of Changes                |
| 0        | Dec 99 | T. Perris | Summary of Oslo brainstorming session |
| 1        | Feb 00 | G. Heyen  | Elaborated text                       |
| 2        | Apr 00 | G. Heyen  | Revision and extensions               |

Model Based Manufacturing:
Outstanding research issues
In response to market pressure, process plants are becoming more complex and more tightly integrated, both within individual plants and on a site-wide basis through shared utilities and relief systems. Such integrated complexes exhibit more complex operational behaviour than older, less integrated plants. They are therefore more difficult to start up and operate, especially when they must respond to changing market conditions.
The vision calls for an integrated set of model-based tools and techniques that support plant management and operators in achieving manufacturing excellence.
Model-based systems are needed to help operators in their daily work, but also to train them better and to improve overall plant management. Some model-based tools are already available, but the working group feels that they could be further developed and improved in several areas, identified in the present document. Furthermore, constant development in the hardware and software industry provides technology breakthroughs that will have an impact on the way plants are built and operated. We will also try to identify areas where progress is likely to be made.
The following text is based on a brainstorming session organised within TWG5 in Oslo in December 1999, following a workshop where issues in model quality were raised. Some items addressed may seem to overlap, and some will also be of interest to other TWGs.
Models can only help the operators if they are adequately formulated, and if they provide the right answer at the right time.
Ideally, a model should be able to predict the values of all the important variables within a prescribed accuracy, but should not be overly complex. Fitness for purpose is thus a quality to be sought. Simple models are cheaper to build and to maintain, and require fewer parameters to evaluate. The conception of a model development framework that would provide tools to build, evaluate, reduce and maintain consistent models is thus an important research goal.
Two problems need to be addressed: how good is a model, and what level of detail is needed to solve a given problem efficiently? How far can the model be extrapolated without changing the underlying hypotheses or the parameter values? These issues have been analysed by Technical Working Group 1 (Unit models).
How to measure model quality is a difficult issue. The manual on Good Practice provides some guidelines.
It will be appropriate to review modelling techniques from statistics, electrical engineering and information technology to see how they might be used together with process modelling knowledge. A suitable way forward might be to build hybrid models, where (for example) an empirical model, such as a neural network, is embedded for a specific purpose into a process model.
Each modelling strategy would contribute its advantages. First-principles models will be used to enforce mass and energy balance constraints for well-known processing steps. Black-box or grey-box models (e.g. neural nets) will be chosen for process elements whose details are not well understood, or whose behaviour is too complex to be modelled from first principles. In this respect, self-learning or self-adapting grey-box models could become an interesting alternative: they start from a reasonable and easily available first guess and then improve iteratively during their on-line use (e.g. for optimisation tasks, where the model is improved as far as necessary for the next step of the optimisation algorithm).
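As an illustration, the following minimal sketch (in Python) combines a first-principles component balance for a simple stirred-tank reactor with a small neural-network surrogate standing in for a poorly understood reaction rate; the weights, function names and numbers are purely illustrative and are not taken from any particular tool.

```python
import numpy as np

# Illustrative hybrid (grey-box) model of a CSTR: the component balance is
# written from first principles, while the reaction rate -- assumed to be
# poorly understood -- is supplied by a small empirical surrogate
# (a one-hidden-layer neural network with made-up weights).

W1 = np.array([[0.8, -0.02], [-0.5, 0.04]])   # input -> hidden weights (assumed)
b1 = np.array([0.1, -0.3])
W2 = np.array([0.6, 0.4])                     # hidden -> output weights (assumed)
b2 = 0.05

def rate_nn(c, T):
    """Empirical reaction rate r(c, T): the black-box part of the hybrid model."""
    h = np.tanh(W1 @ np.array([c, T / 100.0]) + b1)
    return max(W2 @ h + b2, 0.0)

def cstr_rhs(c, T, F, V, c_in):
    """First-principles component balance: dc/dt = F/V*(c_in - c) - r(c, T)."""
    return F / V * (c_in - c) - rate_nn(c, T)

# Simple explicit Euler integration of the hybrid model
c, T, dt = 1.0, 350.0, 0.1
for _ in range(100):
    c += dt * cstr_rhs(c, T, F=0.5, V=2.0, c_in=1.2)
print(f"Outlet concentration after 10 time units: {c:.3f}")
```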
Regarding black-box models, it is a crucial issue to extract the right information from the available data. It is well known that measurement records of plants in normal operation often do not contain the necessary data spectrum to generate an identification-based model. This implies the risk of generating black-box models that are not only inappropriate to describe the respective part of the process, but are also responsible for the failure of the overall hybrid modelling approach. Therefore, guidelines and tools are necessary to support the modeller in deciding issues such as: which parameters to be identified are most crucial for the performance of the overall model, which input signals should be used, and how the requirements on the dynamic properties of the input signal for the identification task can be traded off against the operating restrictions of the plant, so that most of the data can be recorded during normal operation.
More and more, the process industry is required to innovate by manufacturing products whose properties correspond to a market demand. The specifications are not just product purity, but perceived product properties, or the capability to answer some market need.
These complex properties (such as smell, taste or "feel") are related to the composition and the processing history, but are sometimes hard to quantify and to relate to the process conditions. Optimising a process to achieve improved product properties is thus a challenge. One potential solution is the development of hybrid models for product properties.
More and more, models are needed to support the production process, while most modelling effort has so far been spent on developing process models for design. Even if the goals differ, a significant part of the modelling effort should be reusable. This would not only save time, but also address the issue of consistency between models applied at different levels of detail, or at different stages in the process life cycle. A software design framework and a toolbox that save effort in model adaptation are not yet available, and would be a useful development.
A dynamic simulation model usually consists of a large set of differential-algebraic equations. Integrating such a system may be difficult when some discontinuity occurs in the modelled process. This may happen as the result of a phase change (a switch from a one-phase to a two-phase system), a transition in an equipment state (a pump switched on or off) or a sudden variation in some model parameters (transition from laminar to turbulent flow). This induces either a sudden variation in the Jacobian matrix of the equation system or, even worse, a change in the structure and the number of equations. A key element required for hybrid simulation is a mechanism for the effective and accurate detection of threshold crossings of state variables (state events). This can be critical because a wrong sequence of events may lead to qualitatively different trajectories. Once an event has been detected, the logical part of the dynamics must be executed; if a switching occurs, the initial state must be computed and the integration restarted at this point.
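As a sketch of such a mechanism, the example below uses SciPy's event facility in solve_ivp on an invented tank model: a state event is detected when the level crosses a threshold, the switching logic is executed, and the integration is restarted from the event state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of state-event handling in a hybrid simulation:
# a tank drains freely until its level crosses a threshold, at which point
# a make-up pump switches on and the model structure changes.
# The process and all numbers are illustrative, not a real plant model.

A, CV, Q_PUMP, H_SWITCH = 2.0, 0.4, 0.6, 0.5

def drain_only(t, h):
    return [-CV * np.sqrt(max(h[0], 0.0)) / A]

def drain_with_pump(t, h):
    return [(Q_PUMP - CV * np.sqrt(max(h[0], 0.0))) / A]

def level_crossing(t, h):
    return h[0] - H_SWITCH          # state event: level hits the threshold
level_crossing.terminal = True      # stop the integration at the event
level_crossing.direction = -1       # only trigger while the level is falling

# Integrate up to the event, then restart with the switched dynamics
sol1 = solve_ivp(drain_only, (0.0, 50.0), [2.0], events=level_crossing,
                 max_step=0.1)
t_event = sol1.t[-1]
sol2 = solve_ivp(drain_with_pump, (t_event, 50.0), [sol1.y[0, -1]],
                 max_step=0.1)
print(f"Pump switched on at t = {t_event:.2f}")
```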
The above-mentioned problems are of crucial importance, particularly within the scope of TWG3. In the model-based manufacturing context, these issues matter most for batch process operation. Many batch processes can be separated into a hybrid, mostly recipe-driven part and a continuously operating part (e.g. a recycling stage or a product finishing line). The recipe-driven part consists of discrete dynamics induced by transitions of the recipes and by internal events (e.g. phase transitions). The continuous part of the process dynamics is usually represented by a complex (non-linear) DAE system, which requires a specialised integration algorithm for an effective solution.
For combined modelling and simulation of this kind of process, two different approaches are applicable. The first applies a universal simulation tool, resulting in a single monolithic model of the process. In this case, it is not possible to apply a different, most suitable solution strategy to each part of the system. Usually the problem has to be simplified (at least on one side) to match the algorithm's abilities. Furthermore, the solution may even fail because the problem does not converge.
The second approach, which is usually preferred, uses two different simulators in a distributed simulation system: one simulator predominantly designed for the continuous domain and one specialised in the discrete (hybrid) domain. In this case an interface between the two simulators has to be created to exchange data and to synchronise the simulation time. The necessary communication in a distributed simulation system leads to a certain overhead, which has a strong influence on the simulation speed.
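A minimal sketch of such a synchronisation scheme is given below; the two simulator classes are hypothetical stand-ins for real continuous and recipe-driven simulators, exchanging data at fixed communication points (which is also where the communication overhead arises).

```python
# Minimal sketch of the synchronisation loop in a distributed simulation:
# a continuous simulator and a discrete (recipe-driven) simulator exchange
# data at fixed communication points. Both classes are hypothetical
# stand-ins for real tools exposing a step/exchange interface.

class ContinuousSimulator:
    def __init__(self):
        self.state = {"level": 1.0, "feed": 0.0}
    def advance_to(self, t):
        # placeholder for a DAE integration step up to time t
        self.state["level"] += 0.01 * self.state["feed"]
    def outputs(self):
        return {"level": self.state["level"]}
    def set_inputs(self, u):
        self.state["feed"] = u.get("feed", self.state["feed"])

class RecipeSimulator:
    def advance_to(self, t, measurements):
        # placeholder for discrete recipe logic: feed while the level is low
        return {"feed": 1.0 if measurements["level"] < 2.0 else 0.0}

continuous, recipe = ContinuousSimulator(), RecipeSimulator()
t, t_end, dt_comm = 0.0, 10.0, 0.5      # dt_comm = communication interval
while t < t_end:
    t += dt_comm
    continuous.advance_to(t)                               # continuous domain
    actions = recipe.advance_to(t, continuous.outputs())   # discrete domain
    continuous.set_inputs(actions)                         # exchange at sync point
print("Final level:", round(continuous.outputs()["level"], 3))
```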
Optimal production planning and controlled execution of the production plan are key issues where models will be applied more and more in the future.
A change in the order book, in the economic conditions, or in the availability of raw materials may necessitate a modification of the production schedule. This schedule modification needs to be implemented by the control system. A tight integration between these systems would bring benefits, since any significant process change is likely to result in the production of some off-specification product during the transient. Thus an optimal system should include:
In a similar way, an intelligent control system should be self-optimising. It should interact with the supply chain, and seek inventory minimisation on a network-wide basis.
Since the supply and distribution networks have dynamic characteristics of their own, they should be taken into account in the design and tuning of the control system.
Start-up and shutdown are, of course, inherently dynamic operations, so significant improvements can be expected from an improved understanding of the plant's dynamic behaviour. During transient operation, production is of low market value, since products are often off-specification and, at best, need to be reprocessed. Optimising any transient operation to minimise wasted product, manpower, materials and energy will thus lead to short-term benefits.
Trajectory optimisation should also take controllability issues into account, in order to avoid operation too close to limits where some system failures might result in a hazard.
Algorithms able to solve very large models (>100000 variables) have been proposed, but the problem formulation and the resulting model structure have a definite influence on their efficiency. SQP algorithms based on interior point methods will probably soon be able to handle even larger problems.
Progress is still expected in the area of larger systems. Large models tend to exhibit multiple local optima, so research on global optimisation is expected to benefit the process industry.
In real-time optimisation (RTO), progress has been made in understanding how to determine constraint back-off, to take account of parameter uncertainty and drift in both the optimisation layer and the regulatory control layer. The idea is to move the operating point some way off the active constraints, so that when the process parameters vary over time the process has a high probability of remaining feasible with respect to the operating constraints.
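A simple illustration of this idea, under the assumption of a linearised constraint and independent, normally distributed parameter errors, is sketched below; the sensitivities, standard deviations and confidence level are invented for the example.

```python
import numpy as np

# Illustrative back-off calculation for one operating constraint g(x) <= g_max:
# the set-point is moved away from the constraint by a margin proportional to
# the predicted variability of g under parameter uncertainty.

g_max = 100.0                       # constraint limit (e.g. reactor temperature)
dg_dtheta = np.array([2.0, -0.5])   # sensitivity of g to uncertain parameters (assumed)
sigma_theta = np.array([1.5, 4.0])  # standard deviations of those parameters (assumed)

# Variance of g propagated through the linearised model (independent parameters)
sigma_g = np.sqrt(np.sum((dg_dtheta * sigma_theta) ** 2))

z = 1.645                           # ~95% one-sided confidence level
backoff = z * sigma_g
print(f"Back-off: {backoff:.1f}; operate at g <= {g_max - backoff:.1f}")
```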
RTO has to embed discrete decisions, which are difficult to handle when the problem is not convex. Progress in the handling of discrete variables (MILP, MINLP) is therefore still needed.
Reduced models must be consistent. If different sub-models (used in different subsystems, such as smart controllers, data reconciliation and optimisation) have different and inconsistent views of the process, then it is inevitable, sooner or later, that the subsystems will give conflicting and confusing answers, with possibly serious implications for plant efficiency as well as for issues such as safety and the environment.
This has not been too serious a problem up to now because (a) the models have been used independently offline and (b) operations have typically been sufficiently far removed from optimal that the different models have been "qualitatively consistent". As we get closer to optimal operation and seek to make smaller and smaller corrections, however, such discrepancies will become increasingly significant; intuition suggests that the models must be accurate and consistent to within at least a factor of 5 (?) of the size of the correction we are seeking to make.
These needs and requirements may be met by a model reduction framework, in which a definitive supermodel is developed and used to produce consistent reduced models for specific tasks. The framework will need to provide an a priori assessment and guarantee of fitness for purpose of each reduced model. This must not be left to end-users to judge, as they cannot in general be expected to have all the requisite skills, knowledge and experience.
Since dynamic models are often developed at the process design stage, they also allow the control system to be designed. The interaction between process design and control design is extremely important: better and safer processes with intrinsically safe dynamic behaviour should be designed, and would later be much easier to operate. This overlaps with the topics analysed by TWG3.
Many processes can only operate safely under automatic control. However, no general tuning strategy allows correct controller tuning in all cases. System non-linearities or discontinuities limit the application of well-known theories on optimal control of linear systems. Models developed at the design stage should, however, be used more and more to provide acceptable initial values for the control parameters.
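As a sketch of such model-based pre-tuning, the example below assumes that a step test on the design-stage dynamic model has been fitted to a first-order-plus-dead-time approximation, and applies a standard IMC rule to obtain initial PI settings; the numerical values are illustrative.

```python
# Minimal sketch of pre-tuning a PI controller from a design-stage model:
# a step response of the dynamic model is approximated by a first-order-plus-
# dead-time (FOPDT) model, and IMC rules give initial controller settings.
# The FOPDT parameters are assumed, e.g. fitted to a simulated step test.

K = 2.0      # process gain
tau = 15.0   # time constant (min)
theta = 3.0  # dead time (min)

# IMC-based PI tuning (one common rule; other rules are equally valid)
lam = max(0.5 * tau, 1.5 * theta)   # closed-loop time constant (tuning knob)
Kc = tau / (K * (lam + theta))      # controller gain
tau_I = tau                         # integral time

print(f"Initial PI settings: Kc = {Kc:.2f}, tau_I = {tau_I:.1f} min")
```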
Process operation is a human-centred activity. The operator is an integral part of the control and management systems and yet, relatively speaking, little is done to help him or her when things begin to go wrong. With the growing complexity of the plant operator's job, there is a growing danger of overload, leading to operation by crisis management, with all the inefficiency and cost that this implies.
Operators' work can be made easier by providing aid to:
Some of these topics are developed further below.
The state of the art is to use simplified models for operator training. Important issues, however, are model fidelity and the maintenance of the model when the process is modified. The trend is also to use operator-training models for other purposes, which may require using the model outside its originally foreseen range of application.
Previously, the variety of simulation goals led to numerous types of models and simulators for the same process at different steps of its life cycle. Moreover, these models and simulators were often developed from scratch, and often without any links between them. As indicated above, the increasing availability of integrated modelling tools provides motivation for research into how models intended for different stages of the life cycle should be developed in order to be compatible.
Modelling and simulation have made major advances in recent years. They have been driven:
Dynamic simulation based on first-principles modelling is now realistic (in terms of CPU time). This permits the use of the same model (or at least of compatible models) for the same process at different stages of its life cycle: for instance, using the same basic dynamic model for design and training, in the same basic simulation environment.
For all these reasons, we feel that a realistic trend will be to use detailed models developed at the process design stage and to adapt them for operator training (mainly by fitting a proper user interface). Issues such as the use of component models or the reuse of existing code will thus become more and more important. The capability to solve large models in real time will also become crucial, and progress in hardware is not the only solution to be sought. Automatic model reduction also offers good potential, as it allows a real trade-off between model complexity and fitness for purpose.
In most plants, operators are not really aware of the value of their work. The control system does not provide, as feedback, a quality index related to the operator's skill. Such an index could be the economic value of the product, the raw materials consumed, or the energy usage (compared to a standard).
These economic figures could be displayed on-line on the control panel, based on real-time measurements (or better, on reconciled values).
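A minimal sketch of such an on-line economic index is given below; the prices, reconciled flows and reference margin are invented figures used only to show the principle.

```python
# Minimal sketch of an on-line economic index for operator feedback: the margin
# per hour is computed from reconciled flow measurements and compared with a
# reference ("standard") value. Prices and the reference are assumed figures.

prices = {"product": 420.0, "feed": 250.0, "steam": 28.0}   # currency per ton

def economic_index(reconciled, reference_margin):
    """Margin per hour and its ratio to a standard, from reconciled flows."""
    margin = (prices["product"] * reconciled["product_flow"]
              - prices["feed"] * reconciled["feed_flow"]
              - prices["steam"] * reconciled["steam_flow"])
    return margin, margin / reference_margin

margin, index = economic_index(
    {"product_flow": 11.8, "feed_flow": 12.5, "steam_flow": 6.0},
    reference_margin=1500.0)
print(f"Margin: {margin:.0f}/h, performance index: {index:.2f}")
```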
Operator support systems provide real-time information to the operators. While extensions in fault detection capabilities can be foreseen (see elsewhere), another interesting downstream application would be the on-line provision and display of safety and health related information to operators and managers, in parallel with hazard warnings.
New developments in portable computers and networking make it possible to design computing equipment that can be used not only in control rooms, but also right in the plant. An interesting application would be the development of wireless networks that carry information to operators while they work on the spot. Progress in visualisation tools now makes it possible to design a "data helmet" through which access to safety databases, maintenance records or flowsheets could be provided. The capability to retrieve information from archives or to run models while in the plant would surely improve decision making and overall safety.
Similar devices are already used to visualise complex variable interactions and to display large sets of data, e.g. for presenting measured data on oil reservoirs in combination with model predictions.
Plants are being driven harder, closer to physical limits, using on-line optimisation. In such cases it is essential to have accurate and comprehensive measurements of the plant performance. Steady-state data reconciliation is recognised as an enabling technology, to be applied before on-line optimisation. However, no generic software package is available to apply it easily to any dynamic system. The application of dynamic data reconciliation faces several difficulties. The first is the need for redundancy, which is even more difficult to guarantee in a dynamic system than in a steady-state one. It might be difficult to distinguish between inconsistent flow measurements, leading to an error in a mass balance, and a change in the process inventory, unless all material and energy hold-ups are measured. The second is related to the level of detail needed in the underlying models: what is the right level of granularity? Can some parts of the process be modelled as quasi steady state (e.g. equilibrium on a distillation plate)? Is the model valid only for representing transients around some steady state, or also for representing large variations, such as start-up and shutdown?
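For the steady-state case, the classical weighted least-squares formulation has a closed-form solution for linear balances, as sketched below; the small splitter flowsheet and the measurement uncertainties are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch of steady-state data reconciliation for linear mass balances
# A x = 0: measured flows are adjusted by the smallest weighted correction
# that closes the balances. The flowsheet (one splitter: F1 = F2 + F3) and
# the measurement uncertainties are assumed for illustration.

A = np.array([[1.0, -1.0, -1.0]])          # balance: F1 - F2 - F3 = 0
x_meas = np.array([101.2, 59.8, 38.5])     # raw measurements (do not close)
sigma = np.array([1.0, 0.8, 0.5])          # measurement standard deviations
V = np.diag(sigma ** 2)                    # measurement covariance

# Closed-form solution of: min (x - x_meas)' V^-1 (x - x_meas)  s.t.  A x = 0
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x_meas)
x_rec = x_meas - correction

print("Reconciled flows:", np.round(x_rec, 2))
print("Balance residual:", round((A @ x_rec).item(), 6))
```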
One reason to collect plant data is to monitor the condition of equipment. Some parameters, such as fouling factors or catalyst activity, cannot be measured directly, but can only be inferred by interpreting numerous measurements with the help of a model. The development of tools for building "soft sensors" to monitor important parameters is thus of key importance.
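A minimal soft-sensor sketch is given below: a heat-exchanger fouling resistance is inferred from routine temperature and flow measurements through a simple duty and LMTD calculation; the exchanger data and the clean-state coefficient are assumed values.

```python
import numpy as np

# Minimal "soft sensor" sketch: the fouling resistance of a heat exchanger is
# not measured directly but inferred from routine temperature and flow
# measurements through a simple model (Q = U * A * LMTD). All numbers,
# including the clean-state coefficient U_clean, are illustrative assumptions.

A_hx = 60.0          # exchanger area (m2)
U_clean = 850.0      # heat transfer coefficient when clean (W/m2K)
cp = 4180.0          # heat capacity of the cold stream (J/kgK)

def fouling_resistance(m_cold, T_cold_in, T_cold_out, T_hot_in, T_hot_out):
    """Infer the fouling resistance from plant measurements."""
    Q = m_cold * cp * (T_cold_out - T_cold_in)          # duty from the cold side
    dT1, dT2 = T_hot_in - T_cold_out, T_hot_out - T_cold_in
    lmtd = (dT1 - dT2) / np.log(dT1 / dT2)
    U_actual = Q / (A_hx * lmtd)                        # current coefficient
    return 1.0 / U_actual - 1.0 / U_clean               # fouling resistance

Rf = fouling_resistance(12.0, 25.0, 55.0, 90.0, 62.0)
print(f"Estimated fouling resistance: {Rf:.2e} m2K/W")
```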
Early detection of performance degradation can then be used to plan maintenance: by detecting where a failure is likely to occur, the plant can be shut down in an orderly manner, and only the parts that need close inspection or repair need to be dismantled, saving manpower and reducing the unavailability due to maintenance.
This information could also be corroborated (at least for machines) with the analysis of vibration modes; manufacturers could add value not only by providing sensors for vibration, but also by providing guidance (in the form of expert systems?) on the type of fault associated with a given "voice print".
Automatic process control tends to correct and mask malfunctions. Fault detection is thus made less obvious, and model-based tools provide a basis for improved detection mechanisms. Development is needed along several complementary pathways: heuristic systems (e.g. based on neural networks), statistical tools (e.g. principal component analysis), and tools based on detailed plant models.
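As an illustration of the statistical pathway, the sketch below builds a principal component model of synthetic "normal operation" data and flags new samples whose residual (SPE statistic) exceeds a simple empirical threshold; the data, dimensions and threshold choice are invented for the example.

```python
import numpy as np

# Minimal sketch of statistical fault detection with principal component
# analysis (PCA): a model of "normal" operation is built from historical data
# and new samples are flagged when their residual (SPE / Q statistic) is
# abnormally large. The data are synthetic and the threshold is a simple
# empirical percentile, chosen only for illustration.

rng = np.random.default_rng(0)

# Historical data for normal operation: 4 correlated measurements, 500 samples
t = rng.normal(size=(500, 2))
X_normal = t @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))

mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
Xs = (X_normal - mean) / std

# Retain the first two principal components as the "normal operation" model
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:2].T                                   # loadings (4 x 2)

def spe(x):
    """Squared prediction error of a sample against the PCA model."""
    xs = (x - mean) / std
    residual = xs - P @ (P.T @ xs)
    return float(residual @ residual)

threshold = np.percentile([spe(x) for x in X_normal], 99)

faulty_sample = X_normal[0] + np.array([0.0, 3.0, 0.0, 0.0])  # simulated sensor bias
print("SPE:", round(spe(faulty_sample), 2), "threshold:", round(threshold, 2))
print("Fault detected:", spe(faulty_sample) > threshold)
```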
Models are available that can predict the system response when disturbances occur. However, fault diagnosis implies that the model has to be inverted: observing an abnormal response, one should identify the possible cause. While much effort has been devoted to this problem, no general solution is yet available, especially in the case of multiple failures occurring simultaneously. The operator is then overwhelmed by many alarms, which may cause confusion, stress and wrong decisions.
One can also expect automated fault detection systems to react early, and to detect malfunctions before they can seriously affect plant operation. All these safety-related issues explain why fault diagnosis and alarm management is an area where new developments are expected to impact not only the economy, but also the quality of life of the population.
All the research topics discussed above could be transposed (at least partly) into usable tools within a reasonable time scale, and are expected to provide benefits to industry.
However, the expected benefits and the amount of effort required to obtain results and transpose them into industrial practice vary considerably. Members of our working group wanted to express their views on these considerations, and tried to rank the research topics along two main axes:
- relevance to industrial applications and expected benefits.
- time needed to put them into practice.
The following chart expresses the opinion of the group.

1.1. Model performance and model reduction. Mixed granularity models
1.2. Hybrid models
1.3. Modelling "perceived quality" parameters related to measurable/controllable parameters
1.4. Adaptation/enhancement of design models for use in production
1.5. Handling discontinuities and simulation of processes with mixed discrete and continuous dynamics
2.1. Adaptive scheduling
2.2. Plant-wide control and interactions with supply chains
2.3. Start-up & shut-down, evaluation of operating procedures, trajectory optimisation
2.4. Robust dynamic optimisation
2.5. MPC & hierarchical control; generation of models which are suitable at each layer but which are consistent
2.6. Integrated design of process & control systems
2.7. Parameter pre-tuning based on dynamic models
3.1. Operator training using rigorous models
3.2. Online economics
3.3. Online display of SHE (safety, health and environment) issues to operators
3.4. Visualisation; "process multi-media"
3.5. Dynamic performance analysis/data reconciliation
3.6. Condition monitoring and predictive maintenance
3.7. Fault detection & diagnosis; alarm management
One could certainly argue about such a ranking; notably, there is a correlation between the perceived added value and the effort needed. It seems, however, that area 2.1 (adaptive scheduling) is likely to bring large benefits within a rather short time frame, and should be considered a priority.