|
WeM1 |
5F-XinXi Palace A |
Data Analytics and Machine Learning I |
Invited Session |
Chair: Gopaluni, Bhushan | Univ. of British Columbia |
Co-Chair: Huang, Biao | Univ. of Alberta |
Organizer: Gopaluni, Bhushan | Univ. of British Columbia |
Organizer: Tulsyan, Aditya | Massachusetts Inst. of Tech |
Organizer: Chiang, Leo | The Dow Chemical Company |
|
10:00-10:20, Paper WeM1.1 | |
>Knowledge Extraction Using Process Data Analytics: A Tutorial (I) |
Tsai, Yiting | Univ. of British Columbia |
Lu, Qiugang | Univ. of British Columbia |
Rippon, Lee | Univ. of British Columbia |
Lim, C. Siang | Univ. of British Columbia |
Tulsyan, Aditya | Massachusetts Inst. of Tech |
Gopaluni, Bhushan | Univ. of British Columbia |
Keywords: Big Data Analytics and Monitoring, Modeling and Identification
Abstract: Traditional techniques employed by process control engineers require a significant update in order to handle the increasing complexity of modern processes. Conveniently, advances in statistical machine learning and distributed computation have led to an abundance of new techniques that are potentially suitable for advanced analysis of chemical processes. In this tutorial we briefly introduce popular data analytics techniques and discuss their potential for application in the chemical process industries. Specifically, this tutorial provides control engineers with a practical understanding of data analytics techniques, some of which may be unconventional in chemical process applications. Each of the presented techniques can potentially extract valuable knowledge from raw data, which can then be utilized to make smarter process control decisions.
|
|
10:20-10:40, Paper WeM1.2 | |
>Robust Optimization in High-Dimensional Data Space with Support Vector Clustering (I) |
Shang, Chao | Cornell Univ |
You, Fengqi | Cornell Univ |
Keywords: Optimization and Scheduling, Big Data Analytics and Monitoring
Abstract: Data-driven robust optimization has attracted immense attention. In this work, we propose a data-driven uncertainty set for robust optimization under high-dimensional uncertainty. We first decompose the high-dimensional data space into the principal subspace and the residual subspace using principal component analysis, and then adopt support vector clustering and a classic polyhedral uncertainty set to describe the intricate geometry in the principal subspace and the tiny variations in the residual subspace, respectively, giving rise to a new data-driven uncertainty set. Similar to classic uncertainty sets, the proposed data-driven uncertainty set also preserves the tractability of robust optimization problems. In addition, we establish the probabilistic guarantee theoretically by further calibrating the uncertainty set with an independent dataset, which ensures that the data-driven uncertainty set covers a portion of the uncertainty with a given confidence level. Numerical results show the effectiveness of the proposed uncertainty set in reducing the conservatism of robust optimization problems, as well as the fidelity of the established probabilistic guarantee.
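A minimal sketch of this construction, assuming the scikit-learn API and using a one-class SVM as a stand-in for support vector clustering; the variable names, dimensions and kernel settings are illustrative, not the authors' exact procedure:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    U = rng.standard_normal((500, 100))           # hypothetical high-dimensional uncertainty samples

    # Split the data space into principal and residual subspaces.
    pca = PCA(n_components=5).fit(U)
    T = pca.transform(U)                          # scores in the principal subspace
    R = U - pca.inverse_transform(T)              # residual-subspace variations

    # Intricate geometry in the principal subspace: kernel-based enclosing boundary.
    svc = OneClassSVM(kernel="rbf", nu=0.05).fit(T)

    # Tiny residual variations: a simple box (polyhedral) bound per coordinate.
    r_lo, r_hi = R.min(axis=0), R.max(axis=0)

    def in_uncertainty_set(u):
        # Membership test for a new uncertainty realization u.
        t = pca.transform(u.reshape(1, -1))
        r = u - pca.inverse_transform(t).ravel()
        return svc.predict(t)[0] == 1 and np.all((r >= r_lo) & (r <= r_hi))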
|
|
10:40-11:00, Paper WeM1.3 | |
>Extraction and Graphical Representation of Operator Responses to Multivariate Alarms in Industrial Facilities (I) |
Hu, Wenkai | Univ. of Alberta |
Al-Dabbagh, Ahmad | Univ. of Alberta |
Li, David | Univ. of Alberta |
Chen, Tongwen | Univ. of Alberta |
Keywords: Process Applications, Process and Control Monitoring
Abstract: Expert knowledge is an important factor in achieving operational effectiveness. This work focuses on mining such knowledge about operator responses to alarms, and on examining the relations between the responses and alarms from historical Alarm & Event logs, which are commonly available in modern industrial facilities. Process mining is adapted and applied to construct dependency matrices, based on which workflow models of the operator responses to alarms are discovered. Also, a new framework for graphical representation of operator responses is proposed to give a better visualization of the extracted workflow models. To demonstrate the effectiveness of the method, an industrial case study is presented.
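As an illustration of how a dependency matrix can be built from an Alarm & Event log, the sketch below counts direct-follows relations and applies a heuristic-miner-style dependency measure; the event names and the exact measure are assumptions, not necessarily the authors' procedure:

    from collections import defaultdict

    # Hypothetical A&E log: (timestamp, event) pairs, sorted by time, for one unit.
    log = [(0, "ALM_HI_LEVEL"), (12, "OPEN_VALVE_101"), (30, "ALM_HI_LEVEL_CLEAR"),
           (95, "ALM_HI_LEVEL"), (101, "OPEN_VALVE_101"), (120, "ALM_HI_LEVEL_CLEAR")]

    follows = defaultdict(int)                    # direct-follows counts a -> b
    for (_, a), (_, b) in zip(log, log[1:]):
        follows[(a, b)] += 1

    def dependency(a, b):
        # Heuristic-miner-style dependency measure in [-1, 1].
        ab, ba = follows[(a, b)], follows[(b, a)]
        return (ab - ba) / (ab + ba + 1)

    print(dependency("ALM_HI_LEVEL", "OPEN_VALVE_101"))   # strong alarm -> response link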
|
|
11:00-11:20, Paper WeM1.4 | |
>A Novel Approach to Feedback Control with Deep Reinforcement Learning (I) |
Wang, Yuan | Univ. of Alberta |
Velswamy, Kirubakaran | Univ. of Alberta, Edmonton Canada |
Huang, Biao | Univ. of Alberta |
Keywords: Energy Processes and Control, Modeling and Identification
Abstract: A novel deep reinforcement learning (RL) algorithm is applied to a feedback control application. We propose Proximal Actor-Critic, a model-free reinforcement learning algorithm that can learn robust feedback control laws from direct interaction data with the plant. We show the efficacy of the algorithm on a benchmark problem in Heating, Ventilation and Air Conditioning (HVAC) heating, with the RL controller achieving lower Integral Absolute Error (IAE) and Integral Square Error (ISE) than baseline Proportional-Integral (PI) and Linear Quadratic Regulator (LQR) controllers. We also provide details on establishing feedback control problems within the deep reinforcement learning framework, including policy parameterization, neural network architecture and training procedures.
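A generic sketch of how a setpoint-tracking loop can be cast in the reinforcement learning framework; the state, action and reward choices below are illustrative assumptions rather than the paper's exact formulation:

    s_t = [\, y_t - y^{sp},\;\; \textstyle\sum_{\tau \le t} (y_\tau - y^{sp})\,\Delta t,\;\; y_t - y_{t-1} \,]^T, \qquad a_t = u_t, \qquad r_t = -\,|y_t - y^{sp}|,

so that maximizing the discounted return \sum_t \gamma^t r_t approximately corresponds to minimizing an IAE-type tracking cost.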
|
|
11:20-11:40, Paper WeM1.5 | |
>A Set-Based Model-Free Reinforcement Learning Design Technique for Nonlinear Systems (I) |
Guay, Martin | Queen's Univ |
Atta, Khalid | LTU |
Keywords: Optimization and Scheduling, Big Data Analytics and Monitoring, Model-based Control
Abstract: In this study, we propose an extremum-seeking approach for the approximation of optimal control problems for unknown nonlinear dynamical systems. The technique combines a phasor extremum-seeking controller with a reinforcement learning strategy. The learning approach is used to estimate the value function of an optimal control problem of interest. The phasor extremum-seeking controller implements the approximate optimal controller. The approach is shown to provide reasonable approximations of optimal control problems without the need for a parameterization of the nonlinear control system. A simulation example is provided to demonstrate the effectiveness of the technique.
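For orientation, a classic perturbation-based extremum-seeking law, which phasor variants refine, reads (sign convention for minimization; the notation differs from the paper's):

    u(t) = \hat u(t) + a\sin(\omega t), \qquad \dot{\hat u}(t) = -k\, a \sin(\omega t)\, y(t),

where y is the measured cost; demodulating y with the dither signal provides a gradient estimate, so \hat u drifts toward the extremum without an explicit model of the system.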
|
|
11:40-12:00, Paper WeM1.6 | |
>A Systematic Framework for Assessing the Quality of Information in Data-Driven Applications for the Industry 4.0 (I) |
Seabra dos Reis, Marco P. | Univ. of Coimbra |
Keywords: Process and Control Monitoring, Batch Process Modeling and Control, Process Applications
Abstract: Managing and improving the quality of information generated in data-driven empirical studies is of central importance for Industry 4.0. A fundamental and necessary condition for conducting these activities is being able to measure the quality of information – “If you can not measure it, you can not improve it” (Lord Kelvin). It is somewhat surprising that, with so many efforts devoted to getting the most out of the available data resources, not much attention has been paid to this key aspect. Therefore, in this article we describe and apply a framework, the InfoQ framework, for evaluating, analyzing and improving the quality of information generated in the variety of data-driven activities found in the Chemical Processing Industry (CPI). This systematic framework can be used by anyone involved in conducting these activities, irrespective of the context and the specific goals to achieve. For instance, it can be used to provide a preliminary assessment of project risk, by analyzing the adequacy of the data set and analysis methods for achieving the intended goal, as well as to perform a SWOT analysis on an ongoing project, in order to improve it and increase the quality of information generated, i.e., increase its InfoQ. The framework is applied to a real-world case study in order to illustrate its implementation, utility and relevance. The author recommends its routine adoption, as part of the Definition stage in any data-driven task, such as Lean Six Sigma projects, exploratory studies, on-line and off-line process monitoring, predictive modelling and diagnostic & troubleshooting activities.
|
|
WeM2 |
5F-XinXi Palace B |
Real-Time Optimization |
Regular Session |
Chair: Engell, Sebastian | TU Dortmund |
Co-Chair: Li, Shuyun | West Virginia Univ |
|
10:00-10:20, Paper WeM2.1 | |
>Enforcing Model Adequacy in Real-Time Optimization Via Dedicated Parameter Adaptation |
Ahmad, Afaq | TU Dortmund |
Singhal, Martand | EPFL |
Gao, Weihua | TU Dortmund |
Bonvin, Dominique | EPFL |
Engell, Sebastian | TU Dortmund |
Keywords: Process and Control Monitoring
Abstract: Iterative real-time optimization schemes that employ modifier adaptation add bias and gradient correction terms to the model that is used for optimization. These affine corrections lead to meeting the first-order necessary conditions of optimality of the plant despite plant-model mismatch. However, since the added terms do not include curvature information, satisfaction of the second-order sufficient conditions of optimality is not guaranteed, and the model might be deemed inadequate for optimization. In the context of modifier adaptation, this paper proposes to include a dedicated parameter-estimation step such that the second-order optimality conditions are also met at the plant optimum. In addition, we propose a procedure to select the best parameters to adapt based on a local sensitivity analysis. A simulation study dealing with product maximization in a fed-batch reactor demonstrates that the proposed scheme can both select the right parameters and determine their values such that modifier adaptation can drive the plant to optimality quickly and without oscillations.
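For reference, the generic first-order modifier-adaptation subproblem at iteration k, showing the bias and gradient corrections referred to above (standard notation, not necessarily the paper's):

    u_{k+1} \in \arg\min_u \; \phi(u,\theta) + \varepsilon_k + \lambda_k^{T}(u - u_k), \qquad \varepsilon_k = \phi_p(u_k) - \phi(u_k,\theta), \quad \lambda_k = \nabla_u \phi_p(u_k) - \nabla_u \phi(u_k,\theta),

where \phi and \phi_p denote the model and plant cost functions; analogous modifiers are applied to the constraints. The dedicated parameter-adaptation step proposed in the paper adjusts \theta so that the corrected model also carries adequate curvature at the plant optimum.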
|
|
10:20-10:40, Paper WeM2.2 | |
>An Integrated Biomimetic Control Strategy with Multi-Agent Optimization for Nonlinear Chemical Processes |
Mirlekar, Gaurav | West Virginia Univ |
Gebreslassie, Berhane | Vishwamitra Res. Inst |
Li, Shuyun | West Virginia Univ |
Diwekar, Urmila | Vishwamitra Res. Inst |
Lima, Fernando V. | West Virginia Univ |
Keywords: Model-based Control, Energy Processes and Control, Optimization and Scheduling
Abstract: In this paper, a framework is proposed for integrating a Biologically-Inspired Optimal Control Strategy (BIO-CS) with Multi-Agent Optimization (MAO) algorithms for process systems engineering applications. In this framework, the BIO-CS employs gradient-based optimal control solvers in an intelligent manner to simultaneously control multiple outputs of the process at their desired setpoints. Also, the MAO uses the capabilities of nonlinear heuristic-based optimization techniques such as Efficient Ant Colony Optimization (EACO), Efficient Genetic Algorithm (EGA) and Efficient Simulated Annealing (ESA), sharing process information to obtain, as an upper layer, optimal operating setpoints for the controller that satisfy the overall process objective. The resulting approach is a unique combination of control and optimization methods that provides optimal solutions for dynamic systems. The applicability of the proposed framework is demonstrated using a nonlinear, multivariable fermentation process. In particular, a multivariable control structure associated with the first-principles-based model derived from mass and energy balances of the fermentation process is addressed. The performance of the proposed approach for each step is compared to Sequential Quadratic Programming (SQP) and a classical Proportional-Integral (PI) controller in terms of optimization and control, respectively. The proposed approach improves the overall performance of the process in terms of cumulative production rate by approximately 10-15%, resulting in economic benefits. The obtained results illustrate the capabilities of this novel integrated framework to achieve desired nonlinear system performance considering scenarios associated with setpoint tracking and plant-model mismatch.
|
|
10:40-11:00, Paper WeM2.3 | |
>Reliable Iterative RTO of a Continuously Operated Hydroformylation Process |
Hernandez, Reinaldo | TU Dortmund |
Dreimann, Jens | TU Dortmund |
Engell, Sebastian | TU Dortmund |
Keywords: Process Applications, Model-based Control, Energy Processes and Control
Abstract: In this work, the application of a reliable iterative real-time optimization (RTO) scheme to a continuously operated transition-metal-complex-catalyzed process for the hydroformylation of 1-dodecene is presented. The aim of the proposed scheme is to ensure optimal operation despite the presence of model uncertainties and measurement errors. Iterative optimization using Modifier Adaptation with Quadratic Approximation (MAWQA) is applied. Furthermore, additional modules for steady-state identification (SSI) and robust data reconciliation (DR) were designed and implemented. The proposed scheme was commissioned in a real miniplant, and an improved performance in comparison to the model-based optimal operating point was achieved.
|
|
11:00-11:20, Paper WeM2.4 | |
>Application of Economics Optimizing Control to a Two-Step Transesterification Reaction in a Pilot-Scale Reactive Distillation Column |
Haßkerl, Daniel | TU Dortmund Univ |
Lindscheid, Clemens | TU Dortmund Univ |
Subramanian, Sankaranarayanan | TU Dortmund |
Markert, Steven | TU Dortmund |
Gorak, Andrzej | TU Dortmund |
Engell, Sebastian | TU Dortmund |
Keywords: Process Applications, Model-based Control, Process and Control Monitoring
Abstract: The challenges faced by the chemical process industry, namely tighter environmental and safety constraints, the need for higher economic efficiency, and operation in a more dynamic environment, motivate the utilization of optimizing control, where economic policies are integrated into an (often nonlinear) model predictive control scheme. This so-called one-layer approach or dynamic real-time optimization (D-RTO) has the advantage that the processes are dynamically steered towards the most profitable region. High-fidelity dynamic process models are a basic prerequisite for good controller performance, and building such models is a challenge. Using highly complex models may also lead to long computation times and thus feedback delays. These issues are in practice avoided by applying only steady-state optimization based on nonlinear models (RTO) and/or by using simplified models in MPC. However, the development of computational methods that are able to solve large-scale dynamic optimization problems efficiently has paved the way for applications of economics optimizing control to complex chemical processes. In this contribution, we demonstrate that a real, complex pilot-scale chemical process, a two-step transesterification realized by reactive distillation that is described by a large DAE model, can be operated at the economic optimum by using direct optimizing control. We discuss the problem formulation and the numerical methods used, and show experimental data that were obtained at the real process.
|
|
11:20-11:40, Paper WeM2.5 | |
>Validation of a Hydrogen Network RTO Application for Decision Support of Refinery Operators |
Galan, Anibal | Univ. of Valladolid |
de Prada, Cesar | Univ. of Valladolid |
Sarabia, Daniel | Univ. of Burgos |
Gutierrez, Gloria | Univ. of Valladolid |
González, Rafael | Petronor |
Sola, Mikel | Petronor |
Marmol, Sergio | Petronor |
Keywords: Optimization and Scheduling, Process Applications, Model-based Control
Abstract: The validation process of a real-time optimiser (RTO) of a refinery hydrogen network is studied in this paper. The analysis focuses on the utility of the RTO for operators’ decision support, given process and equipment uncertainties such as the actual hydrogen demand and hydrocarbon (HC) loads. The validation is underpinned by the analysis of shortlisted key network variables, comparing actual reconciled (REC) and RTO figures. This methodology is applied to a high hydrogen demand scenario (period 1) and a low hydrogen demand scenario (period 2). The RTO showed better solutions than REC for both periods. However, the gap between RTO and REC was larger at lower hydrogen demands, due to better usage of the hydrogen purification membranes by the RTO than by the operators. Additionally, other important information for operators was provided by the RTO, such as optimal HC loads, minimal gas purges and optimal hydrogen production. Hence, the application of the RTO for aiding operators’ decisions was successfully validated. Nonetheless, some challenging limitations appeared and are discussed, namely a more sensible account of the low-purity header lower bound and the incorporation of Lagrange multipliers into the analysis. These improvements may guide future work on the subject.
|
|
11:40-12:00, Paper WeM2.6 | |
>On an Aspect of Implementing Real-Time Optimization: Establishing the Suspending and Activating Conditions Incorporating Process Monitoring |
Ye, Lingjian | Ningbo Inst. of Tech. Zhejiang Univ |
Shen, Feifan | Department of Information Science and Engineering, Ningbo Inst |
Ge, Zhiqiang | Zhejiang Univ |
Song, Zhi-Huan | Zhejiang Univ |
Keywords: Optimization and Scheduling, Model-based Control, Process and Control Monitoring
Abstract: For a large class of real-time optimization (RTO) schemes in which online experimental gradients are evaluated for convergence to the plant optimum, the input signals are sufficiently excited in a noisy environment. Furthermore, the evaluations are typically persistent even after convergence is attained, in order to handle varying operating conditions caused by disturbances. The unsettled operation around the optimum leads to oscillations and extra economic loss. In this paper, we propose a strategy that establishes suspending and activating conditions for RTO schemes. The conditions are developed based on process monitoring methods, which can detect operating-condition changes in a passive way. Using these conditions, the RTO implementation can be suspended upon convergence and restarted to approach the new optimum when the operating condition changes. The Williams-Otto reactor is studied to show the usefulness of the new idea.
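A minimal sketch of such a suspend/activate gate, assuming a PCA-based Hotelling T² statistic as the monitoring method and a simple chi-squared control limit; the paper's monitoring index and thresholds may well differ:

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    X_conv = rng.standard_normal((200, 8))        # hypothetical data collected after RTO convergence

    pca = PCA(n_components=3).fit(X_conv)
    limit = chi2.ppf(0.99, df=3)                  # approximate 99% control limit on T^2

    def t2(x):
        t = pca.transform(x.reshape(1, -1)).ravel()
        return float(np.sum(t**2 / pca.explained_variance_))

    def should_run_rto(x_new):
        # Suspended while operation stays inside the converged region;
        # reactivated when the statistic signals an operating-condition change.
        return t2(x_new) > limit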
|
|
WeM3 |
6F-8 |
CO2 Capture, Storage and Management |
Invited Session |
Chair: Ricardez-Sandoval, Luis Alberto | Univ. of Waterloo |
Co-Chair: Lee, Jay H. | KAIST |
Organizer: Ricardez-Sandoval, Luis Alberto | Univ. of Waterloo |
Organizer: Lee, Jay H. | KAIST |
|
10:00-10:20, Paper WeM3.1 | |
>Design and Sustainability Analysis of a Combined CO2 Mineralization and Desalination Process (I) |
Oh, Jae woo | Korea Advanced Inst. of Science and Tech |
Jung, Da bin | LG Chem |
Oh, Seung Hwan | KAIST |
Roh, Kosan | KAIST |
Chung, Jane | KAIST |
Han, Jong-In | KAIST |
Lee, Jay H. | KAIST |
Keywords: Modeling and Identification, Optimization and Scheduling, Process Applications
Abstract: CO2 mineralization sequesters CO2 in the form of mineral carbonate through chemical reactions of CO2 with metal oxides or alkaline solution. This process is attractive because it poses no risk of leakage of hazardous materials and requires a relatively small area for sequestering CO2 compared to geological storage. In addition, the generated mineral carbonate can be used as a useful chemical if its purity is high enough. One of the recent ideas in CO2 mineralization is integrating it with desalination. The mineral ions needed for CO2 mineralization are separated from seawater, and the resulting deionized seawater is used as a feed to a desalination process such as reverse osmosis (RO) to reduce its electric energy load. The goal of this study is to design the proposed process and examine its sustainability. The overall process is designed and simulated using Aspen Plus® and Matlab. Based on the simulation results, techno-economic analysis (TEA) and CO2 life-cycle assessment (CO2 LCA) are conducted to verify the sustainability. In order to identify the improvement potential of the process, a best-case scenario study is performed. It turned out that this process can achieve about 230 tonne CO2 reduction/yr as well as a relative economic benefit of almost 1 million /yr compared to the benchmark processes, the stand-alone Solvay process and RO. From the best-case scenario result, the proposed process has enough potential to reduce CO2 emissions while generating economic benefit.
|
|
10:20-10:40, Paper WeM3.2 | |
>Dynamic Modeling and Analysis of Amine-Based Carbon Capture Systems (I) |
Jung, Howoun | KAIST |
Im, Dasom | Korea Advanced Inst. of Science and Tech |
Kim, Sun Hyung | Korea Inst. of Energy Res |
Lee, Jay H. | KAIST |
Keywords: Modeling and Identification, Model-based Control
Abstract: Carbon capture technologies are being widely studied to curb the rising trend in the atmospheric concentration of CO2 that causes global warming. Post-combustion carbon capture using amine solvents is one of the mature technologies that can be deployed to existing power plants. Chemical absorption based on an amine solvent has a fast reaction rate and gives a high capacity to capture CO2. However, a large amount of energy is needed to regenerate the CO2-rich solvent after the absorption. Flexible operation with a properly chosen control strategy is a way to alleviate this problem, and developing a simple yet accurate dynamic model is key to finding stable operating conditions while maximizing the flexibility of the process. In this research, a chemical absorption process model based on the most widely used amine solvent, monoethanolamine (MEA), is developed using the commercial software gPROMS. The Kent-Eisenberg model and a rigorous rate-based approach are used to develop a dynamic column model. The process model is simulated and the results are compared with experimental data in the literature. The developed model is consistent with the experimental data to within about 10% error in rich loading and capture rate. The model was used to compare two control strategies. As a result, the control strategy that controls the CO2 capture rate with the lean solvent flow showed a faster settling time than the one using the regeneration heat.
|
|
10:40-11:00, Paper WeM3.3 | |
>A Multi-Scale Model for CO2 Capture: A Nickel-Based Oxygen Carrier in Chemical-Looping Combustion (I) |
You, Huabei | Univ. of Waterloo |
Yuan, Yue | Univ. of Waterloo |
Li, Jingde | Univ. of Waterloo |
Ricardez-Sandoval, Luis Alberto | Univ. of Waterloo |
Keywords: Modeling and Identification, Optimization and Scheduling, Energy Processes and Control
Abstract: In this work, we present a multi-scale modelling framework for a Ni-based oxygen carrier (OC) particle that can explicitly account for the complex reaction mechanism taking place on the contacting surface between the gas and solid reactants in Chemical Looping Combustion (CLC). This multi-scale framework consists of a gas diffusion model and a surface reaction model. Continuum equations are used to describe the gas diffusion inside OC particles, whereas a mean-field approximation method is adopted to simulate the micro-scale events, such as molecule adsorption and elementary reactions, occurring on the contacting surface. A pure CO stream is employed as the fuel gas, whereas NiO is used as the metal oxide because it is one of the most widely used materials in laboratory and pilot-scale plants. Rate constants for the micro-scale events considered in the present work were obtained from a systematic Density Functional Theory (DFT) analysis, which provides reasonable elementary reaction kinetics and lays a solid foundation for the multi-scale calculations. A sensitivity analysis on the intra-particle pore size and the adsorption rate constant was conducted to assess the mass transport effects in the porous particle. The proposed multi-scale model shows reasonable tendencies and responses to changes in key modelling parameters.
|
|
11:00-11:20, Paper WeM3.4 | |
>Economic NMPC Strategies for Solid Sorbent-Based CO2 Capture (I) |
Yu, Mingzhao | Carnegie Mellon Univ |
Biegler, Lorenz T. | Carnegie Mellon Univ |
Keywords: Energy Processes and Control, Model-based Control, Process Applications
Abstract: Nonlinear Model Predictive Control (NMPC) enables the incorporation of detailed dynamic process models for nonlinear, multivariable control with constraints. This optimization-based framework also leads to on-line dynamic optimization with performance-based and so-called economic objectives. Nevertheless, economic NMPC (eNMPC) still requires careful formulation of the nonlinear programming (NLP) subproblem to guarantee stability. In this study, we derive a novel reduced regularization approach for eNMPC with a stability guarantee. The resulting eNMPC framework is applied to a challenging nonlinear CO2 capture model, where bubbling fluidized bed models comprise a solid-sorbent postcombustion carbon capture system. Our results indicate the benefits of this improved eNMPC approach over setpoint tracking, as well as better stability than eNMPC without regularization.
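In generic form, a regularized economic NMPC subproblem augments the economic stage cost with a tracking-type penalty around the optimal steady state (x_s, u_s); the reduced regularization derived in the paper restricts this penalty, but the plain form reads:

    \min_{u_0,\dots,u_{N-1}} \; \sum_{i=0}^{N-1} \Big[ \ell_{\mathrm{econ}}(x_i,u_i) + \tfrac{\rho}{2}\|x_i - x_s\|^2 + \tfrac{\rho}{2}\|u_i - u_s\|^2 \Big] \quad \text{s.t.} \quad x_{i+1} = f(x_i,u_i), \;\; x_0 = \hat x_k, \;\; (x_i,u_i) \in \mathcal{Z},

where \rho > 0 is the regularization weight and \hat x_k the current state estimate; this is a generic illustration rather than the paper's exact formulation.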
|
|
11:20-11:40, Paper WeM3.5 | |
>Economic Model Predictive Control of an Absorber-Stripper CO2 Capture Process for Improving Energy Cost |
Chan, Lester Lik Teck | Chung-Yuan Christian Univ |
Chen, Junghui | Chung-Yuan Christian Univ |
Keywords: Process Applications, Energy Processes and Control, Model-based Control
Abstract: Carbon dioxide (CO2) is the major greenhouse gas, and its capture and recovery are key to the effective reduction of CO2 emissions. Optimization of CO2 capture plays a critical role in the reduction of energy cost. The CO2 concentration in the plant varies with time, and a dynamic study of the economic optimization reflects the true cost better than the current strategy of steady-state optimization. Economic model predictive control (EMPC), which combines real-time economic process optimization and feedback control, is applied to the optimization of the CO2 capture process. The large energy requirement for solvent regeneration is optimized in a dynamic setting. Unlike the conventional steady-state treatment of the economic optimization, the proposed method allows the cost to be adjusted to changing conditions such as feed composition and utility cost. Case studies are then presented to show the benefits of the EMPC optimization for the CO2 capture process.
|
|
11:40-12:00, Paper WeM3.6 | |
Simulation of the Dynamics and Control Responses of the Carbon Dioxide Chemical Absorption Process Using Aspen Custom Modeler |
Liu, Yen-Chun | Industrial Tech. Res. Inst. |
Chang, Rey-Yue | Industrial Tech. Res. Inst. |
Shen, Ming-Tien | Tamkang Univ. |
Chen, Yih-Hang | Tamkang Univ. Dept. of chemical and materials engineering |
Chang, Hsuan | Tamkang Univ. |
|
WeA1 |
5F-XinXi Palace A |
Data Analytics and Machine Learning II |
Invited Session |
Chair: Tulsyan, Aditya | Massachusetts Inst. of Tech |
Co-Chair: Cinar, Ali | Illinois Inst. of Tech |
Organizer: Gopaluni, Bhushan | Univ. of British Columbia |
Organizer: Tulsyan, Aditya | Massachusetts Inst. of Tech |
Organizer: Chiang, Leo | The Dow Chemical Company |
|
13:00-13:20, Paper WeA1.1 | |
>Image-Based Process Monitoring Using Deep Belief Networks (I) |
Lyu, Yuting | Zhejiang Univ |
Chen, Junghui | Chung-Yuan Christian Univ |
Song, Zhi-Huan | Zhejiang Univ |
Keywords: Process Applications, Process and Control Monitoring, Energy Processes and Control
Abstract: With the advances in optical sensing and image capture systems, process images offer new perspectives for process monitoring. Compared to the process data collected by traditional sensors at local regions, process images, which can capture more significant variations over the whole space, enhance the performance of data-driven monitoring methods. In this paper, a popular deep learning method, namely the deep belief network (DBN), is applied to effectively extract useful features from the images. Meanwhile, a new statistic is developed for the DBN model, which integrates feature extraction and fault detection into one model rather than accomplishing them separately. The effectiveness of the proposed DBN-based monitoring method is demonstrated on a real combustion system.
|
|
13:20-13:40, Paper WeA1.2 | |
>Product Attribute Forecast: Adaptive Model Selection Using Real-Time Machine Learning (I) |
Bayrak, Elif Seyma | Amgen Inc |
Wang, Tony | Amgen Inc |
Tulsyan, Aditya | Massachusetts Inst. of Tech |
Coufal, Myra | Amgen |
Undey, Cenk | Amgen Inc |
Keywords: Modeling and Identification, Batch Process Modeling and Control, Process Applications
Abstract: A real-time machine learning framework is developed to forecast the product concentration in mammalian cell culture bioreactors. In real time, the framework evaluates several machine learning algorithms and chooses the most representative algorithm based on the current dynamics of the system. Data from multiple sources are combined, and only a subset of features is fed to the model based on a pre-selection criterion. The model performance is tested using two small-scale bioreactor runs. The performance improved towards the end of the process with accumulating data, and results for one-day-ahead prediction are presented.
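A minimal sketch of the adaptive-selection idea, picking at each forecast time the learner with the smallest recent one-step-ahead error; the candidate models, window length and function names are illustrative assumptions, not the framework described in the paper:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.ensemble import RandomForestRegressor

    candidates = {"ridge": Ridge(alpha=1.0),
                  "forest": RandomForestRegressor(n_estimators=50, random_state=0)}

    def forecast_next(X_hist, y_hist, x_now, window=10):
        """Fit every candidate on the accumulated run data, score it on the last
        `window` points, then forecast with the currently best-performing model."""
        errors = {}
        for name, model in candidates.items():
            model.fit(X_hist[:-window], y_hist[:-window])
            pred = model.predict(X_hist[-window:])
            errors[name] = np.mean(np.abs(pred - y_hist[-window:]))
        best = min(errors, key=errors.get)
        candidates[best].fit(X_hist, y_hist)      # refit on all data before forecasting
        return best, candidates[best].predict(x_now.reshape(1, -1))[0]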
|
|
13:40-14:00, Paper WeA1.3 | |
>Machine-Learning for Biopharmaceutical Batch Process Monitoring with Limited Data (I) |
Tulsyan, Aditya | Massachusetts Inst. of Tech |
Garvin, Christopher | Amgen Inc |
Undey, Cenk | Amgen Inc |
Keywords: Batch Process Modeling and Control, Big Data Analytics and Monitoring, Modeling and Identification
Abstract: Commercial biopharmaceutical manufacturing comprises multiple distinct processing steps that require effective and efficient monitoring of many variables simultaneously in real-time. This article addresses the problem of real-time statistical batch process monitoring (BPM) for biopharmaceutical processes with limited production history, herein referred to as the ‘Low-N’ problem. We propose an approach to transition from a Low-N scenario to a Large-N scenario by generating an arbitrarily large number of in silico batch datasets. The proposed method is a combination of hardware exploitation and algorithm development. To this effect, we propose a Bayesian nonparametric approach to model a batch process, and then use probabilistic programming to generate an arbitrarily large number of dynamic in silico campaign data sets. The efficacy of the proposed solution is elucidated on an industrial process.
|
|
14:00-14:20, Paper WeA1.4 | |
>Automated System Identification in Mineral Processing Industries: A Case Study Using the Zinc Flotation Cell (I) |
Shardt, Yuri | Tech. Univ. of Ilmenau |
Brooks, Kevin Seth | BluESP |
Keywords: Big Data Analytics and Monitoring, Modeling and Identification, Model-based Control
Abstract: In many industries, including the mineral processing industry, process modelling can be improved by mining the data historian. However, the data in the historian is often contaminated with missing values, unknown operating conditions, and other imperfections. Furthermore, manual segmentation of the data is difficult due to the large number of data points and variables. Thus, there is a need to develop and implement methods that can automatically segment the data set into viable components for identification purposes. One approach uses Laguerre models to segment the data set. However, when used in a multivariate situation, such as in the zinc flotation cell, various issues, such as collinearity, arise. Therefore, the data segmentation algorithm needs to take this into consideration when examining a data set. Using the zinc flotation cell, it is shown that for the multivariate case preselecting the data variables to consider improves the data segmentation.
|
|
14:20-14:40, Paper WeA1.5 | |
>Hybrid Online Multi-Sensor Error Detection and Functional Redundancy for Artificial Pancreas Control Systems (I) |
Feng, Jianyuan | Illinois Inst. of Tech |
Hajizadeh, Iman | Illinois Inst. of Tech |
Samadi, Sediqeh | Illiinios Inst. of Tech |
Sevil, Mert | Illinois Inst. of Tech |
Hobbs, Nicole | Illinois Inst. of Tech |
Brandt, Rachel | Illinois Inst. of Tech |
Lazaro, Caterina | Illinois Inst. of Tech |
Maloney, Zacharie | Illinois Inst. of Tech |
Yu, Xia | Coll. of Information Science and Engineering, Northeastern Univ
Littlejohn, Elizabeth | Univ. of Chicago |
Quinn, Lauretta | Univ. of Illinois at Chicago |
Cinar, Ali | Illinois Inst. of Tech |
Keywords: Big Data Analytics and Monitoring, Modeling and Identification, Process Applications
Abstract: Sensor errors limit the performance of a supervision and control system. Sensor accuracy can be affected by many factors, such as extreme working conditions, sensor deterioration and interference from other devices. It may be difficult to distinguish sensor errors from real dynamic changes in a system. A hybrid online multi-sensor error detection and functional redundancy (HOMSED&FR) algorithm is developed to monitor the performance of multiple sensors and reconcile erroneous sensor signals. The algorithm relies on two methods: an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model. The two methods use the data in different ways: the ORKF compares current signal samples with the signal trace indicated by previous samples, while LW-PLS compares samples in the past window with samples from a database and uses the most similar samples to build a model that predicts the current signal values. The performance of this system is illustrated with a clinical case involving artificial pancreas experiments, which include data from a continuous glucose monitoring (CGM) sensor, and energy expenditure (EE) and galvanic skin response (GSR) information based on wearable sensors that collect data from people with type 1 diabetes. The results indicate that the proposed method can successfully detect most of the erroneous signals and substitute them with reasonably estimated values computed by the functional redundancy system.
|
|
14:40-15:00, Paper WeA1.6 | |
>Map-Reduce Decentralized PCA for Big Data Modeling and Diagnosis of Faults in High-Speed Train Bearings (I) |
Liu, Qiang | Northeastern Univ |
Kong, Dezhi | Northeastern Univ
Qin, S. Joe | Univ. of Southern California |
Xu, Quan | State Key Lab. of Synthetical Automation for Process Indus |
Keywords: Big Data Analytics and Monitoring
Abstract: Real-time fault detection and diagnosis of high-speed trains is essential for operation safety. Traditional methods mainly employ rule-based alarms to detect faults when a single measured variable deviates too far from its expected range, with multivariate data correlations ignored. In this paper, a Map-Reduce decentralized PCA algorithm and its dynamic extension are proposed to deal with the large amount of data collected from high-speed trains. In addition, the Map-Reduce algorithm is implemented on a Hadoop-based big data platform. The experimental results using real high-speed train operation data demonstrate the advantages and effectiveness of the proposed methods for five faulty cases.
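The enabling observation behind a map-reduce PCA is that the covariance matrix can be assembled from per-partition sufficient statistics; a minimal single-machine sketch with numpy is given below (the Hadoop job structure is omitted, and the partition sizes are illustrative):

    import numpy as np
    from functools import reduce

    def map_stats(X_part):
        # Per-partition sufficient statistics: count, sum, and cross-product matrix.
        return len(X_part), X_part.sum(axis=0), X_part.T @ X_part

    def reduce_stats(a, b):
        return a[0] + b[0], a[1] + b[1], a[2] + b[2]

    def decentralized_pca(partitions, n_components):
        n, s, xtx = reduce(reduce_stats, map(map_stats, partitions))
        mean = s / n
        cov = xtx / (n - 1) - np.outer(mean, mean) * n / (n - 1)
        eigval, eigvec = np.linalg.eigh(cov)
        order = np.argsort(eigval)[::-1][:n_components]
        return eigvec[:, order], eigval[order]     # loadings and component variances

    rng = np.random.default_rng(2)
    parts = [rng.standard_normal((1000, 6)) for _ in range(4)]   # hypothetical data chunks
    P, lam = decentralized_pca(parts, n_components=2)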
|
|
WeA2 |
5F-XinXi Palace B |
Planning and Scheduling |
Regular Session |
Chair: Shang, Chao | Cornell Univ
Co-Chair: El-Farra, Nael H. | Univ. of California, Davis |
|
13:00-13:20, Paper WeA2.1 | |
>Data-Driven Process Network Planning: A Distributionally Robust Optimization Approach |
Shang, Chao | Cornell Univ |
You, Fengqi | Cornell Univ |
Keywords: Optimization and Scheduling
Abstract: Process network planning is an important and challenging task in process systems engineering. Due to the penetration of uncertainties such as random demands and market prices, stochastic programming and robust optimization have been extensively used in process network planning for better protection against uncertainties. However, both methods fall short of addressing the ambiguity of probability distributions, which is quite common in practice. In this work, we apply distributionally robust optimization to handle the inexactness of the probability distributions of uncertain demands in process network planning problems. By extracting useful information from historical data, ambiguity sets can be readily constructed, which seamlessly integrate statistical information into the optimization model. To account for the sequential decision-making structure in process network planning, we further develop multi-stage distributionally robust optimization models and adopt affine decision rules to address the computational issue. Finally, the optimization problem can be recast as a mixed-integer linear program. Applications in industrial-scale process network planning demonstrate that the proposed distributionally robust optimization approach can better hedge against distributional ambiguity and yield rational long-term decisions by effectively utilizing demand data information.
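In generic form, the distributionally robust counterpart optimizes against the worst distribution in a data-built ambiguity set; a moment-based set is shown purely for illustration and is not necessarily the construction used in the paper:

    \min_{x \in X} \; \sup_{\mathbb{P} \in \mathcal{D}} \; \mathbb{E}_{\mathbb{P}}[\, c(x,\xi) \,], \qquad \mathcal{D} = \{\, \mathbb{P} : \mathbb{E}_{\mathbb{P}}[\xi] = \hat\mu, \;\; \mathbb{E}_{\mathbb{P}}[(\xi-\hat\mu)(\xi-\hat\mu)^{T}] \preceq \hat\Sigma \,\},

with \hat\mu and \hat\Sigma estimated from historical demand data; restricting the recourse decisions to affine decision rules u(\xi) = u_0 + U\xi then yields a tractable reformulation.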
|
|
13:20-13:40, Paper WeA2.2 | |
>An Integrated Personnel Allocation and Machine Scheduling Problem for Industrial Size Multipurpose Plants |
Santos, Fernando | Univ. of Waterloo |
Fukasawa, Ricardo | Univ. of Waterloo |
Ricardez-Sandoval, Luis Alberto | Univ. of Waterloo |
Keywords: Optimization and Scheduling
Abstract: This paper describes the development and implementation of an optimization model to solve the integrated problem of personnel allocation and machine scheduling for industrial-size multipurpose plants. Although each of these problems has been extensively studied separately, works that study an integrated approach are very limited, particularly for large-scale industrial applications. We present a mathematical formulation for the integrated problem and show the results obtained from solving large-size instances from an analytical services facility. The integrated formulation can improve the results by up to 22.1% compared to the case where the personnel allocation and the machine scheduling problems are solved sequentially.
|
|
13:40-14:00, Paper WeA2.3 | |
>New Multi-Commodity Flow Formulations for the Generalized Pooling Problem |
Cheng, Xin | Queen's Univ |
Tang, Kai | Queen's Univ |
Li, Xiang | Queen's Univ |
Keywords: Optimization and Scheduling
Abstract: The generalized pooling problem is involved in many planning and scheduling problems in the petrochemical industry. Compared to the standard pooling problem, where the blenders (or pools) are not allowed to be connected to one another, the generalized pooling problem has a more complex network structure and allows more types of problem formulations. The state-of-the-art generalized pooling formulations adopt a multi-commodity flow (MCF) strategy that was first proposed by Alfaki and Haugland (2013) and proved to be stronger than the classical p-formulation. This paper proposes two new MCF formulations for the generalized pooling problem, using mixing and split fractions of blenders rather than the commodity flow fractions. The case study results show that, for some cases, the proposed formulations perform better than the existing MCF formulations, but none of the formulations dominates the others for all cases. The results also show that formulations which have similar sizes and similarly tight linear programming relaxations may have dramatically different performance.
|
|
14:00-14:20, Paper WeA2.4 | |
>Application of Constrained Multi-Objective Evolutionary Algorithm in Multi-Source Compressed-Air Pipeline Optimization Problems |
Yang, Yongkuan | Northeastern Univ |
Liu, Jianchang | P.O. Box 135, Northeastern Univ
Tan, Shubin | Coll. of Information Science and Engineering, Northeastern Univ
Wang, Honghai | Coll. of Information Science and Engineering, Northeastern Univ
Keywords: Optimization and Scheduling, Process Applications
Abstract: To meet manufacturing demand, several compressor stations usually run at the same time. Decreasing the output pressure of a compressor station is one of the major methods to reduce the power used by the motors of the compressors. Due to the interaction of several compressor stations with each other, how to set the output pressure of each compressor station becomes a significant problem. This paper formulates the Constrained Multi-objective Optimization of Multi-Source Compressed-air Pipeline Optimization Problems (CMO-MSCPOPs) in compressed-air transmission networks of process industries. The problem formulation involves the minimization of the output pressure of each compressor station. Constraints associated with the compressed-air flow rate and the compressor stations guarantee the operation of each downstream process. In the case studies, the model is divided into two topology forms. The optimization of the model is performed using NSGA-II. The solution obtained is a set of Pareto solutions, from which a decision-making process is highlighted to select a specific preferred solution. To illustrate the performance of the proposed approach, the tool is applied to two typical network examples considering two compressor stations.
|
|
14:20-14:40, Paper WeA2.5 | |
>A Scheduling Method Based on NSGA2 for Steelmaking and Continuous Casting Production Process |
Li, Qing | Qingdao Univ. of Science and Tech |
Wang, Xiuying | Qingdao Univ. of Science and Tech |
Zhang, Xiaofeng | Qingdao Univ. of Science and Tech |
Keywords: Optimization and Scheduling, Process Applications
Abstract: In this paper, a new production scheduling method based on the non-dominated sorting genetic algorithm with elite strategy (NSGA2) is proposed for the complex steelmaking and continuous casting production process, which consists of multiple refining routes. First, a multi-objective optimization scheduling model is established according to the production process and the schedule requirements. NSGA2 is then used to solve the scheduling model. The solving phase yields multiple Pareto non-inferior solutions, and the decision maker may select one of them based on personal preference, although it may not be the best choice. To address this problem, an optimal decision-making method is put forward, combining the fuzzy membership degree and variance weighting. Simulation experiments are carried out with actual industrial production data, which show that the proposed method is practical for industrial production.
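A commonly used fuzzy-membership rule for selecting a compromise solution from the Pareto set is shown below for orientation; whether the paper applies exactly this normalization before its variance weighting is an assumption:

    \mu_i^{k} = \frac{f_i^{\max} - f_i(x^{k})}{f_i^{\max} - f_i^{\min}}, \qquad \mu^{k} = \frac{\sum_{i=1}^{M}\mu_i^{k}}{\sum_{j=1}^{K}\sum_{i=1}^{M}\mu_i^{j}},

where M is the number of objectives, K is the number of Pareto solutions, and the solution with the largest normalized membership \mu^{k} is selected.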
|
|
14:40-15:00, Paper WeA2.6 | |
>A Demand Response Strategy for Continuous Processes Using Dynamic Programming Approach |
Liu, Yu | Univ. of California, Davis |
Lavoie, David | Univ. of California, Davis |
Palazoglu, Ahmet | Univ. of California at Davis |
El-Farra, Nael H. | Univ. of California, Davis |
Keywords: Optimization and Scheduling
Abstract: This work considers the problem of optimizing the energy management in the operation of dynamic continuous processes subject to demand response objectives. A dynamic programming approach that aims to lower the operating cost is developed and presented. The dynamics of process transition between operating modes, as well as the time-sensitive energy profiles, are incorporated into the optimization formulation. The merits of the proposed approach are demonstrated using a jacketed continuous stirred tank reactor where the energy required is assumed to be proportional to the material flow.
|
|
WeA3 |
6F-8 |
Systems Modeling, Analysis and Optimization of Biological Processes |
Invited Session |
Chair: Yue, Hong | Univ. of Strathclyde |
Co-Chair: Findeisen, Rolf | Otto-Von-Guericke-Univ. Magdeburg |
Organizer: Gunawan, Rudiyanto | ETH Zurich |
|
13:00-13:20, Paper WeA3.1 | |
Modelling of the GAL1 Genetic Circuit in Yeast Using Three Equations (I) |
Hsu, Chi-Ching | National Cheng Kung Univ |
Wu, Yu-Heng | National Cheng Kung Univ |
Menolascina, Filippo | Univ. of Edinburgh |
Nordling, Torbjörn E.M. | National Cheng Kung Univ |
|
13:20-13:40, Paper WeA3.2 | |
>Parameter Estimation for Signal Transduction Networks from Experimental Time Series Using Picard Iteration (I) |
von Haeseler, Friedrich | Otto-Von-Guericke Univ. Magdeburg |
Rudolph, Nadine | Otto-Von-Guericke Univ. Magdeburg |
Findeisen, Rolf | Otto-Von-Guericke-Univ. Magdeburg |
Huber, Heinrich Johann | Otto Von Guericke Univ. Magdeburg |
Keywords: Modeling and Identification
Abstract: Biological signal transduction models make it possible to explain and analyze biological cause-effect relationships and to establish and test new hypotheses about biological pathways. Yet their predictive capability crucially depends on the parameters involved. These parameters are usually determined from experimental data. However, due to the nonlinearities that appear, the resulting inverse problem is often ill-posed and difficult to solve. We outline how parameters can be estimated based on Picard iterations. In the case of linear parameter dependence and good measurements of the involved entities, the method allows good parameter estimates to be retrieved for medium-size problems. The proposed method is applied to an IL-6-dependent Jak-STAT3 signalling pathway model. As shown, it is well suited for data generated by live cell imaging, where accurate time series are available.
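Schematically, the Picard iteration for the state trajectory reads:

    x^{(m+1)}(t) = x(t_0) + \int_{t_0}^{t} f\big(x^{(m)}(\tau),\,\theta\big)\, d\tau ;

when f depends linearly on \theta and the state time series is measured, substituting the data on the right-hand side turns this integral relation into a linear least-squares problem for \theta (a generic sketch of the idea, not the paper's exact algorithm).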
|
|
13:40-14:00, Paper WeA3.3 | |
>Model-Based State Estimation Based on Hybrid Cybernetic Models (I) |
Carius, Lisa | Otto Von Guericke Univ. Magdeburg |
Pohlodek, Johannes | Otto Von Guericke Univ. Magdeburg |
Morabito, Bruno | Univ. of Pisa |
Mangold, Michael | Max Planck Inst |
Findeisen, Rolf | Otto-Von-Guericke-Univ. Magdeburg |
Kienle, Achim | Univ. Magdeburg |
Keywords: Modeling and Identification, Batch Process Modeling and Control
Abstract: Biotechnological processes still represent a challenge for process optimization and automation, as the data landscape consists of unavailable, inaccurate, delayed or missing measurement information. As a first step towards the automation of biotechnological processes, methods have to be refined for estimating the unknown states with an acceptable precision, using a mathematical model of the system. Due to technological advances, knowledge and computational power are constantly increasing, so that models of higher complexity and predictive quality are now available. Hybrid cybernetic models offer a flexible, yet detailed description of the biotechnological process under consideration. They connect the nonlinear system dynamics to the metabolic information of the organism and allow cell-internal regulation to be considered. In this work we explore whether this class of models can be successfully applied for real-time process monitoring. We do this by evaluating the performance of two commonly used state estimators, an unscented Kalman filter and a moving horizon estimator, which both use a hybrid cybernetic model to observe the nonlinear process of poly-β-hydroxybutyrate production in the organism Cupriavidus necator. To our knowledge this is the first time that this class of models has been used for model-based process observation.
|
|
14:00-14:20, Paper WeA3.4 | |
>Dynamic Modeling of Enzyme Controlled Metabolic Networks Using a Receding Time Horizon (I) |
Lindhorst, Henning | Otto-Von-Guericke-Univ. Magdeburg |
Reimers, Alexandra-M. | Freie Univ. Berlin |
Waldherr, Steffen | KU Leuven |
Keywords: Model-based Control, Optimization and Scheduling
Abstract: Microorganisms have developed complex regulatory features controlling their reaction and internal adaptation to changing environments. When modeling these organisms we usually do not have full understanding of the regulation and rely on substituting it with an optimization problem using a biologically reasonable objective function. The resulting constraint-based methods like the Flux Balance Analysis (FBA) and Resource Balance Analysis (RBA) have proven to be powerful tools to predict growth rates, by-products, and pathway usage for fixed environments. In this work, we focus on the dynamic enzyme-cost Flux Balance Analysis (deFBA), which models the environment, biomass products, and their composition dynamically and contains reaction rate constraints based on enzyme capacity. We extend the original deFBA formalism to include storage molecules and biomass-related maintenance costs. Furthermore, we present a novel usage of the receding prediction horizon as used in Model Predictive Control (MPC) in the deFBA framework, which we call the short-term deFBA (sdeFBA). This way we eliminate some mathematical artifacts arising from the formulation as an optimization problem and gain access to new applications in MPC schemes. A major contribution of this paper is a systematic approach for choosing the prediction horizon and identifying conditions to ensure solutions grow exponentially. We showcase the effects of using the sdeFBA with different horizons through a numerical example.
|
|
14:20-14:40, Paper WeA3.5 | |
>A Bilevel Programming Approach to Optimize C-Phycocyanin Bio-Production under Uncertainty |
Zhang, Dongda | Imperial Coll. London |
del Rio-Chanona, Ehecatl Antonio | Imperial Coll. London |
Keywords: Model-based Control, Process Applications
Abstract: High variability and unreliable expectations on product yields substantially hinder the industrialization of microorganism derived biochemicals as they present a risk to the profitability and safety of the underlying systems. Therefore, in this work, we propose an optimization approach to determine the lower and upper product yield expectations for the sustainable production of C-phycocyanin. Kinetic modeling is adopted in this study as it is an outstanding tool for fast prototyping, prediction and optimization of chemical and biochemical processes. On the upside, parameters in bioprocess kinetic models are used as a simplification of complex metabolic networks to enable the simulation, design and control of the process. On the downside, this conglomeration of parameters may result in significant model uncertainty. To address this shortcoming, we formulate a bilevel max-min optimization problem to obtain the worst-case scenario of our system given the uncertainty on the model parameters. By constructing parameter confidence ellipsoids, we determined the feasible region along which the parameters can minimize the system’s performance, while nutrient and light controls are used to maximize the biorenewable production. The inner minimization problem is embedded by means of the optimality conditions into the upper maximization problem and hence both are solved simultaneously. Through this approach, we determined pessimistic and optimistic scenarios for the bioproduction of C-phycocyanin and hence compute reliable expectations on the yield and profit of the process.
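The worst-case design problem has the generic bilevel max-min form below, with the inner problem searching a parameter confidence ellipsoid (the notation is illustrative, not the paper's):

    \max_{u \in U} \; \min_{\theta \in \Theta} \; J(u,\theta), \qquad \Theta = \{\, \theta : (\theta - \hat\theta)^{T} V_{\theta}^{-1} (\theta - \hat\theta) \le \chi^{2}_{n_\theta,\,1-\alpha} \,\},

where \hat\theta and V_\theta are the estimated parameters and their covariance; replacing the inner minimization by its optimality conditions yields a single-level problem that can be solved for the pessimistic (and, with a max in the inner problem, the optimistic) yield expectation.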
|
|
14:40-15:00, Paper WeA3.6 | |
>Robust Sampling Time Design for Biochemical Systems |
Yu, Hui | Univ. of Strathclyde |
Yue, Hong | Univ. of Strathclyde |
Halling, Peter | Univ. of Strathclyde |
Keywords: Optimization and Scheduling, Modeling and Identification, Process Applications
Abstract: Optimal sampling time design that takes parameter uncertainties into account has rarely been considered in published research. In this work, robust experimental design (RED) for sampling time selection is investigated. The aim is to find the sampling strategy with which the experiment can provide the most informative data for improving the parameter estimation quality. With an enzyme reaction case study system, two global sensitivity analysis (GSA) approaches, the Morris screening method and Sobol's method, are first applied to find the key parameters that have a large influence on the model outputs of interest. Then three different RED methods, the worst-case strategy, the Bayesian design, and the GSA-based approach, are developed to design the optimal sampling time schedule. Simulation results suggest that, among the three RED methods, the equally spaced sampling from the Bayesian design has the best robustness towards parameter uncertainties.
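Sampling-time selection is typically posed on the Fisher information matrix accumulated over the candidate times; a generic D-optimality form, before adding the robustification over the parameters, is:

    F(\mathbf{t},\theta) = \sum_{k=1}^{N} S(t_k,\theta)^{T} \Sigma^{-1} S(t_k,\theta), \qquad \mathbf{t}^{*} = \arg\max_{\mathbf{t}} \; \log\det F(\mathbf{t},\theta),

where S(t_k,\theta) = \partial y(t_k)/\partial \theta is the output sensitivity and \Sigma the measurement noise covariance; the worst-case and Bayesian designs then replace the fixed \theta by a minimization over the uncertainty set or an expectation over its prior (a standard formulation, assumed rather than taken from the paper).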
|
|
WeP2 |
6F-WanXin Palace Lobby B |
Poster Session I |
Poster Session |
Chair: Liu, Qiang | Northeastern Univ |
Co-Chair: Yu, Shengping | Northeastern Univ |
|
15:30-17:00, Paper WeP2.1 | |
>Model-Based Fault-Tolerant Pitch Control of an Offshore Wind Turbine |
Badihi, Hamed | Concordia Univ |
Zhang, Youmin | Concordia Univ |
Rakheja, Subhash | Concordia Univ |
Pillay, Pragasen | Concordia Univ |
Keywords: Energy Processes and Control, Model-based Control, Modeling and Identification
Abstract: Given the importance of reliability and availability issues in wind turbines, this paper presents the design and development of a novel active fault-tolerant control scheme for an offshore wind turbine. The proposed scheme tolerates the effects of any possible fault that may occur in the pitch actuators of the wind turbine blades. A model-based fault detection and diagnosis system provides fault information that is accurate enough for compensation of the fault effects in the pitch control loop. The effectiveness of the proposed scheme is finally evaluated through simulations on an advanced offshore wind turbine benchmark model in the presence of wind turbulence, measurement noise, and realistic fault scenarios.
|
|
15:30-17:00, Paper WeP2.2 | |
>Detection of Blockage Degree and Removing Strategies in Microreactor |
Wang, Lin | Inner Mongolia Univ. of Tech |
Yue, Hong | Univ. of Strathclyde |
Keywords: Process Applications, Modeling and Identification
Abstract: Blockage is a common problem in microreactors, and the blockage degree directly affects the removal operation. In this work, the blockage degree is first defined as the ratio of the blocking volume to the volume of the mixing channel, based on computational fluid dynamics (CFD) models of different blockage types. After analyzing the limitations of this standard index, a new blockage index is proposed, in which the blocking volume, the cross-sectional area and the roughness of the blocking body are all taken into account. The relationship between the pressure difference and the new index is obtained through regression of the CFD data to determine the blockage degree. Meanwhile, a classification of blockage removal is also defined. The smaller the blockage index value is, the more difficult it is to remove the blockage. An inlet angle is introduced as a new design factor for choosing removal options.
|
|
15:30-17:00, Paper WeP2.4 | |
>Quality Analysis and Prediction for Start-Up Process of Injection Molding Processes |
Zou, Mingjun | Northeastern Univ |
Zhao, Luping | Northeastern Univ |
Wang, Shu | Northeastern Univ |
Chang, Yuqing | Northeastern Univ |
Wang, Fuli | Northeastern Univ |
Keywords: Batch Process Modeling and Control, Big Data Analytics and Monitoring, Process Applications
Abstract: As a typical batch process, injection molding plays an important role in industry. This work focuses on the start-up process of injection molding. Based on a detailed study of the start-up process, a phase-shift sliding-window modelling scheme is proposed in this paper for quality prediction. Firstly, during the start-up process, the characteristics change slowly over a number of batches, while the characteristics of several successive batches can be approximately the same. Therefore, it is reasonable to build a sliding window in the batch direction to cover different batches, and to establish multiple continuous models to capture the relationship between the process variables and the quality. Secondly, according to the operational characteristics of the plastication phase of the injection molding process, a hypothesis is proposed that the plastication phase of the current batch has a greater impact on the quality of the next batch than on the quality of the current batch, and this assumption is verified through simulation with experimental data. Under this assumption, a new quality prediction scheme is proposed, and the new method is verified to be more accurate than the traditional method.
|
|
15:30-17:00, Paper WeP2.5 | |
>Data-Driven Fault Prognosis Based on Incomplete Time Slice Dynamic Bayesian Network |
Zhang, Zhengdao | Jiangnan Univ |
Dong, Feilong | Jiangnan Univ |
Xie, Linbo | Jiangnan Univ |
Keywords: Process and Control Monitoring
Abstract: Based on a dynamic Bayesian network with an incomplete time slice and a mixture of Gaussian outputs, a data-driven fault prognosis method for model-unknown processes is proposed in this article. First, according to the requirements of fault prognosis, an incomplete time slice Bayesian network with an unknown future observed node is constructed. Moreover, the future states are described by the current measurements and their historical data in the form of conditional probabilities. Second, based on the complete part of the historical data, a parameter-learning algorithm is used to obtain the network parameters and the weight coefficients of the distribution components. After that, using these weight coefficients as input-output data, the subspace identification method is employed to build a forecasting model that can predict the weight coefficients at the next sampling time. To achieve fault prognosis, an inference algorithm is developed to predict hidden faults based on the distribution of the measurements directly. Furthermore, the remaining useful life of the process is estimated via iterative one-step-ahead prognosis. As an example, the proposed method is applied to a continuous stirred tank reactor system. The results demonstrate that the proposed method can efficiently predict and identify the fault, and estimate the remaining useful life of the process, even when the measurements are partly missing.
|
|
15:30-17:00, Paper WeP2.6 | |
>Research on Optimization for Safe Layout of Hazardous Materials Warehouse Based on Genetic Algorithm |
Dai, Bo | Beijing Inst. of Petrochemical Tech |
Li, Yanfei | Beijing Inst. of Petrochemical Tech |
Ren, Haisheng | Beijing Univ. of Chemical Tech |
Liu, Xuejun | Beijing Inst. of Petrochemical Tech |
Li, Cuiqing | Beijing Inst. of Petrochemical Tech |
Keywords: Optimization and Scheduling, Model-based Control
Abstract: The safe layout and optimization of hazardous materials are of great significance for warehousing safety and warehouse utilization. In this article, a mathematical model of the warehouse safety layout is established based on safe-distance rules and requirements. A layered approach is used to divide the layout problem into stacking layout optimization and hazardous chemical placement optimization. Considering the handling efficiency of the warehouse layout, a modified genetic algorithm (GA) combined with a residual rectangle algorithm is used to optimize the storage layout by setting the initial population. The channel positions are then determined from the resulting stacking layout. Experiments show that the algorithm obtains better layout results and improves warehouse utilization while satisfying the safe distances for hazardous chemical stacking, which gives it good application prospects.
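A compact sketch of the safe-distance penalty idea, under stated assumptions: rectangular stacks in a fixed-size warehouse, and a GA fitness that rewards short handling paths to a door at the origin while penalizing violations of a minimum safe distance; the residual rectangle algorithm and the channel placement of the paper are not reproduced.

```python
import random

random.seed(0)
WAREHOUSE_W, WAREHOUSE_H = 50.0, 30.0
STACKS = [(6, 4), (8, 5), (5, 5), (7, 3), (6, 6)]   # hypothetical stack footprints (width, depth)
SAFE_DIST = 2.0                                      # minimum safe distance between stacks

def random_layout():
    return [(random.uniform(0, WAREHOUSE_W - w), random.uniform(0, WAREHOUSE_H - h))
            for (w, h) in STACKS]

def gap(a, b, sa, sb):
    """Euclidean gap between two axis-aligned rectangles (0 if they overlap)."""
    dx = max(0.0, max(a[0], b[0]) - min(a[0] + sa[0], b[0] + sb[0]))
    dy = max(0.0, max(a[1], b[1]) - min(a[1] + sa[1], b[1] + sb[1]))
    return (dx ** 2 + dy ** 2) ** 0.5

def fitness(layout):
    # Handling cost: total distance from each stack centre to the door at (0, 0).
    handling = sum(((x + w / 2) ** 2 + (y + h / 2) ** 2) ** 0.5
                   for (x, y), (w, h) in zip(layout, STACKS))
    # Penalty for every pair of stacks closer than the safe distance.
    penalty = sum(max(0.0, SAFE_DIST - gap(layout[i], layout[j], STACKS[i], STACKS[j]))
                  for i in range(len(layout)) for j in range(i + 1, len(layout)))
    return -handling - 100.0 * penalty

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(layout, rate=0.2):
    return [(random.uniform(0, WAREHOUSE_W - w), random.uniform(0, WAREHOUSE_H - h))
            if random.random() < rate else pos
            for pos, (w, h) in zip(layout, STACKS)]

pop = [random_layout() for _ in range(40)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]

best = max(pop, key=fitness)
print("best fitness (negative handling cost minus safe-distance penalty):", round(fitness(best), 2))
```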
|
|
15:30-17:00, Paper WeP2.7 | |
>An Improved ABC Algorithm Based on Initial Population and Neighborhood Search |
Pian, Jinxiang | Shenyang Jianzhu Univ |
Wang, Guohui | Shenyang Jianzhu Univ |
Li, Boming | Shenyang Jianzhu Univ
Keywords: Optimization and Scheduling
Abstract: The traditional artificial bee colony (ABC) algorithm suffers from insufficient population diversity and from a strong exploration ability but weak exploitation capacity, which lead to poor solution quality, entrapment in local optima, and slow global convergence. This paper increases population diversity through a modified population initialization, which improves the quality of the solution and helps avoid local optima. Moreover, a crossover operation and the global optimal value are introduced into the neighborhood search so that candidate solutions are generated close to the global optimum, accelerating global convergence. Simulation results on different benchmark functions show that the optimization performance is best when the crossover factor is about 0.5. With the improved ABC algorithm based on initial population and neighborhood search, the optimization accuracy is improved by about a factor of two, local optima are generally avoided, and the number of iterations decreases by about 8% to 15%, accelerating the global convergence speed.
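The global-best-guided, crossover-style candidate generation described above can be sketched on top of a bare-bones ABC loop as follows; the crossover factor of roughly 0.5 is taken from the abstract, while the benchmark function, colony size and other constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, SN, LIMIT, CR = 10, 20, 30, 0.5   # dimension, food sources, abandonment limit, crossover factor

def sphere(x):                          # illustrative benchmark function
    return float(np.sum(x ** 2))

foods = rng.uniform(-5, 5, size=(SN, DIM))
fit = np.array([sphere(x) for x in foods])
trials = np.zeros(SN, dtype=int)

for _ in range(500):
    gbest = foods[np.argmin(fit)].copy()
    for i in range(SN):
        k = rng.choice([j for j in range(SN) if j != i])
        phi = rng.uniform(-1, 1, DIM)
        # Global-best-guided neighbour, followed by a crossover with the current
        # source so the candidate stays close to the global optimum.
        v = foods[i] + phi * (foods[i] - foods[k]) + rng.uniform(0, 1.5, DIM) * (gbest - foods[i])
        candidate = np.where(rng.random(DIM) < CR, v, foods[i])
        f = sphere(candidate)
        if f < fit[i]:                  # greedy selection
            foods[i], fit[i], trials[i] = candidate, f, 0
        else:
            trials[i] += 1
    # Scout phase: re-initialize exhausted sources to maintain population diversity.
    for i in np.where(trials > LIMIT)[0]:
        foods[i] = rng.uniform(-5, 5, DIM)
        fit[i], trials[i] = sphere(foods[i]), 0

print("best value found:", fit.min())
```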
|
|
15:30-17:00, Paper WeP2.8 | |
>Tracking Control on Target Signal for a Class of Uncertain Neutral Systems |
Xu, Yujie | Beijing Union Univ |
Zhang, Jing | Beijing Union Univ |
Keywords: Optimization and Scheduling
Abstract: Tracking control of a target signal for a class of uncertain neutral systems is investigated in this paper. An augmented error system is constructed by combining the control system with the target signal. Through a Lyapunov-Krasovskii functional and several inequalities, a stability criterion in terms of LMIs is proposed for the closed-loop system. A state-feedback controller is then designed for the augmented error system, and a tracking control law, in which the target signal and the error signal are utilized to help reduce the steady-state error, is obtained for the original uncertain neutral system. A numerical example is given to illustrate the validity of the proposed method.
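The following is only an illustrative LMI feasibility check with cvxpy, for quadratic stability of a delay-free polytopic system; the Lyapunov-Krasovskii construction, the neutral and delay terms, and the feedback synthesis of the paper are not reproduced.

```python
import numpy as np
import cvxpy as cp

# Hypothetical vertices of a polytopic uncertain system dx/dt = A(t) x, A(t) in conv{A1, A2}.
A1 = np.array([[-2.0, 1.0], [0.0, -3.0]])
A2 = np.array([[-2.5, 0.5], [0.3, -2.0]])

n = A1.shape[0]
eps = 1e-3
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    # Symmetric slack pins down A^T P + P A so the LMI is stated on a symmetric variable.
    S = cp.Variable((n, n), symmetric=True)
    constraints += [S == A.T @ P + P @ A, S << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("quadratic-stability LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```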
|
|
15:30-17:00, Paper WeP2.9 | |
>Two-Stage Stochastic Optimization of a Hydrogen Network |
Gutierrez, Gloria | Univ. of Valladolid |
Galan, Anibal | Univ. of Valladolid |
Sarabia, Daniel | Univ. of Burgos |
de Prada, Cesar | Univ. of Valladolid |
Keywords: Optimization and Scheduling
Abstract: This paper discusses how to deal explicitly with uncertainty in the optimal management of the hydrogen network of a petroleum refinery. The current system is based on an RTO/MPC scheme for supervision and on-line optimization that includes robust data reconciliation to estimate consistent values of the process variables and update the model parameters. It has been extended with a two-stage stochastic optimization to account for the effect of crude changes on the operation of the network. The paper analyses how to formulate the problem in order to obtain implementable solutions and presents results that compare the deterministic and stochastic solutions using real plant data.
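The two-stage structure can be illustrated with a toy scenario-based LP, assuming a here-and-now production level and scenario-dependent purchase and flaring as recourse; all numbers and the network structure are invented for the sketch and bear no relation to the refinery model.

```python
import numpy as np
import cvxpy as cp

# Hypothetical crude-change scenarios: hydrogen demand (kNm3/h) and probability.
demand = np.array([80.0, 95.0, 110.0])
p_scen = np.array([0.3, 0.5, 0.2])
cost_prod, cost_buy, cost_flare = 1.0, 3.0, 0.2    # illustrative unit costs

# First-stage (here-and-now) decision: nominal hydrogen production, fixed before
# the crude change is realized.  Second-stage (recourse): purchase / flaring per scenario.
prod = cp.Variable(nonneg=True)
buy = cp.Variable(len(demand), nonneg=True)
flare = cp.Variable(len(demand), nonneg=True)

constraints = [prod <= 120.0]
constraints += [prod + buy[s] - flare[s] == demand[s] for s in range(len(demand))]

expected_recourse = cp.sum(cp.multiply(p_scen, cost_buy * buy + cost_flare * flare))
cp.Problem(cp.Minimize(cost_prod * prod + expected_recourse), constraints).solve()

print("here-and-now production:", round(float(prod.value), 1))
print("scenario purchases:", np.round(buy.value, 1))
```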
|
|
15:30-17:00, Paper WeP2.10 | |
>A Hybrid PSO Based on Dynamic Clustering for Global Optimization |
Li, Hongru | Northeastern Univ |
Hu, Jinxing | Northeastern Univ |
Jiang, Shouyong | Northeastern Univ |
Keywords: Optimization and Scheduling
Abstract: Particle swarm optimization (PSO) is a population-based global search method known to suffer from premature convergence before discovering the true global minimizer of global optimization problems. To balance intensive local exploitation against global exploration, a novel algorithm, called dynamic clustering hybrid particle swarm optimization (DC-HPSO), is presented in this paper. In the method, particles are constantly and dynamically clustered into several groups (sub-swarms) corresponding to promising sub-regions, according to the similarity of their generalized particles. In each group, a dominant particle is chosen to take responsibility for intensive local exploitation, while the rest are responsible for exploration by maintaining the diversity of the swarm. Simultaneous perturbation stochastic approximation (SPSA) is introduced to carry out the exploitation, and the standard PSO is modified for exploration. The experimental results show the efficiency of the proposed algorithm in comparison with several peer algorithms.
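A sketch of the sub-swarm idea under obvious simplifications: KMeans clustering forms the sub-swarms, an SPSA step refines each sub-swarm leader, and the remaining particles run a local-best PSO update; this conveys the flavour of DC-HPSO rather than the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
DIM, N, GROUPS = 5, 40, 4

def rastrigin(x):
    return float(10 * DIM + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

pos = rng.uniform(-5.12, 5.12, size=(N, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([rastrigin(x) for x in pos])

def spsa_step(x, a=0.05, c=0.1):
    """One simultaneous-perturbation gradient step (local exploitation)."""
    delta = rng.choice([-1.0, 1.0], size=DIM)
    g = (rastrigin(x + c * delta) - rastrigin(x - c * delta)) / (2 * c) * delta
    return x - a * g

for it in range(200):
    # Dynamically re-cluster the swarm into sub-swarms (promising sub-regions).
    labels = KMeans(n_clusters=GROUPS, n_init=4, random_state=it).fit_predict(pos)
    for g in range(GROUPS):
        idx = np.where(labels == g)[0]
        if idx.size == 0:
            continue
        leader = idx[np.argmin(pbest_f[idx])]          # dominant particle of the sub-swarm
        cand = spsa_step(pbest[leader])
        f_cand = rastrigin(cand)
        if f_cand < pbest_f[leader]:
            pbest[leader], pbest_f[leader] = cand, f_cand
        lbest = pbest[leader]
        for i in idx:                                   # remaining particles explore
            r1, r2 = rng.random(DIM), rng.random(DIM)
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (lbest - pos[i])
            pos[i] = np.clip(pos[i] + vel[i], -5.12, 5.12)
            f = rastrigin(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i].copy(), f

print("best value:", pbest_f.min())
```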
|
|
15:30-17:00, Paper WeP2.11 | |
>Integration of Parameter Approximation and Real-Time Optimization for Load Change of HTR-PM |
Yang, Cheng | Zhejiang Univ |
Wang, Kexin | Zhejiang Univ |
Shao, Zhijiang | Zhejiang Univ |
Keywords: Optimization and Scheduling, Process Applications
Abstract: Obtaining a desirable process model and determining its applicable range play an important role in realizing significant load changes for the HTR-PM. Based on the observation that several parameters in the HTR-PM model change with operating conditions and that sufficient sampling is impractical, a trust-region based load change strategy is developed in an iterative framework that integrates parameter approximation and real-time optimization. In this method, the basic model is determined through a systematic parameter estimation approach designed to eliminate unreliable estimates. Plant derivatives are exploited to extend the applicability of the local basic model. When the extended model is applied to load-change operation in the trust-region framework, the model is evaluated in each iteration so that the applicable range of the approximate model is appropriately determined. Consequently, both the accuracy and the applicable range of the local model are considered in this iterative framework. A case study of a load change from 100% to 50% reactor full power demonstrates the effectiveness of the proposed method.
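The trust-region logic can be illustrated with a generic acceptance/radius update around a locally approximated model; the HTR-PM model, the parameter estimation and the plant derivatives are replaced here by toy one-dimensional functions.

```python
import numpy as np
from scipy.optimize import minimize

def plant(u):                      # "true" plant response (unknown in practice)
    return (u - 2.0) ** 2 + 0.3 * np.sin(3 * u)

def approx_model(u, u0):
    """Local approximate model around u0: plant value, a finite-difference
    derivative, and an assumed quadratic curvature."""
    h = 1e-3
    g = (plant(u0 + h) - plant(u0 - h)) / (2 * h)
    return plant(u0) + g * (u - u0) + 0.5 * (u - u0) ** 2

u, radius = 0.0, 0.5               # current operating point and trust-region radius
for k in range(15):
    res = minimize(lambda v: approx_model(v[0], u), x0=[u],
                   bounds=[(u - radius, u + radius)])
    u_new = float(res.x[0])
    pred_red = approx_model(u, u) - approx_model(u_new, u)
    actual_red = plant(u) - plant(u_new)
    rho = actual_red / max(pred_red, 1e-12)            # model-quality ratio
    if rho > 0.1:
        u = u_new                                       # accept the step
    # Expand the applicable range when the model is good, shrink it otherwise.
    radius = min(2 * radius, 2.0) if rho > 0.75 else (radius if rho > 0.1 else 0.5 * radius)

print("converged operating point:", round(u, 3), "objective:", round(float(plant(u)), 4))
```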
|
|
15:30-17:00, Paper WeP2.12 | |
>Asset Fleet Management in the Process Industry - a Conceptual Model |
Schulze Spüntrup, Frederik | Norwegian Univ. of Science and Tech |
Imsland, Lars | Norwegian Univ. of Science and Tech |
Keywords: Optimization and Scheduling, Process and Control Monitoring, Modeling and Identification
Abstract: Fleet management is widely known from vehicle fleets. However, the concept has not been elaborated in detail for the management of asset fleets in the process industries. Since the challenges and the potential advantages are similar, a conceptual approach for fleet management in the process industry is developed. A participatory approach and the Soft Systems Methodology are used for the model development. The interests of various stakeholders have been identified, the scope of fleet management has been defined, and the requirements for a fleet management system have been derived. The developed model focuses on adding the fleet perspective to the perception of assets in an industrial setting. Advanced prognosis and optimization technology, combined with a partly centralized and partly decentralized management of assets, may improve the reliability and operational performance of the process industry in the future. The main contribution of this paper is the application of the fleet approach to assets in the process industry. The developed model utilizes experience and knowledge from different stakeholders, and a holistic view of fleets in the process industry is given. The components of the conceptual model are depicted, and the description of the overall system may be used as an outline for the subsequent implementation and further improvement of the fleet management model.
|
|
15:30-17:00, Paper WeP2.13 | |
>Reinforced Genetic Algorithm Using Clustering Based on Statistical Estimation (I) |
Park, Taekyoon | Seoul National Univ |
Kim, Yeonsoo | Seoul National Univ |
Lee, Jong Min | Seoul National Univ |
Keywords: Modeling and Identification, Optimization and Scheduling, Big Data Analytics and Monitoring
Abstract: The genetic algorithm (GA) is widely used to obtain solutions to various optimization problems because of its robustness and convergence properties. The GA, however, has critical limitations: the computation time increases sharply as the complexity of the problem increases, and the user's arbitrary judgement may be involved, especially in the termination step. To overcome these limitations, we suggest a new reinforced GA using clustering based on statistical estimation. The similarity between the solution vectors generated by the GA is determined, and inefficient repetitive calculations are remarkably reduced. In addition, the statistical reliability of the obtained solution vectors can be calculated to reduce the arbitrariness of the conventional termination step.
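The clustering-as-reuse idea can be sketched as follows: candidate solutions that fall within the same cluster (here, a simple rounding tolerance) reuse a previously computed fitness instead of being re-evaluated, and the statistical-reliability termination test is reduced to a plain stagnation check; everything here is illustrative.

```python
import random

random.seed(4)
DIM, POP, TOL = 8, 30, 0.05

def expensive_fitness(x):
    return sum(v * v for v in x)             # stand-in for a costly evaluation

cache = {}                                    # cluster key -> cached fitness
evals = 0

def fitness(x):
    global evals
    key = tuple(round(v / TOL) for v in x)    # solutions within TOL share a "cluster"
    if key not in cache:
        cache[key] = expensive_fitness(x)
        evals += 1
    return cache[key]

pop = [[random.uniform(-3, 3) for _ in range(DIM)] for _ in range(POP)]
best_hist = []
for gen in range(200):
    pop.sort(key=fitness)
    best_hist.append(fitness(pop[0]))
    # Stagnation-based termination instead of an arbitrary generation count.
    if len(best_hist) > 20 and abs(best_hist[-20] - best_hist[-1]) < 1e-6:
        break
    elite = pop[:10]
    children = []
    for _ in range(POP - len(elite)):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, DIM)
        child = a[:cut] + b[cut:]
        child = [v + random.gauss(0, 0.1) if random.random() < 0.2 else v for v in child]
        children.append(child)
    pop = elite + children

print("generations:", gen + 1, "true evaluations:", evals, "best:", round(best_hist[-1], 4))
```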
|
|
15:30-17:00, Paper WeP2.14 | |
>Sparse Least Squares Support Vector Machines Based on Meanshift Clustering Method |
Wang, Xin | Xi'an Univ. of Tech |
Liu, Han | Xi'an Univ. of Tech |
Ma, Wenlu | Xi'an Univ. of Tech |
Keywords: Big Data Analytics and Monitoring, Modeling and Identification, Batch Process Modeling and Control
Abstract: To address the non-sparse solutions of least squares support vector machines (LS-SVM), in which none of the support vector values are zero, a mean-shift clustering algorithm for selecting training samples is proposed; it keeps the samples with large contribution values and removes the samples with small contribution values. A sparse solution and an effective classification model can thus be obtained. Experimental results on UCI test data sets show that the proposed algorithm can effectively reduce the classification error and improve training efficiency by sparsifying the training samples of the LS-SVM, demonstrating the feasibility and effectiveness of the proposed method.
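A minimal sketch under obvious assumptions: MeanShift cluster centres replace the full training set, and a standard LS-SVM classifier is then obtained by solving its dual linear system with an RBF kernel; the contribution-value screening of the paper is simplified to keeping the cluster centres.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_blobs(n_samples=400, centers=2, cluster_std=1.5, random_state=5)
y = 2 * y - 1                                     # labels in {-1, +1}

# Select a sparse training set: MeanShift cluster centres of each class.
X_sel, y_sel = [], []
for label in (-1, 1):
    ms = MeanShift(bandwidth=1.5).fit(X[y == label])
    X_sel.append(ms.cluster_centers_)
    y_sel.append(np.full(len(ms.cluster_centers_), label))
X_sel, y_sel = np.vstack(X_sel), np.concatenate(y_sel)

# Solve the LS-SVM dual linear system on the reduced sample set.
gamma, sigma2 = 10.0, 2.0
K = rbf_kernel(X_sel, X_sel, gamma=1.0 / (2 * sigma2))
Omega = np.outer(y_sel, y_sel) * K
n = len(y_sel)
A = np.block([[np.zeros((1, 1)), y_sel[None, :]],
              [y_sel[:, None], Omega + np.eye(n) / gamma]])
rhs = np.r_[0.0, np.ones(n)]
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(Xq):
    Kq = rbf_kernel(Xq, X_sel, gamma=1.0 / (2 * sigma2))
    return np.sign(Kq @ (alpha * y_sel) + b)

print("support vectors kept:", n, "of", len(X))
print("training-set accuracy:", (predict(X) == y).mean())
```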
|
|
15:30-17:00, Paper WeP2.15 | |
>A Kernel Connectivity-Based Outlier Factor Algorithm for Rare Data Detection in a Baking Process |
Wang, Yanxia | Queen's Univ. Belfast
Li, Kang | Queen's Univ. Belfast |
Shaojun Gan, John | School of Electronics, Electrical Engineering and Computer Science
Keywords: Modeling and Identification, Energy Processes and Control
Abstract: Due to strict legislation on greenhouse gas emission reduction, energy-intensive industries, including the bakery industry, are under pressure to improve the energy efficiency of their manufacturing processes. In this paper, an energy monitoring system developed with the research group's Point Energy Technology is first introduced for data collection in a local bakery company. The outliers in the collected data may contain valuable information about the status of machines; however, they also affect the data quality and the accuracy of the subsequent data analysis. A kernel connectivity-based outlier factor algorithm is therefore proposed, where the concept of the connectivity-based outlier factor (COF) is adopted to represent the degree to which a data sample is an outlier. Experiments are conducted on the dataset from an oven in a production line to evaluate the effectiveness of three kernel functions, namely the Gaussian kernel, the Laplacian kernel and the polynomial kernel. The experimental results show that the Gaussian kernel and the Laplacian kernel are more effective for valid oven data detection, which is significant for further research on energy management in the bakery company.
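The kernel idea can be sketched by computing a kernel-induced distance matrix (Gaussian or Laplacian) and feeding it to a neighbourhood-based outlier detector; scikit-learn's LocalOutlierFactor is used here as a stand-in for the connectivity-based outlier factor, which scikit-learn does not provide, and the data are simulated.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(6)
# Hypothetical oven readings: a dense normal-operation cluster plus a few rare samples.
X = np.r_[rng.normal(10.0, 0.5, size=(200, 2)), rng.uniform(0, 20, size=(8, 2))]

def kernel_matrix(X, kind="gaussian", gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if kind == "gaussian":
        return np.exp(-gamma * d2)
    if kind == "laplacian":
        return np.exp(-gamma * np.sqrt(d2))
    raise ValueError(kind)

for kind in ("gaussian", "laplacian"):
    K = kernel_matrix(X, kind)
    # Kernel-induced distance: d(x, y)^2 = k(x, x) + k(y, y) - 2 k(x, y).
    D = np.sqrt(np.maximum(np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K, 0.0))
    lof = LocalOutlierFactor(n_neighbors=20, metric="precomputed")
    labels = lof.fit_predict(D)                    # -1 marks outliers
    print(kind, "kernel flags", int((labels == -1).sum()), "outliers")
```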
|
|
15:30-17:00, Paper WeP2.16 | |
>Research and Application of Causal Network Modeling Based on Process Knowledge and Modified Transfer Entropy |
Zhu, Qunxiong | Coll. of Information Science and Tech., Beijing Univ. of Chemical Tech
Ya, Sitai | Beijing Univ. of Chemical Tech |
Geng, Zhiqiang | Beijing Univ. of Chemical Tech |
Xu, Yuan | Coll. of Information Science and Tech., Beijing Univ. of Chemical Tech
Han, Yongming | Beijing Univ. of Chemical Tech |
He, Yan-Lin | Beijing Univ. of Chemical Tech |
Keywords: Big Data Analytics and Monitoring, Process Applications, Modeling and Identification
Abstract: Causal network modeling is an important part of alarm root cause analysis in industrial processes. Transfer entropy is an effective method for modeling causal networks; however, there are problems in determining its prediction horizon. To solve these problems, a modified transfer entropy, which considers the prediction horizon from one variable to another and to itself simultaneously, is proposed to improve the capacity for causality detection. Moreover, building on data-driven and knowledge-based modeling methods, an approach combining the modified transfer entropy with superficial process knowledge is designed to correct false calculations and optimize the causal network models. Two case studies, a stochastic process and the Tennessee Eastman process, are carried out to illustrate the feasibility and effectiveness of the proposed approach.
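For reference, a minimal histogram-based estimator of transfer entropy with an explicit prediction horizon h, the quantity whose selection the paper modifies; the binning, the test signals and the two-sample delay are illustrative.

```python
import numpy as np

def transfer_entropy(x, y, h=1, bins=8):
    """Histogram estimate of T_{X->Y} with prediction horizon h (in samples)."""
    xb = np.digitize(x, np.histogram_bin_edges(x, bins))
    yb = np.digitize(y, np.histogram_bin_edges(y, bins))
    yf, yp, xp = yb[h:], yb[:-h], xb[:-h]          # y_{t+h}, y_t, x_t
    def joint(*cols):
        keys, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        return {tuple(k): c / len(cols[0]) for k, c in zip(keys, counts)}
    p_fpx, p_px, p_fp, p_p = joint(yf, yp, xp), joint(yp, xp), joint(yf, yp), joint(yp)
    te = 0.0
    for (f, p, q), pr in p_fpx.items():
        # T = sum p(f,p,q) * log[ p(f|p,q) / p(f|p) ]
        te += pr * np.log2(pr * p_p[(p,)] / (p_px[(p, q)] * p_fp[(f, p)]))
    return te

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):                              # y is driven by x with a 2-sample delay
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

for h in (1, 2, 3):
    print(f"h={h}: T(x->y)={transfer_entropy(x, y, h):.3f}  T(y->x)={transfer_entropy(y, x, h):.3f}")
```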
|
|
15:30-17:00, Paper WeP2.17 | |
>A Framework and Platform for Fault Diagnosis of High-Speed Train Based on Big Data |
Xu, Quan | State Key Lab. of Synthetical Automation for Process Industries
Liu, Qiang | Northeastern Univ |
Qin, S. Joe | Univ. of Southern California |
Keywords: Big Data Analytics and Monitoring, Process Applications
Abstract: High-speed trains travel very fast (e.g., 350 km/h) and operate at high traffic density, so once a fault occurs, the consequences can be disastrous. In order to better control the train operational status through timely and rapid detection of faults, new methods are needed to handle and analyze the huge volumes of high-speed railway data. In this paper, we propose a novel framework and platform for high-speed train fault diagnosis based on big data technologies. The framework aims to allow researchers to focus on fault detection algorithm development and on-line application, with all the complexities of big data import, storage, management, and real-time use handled transparently by the framework. It uses a combination of cloud computing and edge computing in a two-level architecture that handles the massive data of train operations. The platform uses Hadoop as its basic framework and combines HDFS, HBase, Redis and MySQL databases for data storage. A lossless data compression method is presented to reduce the data storage space and improve storage efficiency. In order to support various types of data analysis tasks for fault diagnosis and prognosis, the framework integrates on-line, off-line, stream and real-time computation. Moreover, the platform provides fault diagnosis and prognosis as services to users, and a simple case study is given to illustrate how the basic functions of the platform are implemented.
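Only the lossless-compression idea is sketched below, assuming fixed-point sensor samples: delta encoding followed by zlib; the platform's actual compression method, storage layout and Hadoop stack are not reproduced.

```python
import zlib
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical on-board sensor channel: a slowly varying signal stored as 16-bit integers.
signal = (1000 * np.sin(np.linspace(0, 20, 50000)) + rng.normal(0, 2, 50000)).astype(np.int16)

raw = signal.tobytes()
delta = np.diff(signal, prepend=0).astype(np.int16)        # delta encoding of consecutive samples
packed = zlib.compress(delta.tobytes(), level=9)

# Lossless round trip: decompress, undo the delta encoding, compare with the original.
restored = np.cumsum(np.frombuffer(zlib.decompress(packed), dtype=np.int16)).astype(np.int16)
assert np.array_equal(restored, signal)

print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes, ratio: {len(raw) / len(packed):.1f}x")
```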
|
|
15:30-17:00, Paper WeP2.18 | |
>A Modified Dynamic PLS for Quality Related Monitoring of Fractionation Processes |
Xu, Xue | Northeastern Univ |
Liu, Qiang | Northeastern Univ |
Ding, Jinliang | Northeastern Univ |
Keywords: Big Data Analytics and Monitoring, Process Applications
Abstract: The fractionation process is a typical dynamic process, and practitioners pay close attention to quality-related abnormalities in real refining processes. In this paper, a modified dynamic PLS (MDPLS) modeling method and the corresponding process monitoring strategy are proposed. The main contributions of the proposed method are as follows. First, a clear dynamic relation between the process data and the quality indices is captured. Moreover, the process and quality spaces are divided into a dynamic quality-related subspace, a static quality-unrelated subspace and a residual subspace to improve monitoring performance. Finally, the effectiveness of the proposed algorithm is demonstrated with data from a real fractionation process.
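A generic lag-augmented (dynamic) PLS with T2 and SPE monitoring statistics is sketched below on simulated data; the subspace decomposition that distinguishes the authors' MDPLS from ordinary dynamic PLS is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
n, n_vars, lags, n_comp = 1000, 6, 2, 3

# Hypothetical fractionator data: autocorrelated process variables and a quality index.
X = np.zeros((n, n_vars))
for t in range(1, n):
    X[t] = 0.8 * X[t - 1] + rng.normal(0, 0.5, n_vars)
y = X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

# Dynamic augmentation: stack current and lagged measurements, then autoscale.
Xd = np.hstack([X[lags - k:n - k] for k in range(lags + 1)])
Xs = (Xd - Xd.mean(axis=0)) / Xd.std(axis=0)
yd = y[lags:]

pls = PLSRegression(n_components=n_comp, scale=False).fit(Xs, yd)
T = pls.transform(Xs)                        # latent (quality-related) scores
Xs_hat = T @ pls.x_loadings_.T               # part of X explained by the latent model

t2 = np.sum((T / T.std(axis=0)) ** 2, axis=1)    # Hotelling-type statistic on the scores
spe = np.sum((Xs - Xs_hat) ** 2, axis=1)         # residual (quality-unrelated) statistic
t2_lim, spe_lim = np.percentile(t2, 99), np.percentile(spe, 99)
print("99% limits:  T2 =", round(t2_lim, 2), "  SPE =", round(spe_lim, 2))
```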
|
|
15:30-17:00, Paper WeP2.19 | |
>Process Monitoring Based on Performance-Triggered Scheme |
Zhang, Mingshan | East China Univ. of Science and Tech |
Yang, Jian | East China Univ. of Science and Tech |
Tan, Shuai | East China Univ. of Science and Tech |
Shi, Hongbo | East China Univ. of Science and Tech |
Keywords: Big Data Analytics and Monitoring, Process Applications, Modeling and Identification
Abstract: In process monitoring, certain performance indexes require particular attention; therefore, a performance-triggered process monitoring scheme is proposed. In contrast to traditional process monitoring methods, the process is considered normal as long as no apparent anomaly occurs in the performance index. Ridge regression is used to predict the values of performance indexes that cannot be measured in real time, and the regression coefficients are used to select the most relevant process variables for subsequent modelling. In this scheme, once the predicted performance index exceeds its control limit, monitoring of the relevant process variables is triggered to determine whether the abnormal prediction is due to the occurrence of a fault. Then, a dictionary learning method and low-rank representation (LRR) are used for feature extraction and construction of the monitoring statistic. Finally, the effectiveness of the proposed method is verified on a numerical example and the Tennessee Eastman (TE) process.
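The triggering logic can be sketched as follows: ridge regression predicts the performance index, the largest coefficients select the relevant variables, and detailed monitoring runs only when the predicted index exceeds a control limit; the dictionary-learning and LRR statistics are replaced by a trivial check, and all data are simulated.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(10)
n, n_vars = 800, 12

# Hypothetical training data: performance index driven by a few process variables.
X = rng.normal(size=(n, n_vars))
perf = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.8 * X[:, 7] + 0.1 * rng.normal(size=n)

ridge = Ridge(alpha=1.0).fit(X, perf)
limit = np.percentile(np.abs(ridge.predict(X)), 99)          # simple control limit
relevant = np.argsort(np.abs(ridge.coef_))[-3:]              # most relevant variables
print("relevant variables:", sorted(relevant.tolist()), "control limit:", round(limit, 2))

def on_new_sample(x_new):
    """Performance-triggered scheme: detailed monitoring runs only after a limit violation."""
    pred = float(ridge.predict(x_new[None, :])[0])
    if abs(pred) <= limit:
        return "normal (no detailed monitoring triggered)"
    # Trigger: inspect only the relevant variables (stand-in for the LRR-based statistics).
    z = np.abs(x_new[relevant])
    return f"triggered, max |relevant variable| = {z.max():.2f}"

print(on_new_sample(rng.normal(size=n_vars)))
print(on_new_sample(rng.normal(size=n_vars) + np.eye(n_vars)[4] * 6))   # fault on variable 4
```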
|
|
15:30-17:00, Paper WeP2.20 | |
>Optimal Sensor and Actuator Scheduling in Sampled-Data Control of Spatially Distributed Processes |
Xue, Da | Univ. of California, Davis |
El-Farra, Nael H. | Univ. of California, Davis |
Keywords: Model-based Control
Abstract: This work presents an optimization-based methodology for the placement and scheduling of measurement sensors and control actuators in spatially-distributed processes with low-order dynamics and discretely-sampled output measurements. Initially, a sampled-data observer-based controller, with an inter-sample model predictor, is designed based on an approximate finite-dimensional system that captures the infinite-dimensional system's dominant dynamics. An explicit characterization of the interdependence between the stabilizing locations of the sensors and actuators and the maximum allowable sampling period is obtained. Based on this characterization, a constrained finite-horizon optimization problem is formulated to obtain the sensor and actuator locations, together with the corresponding sampling period, that optimally balance the trade-off between the control performance requirements on the one hand and the demand for reduced sampling on the other. The objective function penalizes both the control performance cost, expressed in terms of the response speed and the control effort, and the sampling cost, expressed in terms of the sampling frequency. The optimization problem is solved in a receding horizon fashion, leading to a dynamic policy that varies the sensor and actuator spatial placement, together with the sampling period, over time. The developed methodology is illustrated through an application to a simulated diffusion-reaction process example.
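A brute-force sketch of the placement/period trade-off, for an assumed two-mode modal approximation: state feedback without an observer, zero-order-hold discretization at each candidate sampling period, and a cost that adds an LQR performance proxy to a sampling-frequency penalty; the receding-horizon scheduling of the paper is omitted.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are

# Two-mode modal approximation of a diffusion-reaction system on [0, 1] (illustrative numbers).
A = np.diag([0.5, -2.0])                       # first mode unstable, second mode stable

def B_of(za):                                   # actuator influence via mode shapes at location za
    return np.array([[np.sin(np.pi * za)], [np.sin(2 * np.pi * za)]])

Q, R, sampling_weight = np.eye(2), np.array([[1.0]]), 0.05

def cost(za, h):
    B = B_of(za)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)             # continuous-time LQR gain
    # Zero-order-hold discretization at sampling period h.
    M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * h)
    Ad, Bd = M[:2, :2], M[:2, 2:]
    rho = np.abs(np.linalg.eigvals(Ad - Bd @ K)).max()
    if rho >= 1.0:                              # h exceeds the maximum allowable sampling period
        return np.inf
    return float(np.trace(P)) + rho + sampling_weight / h   # performance proxy + sampling cost

grid = [(za, h) for za in np.linspace(0.1, 0.9, 9) for h in np.linspace(0.05, 1.0, 20)]
za_opt, h_opt = min(grid, key=lambda p: cost(*p))
print(f"optimal actuator location: {za_opt:.2f}, sampling period: {h_opt:.2f} s")
```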
|
|
15:30-17:00, Paper WeP2.21 | |
>Design and Control of a Reactive Distillation Process for Synthesizing Propylene Carbonate from Indirect Alcoholysis of Urea |
Wang, San-Jang | National Tsing Hua Univ |
Wong, David, S.H. | National Tsing-Hua Univ |
Lee, En-Ko | Center for Energy and Environmental Res., National Tsing Hua Univ
Keywords: Energy Processes and Control, Process Applications
Abstract: Dimethyl carbonate is a green compound with a broad variety of applications. In this study, the process design and control of propylene carbonate (PC) synthesis for dimethyl carbonate production, using CO2 as a raw material via indirect alcoholysis of urea, are investigated. This attractive indirect alcoholysis route offers many advantages, such as environmentally friendly chemicals, cheap raw materials, and mild and safe operating conditions. Several reactive distillation (RD)-based processes for PC synthesis by this route are proposed, designed, and optimized in this work. These processes cover two operating configurations: near-neat operation and excess-reactant operation. Intensification technologies, namely heat integration in addition to RD, are used to design economical PC synthesis processes. Steady-state simulation results indicate that the novel intensified process, containing an RD column and a conventional distillation column with internal vapor compression, provides the most economical design. This process is operated with excess reactant and fully exploits the special azeotropic behavior of the propylene carbonate and propylene glycol (PG) pair, which forms a homogeneous minimum-boiling azeotrope near the pure-PG end at low pressure; this azeotrope vanishes at high pressure. Furthermore, steady-state analysis is used to design a simple temperature control strategy. Different desired temperature profiles are found in the RD column under various feed flow rates, and the set point of the temperature loop that maintains the bottom product purity of the RD column is reset when the throughput rate changes. Dynamic simulation results reveal that the proposed temperature control can maintain the product purities at their desired values in the face of feed flow disturbances.
|