Automation opportunity assessments provide insight into manufacturing inefficiencies and uncover opportunities for improvement
What is an Automation Opportunity Assessment?
An automation opportunity assessment is a study conducted by a team of automation engineers along with key manufacturing plant personnel to identify and address operational pain points related to the facility’s automation system. It often uncovers operationally related issues that are beyond the scope of automation. It covers pain points tied to what is critical to the facility: capacity, efficiency, quality, safety and environmental performance, waste reduction, and assorted operational hassles.
A typical assessment team may include core members such as automation engineers, operations supervisors, operators, quality management, and maintenance staff. In addition, ad hoc members may include lab personnel, an accountant, a safety and environmental officer, and other production team members.
The goal is to identify improvement action items, especially low-hanging fruit, along with a forecast of the expected improvement and an estimate of the associated cost for each item. This can lead to process control improvement projects with outstanding ROIs.
Are your automation engineers often in crisis mode? Are they spending a lot of time off-hours addressing issues? Are you having quality issues? Are you operating below expected efficiencies? Here are a few examples of what an opportunity assessment can uncover.
Debottlenecking
1. Are you running at full capacity, yet seem to have a number of unnecessary “waits” and holdups because of your automation configuration? For processes with batch-to-continuous transitions, it is critical that the continuous part of the process not be throttled or starved due to insufficient supply from the batch mixing units.
Surge vessels are used to mitigate this issue, yet unnecessary holdups in the batch mixing process still reduce the ability of the batch units to catch up when surge capacity runs low. Surge vessels are designed with an overall capacity in mind, but the original design premise often changes as improvements are made to the downstream processes, allowing for capacity gains. Unfortunately, this puts more pressure on the upstream processes: with the effective reduction in surge capacity caused by the downstream improvements, the upstream processes become the new bottleneck.
Due to the control system’s batch software architecture, including arbitrary “waits” in the logic, unnecessary holdups in the upstream process begin to have more impact. Every second lost is a second that cannot be recovered, and if the holdup is repeated for every batch, the seconds accumulate into a significant capacity constraint.
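To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the cycle time, per-batch wait, and annual batch count are assumptions chosen purely for illustration and should be replaced with values from your own batch records.

```python
# Illustration only: how small per-batch waits erode annual capacity.
# All numbers below are assumptions, not data from a real facility.
cycle_time_s = 3600        # nominal batch cycle time, seconds
waits_per_batch_s = 90     # accumulated arbitrary waits per batch, seconds
batches_per_year = 8000    # assumed annual batch count at full utilization

lost_hours = waits_per_batch_s * batches_per_year / 3600
lost_batches = waits_per_batch_s * batches_per_year / (cycle_time_s + waits_per_batch_s)

print(f"Hours lost to waits per year:      {lost_hours:.0f}")
print(f"Equivalent batches lost per year:  {lost_batches:.0f}")
```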
One solution may be in the way the batches are processed. For example, if multiple mixers feed the surge, can the mixes be processed simultaneously rather than in an alternating scheme? Simultaneous charges may require piping and instrumentation changes, but this may be cheaper and more feasible than installing a larger surge vessel. It may simply be a matter of changing the control logic to allow simultaneous charges, since the original design premise was for alternating charges. Can the raw material ingredients be charged simultaneously rather than sequentially? Can temperature adjustments be head-started earlier in the process to reduce processing time? Can the mixing profile be made more aggressive? Can manual addition prompts be more aggressive in gaining the operator’s attention? Can calculations run continuously in the background instead of waiting for the control system to process them at a configured calculation step?
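As a rough illustration of the sequential-versus-simultaneous question, the sketch below compares cycle times under both charging schemes; the charge, mix, and transfer durations are assumptions for the example, not measured values.

```python
# Illustration only: cycle-time impact of charging ingredients in parallel.
# Durations are assumed; real values would come from batch records or the historian.
charge_a_s = 600    # ingredient A charge time, seconds
charge_b_s = 450    # ingredient B charge time, seconds
mix_s = 1200        # mix time after all charges are complete
transfer_s = 300    # transfer to the surge vessel

sequential = charge_a_s + charge_b_s + mix_s + transfer_s
simultaneous = max(charge_a_s, charge_b_s) + mix_s + transfer_s

print(f"Sequential charging cycle:   {sequential} s")
print(f"Simultaneous charging cycle: {simultaneous} s")
print(f"Savings per batch:           {sequential - simultaneous} s")
```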
I have seen many examples of batch logic with arbitrary waits installed in the sequences based on the programmer’s whim. Even worse, many of these waits were hard-coded. So we have to ask ourselves: is the wait necessary, and if so, can it be reduced?
Bear in mind that the original design premise for the batch logic may have been developed without capacity in mind. As a result, waits installed in batch logic are ubiquitous and are typically set conservatively to ensure sequence enforcement, irrespective of future capacity requirements.
Exception Logic – When Things Go Wrong
2. Are your operators often taking actions via the control system that cause system upsets and lead to rework or waste? Ambiguous instructions to new or less experienced operators can lead to an inadvertent wrong decision. Alarm bursts or poor alarm management during upsets can leave operators reactive and overwhelmed, leading to incorrect interventions. Poorly constructed upset logic (e.g., holds, restarts) may lead to system hangs that require a supervisor to be notified, adding delay to processing. Oftentimes, exception logic is given short shrift during automation development because the focus is usually on normal processing.
A control system with excellent alarm management and robust exception logic will significantly reduce operator errors and minimize rework and waste.
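As a sketch of what robust exception handling can look like, the Python fragment below models a phase step that holds on a lost interlock and escalates if the hold persists, rather than hanging silently. The state names and the read_interlock and notify_operator helpers are hypothetical; a real implementation would use the batch engine’s native hold/restart constructs.

```python
import time
from enum import Enum, auto

class PhaseState(Enum):
    RUNNING = auto()
    HELD = auto()
    ABORTED = auto()

def run_step(read_interlock, notify_operator, max_hold_s=600):
    """Run one phase step; hold when the interlock drops and escalate
    if the hold persists instead of hanging without notification."""
    state = PhaseState.RUNNING
    hold_started = None
    while True:
        if read_interlock():                     # healthy: run or restart
            if state is PhaseState.HELD:
                notify_operator("Interlock restored; phase restarting")
            return PhaseState.RUNNING
        if state is PhaseState.RUNNING:          # first failure: go to hold
            state = PhaseState.HELD
            hold_started = time.monotonic()
            notify_operator("Phase held: interlock not satisfied")
        elif time.monotonic() - hold_started > max_hold_s:
            notify_operator("Hold time exceeded; escalate to supervision")
            return PhaseState.ABORTED
        time.sleep(1)                            # re-evaluate each second
```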
Raw Material Usage and Energy Efficiencies
3. Is your raw material and energy usage over standard? A poorly constructed temperature control loop, or poorly tuned loops that lead to periods of overcooling or overheating, could result in substantial unnecessary energy consumption. Poor pH control could lead to unnecessary reagent usage. Poor control over raw material charges could lead to over-stoichiometric use of costly raw materials.
Sometimes all it takes is simple loop tuning to correct these issues. Other times, it may require better field instrumentation and modulating devices or more advanced process control, such as the implementation of cascaded loops, decoupling, and feedforward control.
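As an illustration of the feedforward idea, here is a minimal PI-plus-feedforward controller sketch; the gains, the disturbance variable (e.g., feed flow), and the example numbers are assumptions and say nothing about how your loops should actually be tuned.

```python
class PIWithFeedforward:
    """Minimal PI controller augmented with a feedforward term."""
    def __init__(self, kp, ki, kff, dt):
        self.kp, self.ki, self.kff, self.dt = kp, ki, kff, dt
        self.integral = 0.0

    def update(self, setpoint, measurement, disturbance):
        error = setpoint - measurement
        self.integral += error * self.dt
        feedback = self.kp * error + self.ki * self.integral
        feedforward = self.kff * disturbance     # act before the error appears
        return feedback + feedforward

# Example: heating output from temperature error plus feed-flow feedforward
controller = PIWithFeedforward(kp=2.0, ki=0.1, kff=0.5, dt=1.0)
output = controller.update(setpoint=80.0, measurement=78.5, disturbance=12.0)
```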
Alarm Management
4. One of the most common issues in manufacturing facilities is substandard alarm management. This oftentimes goes hand in hand with safety performance. In my experience as a production supervisor, many of the injuries operators sustained were initiated by an upset condition. Not all upsets are caused by poor alarm management, but in many cases, upsets could have been prevented by solid alarm management. Top-flight companies perform formal hazard analyses, sometimes multiple times at different stages of project development. Manufacturers should also perform an alarm rationalization with many of the same contributors who attend the hazard analysis. Too often, alarms are an afterthought during a project run-up: either they are all set to defaults and turned on, producing a flood of nuisance alarms, or, when the nuisance becomes a major distraction, all but the most critical alarms are turned off.
The alarm rationalization process involves specifying each alarm, including its priority and whether it should be disabled, always enabled, or conditionally enabled. Getting alarm management right requires a significant time investment, but if done well, it will return the end user a 3-15% ROI in operational efficiency. Many automation software vendors have added enhanced alarming functionality that many customers do not use to their advantage. Not using this functionality leads to increased operator hassles and missed opportunities to prevent upsets, ultimately leading to urgent situations that put equipment, the operator, and perhaps the environment at risk.
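The sketch below shows one way the output of a rationalization could be captured: each alarm carries a priority, a limit, and an enable condition tied to plant state, so it only annunciates when it is meaningful. The tag names, limits, and state keys are invented for the example; in practice this lives in the DCS or alarm management package and follows a standard such as ISA-18.2.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlarmRecord:
    tag: str
    priority: str                           # e.g. LOW / HIGH / CRITICAL
    limit: float
    limit_type: str                         # "HIGH" or "LOW"
    enabled_when: Callable[[dict], bool]    # condition from rationalization

def in_alarm(rec: AlarmRecord, value: float) -> bool:
    return value > rec.limit if rec.limit_type == "HIGH" else value < rec.limit

# Hypothetical rationalized alarms: enabled only when they mean something
alarms = [
    AlarmRecord("TI-101", "HIGH", 95.0, "HIGH",
                enabled_when=lambda s: s["unit_running"]),
    AlarmRecord("LI-205", "LOW", 10.0, "LOW",
                enabled_when=lambda s: s["unit_running"] and not s["draining"]),
]

def active_alarms(plant_state: dict, measurements: dict) -> list:
    """Alarms that are both conditionally enabled and beyond their limit."""
    return [a.tag for a in alarms
            if a.enabled_when(plant_state) and in_alarm(a, measurements[a.tag])]

# Example: the low-level alarm is suppressed while the vessel is deliberately draining
print(active_alarms({"unit_running": True, "draining": True},
                    {"TI-101": 97.2, "LI-205": 4.0}))   # -> ['TI-101']
```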
It’s Time To Use The Data
5. Today’s world of data integration has created opportunities to improve product quality and efficiency substantially. Many manufacturers with automated processes do not take full advantage of the vast amount of data that is now literally at their fingertips.
Better use of data has been a common mantra over the generations of digital control, from monitoring equipment for degradation that can be fed into a preventive or predictive maintenance system, to historical trends of key quality and secondary parameters that can be used to model and predict process performance.
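As one small example of putting historian data to work, the sketch below flags equipment degradation by comparing a recent rolling average against an early baseline; the indicator, window size, and threshold are assumptions, and the data would come from the plant historian.

```python
import pandas as pd

def degradation_flag(series: pd.Series, window: int = 168, drop_pct: float = 5.0) -> bool:
    """Flag degradation when the recent average of a health indicator
    (e.g., pump discharge pressure at constant speed) has dropped more
    than drop_pct relative to the early baseline."""
    baseline = series.iloc[:window].mean()
    recent = series.iloc[-window:].mean()
    return (baseline - recent) / baseline * 100.0 > drop_pct

# Example with made-up hourly data: a slow downward drift trips the flag
data = pd.Series([100.0 - 0.01 * i for i in range(2000)])
print(degradation_flag(data))   # -> True
```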
World-class manufacturers use offline systems (digital twins) to develop and train process models that can run faster than real-time speed to chart and predict process quality performance. These models can then be used for supervisory control of key process input variables to establish tight quality control at an optimized economical cost.
Data-driven process modeling can also be used to create real-time virtual sensors for quality parameters that are currently measured off-line, with the inherent time lag of completing the analytical process. In addition, a control strategy may be developed to control the virtual measurement, providing the opportunity for a real-time response to excursions rather than waiting for the excursion to be revealed by a later lab spot check. A statistical process control (SPC) scheme may be used to correct the virtual sensor based on the later lab spot checks, accounting for any variables that are not included in the process model.
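A minimal sketch of that virtual-sensor idea is shown below, assuming a pre-trained regression model with a scikit-learn-style predict method and an exponentially weighted bias update whenever a delayed lab spot check arrives; it is illustrative only, not a validated soft-sensor design.

```python
class VirtualSensor:
    """Real-time quality estimate corrected by delayed lab spot checks."""
    def __init__(self, model, bias_weight=0.3):
        self.model = model             # any object with predict([features])
        self.bias = 0.0                # correction learned from lab results
        self.bias_weight = bias_weight

    def estimate(self, features):
        """Estimate the quality parameter for real-time monitoring or control."""
        return self.model.predict([features])[0] + self.bias

    def lab_update(self, features, lab_value):
        """Fold a delayed lab result into the bias (EWMA-style correction)."""
        residual = lab_value - self.model.predict([features])[0]
        self.bias = ((1 - self.bias_weight) * self.bias
                     + self.bias_weight * residual)
```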
Model-based control has been done successfully for decades in continuous manufacturing processes. It can also be done in batch pharmaceutical processes in search of a “golden batch” or to control a virtual measurement. The FDA introduced its Process Analytical Technology (PAT) initiative in 2002 and later published the PAT guidance. While its use is not mandated, it is consistent with the goals of any pharmaceutical manufacturer seeking a risk-based approach to ensuring product consistency and quality.
By Mark Durica, Controls Automation Manager