You’ve Been Given the Assignment: Why Modern EHS Leadership Requires a New Operating Model

You’ve just been given the assignment:

As Vice President of Global Environmental, Health, and Safety of a Fortune 500 manufacturing company, you are expected to transform a globally distributed function into a proactive, resilient, and business-aligned capability. Your mandate is explicit: drive a cultural shift toward proactive safety, embed EHS excellence into the company’s operating DNA, modernize global standards, leverage advanced data analytics, redesign performance metrics, and align EHS efforts with long-term business objectives. You are also expected to integrate external benchmarks, foster cross-functional collaboration, and elevate performance across every region and function.

This is not a program refresh.
It is not a compliance initiative.
It is an operating-model transformation.

And it exists because the systems that once kept organizations safe are increasingly fragile in ways we don’t always see.


Why This Assignment Exists Now

Organizations today operate in an environment defined by rising asset complexity, accelerating automation, thinning workforce experience, tighter margins, and near-zero tolerance for catastrophic risk. At the same time, regulators, investors, and boards expect not just compliance, but demonstrable control of operational risk.

The challenge is that many EHS management systems appear strong. Procedures are in place. Audits are clean. Injury rates are low. Yet incidents—often severe ones—continue to occur in organizations that believed they were well controlled.

This is the hallmark of system fragility: systems that look stable under normal conditions but fail abruptly under stress. The assignment you’ve been given exists because traditional EHS models, while necessary, were not designed for today’s pace, complexity, and uncertainty.


The First Insight: This Is Not an EHS Problem

One of the earliest realizations in this role is that you cannot transform EHS by “fixing EHS.”

Most legacy EHS management systems were designed for predictability. They assume hazards can be fully anticipated, work can be standardized, and compliance equals control. In reality, modern operations rely heavily on human adaptation—adjusting to degraded equipment, time pressure, staffing gaps, and conflicting priorities.

Fragility emerges when EHS systems:

  • Rely on procedures that describe work as imagined, not work as done
  • Treat adaptation as deviation rather than necessity
  • Depend on lagging indicators that mask accumulating risk
  • Use static risk assessments in dynamic operating environments

Under these conditions, the system absorbs stress quietly—until it can’t. Transformation stalls when leaders mistake the absence of incidents for the presence of control.


Reframing the Mission: From Compliance to Managing System Fragility

True transformation begins by reframing the EHS mission in language the business understands:

Protect people, safeguard operations, and preserve enterprise value by keeping the organization within safe operating boundaries.

This reframing is critical because fragility is not eliminated by more rules—it is reduced by understanding system limits, monitoring drift, and strengthening controls before failure occurs. When EHS is positioned as a capability that manages system health and risk exposure, it aligns naturally with operations, engineering, finance, and strategy.

Safety becomes inseparable from operational reliability and asset integrity. EHS evolves from a reporting function into a risk intelligence function.


What Changes—and Why It Matters

At its core, the transformation is a shift in how organizations think about control:

Traditional EHS Systems

  • Reactive and event-driven
  • Focused on lagging indicators
  • Built on procedural compliance
  • Optimized locally

Modern, Resilient EHS Systems

  • Anticipatory and risk-based
  • Focused on leading indicators and weak signals
  • Designed around control effectiveness and system health
  • Oriented toward enterprise-level risk

In many organizations, sites with excellent injury rates carry the highest latent risk due to aging assets, deferred maintenance, or fragile controls. Traditional EHS systems rarely surface this reality. Modern EHS must.


Building a Unified Global Safety Culture That Reduces Fragility

One of the most visible expectations of the role is building a unified global safety culture. The common mistake is equating unity with uniformity.

Fragility increases when global standards force identical solutions onto different operating realities. High-performing organizations instead unify around common principles: how risk is evaluated, how escalation occurs, how leaders respond to bad news, and how learning is captured.

A unified culture exists when leaders everywhere ask the same questions about system health and control effectiveness—even when local conditions differ.

Theory to Practice: Building a Unified Global Safety Culture

Building this kind of unified culture requires deliberate action, not messaging. Leaders must define a small set of non-negotiable global principles—how risk is evaluated, what constitutes unacceptable exposure, when and how escalation occurs, and how leaders are expected to respond when controls fail. These principles must be reinforced through leadership routines: common risk review questions used at every site, standardized escalation thresholds tied to severity potential rather than injury outcomes, and consistent expectations for learning reviews that focus on system weaknesses instead of individual error. Global standards should specify intent and critical controls while allowing local teams to determine how those controls are implemented. Leadership development, performance evaluation, and recognition systems must reinforce transparency and early risk identification, making it clear that surfacing fragility is a leadership responsibility—not a failure.


Modernizing Standards: Designing for Real Work and Real Variability

Legacy global standards often describe ideal conditions and perfect execution. They become brittle when reality deviates.

Modern standards acknowledge that variability is normal and adaptation is inevitable. They are designed around:

  • Critical controls, not exhaustive rules
  • Intent and boundaries, not perfection
  • Support for human performance under pressure

By shifting from rulebooks to decision-support frameworks, standards reduce fragility by helping people make better decisions when conditions are imperfect—which is most of the time.

Theory to Practice: Modernizing Global Standards

Modernizing standards in practice requires rethinking both their content and how they are used. Leaders must identify and explicitly define critical controls—the small number of safeguards whose failure would result in serious harm—and ensure standards clearly describe their purpose, performance expectations, and degradation signals. Standards should define decision boundaries, clarifying what must never be compromised, what requires escalation, and where informed local judgment is expected. Field validation is essential; standards must be tested against real work through frontline engagement and learning teams to ensure they reflect actual operating conditions. Finally, standards must be embedded into daily work through planning processes, digital workflows, and leadership conversations, transforming them from compliance artifacts into tools that support safe adaptation.


Leveraging Advanced Analytics: Making Fragility Visible

Fragility persists when leaders cannot see it.

Advanced analytics is transformative not because it produces better reports, but because it exposes where systems are weakening. Leading organizations use data to monitor control effectiveness, detect weak signals, and identify patterns of drift across sites and processes.

This allows leaders to intervene while risk is still manageable. When EHS analytics can answer questions like “Where is risk accumulating faster than our controls?” the function moves from hindsight to foresight.

Theory to Practice: Using Analytics to Reduce Fragility

Translating analytics into reduced fragility requires redefining EHS data strategy around exposure, control effectiveness, and system health rather than incident counts. This begins with identifying indicators that signal weakening controls—such as repeated temporary fixes, permit deviations, deferred maintenance, or workload saturation—and integrating data across EHS, operations, and maintenance systems. Analytics should highlight trends and variability, not rank sites by outcomes. Most importantly, organizations must institutionalize leadership routines where data is reviewed alongside operational context, enabling proactive intervention before systems drift outside safe operating boundaries.


Redesigning Metrics: From Reassurance to Governance

Metrics shape behavior—and fragile systems are often reinforced by reassuring metrics.

Low injury rates and clean audits can coexist with high exposure. Transformational EHS leaders redesign metrics to reflect:

  • Risk exposure and severity potential
  • Control reliability and degradation
  • Learning velocity and transparency

These are not scorecards; they are governance tools. They inform capital allocation, operational priorities, and leadership focus. They help boards and executives understand whether the system is becoming stronger—or more fragile.

Theory to Practice: Redesigning EHS Metrics

Redesigning metrics requires intentional trade-offs. Organizations must reduce the prominence of lagging indicators and introduce measures that track risk exposure, quality of control verification, time-to-escalation for high-risk conditions, and the effectiveness of corrective actions. Metrics should be designed to prompt inquiry rather than judgment, encouraging leaders to ask where systems are weakening rather than who is underperforming. When metrics reward learning, transparency, and early intervention, they become stabilizing forces rather than sources of distortion.


Leadership and Culture: Where Fragility Is Either Reinforced or Reduced

The most difficult part of the assignment is cultural, and it begins with leadership.

Fragility thrives when bad news is suppressed, deviations are punished, and leaders reward the appearance of control over insight. Resilient organizations do the opposite. Leaders signal that early warning is valued, that learning outweighs blame, and that system weaknesses are leadership problems—not worker failures.

Accountability shifts from who failed to how the system allowed failure to develop. Ownership replaces enforcement.

Theory to Practice: Leading for Resilience

Reducing fragility depends on how leaders behave when risk is surfaced. Leaders must be trained and evaluated on their ability to respond constructively to weak signals—rewarding early escalation, probing for system contributors, and resisting the urge to default to individual accountability. This requires consistent leadership routines: asking the same risk-focused questions at every level, participating in learning reviews, and visibly prioritizing control reliability. Over time, these behaviors create trust and ensure that risk is addressed before it manifests as harm.


The Real Outcome of the Assignment

The assignment has already been given. The only question is whether organizations are willing to change how EHS is led.

If the transformation is successful, the result is not simply fewer incidents or better audit scores.

It is an organization that understands its own limits, detects drift early, and adapts without losing control. Leaders make better decisions under uncertainty. Operations become more reliable. EHS is recognized not as a compliance function, but as a discipline that actively reduces fragility and protects enterprise value.

Ultimately, this assignment asks a deeper question:

Will EHS remain a function that reports on safety—or will it become a leadership capability that strengthens the systems the business depends on?

In today’s operating reality, only one of those models is sufficient.

Selected References: Foundations for Modern, Resilient EHS Leadership

1. Work-as-Done vs. Work-as-Imagined

(Safety-II, Resilience Engineering, System Fragility)
Supports article sections on fragility, real work, standards design, and system drift.

  • Hollnagel, E. Safety-I and Safety-II: The Past and Future of Safety Management.
    Foundational framework for shifting EHS from rule compliance to managing system performance under variability.
  • Dekker, S. Drift into Failure (2nd ed.).
    Explains how organizations gradually migrate toward risk despite procedures and controls.
  • Woods, D. et al. Resilience Engineering: Concepts and Precepts.
    Establishes adaptive capacity and brittleness as core properties of complex systems.

2. Human & Organizational Performance (HOP 2.0)

(System Learning, Weak Signals, Predictive EHS)
Supports sections on leadership behavior, learning, and early risk detection.

  • Conklin, T. The 5 Principles of Human Performance.
    Widely adopted operational model reframing incidents as system outcomes rather than human failure.
  • Dekker, Hollnagel, Woods. Human Factors and Safety Science: A Decade of Progress.
    Connects human performance, system design, and modern operational complexity.
  • Conklin et al. Pre-Accident Investigation Framework.
    Practical methodology for event-free learning and identifying latent system weaknesses.

3. Adaptive & Dynamic Risk Management

(From Static Assessments to Live Risk Awareness)
Supports sections on analytics, leading indicators, and managing drift.

  • Hollnagel, E. Resilience Engineering in Practice.
    Practical guidance for continuous monitoring of system performance and control effectiveness.
  • NASA – Dynamic Risk Assessment and Control (DRAC) methodologies.
    Applied models for real-time risk evaluation in high-consequence environments.
  • NATO STO / Military Adaptive Risk Doctrine (post-2019).
    Influential in shaping continuous risk sensing and decision-making under uncertainty.

4. Agile & Lean Portfolio Management for EHS

(Operating-Model Transformation, Not Programs)
Supports sections on governance, prioritization, and transformation execution.

  • Scaled Agile Framework (SAFe) – Lean Portfolio Management.
    Increasingly used to manage EHS initiatives as value streams aligned with enterprise priorities.
  • McKinsey & Company. Agile at Scale (Operations & Risk applications).
    Practical guidance for integrating EHS into enterprise transformation efforts.
  • LNS Research. EHS 4.0 / Industrial Transformation.
    Strong applied linkage between digital operations, EHS governance, and analytics.

5. High-Reliability Operating Systems (HRO 2.0)

(From Culture to Integrated Control)
Supports sections on leadership, escalation, and enterprise risk governance.

  • Weick, K. & Sutcliffe, K. Managing the Unexpected (updated editions).
    Foundational HRO principles informing leadership behavior and risk sensitivity.
  • INPO / DOE High-Reliability Models (post-COVID updates).
    Applied in nuclear, energy, and chemical sectors with integrated operations and EHS oversight.
  • MIT Sloan Management Review. Digital Operations & Reliability research.
    Connects HRO principles with real-time analytics and operational control centers.

6. Risk-Based Prioritization & Value-at-Risk (VaR) Models

(Board-Relevant EHS Governance)
Supports sections on metrics, governance, and enterprise value protection.

  • COSO. Enterprise Risk Management (2017–2023 updates).
    Framework for translating operational risk into strategic and financial impact.
  • McKinsey & Company. Risk as a Strategic Capability.
    Widely used to connect operational risk to EBITDA and enterprise value.
  • CCPS / API RP 754. Risk-based and severity-weighted process safety metrics.
    Practical tools for exposure-based prioritization beyond injury rates.

7. Digital Learning & Just-in-Time Competence

(Reducing the Gap Between Knowing and Doing)
Supports sections on standards, human performance, and control reliability.

  • Ericsson, A. et al. Peak.
    Applied research underpinning microlearning, field-based coaching, and skill sustainment.
  • PwC / Accenture. Digital workforce enablement (AR, AI task guidance).
    Practical deployment models in manufacturing, energy, and infrastructure sectors.
  • ILO / EU-OSHA. Digitalization of Occupational Safety and Health (post-2020).
    Applied guidance on AI-supported learning and competence in modern work systems.

Seeing Risk Before It Hurts: An Example of How Predictive Analytics Are Redefining Safety

[Image: A modern manufacturing floor where workers in PPE perform routine tasks around heavy machinery, overlaid with a digital heat map in which amber and red zones mark emerging risk and green zones remain stable, suggesting predictive intelligence quietly monitoring conditions before an incident occurs.]

This article describes a predictive injury prevention concept currently under development, not a finished or commercially available system. The work reflects an active effort to design, test, and refine an approach that could move occupational safety from reactive analysis toward real-time risk anticipation. The next step for this concept is a pilot phase, to be pursued through collaboration with a qualified technology partner capable of helping translate theory and design into a working implementation.

For most of its history, occupational safety has depended on learning from what has already gone wrong. Injuries occur, investigations follow, and controls are strengthened in hopes of preventing recurrence. While this approach has delivered meaningful progress, it leaves a persistent gap: the period of time when risk is forming but no one is yet hurt. Advances in predictive analytics and artificial intelligence now make it possible—at least in concept—to close that gap by identifying emerging risk conditions and intervening earlier than traditional systems allow.

More than a decade ago, I argued that the EHS profession needed to prepare for a fundamental shift in how risk would be identified and controlled—one driven by emerging digital and analytical capabilities that were only beginning to take shape. My vision in 2014 was that EHS could, and should, move beyond static indicators and retrospective analysis toward systems capable of continuously sensing conditions, integrating diverse data streams, and seeing risk before it hurts. While the term “AI” was not yet common in professional safety conversations, the intent was clear: use advanced analytics to proactively manage risk as a dynamic system rather than react to its failures. Today, the convergence of computer vision, machine learning, and causal modeling makes it possible to actively pursue that vision, translating early foresight into a concrete design effort aimed at redefining how safety risk is recognized, understood, and acted upon in real time.

The following example outlines how such a predictive approach could function in a manufacturing environment. It is intentionally presented as a design framework rather than a finished solution, with the goal of encouraging discussion, critique, and collaboration across the EHS and technology communities. The emphasis is on how modern data integration and causal analytics might be applied to injury prevention, and what new capabilities could emerge if these tools are implemented thoughtfully and responsibly.


The Limits of Reactive Safety Systems

Traditional injury prevention systems are inherently retrospective. Even many “leading indicators” are signals that something did exist rather than confirmation that it does exist right now. Audits, observations, and lagging metrics provide valuable insight, but they are episodic and often disconnected from the moment-to-moment realities of work.

As manufacturing systems become more complex, tightly coupled, and sensitive to production pressure, risk increasingly emerges dynamically. Unsafe behaviors, degraded equipment condition, environmental stressors, and organizational demands can align quickly, creating exposure that may not be visible through conventional reporting cycles.


A Shift Toward Real-Time Risk Awareness

Recent advances in artificial intelligence enable a fundamentally different approach. Instead of relying solely on periodic reviews, safety systems can now maintain continuous awareness of operating conditions. Rather than asking what went wrong, they can ask what is happening right now—and what combination of factors makes an injury more likely in this moment.

The predictive injury prevention concept described here is built around that shift. Its purpose is not to replace existing EHS processes, but to augment them with a real-time layer of risk intelligence that operates continuously alongside traditional systems.


Integrating Disparate Data Streams

One of the greatest challenges in advancing predictive safety is not the lack of data, but the fragmentation of it. In most manufacturing organizations, information relevant to injury risk exists across multiple systems that were never designed to work together. Video feeds, EHS management systems, wearable devices, maintenance platforms, employee feedback and reporting systems, and production databases each capture a partial view of reality, often using different structures, time scales, and levels of data quality.

The predictive injury prevention concept addresses this challenge by using artificial intelligence not just as an analytical engine, but as an integration and data-preparation layer. Before any causal modeling occurs, AI is used to harmonize these disparate inputs into a form that can meaningfully support structural equation analysis.

From Raw Signals to Comparable Inputs

The first role of AI in the system is signal normalization. Video-based AI generates high-frequency observations—counts or rates of unsafe behaviors and conditions detected in specific zones. EHS systems produce lower-frequency, event-driven data such as hazard reports, near misses, and corrective action updates. Operational systems generate continuous performance data tied to production cycles, shifts, or equipment states.

Machine learning algorithms are used to align these inputs onto a common analytical timeline and spatial context. This includes time-window aggregation (for example, rolling 5–15 minute intervals), zone-level mapping, and shift-based normalization. The goal is to ensure that data describing behavior, system condition, and operational pressure are comparable and synchronized, rather than evaluated in isolation.
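As a minimal sketch of the alignment step, assuming illustrative column names (`ts`, `zone`, `unsafe_detections`) and a 10-minute window, high-frequency video detections can be aggregated onto a common per-zone timeline with pandas:

```python
# Sketch: align high-frequency video detections onto a common 10-minute,
# per-zone analytical timeline. Data and column names are illustrative.
import pandas as pd

video = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 08:02", "2024-01-01 08:07",
                          "2024-01-01 08:14", "2024-01-01 08:21"]),
    "zone": ["A", "A", "A", "A"],
    "unsafe_detections": [1, 2, 1, 3],
})

# Aggregate detections into 10-minute windows per zone so they can be
# joined against slower-moving EHS and operational data.
windowed = (video.set_index("ts")
                 .groupby("zone")["unsafe_detections"]
                 .resample("10min").sum()
                 .reset_index())
```

Operational series (production rates, equipment states) would be resampled onto the same window index before modeling.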

Data Quality, Noise Reduction, and Contextual Weighting

Raw data—particularly from video analytics—can be noisy. AI plays a critical role in filtering false positives, de-duplicating repeated observations, and weighting signals based on confidence and relevance. For example, repeated detections of the same behavior by the same individual in a short period are treated differently than multiple independent detections across a work group.
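The distinction between repeated and independent detections can be sketched as a simple cooldown-based de-duplication followed by confidence weighting; the event fields and five-minute cooldown here are assumptions, not a specification of the actual system:

```python
# Sketch: de-duplicate repeated detections of the same person/behavior within
# a cooldown window, then weight surviving signals by model confidence.
# Fields and the 5-minute cooldown are illustrative assumptions.
from datetime import datetime, timedelta

detections = [
    {"t": datetime(2024, 1, 1, 8, 0), "person": "w1", "behavior": "no_ppe", "conf": 0.9},
    {"t": datetime(2024, 1, 1, 8, 1), "person": "w1", "behavior": "no_ppe", "conf": 0.8},  # repeat
    {"t": datetime(2024, 1, 1, 8, 9), "person": "w2", "behavior": "no_ppe", "conf": 0.7},  # independent
]

def dedupe(events, cooldown=timedelta(minutes=5)):
    kept, last_seen = [], {}
    for e in sorted(events, key=lambda e: e["t"]):
        key = (e["person"], e["behavior"])
        if key in last_seen and e["t"] - last_seen[key] < cooldown:
            continue  # same person, same behavior, too soon: treat as a repeat
        last_seen[key] = e["t"]
        kept.append(e)
    return kept

# Confidence-weighted count feeds the downstream model instead of a raw tally.
signal = sum(e["conf"] for e in dedupe(detections))
```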

Natural language processing is applied to free-text fields in hazard reports, near-miss narratives, and employee concerns. These narratives are classified, clustered, and scored for relevance to specific risk drivers, allowing qualitative inputs to be translated into structured indicators without losing nuance.
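A deliberately simple stand-in for that NLP step is keyword bucketing; a production system would use a trained classifier, and the categories and keywords below are purely illustrative:

```python
# Sketch: classify free-text hazard narratives into risk-driver buckets.
# Keyword matching stands in for a real trained classifier; categories
# and keyword lists are illustrative assumptions.
KEYWORDS = {
    "unsafe_acts": ["ppe", "lifting", "bypass", "line of fire"],
    "system_condition": ["leak", "vibration", "breakdown", "worn"],
    "operational_stress": ["rush", "overtime", "changeover", "backlog"],
}

def classify(narrative):
    text = narrative.lower()
    scores = {cat: sum(kw in text for kw in kws) for cat, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

label = classify("Operator skipped PPE while lifting parts during the rush")
```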

Operational data are similarly contextualized. Production rates are evaluated relative to historical baselines rather than absolute values, distinguishing normal high output from abnormal stress. Maintenance indicators are adjusted for asset criticality and operating mode. In this way, AI ensures that the data feeding the model reflect meaningful deviations, not background variation.
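Evaluating production rate against a historical baseline rather than an absolute value can be as simple as a z-score; the baseline figures and the 2-sigma threshold here are illustrative assumptions:

```python
# Sketch: flag abnormal production stress relative to a historical baseline,
# so normally high output is not confused with meaningful deviation.
# Baseline values and the 2-sigma threshold are illustrative.
import statistics

baseline = [100, 104, 98, 102, 101, 99, 103, 97]  # historical units/hour
current = 115

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
z = (current - mean) / sd
abnormal = z > 2.0  # only meaningful deviation feeds the risk model
```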

Constructing Latent Variables for Structural Equation Modeling

Once data are cleaned, aligned, and contextualized, AI assists in feature construction—the process of grouping related observable indicators into candidate latent variables suitable for structural equation modeling. This step is critical, as SEM depends on theoretically sound groupings that reflect real-world risk mechanisms.

For example, AI-driven clustering and correlation analysis may confirm that PPE violations, line-of-fire exposure, and unsafe lifting consistently co-occur under similar conditions, supporting their use as indicators of a latent “Unsafe Acts” construct. Similarly, delayed preventive maintenance, rising vibration levels, and increased breakdown frequency may form a coherent “System Condition” construct.

Importantly, this process is guided by safety theory and professional judgment, not automated pattern recognition alone. AI accelerates discovery and validation, but human oversight ensures that constructs remain interpretable, defensible, and aligned with how work is actually performed.
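One crude way to sanity-check whether candidate indicators co-vary enough to justify a shared latent construct is their average pairwise correlation; the synthetic series and the 0.6 cohesion threshold below are assumptions for illustration only, and any grouping would still be reviewed by a practitioner:

```python
# Sketch: check that candidate indicators for a latent "Unsafe Acts"
# construct actually co-vary. Data and threshold are illustrative.
import numpy as np

ppe_violations = np.array([2, 5, 1, 6, 3, 7, 2, 8])
line_of_fire   = np.array([1, 4, 1, 5, 2, 6, 1, 7])
unsafe_lifting = np.array([0, 3, 1, 4, 2, 5, 1, 6])

corr = np.corrcoef(np.vstack([ppe_violations, line_of_fire, unsafe_lifting]))

# Average off-diagonal correlation as a crude cohesion score.
n = corr.shape[0]
cohesion = (corr.sum() - n) / (n * (n - 1))
supports_construct = cohesion > 0.6  # reviewed by a human, not auto-accepted
```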

Preparing Data for Causal Analysis

Before structural equation modeling is executed, AI-driven preprocessing ensures that the data meet the assumptions required for stable causal analysis. This includes handling missing data, standardizing variables, identifying outliers that represent true signals rather than errors, and testing for temporal stability.

The system also evaluates whether relationships between variables remain consistent over time or vary under different operating conditions. Where appropriate, models are adapted to account for site-specific or process-specific differences, allowing the causal structure to remain valid without forcing uniformity where it does not exist.
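The preprocessing steps above can be sketched in a few lines; the imputation and flagging choices (median fill, 2-sigma outlier flag) are illustrative assumptions, and flagged points are reviewed rather than deleted, since an outlier may be a true signal:

```python
# Sketch: preprocessing before SEM. Impute missing values, standardize,
# and flag (not drop) outliers for human review. Choices are illustrative.
import numpy as np

raw = np.array([1.0, 1.2, np.nan, 0.9, 6.0, 1.1])  # one indicator series

filled = np.where(np.isnan(raw), np.nanmedian(raw), raw)  # median imputation
z = (filled - filled.mean()) / filled.std()               # standardize
outliers = np.abs(z) > 2.0                                # flag for review
```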

Creating a Living Risk Model

This integration process is not a one-time exercise. As new data are collected and conditions change, the AI layer continuously re-evaluates indicator performance, latent construct validity, and model fit. When new patterns emerge—such as a shift in how production stress influences behavior—the system flags these changes for review and model refinement.

The result is a living risk model: one that evolves with the operation, improves with experience, and maintains alignment between data, theory, and practice. This model also operates in real time, continuously integrating all data as it is received.

By using AI to integrate and prepare data for structural equation modeling, the system transforms disconnected signals into a coherent representation of risk. This foundation is what enables predictive analytics to move beyond correlation, providing reliable, explainable insight into how and why injury risk is forming in real time.


Understanding Risk Through Causal Modeling

Most safety analytics struggle with the same fundamental limitation: they treat risk factors as independent signals. An unsafe behavior is counted, a maintenance backlog is tracked, a production rate is monitored—each measured, trended, and reviewed largely on its own. While this provides visibility, it does not explain how these factors interact to create injury risk, nor does it help leaders understand which combinations of conditions matter most in a given moment.

Predictive injury prevention requires a different analytical approach—one that is explicitly designed to model cause-and-effect relationships in complex systems. This is where structural equation modeling (SEM) becomes a critical enabling technology.

SEM allows multiple observable signals to be grouped into broader, underlying risk drivers—often referred to as latent variables. These latent variables represent conditions that cannot be measured directly but are inferred from patterns in real-world data. For example, repeated PPE violations, frequent line-of-fire exposure, and unsafe lifting behaviors may collectively indicate an underlying behavioral risk state. Similarly, missed preventive maintenance, increasing breakdown frequency, and abnormal vibration levels may indicate system degradation that increases exposure even when work practices appear unchanged.

The power of SEM lies in its ability to model how these latent risk drivers influence one another and contribute—individually and in combination—to overall injury risk. Rather than assuming that all unsafe acts carry equal weight at all times, the model estimates how strongly each driver contributes to risk under current conditions, and how those contributions change as the system evolves.

In the predictive injury prevention concept, this modeling approach enables the calculation of an instantaneous risk level for a specific operating area. That risk level is not simply the sum of recent events. It reflects the structure of the system: how production pressure amplifies behavioral risk, how degraded equipment condition increases the consequence of minor errors, and how effective (or ineffective) corrective action processes dampen or accelerate exposure.

An Example Structural Equation Model

To make this more concrete, consider a simplified example of how risk might be modeled using SEM in a manufacturing environment.

First, observable data are grouped into latent drivers:

  • Unsafe Acts (UA):
    Indicated by PPE violations, line-of-fire exposure, unsafe lifting, and bypassed guards.
  • System Condition (SC):
    Indicated by preventive maintenance compliance, breakdown frequency, and equipment reliability measures.
  • Operational Stress (OS):
    Indicated by production rate deviation, unplanned changeovers, and yield instability.
  • Safety Response Capability (SRC):
    Indicated by hazard reporting rate, corrective action timeliness, and near-miss follow-up quality.

These latent variables are then related to overall injury risk through a structural equation such as:

Instantaneous Risk (IR) =
  0.45 × Unsafe Acts
  + 0.30 × Operational Stress
  − 0.25 × System Condition
  − 0.20 × Safety Response Capability

In this example, unsafe acts have the strongest direct influence on risk, but their effect is moderated by operational stress and system condition. High production pressure increases the impact of unsafe behaviors, while strong maintenance performance and timely corrective actions reduce overall risk, even when behaviors are not perfect.

The model can also include interaction effects, such as:

IR = … + 0.10 × (Unsafe Acts × Operational Stress)

This term reflects a reality familiar to most practitioners: the same behavior that is tolerated under stable conditions becomes far more dangerous when the system is under stress.

Importantly, these coefficients are not assumed—they are estimated from actual site data and recalibrated as conditions change. As new hazards are reported, corrective actions are completed, or production stabilizes, the relationships update, allowing the model to distinguish between short-term noise and meaningful shifts in risk.
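A worked computation of the example equation, including the interaction term, makes the amplification effect concrete. The coefficients are the article's illustrative values; the latent scores are assumed to be standardized (0 = typical, positive = elevated), which is an assumption for this sketch:

```python
# Worked example of the illustrative structural equation, including the
# interaction term. Coefficients come from the article's example; the
# standardized latent scores are assumed inputs.
def instantaneous_risk(ua, os_, sc, src):
    return (0.45 * ua + 0.30 * os_ - 0.25 * sc - 0.20 * src
            + 0.10 * (ua * os_))  # stress amplifies unsafe acts

# Same behavioral signal, two operating contexts:
calm   = instantaneous_risk(ua=1.0, os_=0.0, sc=0.5, src=0.5)   # stable plant
pushed = instantaneous_risk(ua=1.0, os_=2.0, sc=-0.5, src=0.0)  # under pressure
```

The same unsafe-act score yields a far higher risk level when operational stress is high and system condition is degraded, which is exactly the practitioner intuition the interaction term encodes.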

Why This Matters in Practice

This causal structure is what allows the system to move beyond alerting and into guidance. When a monitored area transitions from green to yellow or red, the system does not simply state that risk is high. It can explain why—for example, that rising production pressure combined with delayed maintenance has amplified the impact of recurring unsafe lifting behaviors—and point to corrective actions most likely to reduce risk quickly.

Just as importantly, the model can confirm when interventions work. If a targeted maintenance action or workflow adjustment reduces the contribution of a specific risk driver, the overall risk score reflects that improvement in near real time. Over time, this creates a learning loop in which the organization gains insight not only into where risk exists, but into which controls are most effective under which conditions.

By modeling safety as a dynamic system rather than a static checklist, structural equation modeling provides the analytical backbone that makes predictive injury prevention both credible and actionable. It allows EHS professionals and operations leaders to see risk forming, understand its drivers, and intervene with precision—before someone gets hurt.


From Analytics to Action

To be usable at the front line, the system translates complex analytics into a simple visual signal: green, yellow, or red. A green state indicates stable conditions with effective controls. Yellow signals elevated risk requiring timely attention and local correction. Red indicates a critical condition where immediate intervention is warranted.

Behind each color is a clear explanation of the dominant risk drivers and a prioritized set of suggested corrective actions. This allows supervisors and EHS professionals to respond quickly and decisively, without sorting through dashboards or debating which metric matters most.
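A minimal sketch of that translation, with illustrative thresholds and hypothetical driver names (the actual cut-points would be calibrated per site), might look like this:

```python
# Illustrative cut-points: at or below 0.3 is green, at or below 0.6
# is yellow, anything above is red. These are assumptions for the sketch.
RISK_THRESHOLDS = (("green", 0.3), ("yellow", 0.6))

def risk_signal(score, driver_contributions):
    """Map a normalized risk score in [0, 1] to a color signal plus
    the dominant drivers, so the alert carries its own explanation."""
    color = "red"
    for name, upper in RISK_THRESHOLDS:
        if score <= upper:
            color = name
            break
    # Rank drivers by contribution so supervisors see *why* first.
    dominant = sorted(driver_contributions.items(),
                      key=lambda kv: kv[1], reverse=True)
    return color, [name for name, _ in dominant[:2]]

color, drivers = risk_signal(
    0.52,
    {"unsafe_acts": 0.27, "operational_stress": 0.18,
     "delayed_maintenance": 0.07},
)
# color is "yellow"; drivers lists the two largest contributors
```

The point of the design is that the color never travels alone: the same call that classifies the risk also surfaces the ranked drivers behind it.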


Explainability and Learning by Design

A key design principle of the system is explainability. The model is intended to support professional judgment, not replace it. Users can see which factors are driving risk, how those factors have changed over time, and whether previous interventions successfully reduced exposure.

As new data is collected, the system recalibrates. When corrective actions reduce risk, the model learns from that success. When new patterns emerge, it adapts. Over time, this creates a feedback loop that strengthens both predictive accuracy and organizational learning.


Supporting the Human Side of Safety

Equally important is how the system interacts with people. The intent is not surveillance, but early recognition and prevention. Feedback mechanisms allow users to validate or challenge AI observations, improving trust and model accuracy simultaneously.

When elevated risk is detected, the system can also trigger targeted coaching prompts or short, task-specific learning reminders. In this way, technology reinforces safe behavior at the moment it matters most, supporting—not replacing—the conversations that are central to effective safety leadership.


The Potential Benefits of Predictive Injury Prevention

When implemented effectively, the benefits of this approach would be significant. Organizations would gain continuous risk awareness instead of periodic snapshots, enabling earlier and more precise intervention. Injuries can be reduced by addressing exposure before harm occurs rather than after.

Safety, maintenance, and operations gain a shared, data-driven view of system health, improving coordination and reducing friction between priorities. Leaders gain predictive insight instead of retrospective explanation, allowing resources to be focused where they will have the greatest impact. Most importantly, employees benefit from safer, more stable work environments where risks are recognized and controlled before someone gets hurt.

Predictive analytics do not replace the fundamentals of EHS practice—they strengthen them. By combining modern analytical tools with established safety principles, this approach offers a practical path toward fewer surprises, faster learning, and more reliable protection of people in complex manufacturing systems.


A Discussion About the Future of OSH with AI+Humans

I recently had the privilege of joining ISHN Magazine for a thought-provoking conversation on how AI is reshaping the work of Environmental, Health, and Safety professionals. Dave Johnson—longtime leader and past editor of ISHN—reached out after reading my article on building a “digital twin” of myself, and asked if I’d explore the implications of AI for the future of safety and work on his podcast.

For me, this wasn’t just another interview; it felt like a full-circle moment. As a young safety professional, I studied ISHN Magazine to absorb the wisdom of leaders who had spent decades in the field. Those pages were my classroom, my compass, and my early window into what excellence looked like. Now, decades into my own career, sitting across from Dave and talking about the frontier of AI, I couldn’t help but reflect on how far our profession—and the world around it—has come.

What strikes me most today is the paradox of experience: the more years I accumulate, the more I realize how much remains undiscovered. Every week still brings a new lesson, a new insight, a new perspective. And with AI entering the EHS landscape, that learning curve isn’t just continuing—it’s accelerating. We’re standing at the threshold of an era where human expertise and machine intelligence don’t compete; they amplify one another. The velocity of knowledge is about to shift from incremental to exponential.

AI won’t replace the human essence of what we do—but it will expose us to patterns we’ve never seen, risks we’ve never quantified, and possibilities we’ve never imagined. It challenges us not just to adapt, but to reinvent the way we think, decide, and lead. That’s where the real opportunity lies.

With that spirit in mind, Dave and I dove into a candid conversation about the present and future of our profession—where it’s headed, what might disrupt it next, and how we can shape a safer, smarter world of work.

Stay tuned. This journey is only beginning…

Click here to listen to the podcast: https://www.ishn.com/media/podcasts/5177-all-things-safety/play/140-an-ehs-pro-clones-himself-with-ai


Human + AI: The Power of Synergistic Collaboration

I was recently the subject of an interview by the American Society of Safety Professionals (ASSP) regarding my work with AI and occupational safety. In that conversation, we touched on one of the most important questions facing professionals today: What is the impact of AI on the future of people at work?

Would professions such as occupational safety be replaced by artificial intelligence? My opinion is clear—people will not be replaced by AI. Instead, a world-changing collaboration between people and AI is unfolding. This article explores that future: not one of replacement, but of synergistic collaboration—where human insight and machine intelligence create something far more powerful together than either could alone.


Defining Synergistic Collaboration

In the context of human–AI interaction, synergistic collaboration represents the next evolution of teamwork—one that transcends tools and transactions to create adaptive systems of shared intelligence.

“Synergistic collaboration in human–AI interaction refers to the co-adaptive process through which human cognitive, social, ethical, and resilient capacities—enabling effective functioning under uncertainty and ambiguity—are combined with AI’s computational, analytical, and predictive strengths, creating an integrated system whose joint performance exceeds what either agent could achieve alone.”
Adapted from Klein et al. (2004); Bradshaw et al. (2013); Song et al. (2024); refined by Brandon (2025)

This expanded view emphasizes human cognitive resilience—the ability to perform effectively through uncertainty and ambiguity—as a defining trait of successful human–AI teaming. It acknowledges that while machines excel at scale and precision, humans contribute meaning, adaptability, and ethical grounding. The synergy arises not from similarity, but from the complementary strengths of both forms of intelligence.


Leading in the Age of Shared Intelligence

Leading in the age of shared intelligence requires a profound shift in how leaders think about expertise, authority, and decision-making. No longer is intelligence centralized in a few senior decision-makers or confined within organizational boundaries. Today’s effective leaders operate in a dynamic ecosystem where human cognition, artificial intelligence, and organizational systems continuously interact to form a collective intelligence network. This era demands that leaders not only integrate digital tools but also cultivate an environment where data, insights, and human judgment converge fluidly.

In this new paradigm, leadership is defined less by command and control and more by curation, orchestration, and sense-making. Leaders must guide organizations to extract meaning from complexity, ensuring that technology enhances—not replaces—human insight. They foster systems that enable collaboration across disciplines, time zones, and levels of expertise, using AI and advanced analytics to augment pattern recognition and scenario foresight. At the same time, they safeguard ethical judgment, accountability, and the distinctly human dimensions of empathy, creativity, and moral reasoning.

The most successful leaders in this context demonstrate adaptive intelligence—the ability to learn, unlearn, and reframe perspectives at the speed of change. They understand that shared intelligence is not simply about connectivity, but about creating conditions for collective sensemaking—where humans and intelligent systems together identify risks, generate innovations, and make more resilient decisions. In this role, the leader acts as a translator between machine logic and human purpose, ensuring that organizational intelligence remains directed toward long-term sustainability, human well-being, and responsible performance.

Effective human–AI collaboration depends on properly calibrated trust—users must neither over-rely on AI outputs nor dismiss them prematurely. Over-trusting AI can lead to complacency, missed errors, or unsafe decisions, while under-trusting can result in ignoring valuable insights and underutilizing the technology. Trust calibration involves ongoing interaction, feedback, and experience, allowing users to develop an accurate sense of when AI recommendations are reliable and when human judgment should prevail. By fostering calibrated trust, organizations can maximize the benefits of AI while maintaining human oversight, ethical decision-making, and resilient performance in complex or uncertain environments.

This concept aligns with my Representative Definition of AI, which defines artificial intelligence as “the dynamic and iterative capacity of systems to sense, process, learn from, and act upon data in a manner that augments or emulates aspects of human cognition and decision-making—continuously refined through human oversight and contextual feedback.” (Refined by R. C. Brandon, 2025, integrating sources from ISO/IEC JTC 1/SC 42 Artificial Intelligence Standards, the European Commission’s AI Act, and the U.S. National Institute of Standards and Technology [NIST] AI Risk Management Framework).


Implications for EHS and Sustainability Leadership

For EHS and sustainability leaders, the age of shared intelligence redefines both the scale and the tempo of decision-making. The traditional model—where data was gathered, analyzed, and acted upon within fixed reporting cycles—is being replaced by real-time sensing, predictive analytics, and AI-augmented foresight. This creates the opportunity for organizations to identify weak signals of risk, anticipate emerging hazards, and intervene before adverse events occur. Yet, it also demands a higher level of system literacy and ethical awareness from leaders who must interpret and act within increasingly complex digital ecosystems.

In this environment, the EHS leader becomes not only a risk manager but also a systems integrator and intelligence steward. Success depends on the ability to connect human insight with digital capability—to blend field knowledge, operational data, and machine learning outputs into coherent, actionable intelligence. Shared intelligence enables adaptive control systems, autonomous monitoring, and context-aware safety management; but it is the leader’s role to ensure that these capabilities are used in service of human-centered performance and sustainable operations.

Moreover, shared intelligence reshapes how culture and accountability are built. Safety and sustainability excellence emerge not just from compliance systems, but from collective situational awareness—a shared understanding across people and machines of what is happening, what matters most, and what actions must be taken. Leaders must nurture organizational cultures that view data as a dialogue, not a verdict—where AI insights trigger inquiry, not blind acceptance. This balance between trust and verification, between digital insight and human sensemaking, defines the essence of leadership in this era.

Ultimately, the EHS and sustainability leader in the age of shared intelligence must serve as the ethical compass for intelligent systems—ensuring that automated decisions remain aligned with human values, regulatory integrity, and societal good. By mastering the orchestration of human and artificial cognition, these leaders will shape the next frontier of resilience: organizations that learn faster, adapt smarter, and sustain themselves responsibly in a world defined by interconnected intelligence.


Key Leadership Capabilities in Shared Intelligence Systems

As AI becomes a true collaborator rather than a mere tool, EHS and sustainability leaders will need to evolve their competencies to thrive in a world of shared intelligence. The following capabilities are emerging as essential for effectiveness and credibility in this new context:

1. Digital Fluency and System Sensemaking
Leaders must understand not just how AI tools operate, but how they think—how data is structured, how models learn, and where cognitive blind spots may arise. The ability to interpret machine-generated insights, challenge assumptions, and integrate those insights into complex human systems is now a critical leadership skill.

2. Cognitive Resilience and Adaptive Thinking
AI systems excel in structured environments; humans excel in uncertainty. Leaders who demonstrate cognitive resilience—maintaining clarity, adaptability, and ethical grounding amid ambiguity—will ensure that organizations remain balanced between algorithmic precision and human intuition.

3. Ethical and Responsible AI Stewardship
EHS and sustainability inherently deal with human welfare, environmental stewardship, and societal trust. Leaders must establish governance models for AI that emphasize transparency, fairness, and accountability, ensuring intelligent systems are aligned with the organization’s values and duty of care.

4. Human–Machine Collaboration Design
Effective collaboration between people and AI requires intentional design. Leaders should focus on workflows, interfaces, and decision structures that leverage each side’s strengths—AI for data synthesis and pattern recognition; humans for judgment, context, and empathy.

5. Learning Agility and Foresight Leadership
The velocity of technological change demands continuous learning and anticipation. The most effective leaders will cultivate curiosity, experiment with emerging tools, and proactively explore how shared intelligence can strengthen both safety and sustainability performance.


The Feedback Loop: Maximizing Human–AI Agency and Success

At the heart of effective human–AI collaboration lies a simple but powerful principle: the feedback loop. Just as high-reliability organizations rely on continuous learning cycles to improve safety and operational outcomes, human–AI systems thrive when information flows bidirectionally—between humans and intelligent systems—in a continuous, adaptive loop. This feedback loop is the mechanism that transforms interaction into true collaboration, allowing both humans and AI to co-evolve, adapt, and improve performance over time.

In this model, AI continuously generates insights, identifies patterns, and predicts potential outcomes, while humans provide contextual interpretation, ethical oversight, and domain expertise. Feedback occurs at multiple levels: humans adjust AI models through corrective input, reinforce desired behaviors through oversight, and calibrate trust based on observed system performance. Conversely, AI provides humans with timely alerts, scenario analyses, and decision-support recommendations that inform real-time action.

The feedback loop empowers human operators by enhancing agency—ensuring that humans remain in control of critical decisions rather than being passive recipients of machine output. It also strengthens AI effectiveness, because algorithms improve as they receive human insight and corrective guidance, creating a mutually reinforcing cycle of learning. This continuous interplay allows teams to respond to ambiguity, adapt to emerging hazards, and navigate complex environments more effectively than either humans or AI could alone.

In EHS and sustainability contexts, the feedback loop is particularly impactful. For example, predictive safety analytics can flag unusual equipment behavior, but it is the human practitioner who interprets the operational context, validates the alert, and determines the corrective action. The AI system then incorporates the human response, refining its predictive models for future scenarios. Over time, this cycle builds resilient, adaptive systems where both human judgment and AI intelligence are maximized.
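To make that cycle concrete, here is a deliberately simplified sketch in which a practitioner's confirmation or dismissal of an alert nudges the model's sensitivity. The fixed-step threshold adjustment is a stand-in for real model recalibration, but the bidirectional structure — AI alerts, human judges, model adapts — is the same.

```python
class AlertModel:
    """Toy model of human-in-the-loop trust calibration."""

    def __init__(self, threshold=0.5, step=0.02):
        self.threshold = threshold  # score above which the AI raises an alert
        self.step = step            # how strongly human feedback adjusts it

    def alert(self, score):
        return score > self.threshold

    def feedback(self, score, human_confirmed):
        """Incorporate the practitioner's judgment on a given signal."""
        if self.alert(score) and not human_confirmed:
            self.threshold += self.step   # false alarm: become less sensitive
        elif not self.alert(score) and human_confirmed:
            self.threshold -= self.step   # missed risk: become more sensitive

model = AlertModel()
model.feedback(0.55, human_confirmed=False)  # practitioner dismisses an alert
# threshold rises from 0.50 to 0.52
```

Each pass through the loop leaves the system slightly better calibrated to the operational context the practitioner can see and the model cannot.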

In short, the feedback loop is not just a technical design principle—it is the structural foundation for synergistic collaboration, ensuring that human insight and AI capability continuously inform, enhance, and amplify each other. Leaders who intentionally design and maintain these loops will unlock the full potential of shared intelligence, driving safer, more sustainable, and more innovative outcomes.


Training the Next Generation of Human–AI Collaborators

As AI increasingly supports decision-making, a key challenge emerges: ensuring that emerging professionals develop the deep insight and judgment traditionally acquired through years of immersive, problem-intensive work. Previous generations of EHS, safety, and sustainability professionals built their expertise through sustained engagement with complex, high-stakes problems—learning to recognize subtle patterns, anticipate emergent risks, and generate creative solutions under pressure. This cognitive “muscle memory” of intense mental effort was essential for expert judgment and decision-making.

In the AI era, organizations must develop methods to replicate or accelerate this depth of learning. Structured experiential training, scenario-based simulations, mentorship programs, and guided problem-solving exercises can help bridge the gap, allowing less experienced professionals to internalize patterns of reasoning and decision frameworks that historically took decades to acquire. By combining these human development methods with AI-driven insights, emerging professionals can build both the intuition of seasoned experts and the analytical leverage of intelligent systems, ensuring that the next generation is capable of fully effective human–AI collaboration.


Conclusion

Synergistic collaboration between humans and AI represents not a loss of professional identity, but an evolution of leadership itself. As I shared in my ASSP interview, the future of work—particularly in EHS and sustainability—will not be defined by machines replacing people, but by people and intelligent systems learning to think together. When guided by resilient, ethical, and visionary leadership, this collaboration has the power to elevate decision-making, protect workers and communities, and drive sustainable performance across industries.


Key References

  • Bradshaw, J. M., Hoffman, R. R., Woods, D. D., & Johnson, M. (2013). The Seven Deadly Myths of “Autonomous Systems.” IEEE Intelligent Systems, 28(3), 54–61.
  • Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a “team player.” IEEE Intelligent Systems, 19(6), 91–95.
  • Song, B., Zhu, Q., & Luo, J. (2024). Human-AI collaboration by design. Proceedings of the Design Society, 4, 2247–2256.
  • Brandon, R. C. (2025). Definition of Synergistic Collaboration in Human–AI Interaction. LeadingEHS.com.

Addendum 11/8/25

I have been thinking more about this subject, focusing on the technology that will be necessary to fully unlock the potential of Human+AI synergistic collaboration at scale and speed. Below is a brief primer on the tech needed and a possible timeline to availability.

The Emerging Human+AI Interface Frontier

As synergistic collaboration between humans and AI continues to evolve, the next wave of innovation will focus on deepening the connection between human cognition and artificial systems. Several emerging technologies are advancing this goal, each moving us closer to seamless, real-time collaboration.

1. Brain–Computer Interfaces (BCIs)
Within the next five to seven years, both invasive and non-invasive BCIs are expected to become viable for industrial and operational use. These interfaces will enable monitoring of cognitive load, fatigue, and situational awareness, allowing AI systems to dynamically adjust support levels or alert strategies. Early pilot programs are already underway in healthcare, defense, and high-risk industries.

2. Neuromorphic Computing
Neuromorphic hardware, designed to mimic the brain’s neural structure, is progressing rapidly. These systems allow ultra-fast, low-power processing that supports real-time decision-making—critical for safety-sensitive environments. Within the next decade, such architectures may underpin adaptive safety systems capable of interpreting human signals and environmental data simultaneously.

3. Adaptive Cognitive Modeling
Perhaps the most immediately applicable innovation, adaptive cognitive models use AI to understand and predict human intent, stress responses, and decision patterns. By learning from continuous interaction, these models will enable AI systems to complement rather than compete with human decision-making—reinforcing resilience, trust calibration, and shared situational awareness.

Within the next five to seven years, early industrial applications of brain–computer interfaces are expected, primarily in cognitive monitoring and fatigue management. Neuromorphic computing will likely enter operational use in this same period for real-time sensor analysis and adaptive safety controls. Adaptive cognitive modeling is already emerging and will see broad industrial deployment by the early 2030s.

Together, these developments mark the beginning of what may be called the “shared cognition era”—where human expertise and AI intelligence operate as a cohesive system. While true neural integration remains a decade or more away, the groundwork is being laid today. For EHS and sustainability leaders, this evolution underscores the importance of shaping AI not as a replacement for human judgment, but as a partner in enhancing safety, performance, and cognitive resilience.


Why Board Participation by Safety Professionals Strengthens Organizational Governance

The Future of Business — And Its Impact on Our Society — is Decided in the Boardroom

The next evolution of the safety profession won’t be written in compliance manuals or field reports—it will be shaped in boardrooms, where the choices that define the next century of business and work life are being made.

As technology accelerates and work becomes more complex, the moral and operational questions facing boards are no longer abstract—they touch the human condition itself: how we design systems, value people, and define progress.

If you’re a safety leader, you belong in those conversations. Your insight into risk, resilience, and human performance is vital to how organizations will navigate the age of AI, automation, and climate disruption.
If you’re a board member, bring that voice to the table. Demand it. Because the enterprises that thrive in the coming century will be those that understand this truth: the protection and advancement of human potential is the ultimate measure of success.

For too long, the work of occupational safety and health (OSH) professionals has been viewed primarily through an operational lens—focused on compliance, risk control, and protecting workers from harm. While these responsibilities remain essential, the modern enterprise increasingly recognizes that safety and health are not merely support functions—they are strategic levers for performance, resilience, and trust.

That’s why it’s time for more OSH leaders to take a seat at the table where these strategic levers are pulled: the boardroom.

Safety Leadership as Governance Leadership

When safety professionals participate in board-level work—whether as members, advisors, or contributors—they bring a systems-level perspective that connects operational reality to organizational intent. Safety and health leaders understand how risk actually manifests in daily work, how culture influences outcomes, and how governance decisions cascade into human performance.

Boards benefit greatly from that perspective. It grounds high-level strategy in practical understanding, ensuring that decisions about growth, innovation, and transformation are informed by the real conditions that determine whether an organization will execute safely and sustainably.

Case Study: Anticipating the AI Transformation in Workplace Safety


Several years ago, I was invited to advise the leaders of a technology company exploring how artificial intelligence could transform its service offerings. At the time, AI’s practical application in occupational health and safety was still emerging—but I could see its potential to fundamentally change how organizations prevent incidents, manage risk, and protect workers.

In collaboration with the executives and the technical team, I helped the company understand the real-world use cases and operational challenges that safety professionals face every day. I also made clear the economic realities of this market and helped them develop their business strategy to be ready to compete when the market emerged. Together, we mapped how future AI capabilities could support predictive analytics, exposure monitoring, and decision support within safety management systems.

That early investment in strategic foresight paid off. As AI technologies matured, the company was already positioned with a deep understanding of workforce safety needs, the ethical considerations surrounding data use in the workplace, and the economic landscape. Over the past two years, they have leveraged that head start to launch AI-driven solutions that are now helping organizations strengthen safety performance and compliance.

This experience underscored a powerful lesson: when occupational safety and health expertise is integrated into strategic planning—especially at the board level—it can shape innovation, guide responsible technology adoption, and directly influence an organization’s long-term success.

From Compliance to Strategic Value

Board participation elevates the OSH discipline beyond compliance and incident prevention. It reframes safety as a governance competency—central to enterprise risk management, ESG performance, and brand integrity.

Practitioners who serve in board or advisory capacities bring deep insight into the interdependence between safety, sustainability, and financial results. They help boards see that protecting people and advancing performance are not competing priorities, but mutually reinforcing ones. This perspective strengthens resilience, builds investor confidence, and enhances stakeholder value.

Translating Technical Expertise into Strategic Insight

One of the most important contributions OSH professionals can make at the board level is translating data and technical information into strategic insight. Boards don’t just need dashboards—they need meaning.

Safety leaders with executive experience know how to tell the story behind the metrics: what the indicators reveal about culture, capability, and system health. They can articulate leading indicators of risk in ways that guide oversight, inform capital allocation, and shape long-term priorities.

Strengthening Governance Through Human-Centered Thinking

Every organization ultimately depends on the capability, creativity, and well-being of its people. Boards that integrate safety and health expertise into their governance processes are better equipped to make decisions that reflect that reality.

OSH professionals bring a human-systems perspective that complements financial, legal, and operational expertise—reminding boards that the improvement of human performance and the preservation of human potential are the truest measures of organizational success.

Mutual Benefit: What Practitioners Gain

Participation on boards is not only valuable to the organizations served—it also strengthens the profession itself. Board engagement exposes OSH leaders to broader governance, financial, and strategic contexts, deepening their business acumen and expanding their influence. It cultivates cross-disciplinary understanding and enhances the ability to communicate safety’s value in the language of the boardroom.

In short, it develops the next generation of safety executives who can lead at the intersection of people, performance, and purpose.

Research Supporting Board Leadership and the Business Value of Safety

Organizations with strong safety cultures consistently outperform their peers—driven by boards that are informed, engaged, and aligned around the protection of people as a strategic priority. According to Delves, Bremen, and Huddleston (2022), effective risk management can support higher and more consistent shareholder returns and create a more sustainable business over the long term. While direct evidence linking the presence of a dedicated safety professional on the board to superior financial returns is still emerging, extensive research shows that investment in safety correlates strongly with improved business performance, risk mitigation, and brand reputation. Having a safety expert on the board ensures that decisions are made with a full understanding of their potential safety impacts, helping leadership balance innovation and performance with the responsibility to protect people and operations.

Empirical research reinforces this link between governance and safety outcomes. A study by Lixiong Guo (University of Mississippi) and Zhiyan Wang (Wingate University) analyzed injury and illness data from 377 parent firms between 1996 and 2008. Firms that transitioned to more independent boards experienced a 9–10% reduction in workplace injury and illness rates, largely due to increased safety investments and the inclusion of safety metrics in executive compensation. The researchers concluded that board independence—especially when aligned with long-term or socially responsible investors—enhances both corporate social performance and shareholder value.

Strong safety governance is therefore not just a compliance function—it is a strategic driver of performance and resilience. Boards that integrate health and safety expertise are better positioned to safeguard people, protect the organization’s reputation, and optimize long-term enterprise value.

A Call to the Profession

The future of occupational safety and health depends on our ability to connect what happens at the worksite to what happens in the boardroom. By participating in boards and governance structures—whether corporate, academic, or nonprofit—safety professionals can ensure that decisions made at the highest levels are informed by the realities of work, effective risk identification and management, and the principles of human performance.

When safety professionals serve on boards, they don’t just represent compliance—they represent the conscience of sustainable business. And that is leadership in its highest form.

Human Fallibility Meets System Design: Strategies for a Safer, Smarter Workplace

Prelude

While reviewing past presentations, I came across a human factors course I taught for BLR in a webinar a few years ago. It was an exciting opportunity, as human factors is an area I consider essential for creating safer workplaces, particularly in complex manufacturing operations. This work also coincided with my earning an instrument rating as a private pilot, a role where managing human error is a constant imperative. Exploring these concepts in depth inspired the development of an engaging presentation, which serves as the foundation for this article.

“We cannot change the human condition, but we can change the conditions under which humans work.”
—James Reason

And now, with the rise of AI, we have powerful new tools to change those conditions faster and smarter than ever before.


Introduction

Workplace accidents rarely stem from a single point of failure. More often, they are the result of a chain of errors, oversights, and latent conditions that align in just the wrong way. Human factors analysis provides a powerful framework for understanding how and why these errors occur—and more importantly, how to prevent them.

This article explores human error reduction, human factors psychology, and the Human Factors Analysis and Classification System (HFACS). It also outlines strategies organizations can apply to identify, control, and prevent workplace accidents, with real-world examples from aviation and chemical manufacturing.


Human Factors Overview

Human factors is the study of how humans interact with their environment, tools, systems, and organizations. It draws from psychology, engineering, ergonomics, and organizational science to design safer, more effective workplaces.

Key definitions include:

  • Human Factors (Murrell, 1965): The scientific study of the relationship between humans and their working environment.
  • Human Factors Psychology (Meister, 1989; Sanders & McCormick, 1993): The study of how humans accomplish work-related tasks in the context of human-machine systems, applying knowledge about human abilities and limitations to design tools, jobs, and environments.
  • Human Error (Reason, 1990): A failure in a planned sequence of mental or physical activities that does not achieve the intended outcome, without interference from outside chance.
  • Human Performance Improvement (DOE, 2009): The application of systems and models to reduce human error, manage controls, and improve outcomes by addressing the environment and conditions that shape behavior.

In short, human factors is about designing work to fit people, rather than expecting people to fit poorly designed systems.


Human Fallibility and Performance Modes

Human beings are inherently fallible. Even highly trained, competent professionals make mistakes—particularly under stress, distraction, or in poorly designed systems.

Research identifies three performance modes that influence error likelihood:

  1. Skill-Based Mode: Actions are automatic, such as driving a familiar route. Errors here are often slips or lapses in attention. Typical error rate: 1 in 1,000 to 1 in 10,000 actions.
  2. Rule-Based Mode: Workers follow learned rules to adapt to changing conditions. Errors often involve misinterpretation or applying the wrong rule to a situation. Typical error rate: about 1 in 100 to 1 in 1,000 decisions.
  3. Knowledge-Based Mode: Responses are required in unfamiliar or novel situations. Errors often stem from incomplete mental models or poor situational awareness. Typical error rate: as high as 1 in 2 to 1 in 10 decisions.

Understanding these modes matters because they allow leaders to predict when errors are likely and design interventions accordingly. For example, automation can reduce reliance on memory in skill-based tasks, training can reinforce rule-based responses, and simulations can prepare workers for rare knowledge-based scenarios.
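To make the mode rates above concrete, here is a minimal sketch that turns them into expected error counts for a given mix of work. The midpoint rates and the task counts are illustrative assumptions, not published figures:

```python
# Sketch: rough expected-error estimate per performance mode, using
# midpoints of the ranges quoted above. Task counts are hypothetical.

MODE_ERROR_RATES = {
    "skill": 1 / 5000,      # ~1 in 1,000 to 1 in 10,000 actions
    "rule": 1 / 500,        # ~1 in 100 to 1 in 1,000 decisions
    "knowledge": 1 / 5,     # ~1 in 2 to 1 in 10 decisions
}

def expected_errors(task_counts):
    """Expected error count for a mix of tasks, keyed by performance mode."""
    return {mode: n * MODE_ERROR_RATES[mode] for mode, n in task_counts.items()}

# A hypothetical shift: mostly routine actions, a few novel decisions
shift = {"skill": 2000, "rule": 100, "knowledge": 5}
print(expected_errors(shift))
```

Even this crude arithmetic makes the leadership point: a handful of knowledge-based decisions can carry more error exposure than thousands of routine actions, which is exactly where simulation and decision support earn their keep.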

When I was training to become an instrument-rated pilot, I quickly realized how easy it is to lose situational awareness—the overall perception a pilot has of their current position, tasks, and the requirements needed to safely operate the aircraft. At that time, I was flying with the older instrument panels commonly referred to as “steam gauges.” These round dials, with needles pointing to numbered scales for altitude, airspeed, and rate of climb or descent, provided essential information—but under pressure, interpreting them accurately and quickly could be difficult.

Over the years, aviation has shifted from these analog systems to digital “glass cockpits” that provide data-rich, graphic displays. These modern systems often include moving maps, integrated performance indicators, and more intuitive visuals, making it easier for pilots to interpret critical information in real time. Secondary systems—like iPads equipped with advanced navigation apps—add another layer of redundancy by displaying additional maps, alerts, and even voice cues. Together, these innovations significantly enhance situational awareness and allow pilots to recover it more quickly if lost.

Aviation accidents such as Eastern Air Lines Flight 401 (1972), where crew fixation on a landing gear indicator light led to unnoticed altitude loss and a crash, illustrate how human fallibility interacts with performance modes. Similarly, in chemical manufacturing, the 2005 BP Texas City refinery explosion was linked to rule-based and knowledge-based performance breakdowns under abnormal startup conditions.

The U.S. Department of Energy (DOE) has applied these principles extensively through its Human Performance Improvement (HPI) Handbook. The handbook translates concepts like error-likely situations, performance modes, and latent organizational weaknesses into practical tools for industrial operations. DOE facilities use HPI to anticipate where human limitations intersect with complex systems—such as nuclear operations, maintenance, and high-hazard chemical processes. By embedding practices like pre-job briefs, peer checks, and error precursors into daily work, HPI enables organizations to systematically reduce the frequency and severity of errors. This framework has proven so effective in the energy sector that many manufacturing and chemical companies have since adopted its methods as a model for operational reliability and safety.


The Human Factors Analysis and Classification System (HFACS)

HFACS (Human Factors Analysis and Classification System) was developed by Douglas Wiegmann and Scott Shappell for the U.S. Navy and Marine Corps, building on James Reason’s influential “Swiss Cheese Model” of accident causation. HFACS provides a comprehensive framework for understanding how human error contributes to accidents by identifying failures at multiple organizational and operational levels. Its structure allows investigators and safety professionals to look beyond immediate mistakes and uncover deeper systemic issues.

The framework categorizes failures into four primary levels:

  1. Organizational Influences – These are the overarching factors that shape how work is performed, including resource allocation, safety culture, management priorities, and organizational policies. Deficiencies at this level can create conditions that make errors more likely, such as insufficient staffing, inadequate training programs, or conflicting safety and production pressures.
  2. Unsafe Supervision – This level focuses on how supervisors and managers guide and control operations. It includes failures in planning, inadequate oversight, failure to correct known problems, and poor enforcement of procedures. For example, a supervisor who allows shortcuts or fails to provide timely feedback can inadvertently set the stage for unsafe acts.
  3. Preconditions for Unsafe Acts – This level addresses the situational, environmental, and personal factors that increase the likelihood of errors or violations. Examples include fatigue, stress, poor communication, ergonomic challenges, or high-pressure operational conditions. These preconditions often interact with organizational and supervisory factors to create a heightened risk environment.
  4. Unsafe Acts – These are the errors or violations committed by individuals, which are often the most visible contributors to accidents. HFACS differentiates between errors (slips, lapses, or mistakes due to knowledge or skill gaps) and violations (deliberate departures from rules or procedures). Understanding these distinctions helps organizations tailor interventions to prevent recurrence.

By examining incidents through the HFACS lens, organizations can systematically identify the root and systemic causes of accidents, rather than focusing solely on frontline human error. Its structured approach facilitates targeted corrective actions, training, and policy changes to reduce risk. While initially applied in aviation and nuclear power, HFACS has increasingly been adopted in complex industrial settings, including chemical manufacturing, where understanding human error is critical to operational safety.

In chemical manufacturing operations, HFACS provides a practical framework to analyze incidents ranging from process upsets to near-misses. By mapping errors to organizational influences, supervisory practices, preconditions, and unsafe acts, safety teams can identify patterns that contribute to risk, such as inadequate procedure enforcement, high workload periods, or recurring training gaps. Applying HFACS in these environments supports proactive interventions—modifying processes, improving supervision, enhancing training, and reinforcing safety culture—to prevent accidents before they occur. This approach aligns human factors analysis directly with operational excellence, helping to create safer, more resilient manufacturing systems.
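For teams applying HFACS to incident data, the four levels lend themselves to a simple tagging structure. The sketch below is a minimal illustration, with hypothetical findings, of how mapping each finding to a level makes cross-incident patterns countable:

```python
# Sketch: minimal HFACS-style tagging of investigation findings.
# Level names follow the framework above; the findings are hypothetical.

HFACS_LEVELS = (
    "Organizational Influences",
    "Unsafe Supervision",
    "Preconditions for Unsafe Acts",
    "Unsafe Acts",
)

def classify(findings):
    """Group (level, description) findings by HFACS level for pattern analysis."""
    grouped = {level: [] for level in HFACS_LEVELS}
    for level, description in findings:
        if level not in grouped:
            raise ValueError(f"Unknown HFACS level: {level}")
        grouped[level].append(description)
    return grouped

incident = [
    ("Organizational Influences", "production pressure over safety reviews"),
    ("Preconditions for Unsafe Acts", "operator fatigue during night startup"),
    ("Unsafe Acts", "skipped valve line-up verification step"),
]
report = classify(incident)
```

Aggregating these groupings across many incidents is what surfaces the systemic signals—recurring supervisory gaps or precondition clusters—rather than a pile of one-off frontline errors.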


Applications Beyond Accident Investigation

Human factors analysis is valuable in many contexts:

  • Accident Investigations: HFACS provides structure for identifying systemic and individual contributors to accidents.
  • Product & Equipment Design: Norman’s Human Design Principles emphasize simplicity, visibility, natural mapping, and design for error.
  • Litigation: Human factors analysis can clarify whether accidents stemmed from negligence, systemic flaws, or unforeseeable conditions.
  • Job & Procedure Design: Well-designed procedures reduce cognitive load and make safe actions the path of least resistance.

Strategies for Reducing Human Error

Preventing accidents requires more than training—it requires systems thoughtfully designed to anticipate, detect, and tolerate human fallibility. By layering multiple strategies, organizations can build robust defenses that reduce both the likelihood and impact of errors. Below are five complementary strategies, illustrated with examples from aviation and chemical manufacturing, along with practical guidance for application.

1. Error Elimination
The most effective approach is to remove hazards entirely, so that no mistake can activate them. This strategy focuses on designing systems where risk simply cannot exist.

  • Aviation: Modern fly-by-wire systems replace mechanical linkages with computerized controls, eliminating entire categories of potential pilot and maintenance errors. By removing direct mechanical dependencies, these systems prevent errors before they can arise.
  • Chemical Manufacturing: Replacing highly toxic solvents with safer alternatives removes both the exposure risk for operators and the potential for catastrophic chemical releases. By designing out the hazard, the system inherently becomes safer.

How to Apply:

  • Conduct a hazard audit to identify elements that can be removed or replaced.
  • Substitute high-risk materials, processes, or equipment with inherently safer alternatives.
  • Simplify system designs to remove unnecessary complexity that could introduce errors.

2. Error Occurrence Reduction
This strategy aims to make errors less likely through system design, standardization, and procedural controls. By reducing opportunities for mistakes, human performance becomes more reliable.

  • Aviation: Standardizing cockpit layouts across aircraft models helps pilots operate controls instinctively, reducing the chance of confusing throttle, flap, or landing gear levers.
  • Chemical Manufacturing: Hose connections that are keyed or color-coded prevent operators from connecting incompatible lines, thereby avoiding hazardous chemical mixing and process errors.

How to Apply:

  • Use standard operating procedures (SOPs) consistently across teams.
  • Design interfaces, tools, and controls to reduce complexity and the potential for confusion.
  • Apply ergonomics principles to ensure workspaces align with natural human behavior.

3. Error Detection
Even the best-designed systems cannot prevent all errors. Detection strategies focus on identifying mistakes quickly, allowing timely intervention before harm occurs.

  • Aviation: Takeoff configuration warnings alert pilots if flaps, trim, or other critical controls are incorrectly set, providing immediate feedback to prevent accidents.
  • Chemical Manufacturing: Distributed control systems continuously monitor process conditions, triggering alarms as parameters drift toward unsafe limits. Rapid detection enables operators to intervene before a process deviation escalates into a serious incident.

How to Apply:

  • Implement real-time monitoring systems for critical parameters.
  • Use alarms, indicators, or dashboards that provide clear, immediate feedback.
  • Regularly audit systems to ensure detection mechanisms are functioning correctly.
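The alarm logic behind detection can be sketched in a few lines. This is a deliberately simplified stand-in for real distributed control system behavior, with hypothetical high/high-high limits:

```python
# Sketch: threshold alarming for one monitored process parameter.
# Limits and parameter names are hypothetical; real DCS logic adds
# deadbands, rate-of-change alarms, and alarm prioritization.

def check_parameter(name, value, high, high_high):
    """Return an alarm level ('ok', 'high', or 'high-high') for one reading."""
    if value >= high_high:
        return "high-high"
    if value >= high:
        return "high"
    return "ok"

# Hypothetical reactor temperature limits (deg C)
status = check_parameter("reactor_temp", 155, high=150, high_high=160)
```

The design point is graded response: a "high" alarm buys the operator time to intervene before the "high-high" limit, where automatic shutdown logic would typically take over.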

4. Error Recovery
When errors occur, systems should allow safe correction. Recovery strategies give operators the ability to intervene or normalize conditions without catastrophic consequences.

  • Aviation: Pilots are trained to execute a “go-around” if a landing approach becomes unstable, making recovery a normal, supported action rather than forcing continuation under unsafe conditions.
  • Chemical Manufacturing: Pressure relief valves and emergency shutdown protocols allow systems to stabilize safely if process limits are exceeded, preventing explosions or uncontrolled releases.

How to Apply:

  • Establish clear recovery procedures and train personnel to execute them under stress.
  • Design fail-safe and fail-soft mechanisms that allow safe system operation after an error.
  • Simulate error scenarios regularly to ensure recovery measures are effective and well understood.

5. Error Consequence Reduction
Despite the best prevention and detection systems, some errors will occur. This strategy minimizes the severity of outcomes to protect people, equipment, and the environment.

  • Aviation: Redundant hydraulic, electrical, and navigation systems allow aircraft to continue safe operation even if individual components fail, reducing the risk of disaster.
  • Chemical Manufacturing: Secondary containment, such as spill basins or dikes, limits the spread of leaks, safeguarding workers and the surrounding environment from exposure or contamination.

How to Apply:

  • Incorporate redundancy in critical systems to maintain operation despite failures.
  • Install physical barriers, spill containment, or other engineering controls to limit consequences.
  • Conduct risk assessments to identify potential worst-case scenarios and design mitigation strategies accordingly.

Integrated Approach:
Together, these strategies create a layered “defense-in-depth” system. By anticipating human fallibility and designing operations to prevent, detect, recover from, and mitigate errors, organizations strengthen resilience and ensure safer operations in both aviation and chemical manufacturing.
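The layered model above can be sketched as an ordered chain of defenses, where an event is tested against each layer in turn. This is an illustration of the defense-in-depth idea, not a real risk model; the event fields are hypothetical:

```python
# Sketch: the five strategies as an ordered defense-in-depth chain.
# Each layer is a predicate over a hypothetical event dict; the chain
# reports the first layer that stops the event.

def run_defenses(event, layers):
    """Return the name of the layer that stops the event, or None if all fail."""
    for name, stops in layers:
        if stops(event):
            return name
    return None

layers = [
    ("elimination", lambda e: e.get("hazard_removed", False)),
    ("occurrence_reduction", lambda e: e.get("error_prevented", False)),
    ("detection", lambda e: e.get("detected", False)),
    ("recovery", lambda e: e.get("recovered", False)),
    ("consequence_reduction", lambda e: e.get("contained", False)),
]

near_miss = {"detected": True}
assert run_defenses(near_miss, layers) == "detection"
```

An event that returns None—one that slips past every layer—is precisely the scenario a layered program is designed to make vanishingly rare.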

Peer Checking: Lessons from Aviation

A useful example of human factors error reduction strategies used in aviation that I have personal experience with is the practice of readback between pilots and air traffic controllers. When a controller issues an instruction, the pilot is expected to repeat back the critical elements of that instruction. If the pilot’s readback is accurate, the controller responds with “readback correct, proceed.” This process ensures that instructions are both received and understood before being carried out, reducing the chance of miscommunication in high-stakes environments.

Although this is a very specific aviation example, the principle of peer checking has broad application in industrial settings. Having a second set of eyes involved in critical steps introduces additional perspectives on the situation, constraints, and potential risks. This shared verification not only strengthens accuracy but also brings in diverse risk awareness, making operations more resilient to error.
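The readback principle reduces to a simple verification: compare the critical elements echoed back against the instruction issued. The sketch below is illustrative; the field names and normalization are assumptions, not ATC phraseology standards:

```python
# Sketch: readback verification as element-by-element comparison.
# Field names and normalization are illustrative only.

def readback_correct(instruction, readback):
    """True when every critical element of the instruction was read back exactly."""
    def normalize(s):
        return s.strip().lower()
    return all(
        normalize(readback.get(key, "")) == normalize(value)
        for key, value in instruction.items()
    )

clearance = {"altitude": "5000", "heading": "270", "squawk": "4521"}
pilot = {"altitude": "5000", "heading": "270", "squawk": "4521"}
assert readback_correct(clearance, pilot)
```

The same closed-loop pattern applies to industrial peer checks: the check only counts when every critical element is independently confirmed, not merely acknowledged.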


Human Error Assessment and Reduction Technique (HEART)

While developing training for a client focused on human error reduction, I discovered the HEART tool. It serves as an excellent complement to the other human factors concepts covered in this article, enhancing our ability to assess and mitigate potential errors effectively.

The Human Error Assessment and Reduction Technique (HEART) is a well-established method for evaluating human reliability in operational systems. Developed by British ergonomist Jeremy Williams, HEART provides a structured framework to identify potential error points and quantify the likelihood of human error in a given task.

HEART relies on 38 recognized “error-producing conditions”, which cover a broad range of factors that can increase the probability of mistakes, including time pressure, complexity, inadequate training, or environmental stressors. By systematically assessing these conditions, organizations can better understand where human performance may be vulnerable and take proactive steps to mitigate risk.

This technique is highly adaptable and can be applied to key operations across industries, from chemical manufacturing to aviation. By mapping tasks against HEART’s error-producing situations, safety professionals can prioritize interventions, redesign procedures, improve training, and implement controls that enhance overall system reliability.

Ultimately, HEART serves as a powerful tool for turning human factors insights into practical safety improvements, helping organizations reduce errors and create safer, more resilient operational environments.
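HEART's quantification step is commonly expressed as the nominal task unreliability multiplied, for each applicable error-producing condition, by ((EPC multiplier − 1) × assessed proportion of effect + 1). The sketch below assumes that standard form; the numbers are illustrative, not drawn from Williams' published tables:

```python
# Sketch of HEART's quantification step, assuming the standard form:
#   HEP = nominal * product over EPCs of ((multiplier - 1) * proportion + 1)
# Multipliers and proportions below are hypothetical examples.

def heart_hep(nominal, epcs):
    """Assessed human error probability for one task.

    nominal: generic task unreliability for the task type
    epcs: list of (epc_multiplier, assessed_proportion) pairs
    """
    hep = nominal
    for multiplier, proportion in epcs:
        hep *= (multiplier - 1) * proportion + 1
    return min(hep, 1.0)  # probabilities cap at 1

# Hypothetical task: nominal unreliability 0.003 with two EPCs applied,
# e.g. time shortage at 40% effect and poor feedback at 50% effect
hep = heart_hep(0.003, [(11, 0.4), (3, 0.5)])
```

The practical value is in the sensitivity analysis: recomputing the HEP with one EPC removed shows which condition—time pressure, training gap, environment—buys the biggest reliability improvement if addressed.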


How AI Enhances Human Performance Across the Error Spectrum

AI strengthens human reliability not in just one area, but across the entire journey of work—from anticipating risks before they occur, to recognizing mistakes as they unfold, to helping workers recover quickly and limiting consequences.

Anticipating and Preventing Errors
AI excels at analyzing vast streams of operational data to spot patterns that humans might overlook. By flagging early warning signs—such as subtle process deviations, fatigue risks, or environmental triggers—AI shifts organizations from reactive problem-solving to proactive error prevention. In doing so, it creates space for humans to focus on higher-level decision-making rather than monitoring every detail.

Recognizing Errors in Real Time
Once work is underway, AI systems act like an extra set of eyes and ears. Real-time monitoring tools can detect anomalies as they develop, from equipment vibration signals to unusual process parameters, alerting workers before a small misstep escalates. This immediate feedback loop reduces the likelihood of latent errors compounding into serious incidents.
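A much-simplified stand-in for this kind of real-time monitoring is a rolling z-score check: flag any reading that deviates sharply from its own recent history. The window size and threshold below are assumptions for illustration:

```python
# Sketch: real-time anomaly flagging with a rolling mean/std z-score,
# a simplified stand-in for the AI monitoring described above.
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.threshold = threshold           # z-score alarm limit

    def observe(self, value):
        """Record a reading; return True when it looks anomalous."""
        anomalous = False
        if len(self.history) >= 5:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

det = AnomalyDetector()
readings = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 50.0]
flags = [det.observe(r) for r in readings]  # only the 50.0 spike flags
```

Production systems layer far more sophistication on top—multivariate models, learned seasonality, alarm suppression—but the core loop is the same: compare each reading against an expectation and alert before the deviation compounds.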

Supporting Recovery and Corrective Action
Even with strong systems in place, errors still occur. AI can help workers recover more effectively by offering context-specific guidance, such as step-by-step corrective procedures or decision support during unexpected events. Much like an experienced mentor, AI doesn’t just point out that something is wrong—it helps chart the safest path back to stability.

Mitigating Consequences When Things Go Wrong
Finally, when errors do slip through, AI contributes to reducing their impact. Automated shutdown systems, predictive containment measures, or rapid communication tools can limit harm to people, equipment, and the environment. By acting faster than human reflexes allow, AI provides an additional safeguard when every second counts.

In Summary:
AI doesn’t replace human judgment—it augments it. By predicting, detecting, correcting, and mitigating errors, AI strengthens system resilience, reduces risk, and supports safer, more reliable operations across complex industries like aviation, chemical manufacturing, and energy.


Conclusion

Human error is not a moral failing—it is a predictable outcome of human limitations interacting with complex systems. By studying these interactions through human factors analysis, organizations can build safer, more reliable, and more resilient operations.

Aviation’s adoption of HFACS and human performance tools shows what is possible when human fallibility is acknowledged and managed. Chemical manufacturing and other high-risk industries can—and must—apply the same lessons.

When leaders design systems that anticipate mistakes, build in detection and recovery, and minimize consequences, they protect workers, safeguard communities, and ensure sustainable performance.


Final Thought

We can’t eliminate human fallibility—but we can design systems that anticipate it, tolerate it, and prevent it from turning into tragedy.

That’s the real value of human factors analysis: creating workplaces where people and systems succeed together.


References and Resources

  • Reason, J. (1990). Human Error. Cambridge University Press.
  • Wiegmann, D., & Shappell, S. (2003). A Human Error Approach to Aviation Accident Analysis: The Human Factors Analysis and Classification System. Ashgate.
  • U.S. Department of Energy. Human Performance Improvement Handbook.
  • Sanders, M., & McCormick, E. (1993). Human Factors in Engineering and Design.
  • Norman, D. (2013). The Design of Everyday Things.
  • Williams, J. C. (1985). HEART – A proposed method for achieving high reliability in process operation by means of human factors engineering technology. In Proceedings of a Symposium on the Achievement of Reliability in Operating Plant, Safety and Reliability Society (SaRS). NEC, Birmingham.
  • Brandon, C. (2024, November 24). Harnessing AI to revolutionize safety and EHS management: A vision for the future. LeadingEHS.com. https://leadingehs.com/2024/11/24/harnessing-ai-to-revolutionize-safety-and-ehs-management-a-vision-for-the-future/
  • Zhao, Y., Zhang, J., & Li, X. (2024). Artificial intelligence for safety and reliability: A descriptive review. Journal of Cleaner Production, 396, 136365. https://doi.org/10.1016/j.jclepro.2023.136365
  • Khurram, M., Zhang, C., Muhammad, S., Kishnani, H., An, K., Abeywardena, K., Chadha, U., & Behdinan, K. (2025). Artificial intelligence in manufacturing industry worker safety: A new paradigm for hazard prevention and mitigation. Processes, 13, 1312. https://doi.org/10.3390/pr13051312


Integrating Safety, Health, and Purpose: The Evolution of Early Intervention in Industry — A Pioneer’s Perspective

An example of Early Injury Intervention: An Athletic Trainer & CEIS helps a maintenance employee improve his posture to decrease neck and shoulder fatigue from his tasks.

Leading a team of passionate, forward-thinking healthcare practitioners in the early days of workplace wellbeing was nothing short of exhilarating. We didn’t just follow the rules—we challenged them, exploring new ways to keep people safe, healthy, and thriving on the job. A recent conversation with a former colleague from those days reminded me of the impact of that work and inspired me to put my reflections into this article. For EHS leaders and practitioners committed to redefining occupational health, I hope it sparks fresh ideas and bold approaches.

After that conversation with my former colleague, I found myself contemplating the challenges we faced, solutions we developed, and memories from that time. What struck me most was not just what we accomplished, but what it meant—to me personally, to the young professionals I worked alongside, and to the organizations and workers we served. Ten years later, with the perspective of continued growth in the field of industrial safety and the evolution of early injury intervention into mainstream practice, I decided it was time to revisit and reinterpret that work. This article is my attempt to document why it mattered then, why it matters now, and what lessons it offers for the future.

For decades, safety professionals and occupational health providers worked in silos. Safety sought to prevent accidents, while medicine treated injuries once they had already occurred. The result was a costly and incomplete system where too many employees slipped through the cracks.

Early intervention filled this gap. By embedding healthcare expertise, educated on the environment, directly in the workplace, we transformed a reactive cycle into a proactive system—one that not only prevented injuries but also reshaped how organizations thought about their responsibility for worker well-being.

As Vice President of Operations at ATI Worksite Solutions, I had the privilege of leading a team of over 300 healthcare professionals who were pioneering a new approach to protecting workers in industrial environments. We recognized a gap between traditional reactive injury management and proactive prevention programs. Out of this realization, we helped advance a model of early intervention that has since reshaped the way companies think about occupational safety, health, and employee wellbeing.

From the Athletic Field to the Factory Floor

Our method was rooted in the idea of adapting the unique expertise of Certified Athletic Trainers to the workplace. These professionals—specially trained as Certified Early Intervention Specialists™ (CEIS™)—blended sports medicine, ergonomics, safety, psychology, and injury prevention science into one role. Instead of waiting for injuries to occur, they engaged workers in real time, on the floor, through encounters: one-on-one coaching, injury triage, safe lifting techniques, stretching programs, wellness education, and ergonomic improvements.

The impact was powerful. By being visible, approachable, and trusted, CEIS™ professionals fostered an early reporting culture where employees no longer felt they had to “work through” discomfort until it became a recordable injury. Instead, minor issues could be addressed before escalating. As we described in our paper:

“The frequent presence of the Athletic Trainer among the workforce builds rapport… employees begin to trust the Athletic Trainer as an expert in early intervention and realize they now have an effective alternative to working until the pain becomes disabling.”

Why Early Injury Intervention Works

Traditional EHS systems, while vital, often leave a timing gap. Reactive tools—like accident investigations—teach us after harm has occurred. Proactive tools—like training and audits—look toward the future. But what about the critical “now” moment, when pain first appears or risk is first observed? That’s where early intervention fits.

By responding within hours of discomfort emerging, early intervention specialists help workers reverse injury progression. Instead of weeks of rehabilitation and restricted duty, employees often returned to full function in days.

For example, when comparing two industrial sites—one with a full-time CEIS™ and another with only part-time coverage—Workers’ Compensation claim costs decreased by 50% in just four months at the full-time site. The results were so compelling that the part-time site quickly transitioned to full-time support.

Examples of How Early Injury Intervention Works

I’ll never forget a machinist at a major automotive manufacturer who came to our on-site specialist with early signs of shoulder strain. In a traditional system, he likely would have “worked through it” until the injury required medical treatment and lost time. Instead, within minutes he was coached through stretches, posture changes, and light task modifications. Within days he was back to full strength—never entering the workers’ comp system, never losing wages, and never missing a beat in his career.

Here is another example of how early intervention works in the industrial environment. An employee develops back pain from lifting boxes frequently throughout his 8-hour day. As soon as he feels pain or discomfort, he contacts the Athletic Trainer for an assessment—or the trainer spots his unusual body motion and asks about his level of discomfort. Either way, the Athletic Trainer has an encounter with the employee within hours of the onset of pain. The employee receives instruction on pre-established, job-specific stretches posted within his department, along with tips on safe lifting techniques and body mechanics, and a reminder that icing will help keep the discomfort from worsening. He may also be placed on protective limitations to prevent the condition from progressing to the point where he can no longer perform the essential functions of his job.

The Athletic Trainer follows up daily to monitor improvement or detect the need for referral to traditional healthcare professionals for formal assessment and treatment. If the employee is compliant with the recommendations, he should begin to feel better within 24–48 hours and should continue any job-method modifications, stretching exercises, and rest-cycle recommendations in the days or weeks ahead. Once the reversal of injury progression is verified, a pre-established strengthening regimen is introduced to increase the employee’s tolerance to the physical stressors of the job where the injury originated.

These examples illustrate the power of early intervention: small informed actions, taken early, prevent long-term harm for both employees and employers.

Agile Safety for a Changing Workplace

The workplaces of the 21st century are fast-moving, lean, and often stressful environments. Early intervention methods proved agile, adapting to real-time needs in a way that aligned with modern business pressures. They reduced costs rather than added to them, supported aging workforces, and met rising expectations for safe, meaningful work.

One global manufacturer of container glass found the results so striking that they expanded the program to multiple sites, including several in California where workers’ compensation costs were historically high. Within just 12 months, they saw a 92% decrease in workers’ compensation direct spend across their California sites.

The outcomes were clear:

  • Recordable injuries were reduced.
  • Claim frequency and severity were reduced.
  • Commercial health insurance costs decreased.
  • Health screening participation and employee morale increased.

In short, early intervention created safer workplaces, healthier employees, and measurable business value.

My Contributions to a Developing Field

While the clinical expertise resided in the healthcare professionals we placed on-site, my role as Vice President of Operations was to design, scale, and institutionalize early intervention as a discipline in occupational health and safety. This work not only delivered immediate results for clients but also helped establish a new professional field at the intersection of occupational medicine and safety.

Defining and Professionalizing the Model

I contributed directly to the evolution of the Certified Early Intervention Specialist™ (CEIS™) framework, helping shape how athletic trainers could adapt their sports medicine expertise into industrial environments. This included building training structures, compliance protocols, and integration pathways that blended clinical care, ergonomics, OSHA regulatory requirements, and EHS management.

Scaling and Delivering Results Across Industries

I guided the national expansion of early intervention programs into aerospace, automotive, glass, food, pharmaceuticals, and distribution sectors. Each implementation was tailored to unique operational risks, labor structures, and cultural expectations. Under my operational leadership, ATI Worksite Solutions transformed early intervention from a promising idea into a proven, repeatable, and scalable system that organizations could rely on for consistent performance.

Leveraging Deep Heavy Industry Experience

A critical differentiator of our success was the ability to integrate early intervention seamlessly into the realities of demanding industrial environments. Drawing on my extensive experience protecting employees in heavy industry settings—including aerospace, metals, glass, and chemical production—I ensured that our programs were not only clinically sound but also operationally relevant. This gave my team the advantage of deep contextual knowledge, enabling them to fully align their efforts with production demands, workforce dynamics, and safety-critical operations. The result was maximum impact in keeping employees safe, healthy, and able to contribute to the mission of their organizations.

Data-Driven Outcomes and ROI Validation

One of my central contributions was embedding rigorous measurement and business case validation into early intervention. I championed the use of performance metrics, client sentiment analysis, and return-on-investment analytics, showing clients tangible outcomes such as:

  • 50% reduction in Workers’ Compensation claim costs within four months at pilot sites.
  • 92% decrease in workers’ compensation spend across California operations for a global glass manufacturer.
  • Reductions in OSHA recordables, improved wellness participation, and measurable gains in morale and productivity.

By making outcomes visible, I ensured that early intervention was not seen as a “soft” wellness initiative, but as a core business strategy that aligned with corporate cost, productivity, and compliance goals.
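The arithmetic behind these outcome claims can be made explicit. The sketch below is purely illustrative: the dollar figures are hypothetical placeholders, not the client data cited above.

```python
def percent_reduction(before: float, after: float) -> float:
    """Percent decrease from a baseline value."""
    return (before - after) / before * 100

def simple_roi(savings: float, program_cost: float) -> float:
    """Net return per dollar spent on the program."""
    return (savings - program_cost) / program_cost

# Hypothetical figures for illustration only.
wc_before, wc_after = 500_000.0, 250_000.0  # annual WC claim costs
program_cost = 120_000.0                    # annual early-intervention spend

reduction = percent_reduction(wc_before, wc_after)
roi = simple_roi(wc_before - wc_after, program_cost)
print(f"{reduction:.0f}% reduction in claim costs, ROI {roi:.2f}x")
```

Framing results this way, as reductions against a baseline and return per program dollar, is what lets an early-intervention program be judged in the same terms as any other business investment.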

Integrating Occupational Safety and Medicine

Historically, safety and medicine operated in silos: safety professionals focused on preventing incidents, while occupational medicine treated injuries after the fact. My work demonstrated that the two could be seamlessly integrated through real-time, on-site intervention. This approach not only reduced injuries but also reshaped organizational culture—creating early reporting environments where prevention became part of daily operations.

Alignment with NIOSH Total Worker Health®

The philosophy behind early intervention aligned naturally with what later became mainstream under NIOSH’s Total Worker Health® (TWH) approach. TWH emphasizes policies, programs, and practices that integrate protection from work-related safety and health hazards with promotion of injury prevention, well-being, and overall worker health.

Our early intervention model anticipated this integration by:

  • Bringing together safety and health disciplines into one role at the point of work.
  • Promoting wellness alongside injury prevention, with CEIS™ specialists addressing nutrition, stretching, strengthening, and healthy lifestyle coaching.
  • Building a culture of health where employees trusted the system enough to report early, and organizations could respond in real time.

In many ways, the CEIS™ framework was an early embodiment of the Total Worker Health vision—creating workplaces that didn’t just prevent injuries but actively supported longer, healthier, and more satisfying careers.

Advancing the Profession and Thought Leadership

Beyond operations, I worked to establish early intervention as a recognized field. This included:

  • Authoring research and professional papers, including Early Injury Intervention Methods Bridge the Gap Between Reactive and Proactive Injury Prevention Systems (presented at ASSP’s SAFETY2015 in Dallas, TX).
  • Presenting at national forums and safety congresses, raising awareness and influencing adoption among EHS leaders.
  • Mentoring professionals and building interdisciplinary teams, ensuring the sustainability and growth of the CEIS™ model, a proven and reliable method to bring holistic wellbeing to industrial workforces.

Developing the Next Generation of Leaders

One of the greatest joys of my time leading ATI Worksite Solutions was not only advancing early intervention in industry, but also developing the remarkable healthcare practitioners who made it possible. Many were just beginning their careers when they joined our team. I had the privilege of mentoring them as they grew—not just as medical and occupational safety professionals, but as leaders capable of shaping entire workplace cultures.

We spent countless hours together learning how to translate clinical expertise into meaningful impact on the factory floor, how to build trust with industrial workers, and how to understand the unique pressures faced by plant leaders. I emphasized the importance of being reliable, capable, and indispensable to our client organizations. In short, we were not simply providing a service; we were becoming strategic partners in creating safer, healthier, and more productive workplaces.

The five years I spent leading operations at ATI Worksite Solutions were transformative—not only for the industry, but also for all of us on the team. Watching these young professionals flourish has been one of the most rewarding aspects of my career. Many have gone on to make significant contributions of their own. One especially proud example is the founding of the Industrial Athletic Trainers Society by a former member of our team—a powerful testament to the momentum and influence of this work.

In mentoring them, I learned as much as I taught: that the future of our profession depends on empowering the next generation with both technical expertise and the confidence to lead with purpose. Their success continues to multiply the impact of early intervention across industries, and their legacy is as much a part of this story as mine.

The Full Impact of a Holistic Approach: Creating Safer Jobs and Fostering Well-being

For decades, organizations treated occupational safety and health (OSH) and employee well-being as separate domains. Traditional OSH—what most simply call “safety”—was focused on health protection: preventing accidents, exposures, and injuries. Meanwhile, wellness and health promotion programs emphasized health enhancement: encouraging nutrition, exercise, and lifestyle improvements outside the core safety system.

The A-ha moment came when forward-thinking companies began asking: What if these two streams weren’t separate? What if safety and health promotion were integrated into a single, holistic system of care for employees?

The Power of Integration

Research by Loeppke et al. (2015) demonstrated that integrating health protection and health promotion delivers measurable benefits beyond what either can achieve alone. The two fields reinforce one another, creating a whole greater than the sum of its parts:

  • Improved safety outcomes: Workers who are healthier overall are less likely to suffer musculoskeletal injuries, fatigue-related errors, or chronic disease complications that impair safety.
  • Enhanced health outcomes: A safer workplace reduces physical and psychological stressors that otherwise undermine wellness efforts.
  • Cultural transformation: When organizations treat health and safety as inseparable, they create a Culture of Well-being—where employees feel valued not just for their output, but as whole people.

From Compliance to Culture

Traditional safety systems often emphasize compliance—meeting OSHA or regulatory standards. Integrated systems go beyond compliance to embed health and safety into daily work practices, leadership priorities, and organizational values.

  • A lockout-tagout procedure is health protection.
  • A stretching and ergonomics coaching program is health promotion.
  • But when combined—ensuring equipment is safe while also preparing employees’ bodies for safe operation—they form a seamless protective web that reduces both acute accidents and long-term strain.

This shift reframes the safety profession itself: from “preventing harm” to “creating the conditions for people to thrive.”

Holistic Impact on Business and Workers

An integrated approach creates impact on multiple levels:

For Workers:

  • Safer jobs with fewer injuries and exposures.
  • Reduced stress and fatigue, leading to higher engagement.
  • Improved long-term health trajectories, with lower risks of chronic disease.
  • A greater sense of purpose and belonging at work.

For Organizations:

  • Reduced workers’ compensation costs and healthcare spend.
  • Fewer lost workdays and restrictions, driving productivity gains.
  • Stronger employer brand and ability to attract/retain younger workers who expect healthy, mission-aligned workplaces.
  • Alignment with frameworks like NIOSH Total Worker Health®, which are increasingly viewed as best practice.

For Society:

  • Reduced burden on healthcare systems.
  • Longer, healthier working lives.
  • More sustainable organizations that balance profit with people and purpose.

A Culture of Well-being: The Endgame

The integration of OSH and health promotion doesn’t just prevent injuries—it creates workplaces that actively improve people’s lives. This is the true “A-ha moment”:

  • Safety protects.
  • Wellness empowers.
  • Together, they create well-being.

And well-being is what transforms organizations. Workers in these environments don’t just avoid harm—they gain health, resilience, and satisfaction. In turn, businesses gain loyalty, performance, and long-term sustainability.

As Loeppke et al. (2015) concluded, aligning health and safety strategies yields measurable benefits. But the impact extends further: it reshapes the relationship between workers and their employers into a partnership built on care, trust, and shared success.

A Vision for the Future of Work

Drawing on broader workforce megatrends, I also advanced the case that early intervention was part of a larger transformation in how we think about health at work. At conferences such as the OHSU Pain at Work Conference, I emphasized that:

  • Musculoskeletal conditions remain the leading cause of workplace disability.
  • A “Culture of Safety” must evolve into a “Culture of Wellbeing”—where prevention, well-being, and human sustainability are core to business.
  • Health and safety cannot remain in silos; they must be integrated into a Total Worker Health® approach that reflects changing employee expectations and the future of work.

And increasingly, those expectations are being shaped by younger generations entering the workforce. Millennials and Gen Z don’t just want a paycheck; they want work that is healthy, meaningful, and aligned with a greater mission than enriching shareholders. They expect employers to provide safe, sustainable, and satisfying workplaces where their well-being is valued and where the company’s purpose resonates with their own values. Early intervention, integrated health models, and Total Worker Health® speak directly to this demand—making organizations more attractive to top talent while strengthening long-term resilience.

In many ways, this work represented a paradigm shift. We demonstrated that occupational safety is not just about preventing catastrophic accidents, and occupational medicine is not just about treating injuries after they occur. The real power lies in the space in between, where early intervention can change the trajectory of worker health, safety performance, and organizational resilience.

Looking Ahead – A Call to Action

The evidence is clear: early injury intervention works. It reduces injuries, improves well-being, lowers costs, and builds trust between workers and organizations. It was an early model of the integrated approach that NIOSH has since advanced through Total Worker Health®—and it has never been more relevant.

Now is the time for forward-thinking companies to:

  • Break down silos between health, safety, and well-being.
  • Embed prevention and intervention into daily work, not just after-the-fact programs.
  • Invest in agile, human-centered systems that adapt to worker needs in real time.
  • Embrace Total Worker Health® as both a business strategy and a social responsibility.
  • Meet the expectations of new generations of workers, who want healthy workplaces that align with purpose, sustainability, and shared value.

The workplaces that thrive in the future will be those that go beyond compliance, beyond traditional safety, and embrace integrated models of health and performance. As leaders, we have both the tools and the responsibility to make work not only safer, but healthier, more meaningful, and more sustainable.

The next evolution of early injury intervention will be shaped by technology. AI-enabled health analytics, wearable sensors, and real-time ergonomics feedback will expand the reach of early intervention specialists and provide data-driven insights we could only imagine a decade ago.

Just as athletic trainers on the factory floor bridged the gap between safety and health, these technologies—when combined with human expertise—will allow organizations to predict and prevent risks with even greater precision. Companies that embrace this next frontier will not only protect their workforce but will also lead in building the sustainable, people-centered workplaces of the future.

The choice is in front of us: will we wait until employees are injured and disengaged, or will we build workplaces where people live longer, healthier, and more satisfied lives—while contributing to a mission bigger than themselves?

Ref: Loeppke, Ronald R., et al. (2015). “Integrating health and safety in the workplace: How closely aligning health and safety strategies can yield measurable benefits.” Journal of Occupational and Environmental Medicine, 57(5), 585–597.


Stop Work Authority: The Ultimate Expression of Safety, Empowerment, and Respect

In the realm of industrial safety, few practices are as powerful—or as underleveraged—as Stop Work Authority (SWA). When properly understood and embraced, SWA is far more than a compliance protocol. It becomes a declaration of trust, a signal of psychological safety, and a cornerstone of empowered leadership. It creates an organizational posture where safe outcomes are not coincidental or dependent on vigilance alone—they are systematically produced by a workforce that is engaged, alert, and authorized to act.

Stop Work Authority gives every employee—regardless of role or rank—the right and responsibility to halt operations if they believe something is unsafe. On paper, it’s a straightforward safety control. But in practice, its value is exponentially greater. Constructive use of SWA is one of the most powerful actions leadership can take to cultivate a workplace culture where safe work is not just possible—it’s expected and sustainable.

Psychological Safety in Action

Empowering people to speak up when something doesn’t feel right sends a clear message: you matter, your perspective counts, and your safety is non-negotiable. This goes to the heart of psychological safety, a vital ingredient in any high-performing safety culture. When workers feel safe to express concerns without fear of judgment or retaliation, they are more likely to intervene early, preventing incidents before they escalate.

When organizations genuinely support the use of SWA, they:

  • Remove fear of retaliation for stopping work, especially in situations involving higher-status personnel or production pressure.
  • Normalize open conversations about hazards and near-misses, building trust and transparency across teams.
  • Encourage feedback, learning, and mutual accountability, where each team member feels responsible for the wellbeing of others.

In these environments, employees don’t second-guess whether they’ll be supported—they know they will be. This psychological safety becomes a foundation for resilience and proactive behavior.

Empowerment Beyond Words

Too often, “empowerment” is a buzzword. SWA turns it into reality. It gives workers the authority and autonomy to exercise their judgment in the face of uncertainty. That’s not just about stopping work—it’s about starting ownership. It shifts the employee mindset from being a passive observer to an active steward of safety.

The impact of this empowerment includes:

  • Sharper hazard recognition skills across all levels of the workforce, as employees become more engaged in risk assessment.
  • A shift from top-down command to distributed leadership, where each worker becomes a safety leader in their own right.
  • Greater pride in personal and team-level safety performance, reinforcing the intrinsic value of safety as a shared goal.

When people are trusted, they tend to rise to the occasion. SWA proves that trust is a two-way street—one where respect, accountability, and shared vigilance move together.

A Management Philosophy, Not Just a Policy

SWA should never be treated as a back-pocket clause. It needs to be a visible and vocal part of the organization’s management philosophy. That means leaders must champion it—not just permit it. They must actively model its importance by praising appropriate use and showing zero tolerance for intimidation or reprisal.

When leadership embraces SWA constructively—even when the decision to stop is ultimately deemed unnecessary—they’re signaling something profound:

  • Safety matters more than speed, and no task is worth compromising a life.
  • Insight from the frontlines is valued and necessary for continuous improvement.
  • Learning is always more important than blame, especially in dynamic and high-risk environments.

This cultural posture builds resilience, not just compliance. It helps transform “policy on paper” into a living, breathing philosophy of care and courage.

Real-World Example: A Critical Stop in a Chemical Plant

This hypothetical example in a chemical operation setting illustrates the power of Stop Work Authority in protecting lives and operations.

During a routine maintenance turnaround, a group of outside contractors was issued a safe work permit to perform mechanical work on a heat exchanger in an isolated area. According to the permit, their work was restricted to bolt removal and external inspection only, with no internal entry or confined space activities authorized.

However, a sharp-eyed operations technician performing rounds noticed two contractors preparing to enter the exchanger with tools and headlamps—clearly intending to go inside. Recognizing the serious deviation from the permit scope, the technician immediately called a stop to the job, contacted the area supervisor, and ensured the team stood down.

Upon review, it was confirmed that the contractors had misunderstood the scope and believed the permit had been updated to include confined space entry for internal inspection activities. It had not. Thanks to the technician’s intervention:

  • A potential confined space entry without atmospheric testing, rescue planning, or lockout verification was avoided.
  • The contractors were retrained on site procedures and permit boundaries.
  • The permit system was reviewed for clarity, and a new validation checkpoint was added before work could begin.

Importantly, the technician was recognized during the next all-hands meeting—not just for stopping the job, but for embodying the company’s core values of vigilance, courage, and care for others. This is what effective SWA looks like: not punitive, not reactive, but constructive, preventative, and deeply human.

Tracking Stops to Foster Participation

One of the most effective ways to reinforce the value of Stop Work Authority is to track and review the number of jobs stopped over time. This simple metric provides real insight into how engaged the workforce is—and whether the culture truly supports intervention.

When approached constructively, tracking SWA usage:

  • Normalizes the act of stopping work, turning it into a routine and expected behavior rather than a rare exception.
  • Reveals trends and recurring hazards, helping leadership prioritize improvements in equipment, processes, or communication.
  • Encourages peer learning, especially when job stops are discussed in safety meetings or shared as case studies.

Crucially, these numbers should never be weaponized. High numbers don’t imply dysfunction, and low numbers don’t necessarily mean everything is safe. The goal is not to reduce the count, but to understand and support safe decision-making at the point of risk.

Tracking trends over time helps organizations answer critical questions like:

  • Are we seeing participation from all departments and shifts?
  • Are the same hazards prompting repeated stops?
  • Are supervisors recognizing and supporting SWA use consistently?

When used with integrity, this data becomes a leadership tool—not just a lagging indicator. It can help validate safety program effectiveness and uncover blind spots that formal audits might miss.
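The trend questions above lend themselves to a very simple review script. The sketch below is an illustration only, with hypothetical event records and department names, not a prescribed tool: it checks which departments have never recorded a stop and which hazards are prompting repeated stops.

```python
from collections import Counter

# Hypothetical stop-work event records: (department, shift, hazard).
events = [
    ("Packaging", "Day", "blocked egress"),
    ("Packaging", "Night", "blocked egress"),
    ("Maintenance", "Day", "missing lockout"),
    ("Furnace", "Day", "hot-surface contact"),
    ("Maintenance", "Night", "missing lockout"),
]

all_departments = {"Packaging", "Maintenance", "Furnace", "Warehouse"}

# Are we seeing participation from all departments?
participating = {dept for dept, _, _ in events}
silent = all_departments - participating

# Are the same hazards prompting repeated stops?
hazard_counts = Counter(hazard for _, _, hazard in events)
repeat_hazards = [hazard for hazard, n in hazard_counts.items() if n > 1]

print("Departments with no recorded stops:", sorted(silent))
print("Recurring hazards:", sorted(repeat_hazards))
```

Note what this review deliberately does not do: it never ranks departments by stop count or sets a target number, consistent with the principle that the goal is to understand participation, not to drive the count up or down.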

Building a Culture of Learning

Every time an employee uses Stop Work Authority, it’s a chance to learn. Maybe they identified a genuine hazard. Maybe they misunderstood a procedure. Either way, the organization wins—because the system gets smarter.

Encouraging SWA helps embed a continuous improvement mindset. Key takeaways can be reviewed, shared, and used to refine training, procedures, and communication channels. It transforms safety from a static compliance function into a dynamic, adaptive system powered by frontline intelligence.

Instead of seeing stops as interruptions, forward-thinking companies see them as investments in safer outcomes. Each stop becomes a data point, a dialogue, and a demonstration of the values that define a healthy safety culture.


Bottom line: Stop Work Authority is more than a safety mechanism. It’s a cultural multiplier. It empowers employees, demonstrates deep respect for their insight, and reinforces the psychological safety necessary for sustained excellence. When leadership supports its constructive use—and actively tracks and celebrates its application—SWA becomes a catalyst for safer work and stronger teams, every single day.


AI as a Strategic Partner: Building a Digital Twin to Advance Safety and Sustainability

Introduction

The challenges of leading Environmental, Health, and Safety (EHS) efforts across global, high-risk operations have never been more intense. Executive leaders today are asked to navigate volatile regulations, emerging technologies, ESG mandates, cultural transformation, and shifting workforce expectations—all while maintaining integrity, accountability, and performance.

After three decades serving in senior roles across chemicals, aerospace, metals, and occupational health, I confronted a core dilemma: how can one maintain consistent leadership presence and effectiveness when scope outpaces availability?

As my scope of influence expanded across global operations and governance platforms, I found myself wrestling with three critical questions that traditional leadership models struggled to fully answer:

  • How can I scale my leadership without diluting my impact?
  • How do I ensure consistent, values-driven messaging across time zones, sectors, and constituencies?
  • How can I future-proof knowledge transfer and mission alignment as we prepare the next generation of safety professionals?

In response, I made a bold move: I built an Executive Digital Twin. This is not a chatbot or novelty AI experiment. It’s a custom-trained leadership proxy designed to reflect my strategic voice, professional standards, and decision-making principles—extending the reach and responsiveness of an executive without diluting the values it represents.

I was uniquely well-positioned to create a professional digital twin because of the extensive documentation I’ve maintained throughout my career. A foundational resource was the body of articles I’ve published on my website, LeadingEHS.com, which capture not only my subject matter expertise but also my communication style and strategic perspective. My LinkedIn profile provided another deep well of information, offering detailed insights into my roles, achievements, and thought leadership over time.

Additionally, I drew heavily from historical records of my work in past professional positions—particularly my current role—where I’ve led high-impact initiatives, authored key EHS communications, and developed frameworks that have shaped organizational performance. My long-standing involvement with ASSP was equally valuable. From board-level governance contributions to volunteer leadership roles and national committee work, those records helped refine the twin’s understanding of professional association strategy, DEI leadership, and member engagement.

Finally, my published works and innovation papers—including articles like Essential Mistakes for EHS&S Leaders to Avoid—added further depth, enabling the twin to reflect not only what I’ve done, but how I think. This robust and diverse content ecosystem ensured that the digital twin isn’t just technically accurate—it’s authentically me in both tone and intent.


Why Build a Digital Twin?

Leadership is not just about presence—it’s about influence, clarity, and accessibility. With increasing demands from regulatory agencies, boards of directors, site operations, and nonprofit governance bodies, I needed a mechanism to:

  • Deliver timely, values-driven guidance across a dispersed global network
  • Scale institutional knowledge to support onboarding, succession planning, and daily operations
  • Model modern leadership by aligning digital innovation with ethical stewardship
  • Reduce response lag in fast-moving, high-consequence environments

My goal wasn’t to automate leadership—it was to amplify and protect it.


How It Was Built

The “Chetwin DT Executive Twin” was created using OpenAI’s GPT technology and meticulously engineered to mirror my operational logic, safety philosophy, and communication tone. Development followed a three-tiered methodology:

1. Strategic Knowledge Base

I curated and structured content from across my career to form a living knowledge engine. This included:

  • My 2025 vision for safety excellence and team alignment
  • Detailed leadership expectations for global EHS staff
  • My complete Director-at-Large platform for ASSP, reflecting governance and DEI commitments
  • Innovation frameworks like the Health and Safety Opportunity Index (HSOI) I developed to quantify risk reduction performance

These inputs became the foundation from which the twin draws real-time guidance, context, and scenario-based coaching.

2. Executive Persona Engineering

The twin was configured to deliver output with the same tone, structure, and discipline I bring to the boardroom or a plant floor. It tailors communications to varied audiences—CEOs, site leaders, regulators, and young professionals—while maintaining clarity, humility, and actionable candor.

It leverages analogies and coaching language that I frequently use—drawing from aviation, literature (history & fiction), economics, and organizational psychology—to connect abstract principles with personal meaning.

3. Continuous Intelligence Integration

The twin updates monthly to reflect real-time developments from ISO, NIOSH, UNGC, CDP, EcoVadis, and others. It incorporates strategic inputs from evolving trends in AI governance, sustainability metrics, PSM modernization, and total worker health. This ensures it’s not only historically accurate but also future-ready.
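The first two tiers can be pictured in miniature: curated source documents are combined with persona instructions into a single grounding prompt before each model call. This is a simplified illustration of the pattern, not the actual implementation; the file names and prompt text are hypothetical.

```python
# Illustrative sketch: assemble a persona-grounded system prompt
# from a curated knowledge base. All document names/contents are hypothetical.
knowledge_base = {
    "2025_vision.md": "Safety excellence means eliminating life-altering injuries...",
    "leadership_expectations.md": "Global EHS staff are expected to coach, not police...",
    "hsoi_framework.md": "The Health and Safety Opportunity Index quantifies risk reduction...",
}

PERSONA = (
    "You are an executive digital twin. Respond with the tone of a senior "
    "EHS leader: clear, humble, and actionable. Ground every answer in the "
    "reference material below, and say so when the material does not cover a topic."
)

def build_system_prompt(docs: dict[str, str]) -> str:
    """Combine persona instructions with the curated knowledge base."""
    sections = [f"--- {name} ---\n{text}" for name, text in sorted(docs.items())]
    return PERSONA + "\n\nReference material:\n\n" + "\n\n".join(sections)

prompt = build_system_prompt(knowledge_base)
```

The design point is that the persona and the knowledge base are maintained separately: the monthly content refresh described above can update the documents without touching the voice, and vice versa.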


What the Twin Does

The Executive Twin already delivers tangible value across a variety of high-impact functions—serving as both a force multiplier and a strategic safeguard in critical leadership workflows.

Strategic Memo Development: It produces high-quality drafts for safety directives, board communications, and performance alignment documents that reflect not just my voice, but the strategic intent behind each message. Whether it’s articulating a proactive risk management plan or framing a cultural transformation initiative, the twin ensures that messaging remains consistent, timely, and aligned with enterprise goals.

Coaching and Scenario Guidance: It acts as a coaching companion for site-level and functional leaders, using embedded frameworks like Hazard Recognition Plus (HRP), the hierarchy of controls, and stop work authority protocols. This ensures frontline leaders can get immediate, tailored guidance on how to approach complex EHS situations—whether they’re navigating compliance in emerging markets or managing workforce behavior during periods of operational stress.

Governance and Association Engagement: The twin is especially effective in supporting professional association and nonprofit leadership. It helps prepare for board meetings, develop DEI strategies, craft governance language, and engage with member constituencies. In my work with ASSP, for example, the twin draws from years of involvement to help translate emerging member needs into actionable strategies, bridging operational insight with organizational mission.

Crisis Support and Risk Communication: During high-pressure scenarios—such as critical incidents, public disclosures, or ESG-related concerns—the Executive Twin can generate rapid first-draft communications, talking points, and action frameworks. It supports swift decision-making without sacrificing tone, credibility, or regulatory alignment, helping leaders respond with both precision and empathy.

Its presence enables a level of responsiveness, consistency, and thought partnership that would be difficult to sustain manually: faster decision cycles, better clarity in execution, and higher confidence across stakeholder groups. It does not replace the judgment or accountability of executive leadership—it enhances it by providing a reliable, values-driven resource that’s always available to support clarity, continuity, and confidence in moments that matter most.


How I Have Put it to Use

I’ve already begun leveraging the Executive Twin to support several high-value leadership functions—and the results have been both practical and transformative. One of its most powerful applications is in deriving insights from EHS performance data. The twin helps translate complex trends into actionable narratives, articulated in my own professional voice, and tailored for operational teams who need both clarity and context.

It has also significantly accelerated the development of executive communications and reports, reducing the time required while enhancing both strategic depth and audience relevance. I use it to respond quickly and concisely to executive-level queries, ensuring that my answers are both accurate and aligned with my established tone and priorities.

In my day-to-day work, the twin serves as a trusted editor and reviewer, helping refine my written communications for content quality, readability, and brevity. It constructively critiques drafts to sharpen their effectiveness and ensure the messaging lands with the intended clarity and purpose.

Perhaps most compelling, the twin acts as an idea generator, offering fresh perspectives, innovative solutions, and emerging technologies that I might not otherwise have encountered as quickly. This creative augmentation makes it not only a strategic assistant but also a thought partner in navigating complex and evolving EHS challenges.


Why This Matters

We are entering a new chapter in the EHS profession—one defined not just by regulations and scorecards, but by our ability to lead with humanity at scale. In this chapter, the most effective leaders will be those who can bridge empathy and analytics, foresight and accessibility. It’s a moment where success is no longer measured solely by lagging indicators or compliance audits, but by how effectively we translate risk awareness into protective action, turn innovation into operational advantage, and embed equity and trust into every decision.

The Executive Digital Twin represents more than a technological step forward—it marks the emergence of a new leadership infrastructure. One that honors legacy knowledge and professional ethics while answering the calls of speed, transparency, and global inclusion. It enables leaders to be present without being stretched, to be responsive without being reactive, and to transfer wisdom without waiting for turnover.

To the EHS profession, this model sends a powerful signal: digital transformation is not a disruption to fear, nor a mandate from outside forces. It is a design space we can claim. We have the opportunity—and arguably the obligation—to shape these tools with our values, our voice, and our vision. In doing so, we don’t just keep pace with change—we lead it, on behalf of the people, communities, and futures we are called to protect.


Final Thought

I didn’t build this twin to replace myself. I built it to preserve and scale a leadership philosophy rooted in stewardship, strategic clarity, and human dignity. In times of crisis or transition, leaders must offer not only direction but resilience—and resilience today means being ready to respond across more domains than ever before.

The most important work in EHS still happens person-to-person, on the floor and in the field. But the thinking that supports it, the culture that enables it, and the strategy that sustains it—all of that can be scaled.

This is my Executive Twin. What might yours look like?


From Hazard to Control: Managing Combustible Dust in Real-World Operations

Introduction and Context

In a recent discussion among safety professionals that I was part of, the topic of combustible dust management came up in the context of demonstrating the business value of risk reduction. One of the central questions was how to determine what level of fugitive combustible dust accumulation is acceptable in industrial operations. This is a critical concern in industries such as metals, chemicals, wood products, and agriculture, where combustible dust is not a theoretical hazard but a real and persistent threat to safety and continuity.

“Combustible dust doesn’t give second chances. The time to understand it, control it, and engineer it out of your process is before it becomes a headline—or a memorial.”
— Chet Brandon

Given my background in managing combustible dust risks—including early career experience at Elkem Metals North America (formerly Union Carbide Ferro-Alloys)—this topic is both professionally significant and deeply personal. During my time there, I worked with a colleague who had lost his brother in a dust explosion at the very site where we then worked. That tragedy underscored the reality that these hazards are not abstract—they have lasting human consequences. Elkem had a long-standing legacy of handling explosive metal dusts, and I was fortunate to learn from some of the most seasoned process engineers and safety professionals in the industry. Many of them had first-hand experience with serious incidents and shared their hard-earned lessons with a sense of urgency and purpose. One meaningful outcome of that formative experience was co-authoring a technical paper on dust explosion hazards with one of those veteran process engineers—a resource I reference later in this post.

This article provides a detailed discussion on evaluating and managing combustible dust accumulation in industrial settings. It also highlights key insights from the paper “Prevention and Control of Dust Explosions in Industry” by Ronald C. Brandon and Dale S. Machir—a foundational reference for understanding the technical and practical aspects of dust explosion prevention.


Fundamentals of Dust Explosions

In my career, I’ve seen how easily a dust explosion can move from a theoretical risk to a devastating reality. In the paper I co-authored with Dale Machir—Prevention and Control of Dust Explosions in Industry—we focused on unpacking the fundamentals of how dust explosions occur and, more importantly, how they can be prevented through sound engineering and disciplined operational control. At the heart of every dust explosion are five essential conditions—what we often call the “Dust Explosion Pentagon.” These include the presence of a combustible dust, dispersion of that dust into a cloud, an oxidizing atmosphere (usually air), some level of confinement, and an ignition source. When those five elements align, the result can be a rapid, high-energy deflagration with the potential for serious injury, loss of life, and major facility damage.

One key point we emphasized in the paper is the dual-stage nature of most significant dust explosions. A small primary event—often inside a piece of equipment like a filter or transfer line—can loft layers of accumulated dust into the air, setting the stage for a much larger and far more dangerous secondary explosion. That’s where we see the real devastation. In several incidents I’ve studied or been briefed on, the secondary blast has traveled through process areas, igniting dust layers in multiple rooms or areas and escalating the damage exponentially. These are the scenarios that destroy buildings and take lives.

Understanding the materials involved is critical. Combustible dust hazards aren’t limited to wood or grain products; many metal dusts, plastic resins, and even food ingredients like powdered milk or sugar can pose explosion risks. What makes a dust dangerous is often its particle size, moisture content, and how easily it becomes airborne. Fine, dry particles with a high surface area ignite quickly and burn intensely. In the metals industry—where I spent much of my early career—we routinely worked with aluminum, chromium, manganese, and silicon dusts that could ignite with a static discharge or overheated surface if not properly managed. Later in my career I also managed materials in dust form such as welding fume, coal and related substances, graphite, and polymers.

Another important lesson I’ve learned through years of managing combustible dust risks across multiple facilities—often producing what appeared to be the same materials—is that no two dusts are truly alike. Even when the base material is chemically identical, variations in processing methods, particle size distribution, moisture content, and surface area can result in significant differences in ignition sensitivity, deflagration severity, and explosibility. I’ve seen firsthand how assumptions based on “similar” materials from different sites can lead to dangerously flawed risk assessments.

That’s why it is absolutely critical to characterize each site-specific dust using standardized testing protocols—most importantly, per ASTM E1226, which defines how to measure key parameters like the maximum explosion pressure (Pmax) and maximum rate of pressure rise (dP/dt). These aren’t just technical details—they’re the backbone of sound combustible dust hazard analysis. And to get valid, actionable data, the tests must be performed using a 20-liter sphere apparatus, which is the recognized standard test chamber for dust explosibility. While smaller devices (like the 1-liter Hartmann tube) may provide general indications, only the 20-liter sphere delivers the accuracy and repeatability needed for engineering design and safety decisions.
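To make the relationship between the 20-liter sphere measurement and the Kst design value concrete, the widely used cubic law normalizes the measured maximum rate of pressure rise by the test vessel volume. The sketch below is my own illustration, not a calculation from the paper, and the test reading in it is a hypothetical example value:

```python
# Illustrative sketch only (not from the Brandon/Machir paper): the cubic
# law used to normalize dust explosibility test results,
# Kst = (dP/dt)max * V^(1/3).

def kst_from_sphere(dp_dt_max_bar_per_s: float, vessel_volume_m3: float = 0.020) -> float:
    """Return Kst (bar*m/s) from the maximum rate of pressure rise
    measured in a vessel of the given volume (default: 20-liter sphere)."""
    return dp_dt_max_bar_per_s * vessel_volume_m3 ** (1.0 / 3.0)

# Hypothetical test reading: (dP/dt)max = 500 bar/s in the 20-liter sphere
print(round(kst_from_sphere(500.0), 1))  # prints 135.7
```

Because Kst is volume-normalized in this way, results from a properly run 20-liter sphere test can be carried directly into vent sizing and suppression design calculations.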

Using the correct test method is just as important as conducting the test itself. If you’re basing your hazard analysis or explosion protection strategy on unverified or low-fidelity data, you’re essentially flying blind. This is especially critical when designing deflagration venting, suppression systems, or isolation barriers—any of which depend on having a reliable Pmax and Kst value derived from the 20-liter sphere.

And this isn’t a one-time check-the-box task. Any significant change in the process—raw materials, equipment, throughput, or even housekeeping practices—should trigger a formal Management of Change (MOC) review. That review must include a reassessment of combustible dust hazards, and, where applicable, retesting of the dust to identify any shift in its ignition or explosion characteristics. I’ve seen cases where a small change in the grinding process or drying temperature created dust with dramatically more reactive properties.

Combustible dust management is not about memorizing the properties of a material—it’s about staying vigilant to how those properties can shift, and building systems that recognize, test, and respond accordingly. That vigilance starts with getting the science right.

In the paper, Dale and I discussed the importance of lab testing to characterize dust behavior. You can’t manage what you don’t understand. Parameters like Minimum Explosible Concentration (MEC), Minimum Ignition Energy (MIE), and Kst (a measure of explosion severity) tell you how easily your dust will ignite and how violently it will burn. A dust with a high Kst value—especially in the St-2 or St-3 range—demands aggressive controls, both in terms of equipment design and operational discipline.
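The St classes mentioned above follow conventional Kst cutoffs. As a quick sketch (my own illustration, using those standard class boundaries, not text from the paper):

```python
# A minimal sketch (my illustration, using the conventional Kst class
# boundaries) of how a measured Kst value maps to an explosion severity class.

def st_class(kst_bar_m_per_s: float) -> str:
    """Classify a dust by Kst (bar*m/s) into St-0 through St-3."""
    if kst_bar_m_per_s <= 0:
        return "St-0"   # non-explosible under test conditions
    if kst_bar_m_per_s <= 200:
        return "St-1"   # weak to moderate explosion severity
    if kst_bar_m_per_s <= 300:
        return "St-2"   # strong explosion severity
    return "St-3"       # very strong explosion severity

print(st_class(150))  # prints St-1
print(st_class(320))  # prints St-3
```

A dust landing in St-2 or St-3 is exactly the case where the aggressive equipment-design and operational controls described above become non-negotiable.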

Ignition sources often go unnoticed until it’s too late. It doesn’t take an open flame to trigger an event. I’ve seen or investigated situations where hot bearings, friction sparks, or even a spontaneous static discharge in a duct system led to an explosion. The risk is compounded in systems that transport dust over long distances—like pneumatic conveyors or central vacuum systems—because ignition can occur upstream and propagate rapidly downstream if isolation is inadequate.

The core message I’ve tried to reinforce throughout my career—and that Dale and I made clear in the paper—is that dust explosions are preventable. These aren’t random acts of nature. They are the result of known physical conditions that, if allowed to develop unchecked, will eventually align and cause harm. When we understand the science, commit to testing and analysis, and apply sound engineering principles, we can break the chain of events before it leads to an explosion. That’s the real takeaway: dust explosion prevention isn’t about luck—it’s about doing the work, understanding the hazards, and implementing reliable, system-based controls.


Assessing Acceptable Accumulation Levels

Determining an acceptable level of dust accumulation requires a risk-based approach that considers both the nature of the dust and the context in which it is present. The commonly cited benchmark—1/32 inch (0.8 mm) of dust over more than 5% of the floor area—is drawn from NFPA 654 and should be seen as a minimum action threshold, not a definitive safe limit. This threshold is particularly conservative for low-density dusts (bulk density <75 lb/ft³), which can reach explosible airborne concentrations even at relatively thin layer depths.

Key assessment factors include particle size distribution, moisture content, ignition sensitivity, and the tendency of the dust to become airborne. Fine, dry particles with low minimum ignition energy (MIE) pose the greatest threat; generally speaking, the finer the dust, the greater the ignition hazard. A rule of thumb I use is that any dust with a high fraction of particles passing 150 mesh (Tyler sieve) should be evaluated for combustibility. Additionally, environmental conditions such as airflow, vibration, and human or machine activity can disturb settled dust, making it easy to suspend in the air.

The surface on which dust accumulates also matters. Dust on elevated or hidden surfaces—beams, rafters, piping, light fixtures—can go unnoticed and uncleaned for extended periods. These areas pose a high risk for secondary explosions if the dust is later dislodged and ignited by an initial event. Risk increases significantly if fugitive dust is allowed to accumulate in or around ventilation ducts, enclosures, or process equipment.

To measure dust accumulation, a variety of tools and techniques are available. Depth gauges, dust combs, and rulers can provide quick field estimates of layer thickness. More precise methods include collecting a known volume of dust with a scoop and weighing it to determine bulk density. This allows for a more accurate estimation of the potential airborne dust concentration. Surface area calculations should be performed to determine what percentage of the total room or equipment area is affected. These measurements should be documented and repeated periodically to identify trends and determine the effectiveness of dust control measures.

Visual indicators can also play a role. For example, if the surface color is obscured or if a finger swipe leaves a clear trace in the dust, this often indicates that dust has exceeded the 1/32-inch threshold. However, visual cues are subjective and should not replace quantitative measurements when making decisions about hazard level.

A comprehensive Dust Hazard Analysis (DHA), as required by NFPA 652, integrates all these data points to provide a complete picture of the combustible dust risk in a facility. A DHA includes an inventory of all combustible dust-producing processes, identification of potential ignition sources, analysis of containment or confinement factors, and a review of current housekeeping and mitigation systems. From this, site-specific acceptable accumulation levels can be established and aligned with a hierarchy of controls to manage risk effectively.


Prevention and Mitigation Strategies

In our paper, Prevention and Control of Dust Explosions in Industry, Dale Machir and I emphasized that engineering controls are the foundation of any truly effective combustible dust prevention strategy. While administrative controls like training and housekeeping play important roles, they should be viewed as secondary layers of defense. The real key lies in how the system is designed from the start—because once dust escapes into the general work environment, the risk profile increases dramatically and your margin for error narrows.

Local exhaust ventilation (LEV) should be installed as close to the point of dust generation as possible. Capturing dust at the source—before it can migrate to surfaces or become airborne—is one of the most effective ways to prevent accumulation and dispersion. Too often, I’ve seen systems that rely on general dilution ventilation or distant collection points, which are simply not sufficient for high-risk dusts.

We also highlighted the critical role of deflagration venting, particularly in enclosed vessels or dust collectors. These vents are engineered to relieve internal pressure in the event of an explosion, minimizing structural damage and reducing the risk of injury to personnel. Proper vent sizing, duct routing, and positioning relative to occupied areas are essential design considerations. It’s not enough to simply install a vent panel and assume the system is protected—there must be a documented basis for its performance, ideally supported by dust testing data and compliant with NFPA standards.

For systems involving pneumatic transport of dust, particularly over long distances or between process zones, spark detection and suppression is another key layer of protection. These systems monitor for thermal anomalies or sparks within the conveying line and activate suppression agents or system shutdown protocols before ignition sources can reach a dust collector or silo—where an explosion could easily propagate.

Equally important is the design of the dust collection system itself. A properly engineered dust collector must do more than just move material—it must prevent leakage, control static buildup through proper grounding and bonding, and include explosion isolation mechanisms such as chemical suppression, fast-acting valves, or rotary airlocks. In addition, dust collectors must be equipped with appropriately sized explosion vent panels or flameless venting devices that are designed to safely relieve internal pressure during a deflagration. These vents should be located to discharge to a safe area away from personnel and critical equipment, and should be installed in accordance with the collector’s tested design parameters. Without proper venting, the collector becomes a pressure vessel during an explosion event—potentially turning a localized incident into a catastrophic failure.

A poorly maintained or incorrectly specified collector is one of the most common points of failure in dust control systems.

That said, housekeeping still matters—greatly. It must be frequent, systematic, and verifiable, especially in elevated or concealed areas where dust can settle unnoticed. However, we were clear in the paper that housekeeping should never be relied upon as the primary control strategy. If you’re constantly cleaning up dust that’s escaping from process equipment, that’s not a control measure—that’s an indicator of a failed system design. The goal should always be to prevent the dust from escaping in the first place, through effective containment, enclosure, and point-source control.

We called attention to the importance of training, maintenance, and change management as integral parts of the combustible dust control system. Workers need to understand not only the visible risks of accumulated dust but also the invisible ones—like static energy or poor duct routing. Maintenance teams should be trained to recognize compromised seals, worn gaskets, or ungrounded components. And critically, every process modification—whether it’s a change in material, a layout shift, or new equipment—should trigger a combustible dust impact review. If that review isn’t built into the facility’s Management of Change (MOC) system, you’re flying blind.

Finally, we emphasized that emergency management is an essential—yet often underdeveloped—component of a comprehensive combustible dust safety strategy. Too often, facilities focus heavily on engineering controls and housekeeping, while overlooking the need to prepare for the possibility of an event. We advocated for site-specific emergency response plans that recognize the unique characteristics of dust explosions, including the potential for secondary explosions, intense thermal energy, and blast pressures that can compromise structural integrity. We recommended that emergency response planning include coordination with local fire departments and emergency services, clear protocols for evacuation and accountability, and training for personnel on how to respond safely without inadvertently creating additional hazards—such as dispersing accumulated dust while attempting to intervene. A well-informed and well-rehearsed response team is critical because, in a dust incident, seconds matter. While prevention remains the primary objective, effective emergency preparedness is a necessary safeguard when all other layers of protection are tested.

If you’d like to dive deeper into the fundamentals and real-world lessons behind combustible dust prevention, I encourage you to read the paper Dale Machir and I co-authored on the topic. It covers both the science and the practical strategies we’ve applied in industrial environments. You can access the full paper here: Prevention and Control of Dust Explosions in Industry.

If you are looking to go even further in the understanding and effective management of combustible dust hazards, this book is highly authoritative: Dust Explosions in the Process Industries, by Rolf K. Eckhoff.

At the end of the day, preventing combustible dust explosions is not about any one control—it’s about integrating engineering, operations, and organizational discipline into a cohesive system. That was the core message of our paper, and it remains just as relevant today as when we first wrote it.


Spreading the Word on Combustible Dust Hazards and Control

I still provide training on dust explosion prevention and control to keep industrial organizations aware of the risk and the available control methods. When I started my career in the industrial safety field, dust explosion knowledge was limited among most safety professionals. My time with a company that had managed these hazards for decades gave me a wonderful opportunity to fully learn the science and the practical management actions for this unique area of knowledge. An example of the training I typically provide is given in the presentation at this link: Example Combustible Dust Training Material by Chet Brandon

Dale and I developed a demonstration device to visually illustrate the fundamental principles of dust explosions, inspired by the original Hartmann Tube used in early combustible dust testing. Our version was a simplified cylindrical chamber equipped with an ignition source and a method to uniformly disperse dust particles into a suspended cloud. What made it especially effective for educational purposes was the visual demonstration of explosion pressure—a thick paper “vent” sealed the top of the tube and would burst outward upon ignition, mimicking a deflagration vent panel. The simplicity of the setup makes it a powerful teaching tool, especially for audiences new to the topic. I still have the device today and occasionally use it during presentations to help drive home the physics behind combustible dust hazards. You can see a video of it in action in one of my presentations: Hartmann Demonstration by Chet Brandon

I’m also encouraged that the National Fire Protection Association (NFPA), through the development of NFPA 652: Standard on the Fundamentals of Combustible Dust, captured and codified many of the core principles that Dale and I—and many others in this field—have emphasized over the years. This standard provides a foundational framework for hazard identification, Dust Hazard Analysis (DHA), and risk-based control strategies, helping to bridge the gap between theory, practice, and regulation. I conducted training on this NFPA Combustible Dust standard several years ago. You can view that material here: The Combustible Dust Threat by Chet Brandon

Note: In 2024 the NFPA consolidated several of its combustible dust-related standards, including NFPA 652, into one new standard: NFPA 660, Standard for Combustible Dusts and Particulate Solids (2025). It was published in December 2024.


Conclusion and Practical Takeaways

Combustible dust hazards remain one of the most underestimated risks in industrial operations, yet they are entirely preventable with the right combination of technical understanding, disciplined controls, and organizational commitment. Over the years, I’ve seen firsthand the consequences of both strong and weak dust management systems—and the difference often comes down to leadership, culture, and follow-through. Prevention is not just a function of engineering and housekeeping—it’s a mindset that must be built into design, operations, maintenance, and emergency preparedness.

I’m proud to continue sharing this knowledge, not only because of where I started in this field, but because I’ve seen how powerful it is when teams truly understand the science and the stakes. We owe it to our workers, our communities, and our profession to treat combustible dust as the serious hazard it is—and to manage it with the same rigor we apply to any other major industrial risk.

Stay safe, stay informed—and don’t let dust settle on your safety program!
