Integrating Safety, Health, and Purpose: The Evolution of Early Intervention in Industry — A Pioneer’s Perspective

An example of early injury intervention: an Athletic Trainer and CEIS™ helps a maintenance employee improve his posture to reduce neck and shoulder fatigue from his tasks.

Leading a team of passionate, forward-thinking healthcare practitioners in the early days of workplace wellbeing was nothing short of exhilarating. We didn’t just follow the rules—we challenged them, exploring new ways to keep people safe, healthy, and thriving on the job. A recent conversation with a former colleague from those days reminded me of the impact of that work and inspired me to put my reflections into this article. For EHS leaders and practitioners committed to redefining occupational health, I hope it sparks fresh ideas and bold approaches.

After that conversation with my former colleague, I found myself contemplating the challenges we faced, solutions we developed, and memories from that time. What struck me most was not just what we accomplished, but what it meant—to me personally, to the young professionals I worked alongside, and to the organizations and workers we served. Ten years later, with the perspective of continued growth in the field of industrial safety and the evolution of early injury intervention into mainstream practice, I decided it was time to revisit and reinterpret that work. This article is my attempt to document why it mattered then, why it matters now, and what lessons it offers for the future.

For decades, safety professionals and occupational health providers worked in silos. Safety sought to prevent accidents, while medicine treated injuries once they had already occurred. The result was a costly and incomplete system where too many employees slipped through the cracks.

Early intervention filled this gap. By embedding healthcare professionals, educated on the work environment, directly in the workplace, we transformed a reactive cycle into a proactive system—one that not only prevented injuries but also reshaped how organizations thought about their responsibility for worker well-being.

As Vice President of Operations at ATI Worksite Solutions, I had the privilege of leading a team of over 300 healthcare professionals who were pioneering a new approach to protecting workers in industrial environments. We recognized a gap between traditional reactive injury management and proactive prevention programs. Out of this realization, we helped advance a model of early intervention that has since reshaped the way companies think about occupational safety, health, and employee wellbeing.

From the Athletic Field to the Factory Floor

Our method was rooted in the idea of adapting the unique expertise of Certified Athletic Trainers to the workplace. These professionals—specially trained as Certified Early Intervention Specialists™ (CEIS™)—blended sports medicine, ergonomics, safety, psychology, and injury prevention science into one role. Instead of waiting for injuries to occur, they engaged workers in real time, on the floor, through encounters: one-on-one coaching, injury triage, safe lifting techniques, stretching programs, wellness education, and ergonomic improvements.

The impact was powerful. By being visible, approachable, and trusted, CEIS™ professionals fostered an early reporting culture where employees no longer felt they had to “work through” discomfort until it became a recordable injury. Instead, minor issues could be addressed before escalating. As we described in our paper:

“The frequent presence of the Athletic Trainer among the workforce builds rapport… employees begin to trust the Athletic Trainer as an expert in early intervention and realize they now have an effective alternative to working until the pain becomes disabling.”

Why Early Injury Intervention Works

Traditional EHS systems, while vital, often leave a timing gap. Reactive tools—like accident investigations—teach us after harm has occurred. Proactive tools—like training and audits—look toward the future. But what about the critical “now” moment, when pain first appears or risk is first observed? That’s where early intervention fits.

By responding within hours of discomfort emerging, early intervention specialists help workers reverse injury progression. Instead of weeks of rehabilitation and restricted duty, employees often returned to full function in days.

For example, when comparing two industrial sites—one with a full-time CEIS™ and another with only part-time coverage—Workers’ Compensation claim costs decreased by 50% in just four months at the full-time site. The results were so compelling that the part-time site quickly transitioned to full-time support.

Examples of How Early Injury Intervention Works

I’ll never forget a machinist at a major automotive manufacturer who came to our on-site specialist with early signs of shoulder strain. In a traditional system, he likely would have “worked through it” until the injury required medical treatment and lost time. Instead, within minutes he was coached through stretches, posture changes, and light task modifications. Within days he was back to full strength—never entering the workers’ comp system, never losing wages, and never missing a beat in his career.

Here is another example of how early intervention works in the industrial environment. An employee develops back pain from lifting boxes frequently throughout his 8-hour day. As soon as he feels pain or discomfort, he contacts the Athletic Trainer for an assessment—or the trainer spots his unusual body motion and asks about his level of discomfort. Either way, the Athletic Trainer has an encounter with the employee within hours of the onset of pain. The employee is coached on pre-established, job-specific stretches posted within his department, along with safe lifting techniques and body mechanics, and is reminded that icing can keep the discomfort from worsening. He may also be placed on protective limitations so the condition does not progress to the point where he can no longer perform the essential functions of his job. The Athletic Trainer follows up daily to monitor improvement or to detect the need for referral to traditional healthcare professionals for formal assessment and treatment. If the employee is compliant with the recommendations, he should start to feel better within 24–48 hours and should continue the job method modifications, stretching exercises, and rest-cycle recommendations in the days or weeks that follow. Once the reversal of injury progression is verified, a pre-established strengthening regimen is introduced to increase the employee’s tolerance to the physical stressors of the job in which the injury originated.

These examples illustrate the power of early intervention: small informed actions, taken early, prevent long-term harm for both employees and employers.

Agile Safety for a Changing Workplace

The workplaces of the 21st century are fast-moving, lean, and often stressful environments. Early intervention methods proved agile, adapting to real-time needs in a way that aligned with modern business pressures. They reduced costs rather than added to them, supported aging workforces, and met rising expectations for safe, meaningful work.

One global manufacturer of container glass found the results so striking that they expanded the program to multiple sites, including several in California where workers’ compensation costs were historically high. Within just 12 months, they saw a 92% decrease in workers’ compensation direct spend across their California sites.

The outcomes were clear:

  • Recordable injuries were reduced.
  • Claim frequency and severity were reduced.
  • Commercial health insurance costs decreased.
  • Health screening participation and employee morale increased.

In short, early intervention created safer workplaces, healthier employees, and measurable business value.

My Contributions to a Developing Field

While the clinical expertise resided in the healthcare professionals we placed on-site, my role as Vice President of Operations was to design, scale, and institutionalize early intervention as a discipline in occupational health and safety. This work not only delivered immediate results for clients but also helped establish a new professional field at the intersection of occupational medicine and safety.

Defining and Professionalizing the Model

I contributed directly to the evolution of the Certified Early Intervention Specialist™ (CEIS™) framework, helping shape how athletic trainers could adapt their sports medicine expertise into industrial environments. This included building training structures, compliance protocols, and integration pathways that blended clinical care, ergonomics, OSHA regulatory requirements, and EHS management.

Scaling and Delivering Results Across Industries

I guided the national expansion of early intervention programs into aerospace, automotive, glass, food, pharmaceuticals, and distribution sectors. Each implementation was tailored to unique operational risks, labor structures, and cultural expectations. Under my operational leadership, ATI Worksite Solutions transformed early intervention from a promising idea into a proven, repeatable, and scalable system that organizations could rely on for consistent performance.

Leveraging Deep Heavy Industry Experience

A critical differentiator of our success was the ability to integrate early intervention seamlessly into the realities of demanding industrial environments. Drawing on my extensive experience protecting employees in heavy industry settings—including aerospace, metals, glass, and chemical production—I ensured that our programs were not only clinically sound but also operationally relevant. This gave my team the advantage of deep contextual knowledge, enabling them to fully align their efforts with production demands, workforce dynamics, and safety-critical operations. The result was maximum impact in keeping employees safe, healthy, and able to contribute to the mission of their organizations.

Data-Driven Outcomes and ROI Validation

One of my central contributions was embedding rigorous measurement and business case validation into early intervention. I championed the use of performance metrics, client sentiment, and return-on-investment analytics, showing clients tangible outcomes such as:

  • 50% reduction in Workers’ Compensation claim costs within four months at pilot sites.
  • 92% decrease in workers’ compensation spend across California operations for a global glass manufacturer.
  • Reductions in OSHA recordables, improved wellness participation, and measurable gains in morale and productivity.

By making outcomes visible, I ensured that early intervention was not seen as a “soft” wellness initiative, but as a core business strategy that aligned with corporate cost, productivity, and compliance goals.
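The percentage reductions cited above come down to simple before/after comparisons. As a minimal sketch of how such program metrics can be computed (the figures below are illustrative placeholders, not actual client data):

```python
from dataclasses import dataclass

@dataclass
class PeriodCosts:
    """Workers' Compensation direct spend for a comparison period (USD)."""
    baseline: float   # spend before the program
    current: float    # spend after the program

def percent_reduction(p: PeriodCosts) -> float:
    """Percentage decrease from baseline to current spend."""
    return (p.baseline - p.current) / p.baseline * 100.0

def simple_roi(savings: float, program_cost: float) -> float:
    """Savings returned per dollar of program cost (ROI multiple)."""
    return savings / program_cost

# Illustrative numbers only -- not actual client data.
site = PeriodCosts(baseline=400_000.0, current=200_000.0)
print(f"Claim cost reduction: {percent_reduction(site):.0f}%")   # 50%
print(f"ROI multiple: {simple_roi(200_000.0, 80_000.0):.1f}x")   # 2.5x
```

The value of the analysis was never in the arithmetic itself, but in agreeing with the client up front on the baseline period and cost categories so the before/after comparison was credible.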

Integrating Occupational Safety and Medicine

Historically, safety and medicine operated in silos: safety professionals focused on preventing incidents, while occupational medicine treated injuries after the fact. My work demonstrated that the two could be seamlessly integrated through real-time, on-site intervention. This approach not only reduced injuries but also reshaped organizational culture—creating early reporting environments where prevention became part of daily operations.

Alignment with NIOSH Total Worker Health®

The philosophy behind early intervention aligned naturally with what later became mainstream under NIOSH’s Total Worker Health® (TWH) approach. TWH emphasizes policies, programs, and practices that integrate protection from work-related safety and health hazards with promotion of injury prevention, well-being, and overall worker health.

Our early intervention model anticipated this integration by:

  • Bringing together safety and health disciplines into one role at the point of work.
  • Promoting wellness alongside injury prevention, with CEIS™ specialists addressing nutrition, stretching, strengthening, and healthy lifestyle coaching.
  • Building a culture of health where employees trusted the system enough to report early, and organizations could respond in real time.

In many ways, the CEIS™ framework was an early embodiment of the Total Worker Health vision—creating workplaces that didn’t just prevent injuries but actively supported longer, healthier, and more satisfying careers.

Advancing the Profession and Thought Leadership

Beyond operations, I worked to establish early intervention as a recognized field. This included:

  • Authoring research and professional papers, including Early Injury Intervention Methods Bridge the Gap Between Reactive and Proactive Injury Prevention Systems (presented at ASSP’s Safety 2015 in Dallas, TX).
  • Presenting at national forums and safety congresses, raising awareness and influencing adoption among EHS leaders.
  • Mentoring professionals and building interdisciplinary teams, ensuring the sustainability and growth of the CEIS™ model, a proven and reliable method to bring holistic wellbeing to industrial workforces.

Developing the Next Generation of Leaders

One of the greatest joys of my time leading ATI Worksite Solutions was not only advancing early intervention in industry, but also developing the remarkable healthcare practitioners who made it possible. Many were just beginning their careers when they joined our team. I had the privilege of mentoring them as they grew—not just as medical and occupational safety professionals, but as leaders capable of shaping entire workplace cultures.

We spent countless hours together learning how to translate clinical expertise into meaningful impact on the factory floor, how to build trust with industrial workers, and how to understand the unique pressures faced by plant leaders. I emphasized the importance of being reliable, capable, and indispensable to our client organizations. In short, we were not simply providing a service; we were becoming strategic partners in creating safer, healthier, and more productive workplaces.

The five years I spent leading operations at ATI Worksite Solutions were transformative—not only for the industry, but also for all of us on the team. Watching these young professionals flourish has been one of the most rewarding aspects of my career. Many have gone on to make significant contributions of their own. One especially proud example is the founding of the Industrial Athletic Trainers Society by a former member of our team—a powerful testament to the momentum and influence of this work.

In mentoring them, I learned as much as I taught: that the future of our profession depends on empowering the next generation with both technical expertise and the confidence to lead with purpose. Their success continues to multiply the impact of early intervention across industries, and their legacy is as much a part of this story as mine.

The Full Impact of a Holistic Approach: Creating Safer Jobs and Fostering Well-being

For decades, organizations treated occupational safety and health (OSH) and employee well-being as separate domains. Traditional OSH—what most simply call “safety”—was focused on health protection: preventing accidents, exposures, and injuries. Meanwhile, wellness and health promotion programs emphasized health enhancement: encouraging nutrition, exercise, and lifestyle improvements outside the core safety system.

The A-ha moment came when forward-thinking companies began asking: What if these two streams weren’t separate? What if safety and health promotion were integrated into a single, holistic system of care for employees?

The Power of Integration

Research by Loeppke et al. (2015) demonstrated that integrating health protection and health promotion delivers measurable benefits beyond what either can achieve alone. The two fields reinforce one another, creating a whole greater than the sum of its parts:

  • Improved safety outcomes: Workers who are healthier overall are less likely to suffer musculoskeletal injuries, fatigue-related errors, or chronic disease complications that impair safety.
  • Enhanced health outcomes: A safer workplace reduces physical and psychological stressors that otherwise undermine wellness efforts.
  • Cultural transformation: When organizations treat health and safety as inseparable, they create a Culture of Well-being—where employees feel valued not just for their output, but as whole people.

From Compliance to Culture

Traditional safety systems often emphasize compliance—meeting OSHA or regulatory standards. Integrated systems go beyond compliance to embed health and safety into daily work practices, leadership priorities, and organizational values.

  • A lockout-tagout procedure is health protection.
  • A stretching and ergonomics coaching program is health promotion.
  • But when combined—ensuring equipment is safe while also preparing employees’ bodies for safe operation—they form a seamless protective web that reduces both acute accidents and long-term strain.

This shift reframes the safety profession itself: from “preventing harm” to “creating the conditions for people to thrive.”

Holistic Impact on Business and Workers

An integrated approach creates impact on multiple levels:

For Workers:

  • Safer jobs with fewer injuries and exposures.
  • Reduced stress and fatigue, leading to higher engagement.
  • Improved long-term health trajectories, with lower risks of chronic disease.
  • A greater sense of purpose and belonging at work.

For Organizations:

  • Reduced workers’ compensation costs and healthcare spend.
  • Fewer lost workdays and restrictions, driving productivity gains.
  • Stronger employer brand and ability to attract/retain younger workers who expect healthy, mission-aligned workplaces.
  • Alignment with frameworks like NIOSH Total Worker Health®, which are increasingly viewed as best practice.

For Society:

  • Reduced burden on healthcare systems.
  • Longer, healthier working lives.
  • More sustainable organizations that balance profit with people and purpose.

A Culture of Well-being: The Endgame

The integration of OSH and health promotion doesn’t just prevent injuries—it creates workplaces that actively improve people’s lives. This is the true “A-ha moment”:

  • Safety protects.
  • Wellness empowers.
  • Together, they create well-being.

And well-being is what transforms organizations. Workers in these environments don’t just avoid harm—they gain health, resilience, and satisfaction. In turn, businesses gain loyalty, performance, and long-term sustainability.

As Loeppke et al. (2015) concluded, aligning health and safety strategies yields measurable benefits. But the impact extends further: it reshapes the relationship between workers and their employers into a partnership built on care, trust, and shared success.

A Vision for the Future of Work

Drawing on broader workforce megatrends, I also advanced the case that early intervention was part of a larger transformation in how we think about health at work. At conferences such as the OHSU Pain at Work Conference, I emphasized that:

  • Musculoskeletal conditions remain the leading cause of workplace disability.
  • A “Culture of Safety” must evolve into a “Culture of Wellbeing”—where prevention, well-being, and human sustainability are core to business.
  • Health and safety cannot remain in silos; they must be integrated into a Total Worker Health® approach that reflects changing employee expectations and the future of work.

And increasingly, those expectations are being shaped by younger generations entering the workforce. Millennials and Gen Z don’t just want a paycheck; they want work that is healthy, meaningful, and aligned with a greater mission than enriching shareholders. They expect employers to provide safe, sustainable, and satisfying workplaces where their well-being is valued and where the company’s purpose resonates with their own values. Early intervention, integrated health models, and Total Worker Health® speak directly to this demand—making organizations more attractive to top talent while strengthening long-term resilience.

In many ways, this work represented a paradigm shift. We demonstrated that occupational safety is not just about preventing catastrophic accidents, and occupational medicine is not just about treating injuries after they occur. The real power lies in the space in between, where early intervention can change the trajectory of worker health, safety performance, and organizational resilience.

Looking Ahead – A Call to Action

The evidence is clear: early injury intervention works. It reduces injuries, improves well-being, lowers costs, and builds trust between workers and organizations. It was an early model of the integrated approach that NIOSH has since advanced through Total Worker Health®—and it has never been more relevant.

Now is the time for forward-thinking companies to:

  • Break down silos between health, safety, and well-being.
  • Embed prevention and intervention into daily work, not just after-the-fact programs.
  • Invest in agile, human-centered systems that adapt to worker needs in real time.
  • Embrace Total Worker Health® as both a business strategy and a social responsibility.
  • Meet the expectations of new generations of workers, who want healthy workplaces that align with purpose, sustainability, and shared value.

The workplaces that thrive in the future will be those that go beyond compliance, beyond traditional safety, and embrace integrated models of health and performance. As leaders, we have both the tools and the responsibility to make work not only safer, but healthier, more meaningful, and more sustainable.

The next evolution of early injury intervention will be shaped by technology. AI-enabled health analytics, wearable sensors, and real-time ergonomics feedback will expand the reach of early intervention specialists and provide data-driven insights we could only imagine a decade ago.

Just as athletic trainers on the factory floor bridged the gap between safety and health, these technologies—when combined with human expertise—will allow organizations to predict and prevent risks with even greater precision. Companies that embrace this next frontier will not only protect their workforce but will also lead in building the sustainable, people-centered workplaces of the future.
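To make the wearable-sensor idea concrete, here is a hypothetical sketch of real-time ergonomics feedback: flagging sustained awkward trunk posture from a stream of sensor readings so a specialist can check in before discomfort becomes injury. The threshold, window size, and field names are all assumptions for illustration; a real program would calibrate them clinically.

```python
from statistics import mean

# Hypothetical thresholds -- a real program would calibrate these clinically.
FLEXION_LIMIT_DEG = 45.0   # sustained trunk flexion considered high risk
WINDOW_SIZE = 5            # consecutive readings that define "sustained"

def sustained_flexion_alerts(angles_deg: list) -> list:
    """Return start indices of windows whose average trunk flexion
    exceeds the limit -- i.e., the moments an early-intervention
    specialist might be prompted to check in with the worker."""
    alerts = []
    for i in range(len(angles_deg) - WINDOW_SIZE + 1):
        window = angles_deg[i:i + WINDOW_SIZE]
        if mean(window) > FLEXION_LIMIT_DEG:
            alerts.append(i)
    return alerts

# Simulated sensor stream (degrees of trunk flexion per reading).
stream = [10, 12, 50, 55, 60, 58, 52, 15, 12, 10]
print(sustained_flexion_alerts(stream))
```

The point of such a sketch is the division of labor it implies: the sensor flags the pattern, but the human specialist supplies the judgment, rapport, and coaching that make the intervention effective.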

The choice is in front of us: will we wait until employees are injured and disengaged, or will we build workplaces where people live longer, healthier, and more satisfied lives—while contributing to a mission bigger than themselves?

Ref: Loeppke, R. R., et al. (2015). “Integrating health and safety in the workplace: How closely aligning health and safety strategies can yield measurable benefits.” Journal of Occupational and Environmental Medicine, 57(5), 585–597.


Stop Work Authority: The Ultimate Expression of Safety, Empowerment, and Respect

In the realm of industrial safety, few practices are as powerful—or as underleveraged—as Stop Work Authority (SWA). When properly understood and embraced, SWA is far more than a compliance protocol. It becomes a declaration of trust, a signal of psychological safety, and a cornerstone of empowered leadership. It creates an organizational posture where safe outcomes are not coincidental or dependent on vigilance alone—they are systematically produced by a workforce that is engaged, alert, and authorized to act.

Stop Work Authority gives every employee—regardless of role or rank—the right and responsibility to halt operations if they believe something is unsafe. On paper, it’s a straightforward safety control. But in practice, its value is exponentially greater. Constructive use of SWA is one of the most powerful actions leadership can take to cultivate a workplace culture where safe work is not just possible—it’s expected and sustainable.

Psychological Safety in Action

Empowering people to speak up when something doesn’t feel right sends a clear message: you matter, your perspective counts, and your safety is non-negotiable. This goes to the heart of psychological safety, a vital ingredient in any high-performing safety culture. When workers feel safe to express concerns without fear of judgment or retaliation, they are more likely to intervene early, preventing incidents before they escalate.

When organizations genuinely support the use of SWA, they:

  • Remove fear of retaliation for stopping work, especially in situations involving higher-status personnel or production pressure.
  • Normalize open conversations about hazards and near-misses, building trust and transparency across teams.
  • Encourage feedback, learning, and mutual accountability, where each team member feels responsible for the wellbeing of others.

In these environments, employees don’t second-guess whether they’ll be supported—they know they will be. This psychological safety becomes a foundation for resilience and proactive behavior.

Empowerment Beyond Words

Too often, “empowerment” is a buzzword. SWA turns it into reality. It gives workers the authority and autonomy to exercise their judgment in the face of uncertainty. That’s not just about stopping work—it’s about starting ownership. It shifts the employee mindset from being a passive observer to an active steward of safety.

The impact of this empowerment includes:

  • Sharper hazard recognition skills across all levels of the workforce, as employees become more engaged in risk assessment.
  • A shift from top-down command to distributed leadership, where each worker becomes a safety leader in their own right.
  • Greater pride in personal and team-level safety performance, reinforcing the intrinsic value of safety as a shared goal.

When people are trusted, they tend to rise to the occasion. SWA proves that trust is a two-way street—one where respect, accountability, and shared vigilance move together.

A Management Philosophy, Not Just a Policy

SWA should never be treated as a back-pocket clause. It needs to be a visible and vocal part of the organization’s management philosophy. That means leaders must champion it—not just permit it. They must actively model its importance by praising appropriate use and showing zero tolerance for intimidation or reprisal.

When leadership embraces SWA constructively—even when the decision to stop is ultimately deemed unnecessary—they’re signaling something profound:

  • Safety matters more than speed, and no task is worth compromising a life.
  • Insight from the frontlines is valued and necessary for continuous improvement.
  • Learning is always more important than blame, especially in dynamic and high-risk environments.

This cultural posture builds resilience, not just compliance. It helps transform “policy on paper” into a living, breathing philosophy of care and courage.

Real-World Example: A Critical Stop in a Chemical Plant

This hypothetical example in a chemical operation setting illustrates the power of Stop Work Authority in protecting lives and operations.

During a routine maintenance turnaround, a group of outside contractors was issued a safe work permit to perform mechanical work on a heat exchanger in an isolated area. According to the permit, their work was restricted to bolt removal and external inspection only, with no internal entry or confined space activities authorized.

However, a sharp-eyed operations technician performing rounds noticed two contractors preparing to enter the exchanger with tools and headlamps—clearly intending to go inside. Recognizing the serious deviation from the permit scope, the technician immediately called a stop to the job, contacted the area supervisor, and ensured the team stood down.

Upon review, it was confirmed that the contractors had misunderstood the scope and believed the permit had been updated to include confined space entry for internal inspection activities. It had not. Thanks to the technician’s intervention:

  • A potential confined space entry without atmospheric testing, rescue planning, or lockout verification was avoided.
  • The contractors were retrained on site procedures and permit boundaries.
  • The permit system was reviewed for clarity, and a new validation checkpoint was added before work begins.

Importantly, the technician was recognized during the next all-hands meeting—not just for stopping the job, but for embodying the company’s core values of vigilance, courage, and care for others. This is what effective SWA looks like: not punitive, not reactive, but constructive, preventative, and deeply human.

Tracking Stops to Foster Participation

One of the most effective ways to reinforce the value of Stop Work Authority is to track and review the number of jobs stopped over time. This simple metric provides real insight into how engaged the workforce is—and whether the culture truly supports intervention.

When approached constructively, tracking SWA usage:

  • Normalizes the act of stopping work, turning it into a routine and expected behavior rather than a rare exception.
  • Reveals trends and recurring hazards, helping leadership prioritize improvements in equipment, processes, or communication.
  • Encourages peer learning, especially when job stops are discussed in safety meetings or shared as case studies.

Crucially, these numbers should never be weaponized. High numbers don’t imply dysfunction, and low numbers don’t necessarily mean everything is safe. The goal is not to reduce the count, but to understand and support safe decision-making at the point of risk.

Tracking trends over time helps organizations answer critical questions like:

  • Are we seeing participation from all departments and shifts?
  • Are the same hazards prompting repeated stops?
  • Are supervisors recognizing and supporting SWA use consistently?

When used with integrity, this data becomes a leadership tool—not just a lagging indicator. It can help validate safety program effectiveness and uncover blind spots that formal audits might miss.
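Those questions lend themselves to very simple aggregation. A minimal sketch, assuming a hypothetical stop-work event record (the fields and sample data are illustrative, not a real schema):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class StopEvent:
    """One use of Stop Work Authority (illustrative fields only)."""
    department: str
    shift: str
    hazard: str

def swa_summary(events: list) -> dict:
    """Summarize stop-work events: participation by department and
    shift, plus hazards that have prompted more than one stop."""
    return {
        "by_department": Counter(e.department for e in events),
        "by_shift": Counter(e.shift for e in events),
        "recurring_hazards": [
            h for h, n in Counter(e.hazard for e in events).items() if n > 1
        ],
    }

events = [
    StopEvent("Maintenance", "Day", "permit scope unclear"),
    StopEvent("Operations", "Night", "missing lockout"),
    StopEvent("Maintenance", "Day", "permit scope unclear"),
]
summary = swa_summary(events)
print(summary["recurring_hazards"])   # ['permit scope unclear']
```

A recurring hazard in this view is a prompt for a systemic fix—clearer permits, better guarding, revised procedures—never a reason to discourage the stops themselves.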

Building a Culture of Learning

Every time an employee uses Stop Work Authority, it’s a chance to learn. Maybe they identified a genuine hazard. Maybe they misunderstood a procedure. Either way, the organization wins—because the system gets smarter.

Encouraging SWA helps embed a continuous improvement mindset. Key takeaways can be reviewed, shared, and used to refine training, procedures, and communication channels. It transforms safety from a static compliance function into a dynamic, adaptive system powered by frontline intelligence.

Instead of seeing stops as interruptions, forward-thinking companies see them as investments in safer outcomes. Each stop becomes a data point, a dialogue, and a demonstration of the values that define a healthy safety culture.


Bottom line: Stop Work Authority is more than a safety mechanism. It’s a cultural multiplier. It empowers employees, demonstrates deep respect for their insight, and reinforces the psychological safety necessary for sustained excellence. When leadership supports its constructive use—and actively tracks and celebrates its application—SWA becomes a catalyst for safer work and stronger teams, every single day.


AI as a Strategic Partner: Building a Digital Twin to Advance Safety and Sustainability

Introduction

The challenges of leading Environmental, Health, and Safety (EHS) efforts across global, high-risk operations have never been more intense. Executive leaders today are asked to navigate volatile regulations, emerging technologies, ESG mandates, cultural transformation, and shifting workforce expectations—all while maintaining integrity, accountability, and performance.

After three decades serving in senior roles across chemicals, aerospace, metals, and occupational health, I confronted a core dilemma: how can one maintain consistent leadership presence and effectiveness when scope outpaces availability?

As my scope of influence expanded across global operations and governance platforms, I found myself wrestling with three critical questions that traditional leadership models struggled to fully answer:

  • How can I scale my leadership without diluting my impact?
  • How do I ensure consistent, values-driven messaging across time zones, sectors, and constituencies?
  • How can I future-proof knowledge transfer and mission alignment as we prepare the next generation of safety professionals?

In response, I made a bold move: I built an Executive Digital Twin. This is not a chatbot or novelty AI experiment. It’s a custom-trained leadership proxy designed to reflect my strategic voice, professional standards, and decision-making principles—extending the reach and responsiveness of an executive without diluting those values.

I was uniquely well-positioned to create a professional digital twin because of the extensive documentation I’ve maintained throughout my career. A foundational resource was the body of articles I’ve published on my website, LeadingEHS.com, which capture not only my subject matter expertise but also my communication style and strategic perspective. My LinkedIn profile provided another deep well of information, offering detailed insights into my roles, achievements, and thought leadership over time.

Additionally, I drew heavily from records of my work across professional positions—particularly my current role—where I’ve led high-impact initiatives, authored key EHS communications, and developed frameworks that have shaped organizational performance. My long-standing involvement with ASSP was equally valuable. From board-level governance contributions to volunteer leadership roles and national committee work, those records helped refine the twin’s understanding of professional association strategy, DEI leadership, and member engagement.

Finally, my published works and innovation papers—including articles like Essential Mistakes for EHS&S Leaders to Avoid—added further depth, enabling the twin to reflect not only what I’ve done, but how I think. This robust and diverse content ecosystem ensured that the digital twin isn’t just technically accurate—it’s authentically me in both tone and intent.


Why Build a Digital Twin?

Leadership is not just about presence—it’s about influence, clarity, and accessibility. With increasing demands from regulatory agencies, boards of directors, site operations, and nonprofit governance bodies, I needed a mechanism to:

  • Deliver timely, values-driven guidance across a dispersed global network
  • Scale institutional knowledge to support onboarding, succession planning, and daily operations
  • Model modern leadership by aligning digital innovation with ethical stewardship
  • Reduce response lag in fast-moving, high-consequence environments

My goal wasn’t to automate leadership—it was to amplify and protect it.


How It Was Built

The “Chetwin DT Executive Twin” was created using OpenAI’s GPT technology and meticulously engineered to mirror my operational logic, safety philosophy, and communication tone. Development followed a three-tiered methodology:

1. Strategic Knowledge Base

I curated and structured content from across my career to form a living knowledge engine. This included:

  • My 2025 vision for safety excellence and team alignment
  • Detailed leadership expectations for global EHS staff
  • My complete Director-at-Large platform for ASSP, reflecting governance and DEI commitments
  • Innovation frameworks such as the Health and Safety Opportunity Index (HSOI), which I developed to quantify risk-reduction performance

These inputs became the foundation from which the twin draws real-time guidance, context, and scenario-based coaching.

2. Executive Persona Engineering

The twin was configured to deliver output with the same tone, structure, and discipline I bring to the boardroom or a plant floor. It tailors communications to varied audiences—CEOs, site leaders, regulators, and young professionals—while maintaining clarity, humility, and actionable candor.

It leverages analogies and coaching language that I frequently use—drawing from aviation, literature (history & fiction), economics, and organizational psychology—to connect abstract principles with personal meaning.

3. Continuous Intelligence Integration

The twin updates monthly to reflect real-time developments from ISO, NIOSH, UNGC, CDP, EcoVadis, and others. It incorporates strategic inputs from evolving trends in AI governance, sustainability metrics, PSM modernization, and total worker health. This ensures it’s not only historically accurate but also future-ready.


What the Twin Does

The Executive Twin already delivers tangible value across a variety of high-impact functions—serving as both a force multiplier and a strategic safeguard in critical leadership workflows.

Strategic Memo Development: It produces high-quality drafts for safety directives, board communications, and performance alignment documents that reflect not just my voice, but the strategic intent behind each message. Whether it’s articulating a proactive risk management plan or framing a cultural transformation initiative, the twin ensures that messaging remains consistent, timely, and aligned with enterprise goals.

Coaching and Scenario Guidance: It acts as a coaching companion for site-level and functional leaders, using embedded frameworks like Hazard Recognition Plus (HRP), the hierarchy of controls, and stop work authority protocols. This ensures frontline leaders can get immediate, tailored guidance on how to approach complex EHS situations—whether they’re navigating compliance in emerging markets or managing workforce behavior during periods of operational stress.

Governance and Association Engagement: The twin is especially effective in supporting professional association and nonprofit leadership. It helps prepare for board meetings, develop DEI strategies, craft governance language, and engage with member constituencies. In my work with ASSP, for example, the twin draws from years of involvement to help translate emerging member needs into actionable strategies, bridging operational insight with organizational mission.

Crisis Support and Risk Communication: During high-pressure scenarios—such as critical incidents, public disclosures, or ESG-related concerns—the Executive Twin can generate rapid first-draft communications, talking points, and action frameworks. It supports swift decision-making without sacrificing tone, credibility, or regulatory alignment, helping leaders respond with both precision and empathy.

Its presence enables a level of responsiveness, consistency, and thought partnership that would be difficult to sustain manually. In practice, the twin enables faster decision cycles, better clarity in execution, and higher confidence across stakeholder groups. It does not replace the judgment or accountability of executive leadership—it enhances it by providing a reliable, values-driven resource that’s always available to support clarity, continuity, and confidence in moments that matter most.


How I Have Put it to Use

I’ve already begun leveraging the Executive Twin to support several high-value leadership functions—and the results have been both practical and transformative. One of its most powerful applications is in deriving insights from EHS performance data. The twin helps translate complex trends into actionable narratives, articulated in my own professional voice, and tailored for operational teams who need both clarity and context.

It has also significantly accelerated the development of executive communications and reports, reducing the time required while enhancing both strategic depth and audience relevance. I use it to respond quickly and concisely to executive-level queries, ensuring that my answers are both accurate and aligned with my established tone and priorities.

In my day-to-day work, the twin serves as a trusted editor and reviewer, helping refine my written communications for content quality, readability, and brevity. It constructively critiques drafts to sharpen their effectiveness and ensure the messaging lands with the intended clarity and purpose.

Perhaps most compelling, the twin acts as an idea generator, offering fresh perspectives, innovative solutions, and emerging technologies that I might not otherwise have encountered as quickly. This creative augmentation makes it not only a strategic assistant but also a thought partner in navigating complex and evolving EHS challenges.


Why This Matters

We are entering a new chapter in the EHS profession—one defined not just by regulations and scorecards, but by our ability to lead with humanity at scale. In this chapter, the most effective leaders will be those who can bridge empathy and analytics, foresight and accessibility. It’s a moment where success is no longer measured solely by lagging indicators or compliance audits, but by how effectively we translate risk awareness into protective action, turn innovation into operational advantage, and embed equity and trust into every decision.

The Executive Digital Twin represents more than a technological step forward—it marks the emergence of a new leadership infrastructure. One that honors legacy knowledge and professional ethics while answering the calls of speed, transparency, and global inclusion. It enables leaders to be present without being stretched, to be responsive without being reactive, and to transfer wisdom without waiting for turnover.

To the EHS profession, this model sends a powerful signal: digital transformation is not a disruption to fear, nor a mandate from outside forces. It is a design space we can claim. We have the opportunity—and arguably the obligation—to shape these tools with our values, our voice, and our vision. In doing so, we don’t just keep pace with change—we lead it, on behalf of the people, communities, and futures we are called to protect.


Final Thought

I didn’t build this twin to replace myself. I built it to preserve and scale a leadership philosophy rooted in stewardship, strategic clarity, and human dignity. In times of crisis or transition, leaders must offer not only direction but resilience—and resilience today means being ready to respond across more domains than ever before.

The most important work in EHS still happens person-to-person, on the floor and in the field. But the thinking that supports it, the culture that enables it, and the strategy that sustains it—all of that can be scaled.

This is my Executive Twin. What might yours look like?


From Hazard to Control: Managing Combustible Dust in Real-World Operations

Introduction and Context

In a recent discussion among safety professionals that I was part of, the topic of combustible dust management came up in the context of demonstrating the business value of risk reduction. One of the central questions was how to determine what level of fugitive combustible dust accumulation is acceptable in industrial operations. This is a critical concern in industries such as metals, chemicals, wood products, and agriculture, where combustible dust is not a theoretical hazard but a real and persistent threat to safety and continuity.

“Combustible dust doesn’t give second chances. The time to understand it, control it, and engineer it out of your process is before it becomes a headline—or a memorial.”
— Chet Brandon

Given my background in managing combustible dust risks—including early career experience at Elkem Metals North America (formerly Union Carbide Ferro-Alloys)—this topic is both professionally significant and deeply personal. During my time there, I worked with a colleague who had lost his brother in a dust explosion at the very site where we then worked. That tragedy underscored the reality that these hazards are not abstract—they have lasting human consequences. Elkem had a long-standing legacy of handling explosive metal dusts, and I was fortunate to learn from some of the most seasoned process engineers and safety professionals in the industry. Many of them had first-hand experience with serious incidents and shared their hard-earned lessons with a sense of urgency and purpose. One meaningful outcome of that formative experience was co-authoring a technical paper on dust explosion hazards with one of those veteran process engineers—a resource I reference later in this post.

This article provides a detailed discussion on evaluating and managing combustible dust accumulation in industrial settings. It also highlights key insights from the paper “Prevention and Control of Dust Explosions in Industry” by Ronald C. Brandon and Dale S. Machir—a foundational reference for understanding the technical and practical aspects of dust explosion prevention.


Fundamentals of Dust Explosions

In my career, I’ve seen how easily a dust explosion can move from a theoretical risk to a devastating reality. In the paper I co-authored with Dale Machir—Prevention and Control of Dust Explosions in Industry—we focused on unpacking the fundamentals of how dust explosions occur and, more importantly, how they can be prevented through sound engineering and disciplined operational control. At the heart of every dust explosion are five essential conditions—what we often call the “Dust Explosion Pentagon.” These include the presence of a combustible dust, dispersion of that dust into a cloud, an oxidizing atmosphere (usually air), some level of confinement, and an ignition source. When those five elements align, the result can be a rapid, high-energy deflagration with the potential for serious injury, loss of life, and major facility damage.

One key point we emphasized in the paper is the dual-stage nature of most significant dust explosions. A small primary event—often inside a piece of equipment like a filter or transfer line—can loft layers of accumulated dust into the air, setting the stage for a much larger and far more dangerous secondary explosion. That’s where we see the real devastation. In several incidents I’ve studied or been briefed on, the secondary blast has traveled through process areas, igniting dust layers in multiple rooms or areas and escalating the damage exponentially. These are the scenarios that destroy buildings and take lives.

Understanding the materials involved is critical. Combustible dust hazards aren’t limited to wood or grain products; many metal dusts, plastic resins, and even food ingredients like powdered milk or sugar can pose explosion risks. What makes a dust dangerous is often its particle size, moisture content, and how easily it becomes airborne. Fine, dry particles with a high surface area ignite quickly and burn intensely. In the metals industry—where I spent much of my early career—we routinely worked with aluminum, chromium, manganese, and silicon dusts that could ignite with a static discharge or overheated surface if not properly managed. Later in my career I also managed materials in dust form such as welding fume, coal and related substances, graphite, and polymers.

Another important lesson I’ve learned through years of managing combustible dust risks across multiple facilities—often producing what appeared to be the same materials—is that no two dusts are truly alike. Even when the base material is chemically identical, variations in processing methods, particle size distribution, moisture content, and surface area can result in significant differences in ignition sensitivity, deflagration severity, and explosibility. I’ve seen firsthand how assumptions based on “similar” materials from different sites can lead to dangerously flawed risk assessments.

That’s why it is absolutely critical to characterize each site-specific dust using standardized testing protocols—most importantly, per ASTM E1226, which defines how to measure key parameters like the maximum explosion pressure (Pmax) and maximum rate of pressure rise (dP/dt). These aren’t just technical details—they’re the backbone of sound combustible dust hazard analysis. And to get valid, actionable data, the tests must be performed using a 20-liter sphere apparatus, which is the recognized standard test chamber for dust explosibility. While smaller devices (like the 1-liter Hartmann tube) may provide general indications, only the 20-liter sphere delivers the accuracy and repeatability needed for engineering design and safety decisions.

Using the correct test method is just as important as conducting the test itself. If you’re basing your hazard analysis or explosion protection strategy on unverified or low-fidelity data, you’re essentially flying blind. This is especially critical when designing deflagration venting, suppression systems, or isolation barriers—any of which depend on having a reliable Pmax and Kst value derived from the 20-liter sphere.

And this isn’t a one-time check-the-box task. Any significant change in the process—raw materials, equipment, throughput, or even housekeeping practices—should trigger a formal Management of Change (MOC) review. That review must include a reassessment of combustible dust hazards, and, where applicable, retesting of the dust to identify any shift in its ignition or explosion characteristics. I’ve seen cases where a small change in the grinding process or drying temperature created dust with dramatically more reactive properties.

Combustible dust management is not about memorizing the properties of a material—it’s about staying vigilant to how those properties can shift, and building systems that recognize, test, and respond accordingly. That vigilance starts with getting the science right.

In the paper, Dale and I discussed the importance of lab testing to characterize dust behavior. You can’t manage what you don’t understand. Parameters like Minimum Explosible Concentration (MEC), Minimum Ignition Energy (MIE), and Kst (a measure of explosion severity) tell you how easily your dust will ignite and how violently it will burn. A dust with a high Kst value—especially in the St-2 or St-3 range—demands aggressive controls, both in terms of equipment design and operational discipline.
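To make the Kst concept concrete, here is a minimal sketch of the arithmetic: the cubic law scales a measured maximum rate of pressure rise from the 20-liter sphere to the volume-independent Kst value, which then maps to the conventional St hazard classes. The function names and the example number are illustrative, not from any standard library or from our paper.

```python
# Illustrative sketch only: cubic-law scaling of a 20-L sphere test result
# to Kst, followed by the conventional St hazard classification.

def kst_from_test(dp_dt_max_bar_s: float, volume_m3: float = 0.020) -> float:
    """Cubic law: Kst = (dP/dt)max * V^(1/3), in bar·m/s.

    dp_dt_max_bar_s: maximum rate of pressure rise measured in the test (bar/s)
    volume_m3: test chamber volume; 0.020 m³ for the 20-liter sphere
    """
    return dp_dt_max_bar_s * volume_m3 ** (1 / 3)

def st_class(kst: float) -> str:
    """Conventional St hazard classes (Kst in bar·m/s)."""
    if kst == 0:
        return "St-0 (no explosion)"
    if kst <= 200:
        return "St-1 (weak to moderate)"
    if kst <= 300:
        return "St-2 (strong)"
    return "St-3 (very strong)"

# Example: a dust producing (dP/dt)max = 1000 bar/s in a 20-L sphere
kst = kst_from_test(1000.0)  # ≈ 271 bar·m/s
print(f"Kst = {kst:.0f} bar·m/s -> {st_class(kst)}")
```

The point of the sketch is the classification logic, not the numbers: only laboratory data from your own site-specific dust, tested per ASTM E1226, should drive real design decisions.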

Ignition sources often go unnoticed until it’s too late. It doesn’t take an open flame to trigger an event. I’ve seen or investigated situations where hot bearings, friction sparks, or even a spontaneous static discharge in a duct system led to an explosion. The risk is compounded in systems that transport dust over long distances—like pneumatic conveyors or central vacuum systems—because ignition can occur upstream and propagate rapidly downstream if isolation is inadequate.

The core message I’ve tried to reinforce throughout my career—and that Dale and I made clear in the paper—is that dust explosions are preventable. These aren’t random acts of nature. They are the result of known physical conditions that, if allowed to develop unchecked, will eventually align and cause harm. When we understand the science, commit to testing and analysis, and apply sound engineering principles, we can break the chain of events before it leads to an explosion. That’s the real takeaway: dust explosion prevention isn’t about luck—it’s about doing the work, understanding the hazards, and implementing reliable, system-based controls.


Assessing Acceptable Accumulation Levels

Determining an acceptable level of dust accumulation requires a risk-based approach that considers both the nature of the dust and the context in which it is present. The commonly cited benchmark—1/32 inch (0.8 mm) of dust over more than 5% of the floor area—is drawn from NFPA 654 and should be seen as a minimum action threshold, not a definitive safe limit. This threshold is particularly conservative for low-density dusts (bulk density <75 lb/ft³), which can reach explosible airborne concentrations even at relatively thin layer depths.

Key assessment factors include particle size distribution, moisture content, ignition sensitivity, and the tendency of the dust to become airborne. Fine, dry particles with low minimum ignition energy (MIE) pose the greatest threat: generally speaking, the finer the dust, the greater the ignition hazard. Another rule of thumb I use is that any dust with a high fraction passing a 150-mesh (Tyler) sieve should be evaluated for combustibility. Additionally, environmental conditions such as airflow, vibration, and human or machine activity can disturb settled dust, making it easily suspendable in the air.

The surface on which dust accumulates also matters. Dust on elevated or hidden surfaces—beams, rafters, piping, light fixtures—can go unnoticed and uncleaned for extended periods. These areas pose a high risk for secondary explosions if the dust is later dislodged and ignited by an initial event. Risk increases significantly if fugitive dust is allowed to accumulate in or around ventilation ducts, enclosures, or process equipment.

To measure dust accumulation, a variety of tools and techniques are available. Depth gauges, dust combs, and rulers can provide quick field estimates of layer thickness. More precise methods include collecting a known volume of dust with a scoop and weighing it to determine bulk density. This allows for a more accurate estimation of the potential airborne dust concentration. Surface area calculations should be performed to determine what percentage of the total room or equipment area is affected. These measurements should be documented and repeated periodically to identify trends and determine the effectiveness of dust control measures.
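The measurement arithmetic above can be sketched as follows: bulk density from a weighed scoop sample, then a worst-case estimate of the airborne concentration if the settled layer were fully dispersed into the room volume. All the numbers and function names here are hypothetical; any comparison should be made against the laboratory-measured MEC of the actual site dust.

```python
# Illustrative sketch of the field-measurement arithmetic described above.

def bulk_density_kg_m3(sample_mass_g: float, sample_volume_ml: float) -> float:
    """Bulk density from a scoop sample (g/mL is numerically 1000 kg/m³)."""
    return (sample_mass_g / sample_volume_ml) * 1000.0

def dispersed_concentration_g_m3(depth_mm: float, area_m2: float,
                                 density_kg_m3: float,
                                 room_volume_m3: float) -> float:
    """Mass of the settled layer divided by the room volume, in g/m³."""
    layer_mass_kg = (depth_mm / 1000.0) * area_m2 * density_kg_m3
    return layer_mass_kg * 1000.0 / room_volume_m3

# Hypothetical example: a 30 g scoop filling 50 mL, and a 0.8 mm layer
# over 100 m² of surface in a 1500 m³ room.
rho = bulk_density_kg_m3(30.0, 50.0)                       # 600 kg/m³
conc = dispersed_concentration_g_m3(0.8, 100.0, rho, 1500.0)
print(f"{conc:.0f} g/m³")  # 32 g/m³ — on the order of typical MEC values
```

Even this crude estimate shows why the benchmark layer depth is treated as an action threshold: a thin layer over a modest area can, in principle, approach explosible concentrations if lofted.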

Visual indicators can also play a role. For example, if the surface color is obscured or if a finger swipe leaves a clear trace in the dust, this often indicates that dust has exceeded the 1/32-inch threshold. However, visual cues are subjective and should not replace quantitative measurements when making decisions about hazard level.

A comprehensive Dust Hazard Analysis (DHA), as required by NFPA 652, integrates all these data points to provide a complete picture of the combustible dust risk in a facility. A DHA includes an inventory of all combustible dust-producing processes, identification of potential ignition sources, analysis of containment or confinement factors, and a review of current housekeeping and mitigation systems. From this, site-specific acceptable accumulation levels can be established and aligned with a hierarchy of controls to manage risk effectively.


Prevention and Mitigation Strategies

In our paper, Prevention and Control of Dust Explosions in Industry, Dale Machir and I emphasized that engineering controls are the foundation of any truly effective combustible dust prevention strategy. While administrative controls like training and housekeeping play important roles, they should be viewed as secondary layers of defense. The real key lies in how the system is designed from the start—because once dust escapes into the general work environment, the risk profile increases dramatically and your margin for error narrows.

Local exhaust ventilation (LEV) should be installed as close to the point of dust generation as possible. Capturing dust at the source—before it can migrate to surfaces or become airborne—is one of the most effective ways to prevent accumulation and dispersion. Too often, I’ve seen systems that rely on general dilution ventilation or distant collection points, which are simply not sufficient for high-risk dusts.

We also highlighted the critical role of deflagration venting, particularly in enclosed vessels or dust collectors. These vents are engineered to relieve internal pressure in the event of an explosion, minimizing structural damage and reducing the risk of injury to personnel. Proper vent sizing, duct routing, and positioning relative to occupied areas are essential design considerations. It’s not enough to simply install a vent panel and assume the system is protected—there must be a documented basis for its performance, ideally supported by dust testing data and compliant with NFPA standards.

For systems involving pneumatic transport of dust, particularly over long distances or between process zones, spark detection and suppression is another key layer of protection. These systems monitor for thermal anomalies or sparks within the conveying line and activate suppression agents or system shutdown protocols before ignition sources can reach a dust collector or silo—where an explosion could easily propagate.

Equally important is the design of the dust collection system itself. A properly engineered dust collector must do more than just move material—it must prevent leakage, control static buildup through proper grounding and bonding, and include explosion isolation mechanisms such as chemical suppression, fast-acting valves, or rotary airlocks. In addition, dust collectors must be equipped with appropriately sized explosion vent panels or flameless venting devices that are designed to safely relieve internal pressure during a deflagration. These vents should be located to discharge to a safe area away from personnel and critical equipment, and should be installed in accordance with the collector’s tested design parameters. Without proper venting, the collector becomes a pressure vessel during an explosion event—potentially turning a localized incident into a catastrophic failure.

A poorly maintained or incorrectly specified collector is one of the most common points of failure in dust control systems.

That said, housekeeping still matters—greatly. It must be frequent, systematic, and verifiable, especially in elevated or concealed areas where dust can settle unnoticed. However, we were clear in the paper that housekeeping should never be relied upon as the primary control strategy. If you’re constantly cleaning up dust that’s escaping from process equipment, that’s not a control measure—that’s an indicator of a failed system design. The goal should always be to prevent the dust from escaping in the first place, through effective containment, enclosure, and point-source control.

We called attention to the importance of training, maintenance, and change management as integral parts of the combustible dust control system. Workers need to understand not only the visible risks of accumulated dust but also the invisible ones—like static energy or poor duct routing. Maintenance teams should be trained to recognize compromised seals, worn gaskets, or ungrounded components. And critically, every process modification—whether it’s a change in material, a layout shift, or new equipment—should trigger a combustible dust impact review. If that review isn’t built into the facility’s Management of Change (MOC) system, you’re flying blind.

Finally, we emphasized that emergency management is an essential—yet often underdeveloped—component of a comprehensive combustible dust safety strategy. Too often, facilities focus heavily on engineering controls and housekeeping, while overlooking the need to prepare for the possibility of an event. We advocated for site-specific emergency response plans that recognize the unique characteristics of dust explosions, including the potential for secondary explosions, intense thermal energy, and blast pressures that can compromise structural integrity. We recommended that emergency response planning include coordination with local fire departments and emergency services, clear protocols for evacuation and accountability, and training for personnel on how to respond safely without inadvertently creating additional hazards—such as dispersing accumulated dust while attempting to intervene. A well-informed and well-rehearsed response team is critical because, in a dust incident, seconds matter. While prevention remains the primary objective, effective emergency preparedness is a necessary safeguard when all other layers of protection are tested.

If you’d like to dive deeper into the fundamentals and real-world lessons behind combustible dust prevention, I encourage you to read the paper Dale Machir and I co-authored on the topic. It covers both the science and the practical strategies we’ve applied in industrial environments. You can access the full paper here: Prevention and Control of Dust Explosions in Industry.

If you are looking to go even further in the understanding and effective management of combustible dust hazards, this book is highly authoritative: Dust Explosions in the Process Industries, by Rolf K. Eckhoff.

At the end of the day, preventing combustible dust explosions is not about any one control—it’s about integrating engineering, operations, and organizational discipline into a cohesive system. That was the core message of our paper, and it remains just as relevant today as when we first wrote it.


Spreading the Word on Combustible Dust Hazards and Control

I still perform training on the topic of dust explosion prevention and control to continue to make industrial organizations aware of the risk and the control methods. When I started my career in the industrial safety field, awareness of dust explosion hazards was still low among most safety professionals. My time with a company that had managed the hazards for decades gave me a wonderful opportunity to fully learn the science and practical management actions for this unique area of knowledge. An example of the training I typically provide is given in the presentation at this link: Example Combustible Dust Training Material by Chet Brandon

Dale and I developed a demonstration device to visually illustrate the fundamental principles of dust explosions, inspired by the original Hartmann Tube used in early combustible dust testing. Our version was a simplified cylindrical chamber equipped with an ignition source and a method to uniformly disperse dust particles into a suspended cloud. What made it especially effective for educational purposes was the visual demonstration of explosion pressure—a thick paper “vent” sealed the top of the tube and would burst outward upon ignition, mimicking a deflagration vent panel. The simplicity of the setup makes it a powerful teaching tool, especially for audiences new to the topic. I still have the device today and occasionally use it during presentations to help drive home the physics behind combustible dust hazards. You can see a video of it in action in one of my presentations: Hartmann Demonstration by Chet Brandon

I’m also encouraged that the National Fire Protection Association (NFPA), through the development of NFPA 652: Standard on the Fundamentals of Combustible Dust, captured and codified many of the core principles that Dale and I—and many others in this field—have emphasized over the years. This standard provides a foundational framework for hazard identification, Dust Hazard Analysis (DHA), and risk-based control strategies, helping to bridge the gap between theory, practice, and regulation. I conducted training on this NFPA Combustible Dust standard several years ago. You can view that material here: The Combustible Dust Threat by Chet Brandon

Note: In 2024 the NFPA combined several of its combustible dust-related standards, including 652, into one new standard: NFPA 660, Standard for Combustible Dusts and Particulate Solids (2025), published in December 2024.


Conclusion and Practical Takeaways

Combustible dust hazards remain one of the most underestimated risks in industrial operations, yet they are entirely preventable with the right combination of technical understanding, disciplined controls, and organizational commitment. Over the years, I’ve seen firsthand the consequences of both strong and weak dust management systems—and the difference often comes down to leadership, culture, and follow-through. Prevention is not just a function of engineering and housekeeping—it’s a mindset that must be built into design, operations, maintenance, and emergency preparedness.

I’m proud to continue sharing this knowledge, not only because of where I started in this field, but because I’ve seen how powerful it is when teams truly understand the science and the stakes. We owe it to our workers, our communities, and our profession to treat combustible dust as the serious hazard it is—and to manage it with the same rigor we apply to any other major industrial risk.

Stay safe, stay informed—and don’t let dust settle on your safety program!


Safe to Fail: How Digital Twins Can Rewire Workplace Trust

Digital twin technology—virtual representations of physical systems or processes—can significantly enhance psychological safety in the workplace by providing environments where employees feel secure to speak up, experiment, and make mistakes without fear of negative consequences. These virtual environments enable organizations to address cultural, behavioral, and systemic issues in a safe, structured, and repeatable way.

New Tech Brings Better Tools for Employee Success

One of the most powerful uses of digital twins is in the safe simulation of high-stakes scenarios. By allowing employees to interact with realistic simulations of equipment, systems, or workflows without exposing them to actual risks, digital twins encourage trial and error in a consequence-free environment. Teams can practice responses to emergencies, near misses, or procedural failures, which not only builds competence but also reduces anxiety. Repeated exposure to complex or hazardous systems in a simulated context increases familiarity and confidence, making employees more likely to raise concerns and actively engage in risk discussions during real operations.

Digital twins also promote collaborative problem-solving and experimentation. They serve as shared platforms where cross-functional teams can model and test various operational strategies or interventions. Because these simulations are grounded in a shared, objective digital model, they help minimize blame and reduce the tendency toward finger-pointing when things go wrong. In these environments, everyone’s input can be validated and tested, which fosters psychological safety by encouraging diverse perspectives, innovation, and respectful dissent. The neutral nature of the digital twin promotes a systems view, rather than individual fault-finding.

Another critical benefit is the transparent feedback and learning loops that digital twins enable. By continuously capturing and visualizing system behavior, teams can analyze how specific decisions or actions affect outcomes. This feedback is delivered in a non-threatening way that focuses on system performance rather than individual error. Such transparency helps employees understand that mistakes are often rooted in broader system dynamics, not personal shortcomings. It supports a learning culture where improvement is prioritized over punishment, making people feel safer to reflect on failures openly.

Digital twins also contribute to psychological safety by enabling inclusive design and participation. When digital twins are developed with input from operators, technicians, engineers, and other stakeholders, they serve as a tool for co-creation. This participatory approach allows frontline workers to contribute their expertise, surface concerns, and help identify design flaws early—before they cause harm. Employees who feel their insights are valued and impactful are more likely to speak up and challenge unsafe norms. Moreover, involving people from all levels of the organization helps reduce hierarchical barriers and fosters a sense of collective ownership over safety outcomes.

Additionally, digital twins offer predictive insights to prevent human error by modeling operator behaviors and system workflows. This allows organizations to identify latent conditions or error-prone configurations before they lead to real-world incidents. Rather than focusing on blaming human error, the technology highlights how systems can set people up to fail. This shift supports a just culture where accountability is shared, and emphasis is placed on improving design and reducing risk at the systemic level. As a result, individuals feel more supported and less scrutinized for honest mistakes.

Finally, digital twins are instrumental in conducting debriefings and after-action reviews in a psychologically safe manner. They can reconstruct operational events, training exercises, or near misses with a high degree of fidelity, enabling evidence-based discussions focused on what the system did rather than who erred. This creates a space for learning and reflection rather than shame or fear, allowing teams to explore complex causes of failure without defensiveness or punishment.

Real-world examples demonstrate how digital twins are already supporting psychological safety. In chemical plant operations, digital twins are used to train operators on abnormal situations, helping them build familiarity without exposing them to real hazards. In aviation and spaceflight, simulations help teams rehearse coordination and communication in high-pressure scenarios, reinforcing trust and shared understanding. In healthcare, digital twin-based rehearsals of patient workflows allow teams to implement new procedures more safely and confidently.

By combining realism, inclusivity, and systems thinking, digital twins serve not only as technical tools for process optimization but also as strategic enablers of psychological safety. Their ability to simulate, predict, and review operations creates a foundation for a more open, resilient, and learning-oriented workplace culture.

Digital Twin–Supported Framework for Psychological Safety

Here is a Digital Twin–Supported Framework for Psychological Safety, especially tailored for high-risk or complex industries (e.g., chemical, energy, aviation, manufacturing). The goal is to use digital twins not just for technical simulation, but as a deliberate mechanism to foster psychological safety in operations, training, design, and post-event analysis.

I. Core Objectives

  1. Create an environment where employees can experiment, speak up, and learn from failure without fear.
  2. Use digital twin technology to support systems thinking, inclusive collaboration, and just culture principles.
  3. Shift the focus from individual blame to systemic improvement.

II. Framework Components

1. Safe Learning and Simulation Environment

Purpose: Practice, experiment, and fail safely.

  • Build high-fidelity digital twins of processes, equipment, and control systems.
  • Allow teams to simulate rare, high-stress, or high-risk scenarios (e.g., equipment failure, emergency shutdowns).
  • Embed decision-making opportunities where teams can test “what-if” scenarios.

Psychological Safety Benefit:

Fosters confidence and comfort in raising concerns or suggesting alternate paths during simulations.


2. Participatory Design and Co-Creation

Purpose: Give all stakeholders a voice in system modeling and design.

  • Involve operators, technicians, engineers, and support staff in digital twin development.
  • Use digital twins to visualize work as done (WAD), not just work as imagined (WAI).
  • Use feedback loops to refine models based on lived experience.

Psychological Safety Benefit:

Encourages speaking up, values frontline insights, and reduces power distance.


3. Scenario-Based Team Debriefs

Purpose: Enable safe, structured reflection and learning.

  • Use digital twins to replay incidents, near-misses, or test scenarios.
  • Conduct non-punitive, evidence-based debriefs with the full team.
  • Focus on what the system allowed or encouraged, rather than who made a mistake.

Psychological Safety Benefit:

Builds trust and removes fear of blame; reinforces learning over punishment.


4. Psychological Safety Metrics via Digital Twin Interaction

Purpose: Monitor and improve team psychological safety using behavioral signals.

  • Track participation, voice frequency, idea diversity, and scenario engagement metrics.
  • Use sentiment and behavior analytics (e.g., hesitation in simulations, risk aversion, silence).
  • Flag environments where team members consistently defer, disengage, or avoid decisions.

Psychological Safety Benefit:

Identifies hidden psychological barriers and targets support where needed.
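As a rough sketch of what such behavioral signals might look like in practice, the following computes two simple "voice" metrics from a hypothetical simulation-session log. The log schema and metric names are my own invention for illustration, not any particular platform's API.

```python
from collections import Counter

def voice_metrics(roster, events):
    """roster: participant names; events: (participant, event_type) tuples."""
    speakers = Counter(p for p, _ in events)
    concerns = sum(1 for _, e in events if e == "raised_concern")
    # Share of the roster who contributed at all: a crude inclusion signal.
    active_share = len(speakers) / len(roster) if roster else 0.0
    return {"active_share": active_share, "concerns_raised": concerns}

roster = ["ana", "ben", "cara", "dev"]
log = [("ana", "suggestion"), ("ben", "raised_concern"), ("ana", "raised_concern")]
m = voice_metrics(roster, log)
# m["active_share"] is 0.5 (2 of 4 spoke); m["concerns_raised"] is 2
```

Even metrics this simple can flag teams where the same two voices dominate every session, which is exactly the hidden barrier this framework component is meant to surface.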


5. Systemic Risk and Error Modeling

Purpose: Identify latent conditions and design-induced risks before failure.

  • Use the digital twin to:
    • Simulate control room interfaces, process configurations, workload stressors.
    • Test HMI usability, alarm thresholds, or cognitive overload situations.
  • Integrate with human factors or error prediction models (e.g., HEART, SPAR-H).

Psychological Safety Benefit:

Prevents error-triggering conditions, supports system responsibility over individual blame.
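For readers unfamiliar with HEART, the core calculation is a nominal human error probability (HEP) scaled by error-producing conditions (EPCs). The sketch below shows the general shape of that adjustment; the numeric values are placeholders for illustration, not entries from the published HEART tables.

```python
def heart_hep(nominal_hep, epcs):
    """HEART-style adjustment: each error-producing condition (EPC) is a
    (max_multiplier, assessed_proportion) pair contributing a factor of
    (max_multiplier - 1) * assessed_proportion + 1."""
    hep = nominal_hep
    for max_mult, proportion in epcs:
        hep *= (max_mult - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Placeholder task: nominal HEP of 0.003 with two assessed EPCs.
hep = heart_hep(0.003, [(11.0, 0.4), (3.0, 0.5)])
# factors: (11-1)*0.4+1 = 5.0 and (3-1)*0.5+1 = 2.0, so hep = 0.003*5*2 = 0.03
```

Paired with a digital twin, this kind of model lets teams see how design choices (alarm floods, awkward interfaces, time pressure) multiply error likelihood, reinforcing the systems view over individual blame.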


6. Cross-Disciplinary Experimentation Workshops

Purpose: Support open innovation and divergent thinking.

  • Use the digital twin for workshops that:
    • Challenge “sacred cows” (assumptions).
    • Allow anonymous idea testing in simulations.
    • Invite junior or non-technical staff to test suggestions.

Psychological Safety Benefit:

Encourages voice from all levels, promotes inclusion, and reduces psychological risk of speaking out.


III. Implementation Phases

  1. Assessment: Identify psychological safety gaps; choose pilot teams.
  2. Digital Twin Setup: Develop or refine digital twin models of key systems.
  3. Stakeholder Onboarding: Train teams in use; co-design simulation goals.
  4. Integration: Embed digital twin use in daily operations, training, and after-action reviews.
  5. Feedback & Evolution: Use behavioral and safety data to continuously adapt.

IV. Guiding Principles

  • Just Culture: Focus learning on conditions and decisions, not individual blame.
  • Transparency: Make assumptions, models, and results visible and accessible.
  • Inclusion: Invite feedback from all levels and disciplines.
  • Reflection over Reaction: Pause and reflect after events using twin-based reconstructions.
  • Iterative Learning: Regularly refine simulations based on feedback and operational data.

Example Use Case: Chemical Loading Procedure

  • Digital twin simulates real loading system with all valve states, sensors, alarms.
  • Operator training involves practicing with evolving conditions and possible human-machine interface failures.
  • After each training session:
    • Debrief is done with playback and team discussion.
    • Issues raised are captured, tested in the digital twin, and incorporated into future designs.
  • Result: Operators feel empowered to report confusing controls or procedures—backed by evidence from simulation.
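A toy version of such a twin can be sketched as a small state machine with one interlock rule. Everything here (valve names, the interlock, the alarm text) is invented for illustration; a real twin would mirror the plant's actual P&ID and control logic.

```python
class LoadingTwin:
    """Toy digital twin of a loading line: valve states plus one interlock."""
    def __init__(self):
        self.valves = {"feed": "closed", "vent": "closed", "drain": "closed"}
        self.alarms = []

    def set_valve(self, name, state):
        if name not in self.valves:
            raise KeyError(f"unknown valve: {name}")
        # Interlock: opening feed with the vent closed raises an alarm, so
        # trainees see the consequence without any real-world risk.
        if name == "feed" and state == "open" and self.valves["vent"] == "closed":
            self.alarms.append("HIGH PRESSURE RISK: feed opened with vent closed")
        self.valves[name] = state

twin = LoadingTwin()
twin.set_valve("feed", "open")   # triggers the interlock alarm
twin.set_valve("vent", "open")   # corrective action; no new alarm
```

In a debrief, the recorded alarm becomes evidence about the system ("the procedure let me open feed first") rather than a verdict on the operator.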

Lead the Shift: Psychological Safety Through Digital Innovation

To unlock the full potential of your workforce and drive a culture of continuous improvement, it’s time to move beyond traditional safety protocols and embrace digital twin technology as a catalyst for psychological safety. These virtual environments don’t just simulate operations—they create the space where people feel safe to speak up, challenge assumptions, and learn from mistakes without fear. By integrating digital twins into training, design, and debriefing processes, organizations can foster a just culture that values systems thinking, inclusivity, and open dialogue. The call to action is clear: invest in digital twin capabilities not only to optimize performance, but to build the kind of trust-rich environment where innovation thrives and safety becomes everyone’s shared mission.


Before Steel and Steam: How Alexander Hamilton Engineered America’s Future

Photo Description: Slater Textile Mill in Rhode Island, started operation in 1793 utilizing water power. This is the spinning mule process, the heart of the mill. Textiles, pottery and lumber were some of the earliest American industries. Photo credit: Wikipedia. https://en.wikipedia.org/wiki/Slater_Mill

Lately, I’ve been revisiting Alexander Hamilton by Ron Chernow, focusing on Hamilton’s influential years as the first U.S. Treasury Secretary. What stands out most is how deeply his vision shaped the foundation of the American economic system—especially his push to develop a strong manufacturing base. Having spent my career in the modern industrial environment, I can’t help but see how many of today’s economic realities have roots in the principles Hamilton laid out more than two centuries ago. His belief in a diversified, innovation-driven economy helped set the stage for the American system to emerge just in time to lead the world into the modern industrial age. I thought I’d take a step back from my usual writing and dig into these ideas a bit more—both out of historical interest and professional curiosity.

Alexander Hamilton, as the Secretary of the Treasury, had a visionary and transformative perspective on the economic development of the young nation. His belief in the importance of a strong manufacturing base, supported by an active federal government, laid the foundation for America’s rise as an industrial power. Though controversial in his time, Hamilton’s ideas have had a lasting impact on the country’s economic structure and policy.

Foundations of a Visionary

Hamilton’s remarkable vision was shaped by the unique experiences and influences in his early life. Born in the Caribbean and raised in poverty, Hamilton witnessed firsthand the economic fragility of colonial societies dependent on foreign imports. As a young clerk for a trading firm on St. Croix, he gained practical knowledge of finance, bookkeeping, trade, and shipping—insights that gave him a sophisticated understanding of economic systems and global commerce. After coming to the American colonies and attending King’s College, he was immersed in Enlightenment thought, which emphasized reason, progress, and institutional strength.

His service as an aide to George Washington during the Revolutionary War further crystallized his views. Hamilton saw how the lack of a centralized economic system hindered the war effort—funding was inconsistent, supply chains unreliable, and cooperation between states was weak. These experiences convinced him that a strong, coordinated national government was essential to America’s survival and growth. Influenced by European mercantilist thought and Britain’s financial model, Hamilton envisioned an American economy that embraced industry and innovation while remaining politically independent and socially dynamic.

The Report on Manufactures

In 1791, Hamilton presented his Report on the Subject of Manufactures to Congress—a groundbreaking and visionary policy blueprint aimed at transforming the economic structure of the United States. In it, Hamilton challenged the prevailing belief that agriculture alone should remain the economic backbone of the country. Instead, he argued for a balanced, diversified economy that integrated industry alongside farming to ensure long-term prosperity, national security, and independence. The report made several key arguments and policy proposals, many of which would influence American economic development for generations.

  • Economic Diversification: Hamilton believed a healthy national economy should not depend solely on agriculture. He argued that diversification—by developing domestic industry—would protect the country from the volatility of crop prices, poor harvests, and external market fluctuations. A robust manufacturing base would also provide resilience and flexibility, ensuring steady employment and economic output during times when agriculture might falter.
  • National Security and Independence: A central theme in the report was economic self-sufficiency. Hamilton warned that over-reliance on foreign goods, particularly from Europe, made the United States vulnerable in times of conflict. By producing essential goods—such as textiles, metalworks, and tools—at home, the nation would safeguard its independence and be better prepared for wartime disruptions. He viewed industrial development as an extension of national defense policy.
  • Utilization of Underemployed Labor: Hamilton highlighted that manufacturing could absorb segments of the population not fully utilized in agriculture, such as women, children, and those living in urban areas (obviously, the notion of employing children in industry is not acceptable in modern society; it was viewed differently in Hamilton’s time). He argued that this labor force could contribute meaningfully to production without displacing agricultural workers, thereby increasing national productivity without creating economic disruption.
  • Promotion of Innovation and Technical Progress: The report asserted that manufacturing would stimulate technological advancement by encouraging the application of science and specialized skills to production processes. Hamilton understood that industry had the potential to drive continuous innovation, making the country more competitive and fostering the development of new tools, processes, and techniques.
  • Mutual Reinforcement of Agriculture and Industry: Contrary to Jeffersonian fears, Hamilton insisted that manufacturing would not weaken agriculture but would actually enhance it. Farmers would benefit from a reliable domestic market for their raw materials and foodstuffs, while manufacturers would process those goods into value-added products. This synergy would reduce dependence on foreign trade and circulate wealth more widely across the economy.
  • Active Role of Government: One of the most revolutionary aspects of Hamilton’s report was his argument for federal involvement in economic development. He proposed that the government could and should take deliberate action to support industry. This included direct subsidies (bounties), the implementation of protective tariffs to shield American firms from cheaper imports, investment in infrastructure (such as roads and canals), and the development of a central banking system to manage credit and currency. Hamilton believed that market forces alone were insufficient to foster a robust industrial base in a fledgling nation.
  • Protection of Infant Industries: Hamilton argued that new American industries would struggle to compete against more established and efficient foreign producers, especially from Britain. He advocated for temporary protective tariffs to allow these “infant industries” the time and space to grow, innovate, and eventually become globally competitive. This idea would become a foundational principle of future U.S. industrial policy.
  • Moral and Civic Benefits: Beyond economics, Hamilton suggested that manufacturing would contribute to the moral and civic development of citizens. A broader occupational structure, combined with the demands of industrial organization and technical training, would promote discipline, hard work, and upward mobility, fostering a more productive and civically engaged society.
  • National Wealth and Power: Hamilton viewed manufacturing not just as a means of producing goods but as a pathway to national greatness. An economy built on a foundation of industry would generate revenue, enhance exports, stimulate internal markets, and allow for sustained growth. This economic strength, in turn, would translate into political power and international influence, securing America’s place among the leading nations of the world.

Taken together, these points formed a sophisticated, coherent argument for a new kind of American economy—one based not on the ideals of pastoral simplicity but on industrial dynamism, national self-sufficiency, and federal leadership. While many of these ideas were not immediately embraced by Congress, the report laid an intellectual and policy framework that would influence U.S. economic development for more than two centuries.

Immediate Reaction and Delayed Implementation

Despite its ambitious scope and long-term importance, the report was not well received by Congress at the time. Political opponents, particularly Thomas Jefferson and James Madison, favored a decentralized, agrarian republic and resisted the idea of a powerful federal government shaping economic life. As a result, many of Hamilton’s proposals—particularly subsidies for industry—were not enacted during his lifetime.

However, the intellectual influence of the report endured. Hamilton’s vision for a manufacturing-based economy planted the seeds for future economic policy and institutional development. His arguments for industrial development and federal involvement in economic affairs found new life in the decades that followed.

Influence on the American System

In the early 19th century, the ideas Hamilton articulated resurfaced in the form of the “American System,” championed by Henry Clay. This policy framework incorporated protective tariffs, a national bank, and federal funding for internal improvements—echoing Hamilton’s recommendations almost directly. Though operating under a different name and in a different political context, the American System represented a renewed embrace of Hamiltonian economics. It marked a shift in national thinking toward accepting a more proactive role for the federal government in guiding economic development.

Industrial Expansion and the 19th Century

During and after the Industrial Revolution, particularly in the post-Civil War era, the United States began to implement many of the policies Hamilton had proposed. Protective tariffs became a staple of economic policy, shielding developing industries from European competition. Federal investment in railroads, canals, and public education helped create the infrastructure and skilled workforce needed for industrial growth. Manufacturing boomed, transforming the U.S. into a global economic power by the late 19th century—just as Hamilton had predicted. His vision proved foundational in shaping the economic landscape of the modern nation.

Legacy in 20th-Century and Modern Policy

Hamilton’s influence extended well into the 20th century. During the Great Depression, New Deal programs drew on Hamiltonian principles by using federal power to stimulate economic recovery, support industry, and build infrastructure. Mid-century defense and technology investments, public funding of research, and innovation policies also echoed his belief that government should serve as an engine of economic development. His vision laid the intellectual groundwork for economic nationalism—the idea that the strength of a nation rests on a strategically guided and diversified economy.

Even in contemporary times, debates about infrastructure, industrial policy, and government involvement in the economy reflect Hamilton’s legacy. The Federal Reserve embodies his vision of centralized financial management, while federal support for science, education, and industry continues to align with his principles. Though not fully implemented in his lifetime, Hamilton’s Report on the Subject of Manufactures is now recognized as one of the most forward-thinking economic documents in American history.

How Did He Conceive of Such a Complex System? Foundations in the Federalist Papers

Long before he formally outlined his industrial strategy as Treasury Secretary, Hamilton laid the intellectual groundwork for a strong national economy in the Federalist Papers. In essays such as Federalist No. 11 and No. 12, he emphasized the importance of centralized authority over commerce and taxation, arguing that a unified federal government could better negotiate trade, manage revenue collection, and promote national prosperity. He warned that fragmented state-level trade policies would weaken the country’s position on the world stage and foster internal conflict.

This concern is echoed in Federalist Nos. 6 and 7, where Hamilton highlights the dangers of commercial rivalry among the states—warning that without a strong union, economic disputes could escalate into political instability or even violence. He believed that only a national government could ensure harmony in economic policy and prevent destructive competition.

Hamilton’s vision of economic unity and strength is further developed in Federalist Nos. 30–36, where he defends the broad taxing powers of the federal government as essential to national security and infrastructure. While these essays do not directly propose industrial policy, they clearly reflect Hamilton’s belief that economic development required intentional, coordinated action at the federal level—an idea that would become the backbone of his later manufacturing proposals. In this sense, the Federalist Papers serve as the philosophical foundation for the economic blueprint he would later put into motion.

Conclusion

Alexander Hamilton’s economic vision was far ahead of its time. In a young republic wary of centralized power, he argued boldly for a manufacturing-based economy, supported by federal action and strategic planning. Though initially rejected, his ideas profoundly shaped the nation’s path toward industrialization, modernization, and global economic leadership. Hamilton’s legacy endures not only in the institutions he helped build—like the national bank and a robust financial system—but also in the very idea that government has a vital role in fostering national prosperity. His vision for American manufacturing was not merely economic—it was foundational to the identity and future strength of the United States.

Article Addendum: A Follow-Up Discussion on Tariffs

Hamilton’s economic plan famously advocated for the use of tariffs to protect America’s emerging industries—a strategy well suited to the realities of the late 18th century. At that time, the United States had virtually no established industrial base and little to no export market. Tariffs provided a necessary buffer, shielding fledgling manufacturers from overwhelming British competition while giving them time to develop capacity, technology, and a skilled workforce. In that historical moment, protectionism wasn’t just a policy choice—it was a developmental necessity. Hamilton understood that without government support, American industry would likely remain stunted under the shadow of more mature European economies.

However, applying the same logic to the 21st-century American economy is problematic. Today, the U.S. is home to some of the most advanced and globally integrated industries in the world, from aerospace and pharmaceuticals to semiconductors and precision manufacturing. These sectors generate a significant portion of their revenue from exports and rely heavily on complex international supply chains. In many cases, manufacturing processes are distributed across multiple countries—components may be designed in the U.S., fabricated in Asia, assembled in Mexico, and tested in Europe before returning to American markets. Broad tariffs in this environment don’t just target foreign competition; they impose added costs at multiple points in the production process, raising prices, reducing efficiency, and weakening global competitiveness.

Moreover, blanket tariffs can provoke retaliatory measures from trade partners, shrinking export markets and eroding relationships that American firms depend on. They can also discourage foreign direct investment in U.S. operations, which often brings not only capital but innovation and job creation. And perhaps most crucially, indiscriminate protectionism can slow down technological progress by insulating domestic firms from the pressures of global competition—pressures that often drive innovation, efficiency, and quality.

That said, the complexity of today’s economy does not mean tariffs are always inappropriate. There are legitimate strategic cases for targeted protection, particularly in industries critical to national security or in response to unfair trade practices by other nations. For example, measured tariffs can help stabilize sectors like steel, renewable energy, or microelectronics when global market distortions—such as state subsidies or dumping—undermine fair competition. In such cases, temporary protective measures, combined with long-term investment in innovation and workforce development, can be consistent with Hamiltonian principles.

While Hamilton’s tariff policy was critical in helping build the foundation of American industry, the modern economy demands a more nuanced approach—one that balances strategic support for key sectors with open market access, multilateral cooperation, and supply chain resilience. Protectionism in today’s globally interdependent world must be applied surgically, not ideologically. Hamilton’s core insight still holds: economic strength requires intentional policy. But the tools and context have evolved, and our strategies must evolve with them.


What Makes a Great Chemical Plant Operator? – Safe Habits

I have recently been developing ideas for reducing spill and release events in chemical operations. The people in these operations remain a central influence on safety, and the operators of the process units have the greatest influence of all. At the most basic level, operators must have habits that enhance safe operation. This article explores some key competencies of these critical personnel in the chemical plant.

In the intricate world of chemical manufacturing, where precision, safety, and efficiency are paramount, the role of a chemical plant operator stands as a critical cornerstone. Far from simply monitoring gauges or turning valves, these individuals are the vigilant guardians of complex processes, directly influencing everything from product quality to environmental integrity. While some tasks may appear routine, the mastery of fundamental actions—like meticulously verifying the correct valve, ensuring absolute closure, and maintaining constant vigilance during operations—is what truly separates a competent operator from a truly great one. This article delves into the core competencies and unwavering dedication that define what it truly means to be an indispensable chemical plant operator.

Proficient Manual Valve Operation: A Core Competency for Chemical Plant Operators

Operating manual valves is a fundamental and frequent task for chemical plant operators. While seemingly simple, improper valve manipulation can lead to significant safety incidents, environmental releases, production losses, and equipment damage. Proficient manual valve operation hinges on three critical actions: verifying the correct valve, ensuring full seating when closing, and never walking away from an open valve. This training text will expand on each of these concepts to provide a comprehensive understanding for all operators.


1. Verifying the Correct Valve: The Foundation of Safe Operation

Operating the wrong valve is a common root cause of incidents in chemical plants. The consequences can range from minor process upsets to catastrophic events. Therefore, rigorous verification protocols are paramount.

Why is it Critical?

  • Process Interruption: Closing a critical flow path or opening an incorrect one can disrupt an entire process, leading to off-spec product or even shutdowns.
  • Safety Hazards: Opening a valve to a high-pressure line when a low-pressure line was intended, or isolating the wrong safety device, can create immediate and severe safety risks (e.g., leaks of hazardous chemicals, pressure excursions).
  • Environmental Releases: Misdirected flow can lead to spills or releases of regulated substances into the environment, resulting in regulatory fines and reputational damage.
  • Equipment Damage: Introducing incompatible materials, over-pressurizing equipment, or creating cavitation can severely damage pumps, heat exchangers, and piping.

Detailed Verification Steps:

  • Positive Identification (3-Way Verification): This is the cornerstone of correct valve identification. Every time you approach a valve for operation, perform the following:
    1. Tag Verification: Read the valve tag number and description. Verbally confirm it matches the Procedure, Work Order, or Operator Log instruction. Don’t rely solely on memory or location.
    2. Line Tracing: Physically trace the pipeline connected to the valve in both directions (upstream and downstream) as far as practical. Verify the pipe contents, direction of flow (if indicated), and connection points. This helps confirm you’re on the correct process line.
    3. Location Confirmation: Confirm the valve’s physical location matches diagrams, P&IDs (Piping and Instrumentation Diagrams), and your mental map of the unit. Look for surrounding equipment, landmarks, and other valves in the vicinity.
  • Utilize P&IDs and Process Flow Diagrams (PFDs): Before going into the field, review the relevant P&IDs and PFDs for the area where you’ll be working. Understand the valve’s function, its upstream and downstream connections, and its relationship to other equipment. This pre-job briefing is crucial.
  • Pre-Job Briefing and Communication: When working with a team, especially during complex operations, conduct a thorough pre-job briefing. Clearly communicate which valves will be operated, by whom, and the expected outcomes. Use consistent terminology and valve numbers.
  • “Line of Sight” Principle: Whenever possible, maintain a clear line of sight to the valve you’re operating. If you have to turn away or your view is obstructed, re-verify before proceeding.
  • Address Discrepancies Immediately: If there’s any discrepancy between the valve tag, line tracing, P&ID, or instructions, STOP. Don’t proceed until the discrepancy is resolved. This may involve consulting with a supervisor, another operator, or reviewing documentation. It’s always better to ask than to make a mistake.
  • Avoid Assumptions: Never assume a valve’s function or connection based on its appearance or location alone. Always verify.
  • Poor Lighting/Visibility: In dimly lit areas or areas with poor visibility, use a flashlight. Ensure valve tags are clean and legible.
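For plants that use electronic operator rounds or permit-to-work software, the 3-way verification above can be captured as a simple digital pre-operation check. The sketch below is illustrative only—the record fields (`valve_tag`, `line_contents`, `location`) are hypothetical and not taken from any specific system:

```python
# Illustrative sketch of a 3-way valve verification check.
# The work-order and field-observation record formats are hypothetical.

def verify_valve(work_order: dict, field_obs: dict) -> list:
    """Return a list of discrepancies; an empty list means OK to operate."""
    issues = []
    # 1. Tag verification: the tag in the field must match the instruction exactly.
    if field_obs["tag"] != work_order["valve_tag"]:
        issues.append(f"Tag mismatch: saw {field_obs['tag']}, expected {work_order['valve_tag']}")
    # 2. Line tracing: pipe contents must match the process line on the P&ID.
    if field_obs["line_contents"] != work_order["line_contents"]:
        issues.append("Line contents do not match P&ID — re-trace the line")
    # 3. Location confirmation: unit/area must match the drawings.
    if field_obs["location"] != work_order["location"]:
        issues.append("Location does not match drawings — stop and resolve")
    return issues

wo = {"valve_tag": "V-1203", "line_contents": "caustic", "location": "Unit 12, north rack"}
obs = {"tag": "V-1203", "line_contents": "caustic", "location": "Unit 12, north rack"}
print(verify_valve(wo, obs))  # [] -> all three checks pass
```

Note the design mirrors the "Address Discrepancies Immediately" rule: any non-empty result means STOP, not "proceed with caution."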

2. Ensuring Valves Are Fully Seated When Closing: Achieving Positive Isolation

A common misconception is that a valve that feels closed is fully closed. For many valve types, especially gate and globe valves, a valve is only truly closed when it’s fully seated. Failure to fully seat a valve can lead to leaks, bypassing, and incomplete isolation.

Why is it Critical?

  • Incomplete Isolation: A partially closed valve will allow product to leak through, preventing effective isolation for maintenance, cleaning, or ensuring process integrity.
  • Safety Risks: Leaks of hazardous materials can pose serious safety threats (e.g., toxic gas release, flammable liquid spills).
  • Process Contamination/Bypassing: Unintended flow through a “closed” valve can contaminate product, bypass critical process steps, or lead to inefficient operations.
  • Erosion and Damage: Continuous small leaks (often called “wire drawing”) through a partially seated valve can erode the valve seat and disc over time, leading to permanent damage and increased leakage.
  • Loss of Containment: This can contribute to environmental incidents and regulatory non-compliance.

Detailed Seating Procedure:

  • Feel the Resistance: As you close the valve, you’ll feel increasing resistance as the disc or wedge approaches the seat. This is normal.
  • Gentle Snug Up: Once you feel significant resistance, apply a firm but gentle final turn to “snug up” the valve. Do NOT over-tighten or use excessive force, especially with smaller valves or those with fine threads. Over-tightening can:
    • Damage the valve seat or disc.
    • Strip the stem threads.
    • Deform the gasket, leading to future leaks.
    • Make it difficult to open the valve later.
  • Back-Seating (Where Applicable): For some globe valves and specific gate valves, after fully closing the valve, it’s good practice to turn the handwheel a quarter to half turn back in the opening direction. This pulls the stem slightly back against a “back seat” in the bonnet, which helps to seal against stem packing leaks. Note: Not all valves are designed for back-seating, and excessive back-seating can sometimes unseat the main closure element. Refer to specific valve manufacturer guidelines or plant procedures.
  • Verify Zero Flow/Pressure: After closing a valve for isolation, always verify that flow has stopped or pressure has dropped to zero on the downstream side, if safe and practical to do so (e.g., by observing a pressure gauge, flow meter, or listening for flow).
  • “Closed” Indication: Visually confirm the valve position indicator (if present) shows “closed.” However, don’t rely solely on the indicator as it can sometimes be misaligned.
  • Never Use a Cheater Bar (Unless Approved): Using wrenches or “cheater bars” to gain leverage on valve handwheels is generally prohibited unless specifically authorized by a procedure or supervisor for specific, large valves where extra leverage is safely required, and the valve is designed to withstand it. Excessive force can severely damage the valve.
  • Report Leaks: If a valve doesn’t fully seat and continues to leak, or if you encounter excessive difficulty in seating it, stop and report the issue immediately for maintenance. Don’t try to force it.

3. Never Walking Away from an Open Valve: Maintaining Situational Awareness

Operating a valve is an active process that requires constant attention, especially when opening. Walking away from an open valve, even for a moment, can have serious consequences.

Why is it Critical?

  • Uncontrolled Flow/Pressure: Leaving a valve unattended while it is being opened, especially during filling or pressure equalization, can lead to overfilling of tanks, over-pressurization of lines, or uncontrolled reactions.
  • Spills and Releases: An unattended valve opening can quickly lead to spills, overflows, or releases of hazardous materials if the receiving vessel or line capacity is exceeded or if an unexpected condition arises.
  • Process Upsets: Uncontrolled flow can destabilize an entire process, leading to off-spec product, emergency shutdowns, or even equipment damage.
  • Safety Hazards: Uncontrolled flow can lead to explosions, fires, or exposure to hazardous chemicals if containment is lost.
  • Missed Abnormalities: An operator present during valve opening can immediately detect and respond to abnormal conditions such as leaks, unusual noises, vibrations, or rapid pressure/level changes.

Detailed Guidelines for Opening Valves:

  • “Open Slowly, Open Deliberately”: Unless a procedure specifically dictates rapid opening (e.g., for certain emergency valves), open valves slowly and deliberately. This allows:
    • Controlled Flow/Pressure Equalization: Prevents hydraulic shock (water hammer) and allows pressures and temperatures to equalize gradually.
    • Monitoring for Abnormalities: Gives you time to observe changes in pressure, flow, level, temperature, and listen for unusual sounds.
    • Reaction Time: Provides time to react and close the valve if an unexpected problem arises.
  • Constant Monitoring: Remain at the valve and continuously monitor the relevant process parameters (e.g., pressure gauges, level indicators, flow meters) as you open it. Listen for flow, watch for leaks, and feel for vibrations.
  • Define Your “Walk-Away” Point: Only walk away from a manual valve once its operation is complete (fully open, fully closed, or set to a specific throttled position), you’ve verified its state, and confirmed the process is stable and reacting as expected.
  • Communication: If you must leave a valve partially open for an extended period (e.g., during a slow fill), ensure you communicate this clearly to other operators and your supervisor. Use lockout/tagout procedures if the valve is critical for safety or maintenance.
  • Never Leave a Partially Open Valve Unattended During Critical Operations: During critical operations like filling a tank, transferring hazardous materials, or bringing equipment online, stay with the valve until the operation is complete and the system is stable.
  • Use Checklists and Procedures: For complex valve lineups or operations, always use written checklists and follow standard operating procedures (SOPs). Mark off each step as it’s completed, including valve position verification.
  • Handover Protocols: During shift changes or breaks, ensure clear and detailed handover of any valves that are in an intermediate state or that require ongoing monitoring.

4. Never Use Safety Devices to Run the Process: Protect the Last Line of Defense

In chemical plants, emergency systems like high-high level switches, low-low flow trips, or relief valves are built to protect people, equipment, and the environment if something goes wrong. These devices are not meant to be part of normal daily operation. Using them that way puts the plant at risk and removes your safety backup.


Why is it Critical?

  • Loss of Safety Margin: These devices are there to act when the process goes off track. If you’re relying on them during normal work, they’re no longer a true backup—and may not be there when you really need them.
  • Operator Desensitization: If a safety device trips regularly, it starts to feel routine. That means you’re ignoring early warning signs and could miss a real problem.
  • Mechanical Failure Risk: Emergency switches and shutdown systems are not built for frequent use. Repeated cycling can cause them to fail or stick, making them unreliable during an actual emergency.
  • Poor Operational Control: If you’re waiting for a safety trip to stop a process, you’re not truly in control. It means you’re running too close to the edge instead of managing the process with proper tools.
  • Regulatory Compliance: Standards like OSHA PSM, API 2350, and ISA 84 require that safety systems be used only as a final layer of protection—not as part of the operating plan. Misuse can lead to audits, fines, and serious safety issues.

Detailed Use Practices:

  • Know the Difference Between Control and Safety Devices: Understand which instruments are for daily control (like level transmitters or flow meters) and which are there to stop emergencies. Don’t mix them up in your operations.
  • Use Proper Operating Limits: Always stop processes—like tank filling—based on normal operating levels, not when the high-high trip activates. Stay within safe ranges well before safety systems kick in.
  • Track Levels and Flow—Don’t Wait for Trips: Monitor tank level, flow rate, and fill time closely. Know how long a fill should take and when to stop it. The safety switch should never be what tells you you’ve gone too far.
  • Review P&IDs and SOPs: Before starting a job, check P&IDs to see how the system is supposed to work. Procedures should never tell you to rely on a safety trip to complete an operation.
  • Recognize Frequent Trips as a Problem: If safety devices are tripping often, report it. That’s a sign something is wrong with the process or procedure—it’s not “just how it works.”
  • Speak Up About Unsafe Procedures: If you see or are asked to follow a practice that uses safety systems as a routine control method, stop and bring it up. It could prevent an incident.
  • Test Safety Devices on a Schedule—Not During Operations: Safety systems should be tested in a controlled environment during maintenance or inspection. They’re not process control devices.
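To make "stop on normal operating limits, not on the trip" concrete, here is a rough back-of-the-envelope sketch: given a tank capacity and fill rate, compute when the planned stop should occur against the normal fill limit, so the high-high switch is never the stopping mechanism. All numbers (capacity, levels, rate) are hypothetical:

```python
# Hypothetical tank-fill planning sketch: stop on the NORMAL limit,
# leaving the high-high trip untouched as the last line of defense.

def plan_fill(capacity_gal, start_level_pct, fill_rate_gpm,
              normal_stop_pct=90.0, high_high_pct=95.0):
    """Return (minutes to the planned stop, minutes of margin before the trip)."""
    to_stop_gal = capacity_gal * (normal_stop_pct - start_level_pct) / 100
    to_trip_gal = capacity_gal * (high_high_pct - start_level_pct) / 100
    t_stop = to_stop_gal / fill_rate_gpm
    t_trip = to_trip_gal / fill_rate_gpm
    return t_stop, t_trip - t_stop

t_stop, margin = plan_fill(capacity_gal=10_000, start_level_pct=40,
                           fill_rate_gpm=100)
print(f"Stop the fill at ~{t_stop:.0f} min; high-high margin ~{margin:.0f} min")
```

Knowing the expected fill time (and the margin to the trip) before opening the valve is exactly the "track levels and flow—don’t wait for trips" practice: if the fill is still running past the planned stop time, something is wrong.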

5. Procedure Use and Compliance: The Blueprint for Safe and Consistent Operation

Standard Operating Procedures (SOPs) are the backbone of safe, efficient, and compliant chemical plant operations. They encapsulate critical process safety information, best practices, and lessons learned. Proficient operators not only follow procedures but also understand their purpose and actively contribute to their continuous improvement.

Why is it Critical?

  • Consistency and Predictability: Ensures tasks are performed uniformly across all shifts and operators, leading to predictable process outcomes and reducing variability.
  • Error Reduction: Procedures are designed to minimize human error by providing clear, step-by-step instructions, highlighting critical points, and specifying safety precautions.
  • Accident Prevention: Many incidents can be traced back to deviations from established procedures or the lack of adequate procedures. Adherence prevents known hazards from recurring.
  • Training and Competency: Procedures serve as vital training tools for new operators and refreshers for experienced personnel, ensuring a baseline level of competency.
  • Regulatory Compliance: Regulatory bodies (e.g., OSHA, EPA) often mandate written procedures for critical operations involving highly hazardous chemicals, making compliance essential.
  • Troubleshooting and Reference: Provide a reliable reference for operators during abnormal conditions or when troubleshooting process upsets.
  • Knowledge Transfer: Capture institutional knowledge, ensuring that critical operational experience isn’t lost due to personnel changes.

Detailed Guidelines for Procedure Use and Compliance:

  • Always Use the Latest Approved Version: Before starting any task, verify that you’re using the most current, approved version of the procedure. Check dates or version numbers. Never rely on outdated copies or memory for critical operations.
  • Pre-Job Briefing and Review:
    • Read Through Before Starting: For any non-routine or critical task, read the entire procedure from beginning to end before commencing work. This allows you to understand the flow, anticipate challenges, and identify any prerequisites.
    • Identify Critical Steps: Pay close attention to steps marked as “critical,” “caution,” “warning,” or “danger.” Understand why these steps are critical and what the potential consequences of error are.
    • Verify Equipment and Conditions: Before touching any equipment, mentally (or physically, if safe) walk through the procedure and confirm that all necessary equipment is available, in proper working order, and that the process conditions (e.g., pressure, temperature, levels) are as specified.
  • Step-by-Step Adherence:
    • Execute Each Step as Written: Perform each step exactly as written in the procedure. Don’t skip steps, combine steps, or improvise, unless explicitly authorized through a Management of Change (MOC) process or immediate emergency response (which must be documented afterwards).
    • “Stop-Think-Act-Review” (STAR Principle): For critical steps, pause before acting.
      • Stop: Before performing the action.
      • Think: What is the action? What is the expected outcome? What are the potential hazards?
      • Act: Perform the action deliberately.
      • Review: Verify the action was completed correctly and the expected outcome occurred.
    • “Verify and Confirm”: After completing a step (especially valve operations, switch positions), visually or audibly confirm the action was successful before moving to the next step.
  • Marking Procedures (Where Permitted): If procedures are designed for it, check off steps as they’re completed. This helps you keep your place, especially in long or complex procedures, and provides an audit trail.
  • Do Not Deviate Without Authorization: If a step cannot be performed as written, or if an unexpected condition arises, stop the job immediately. Don’t attempt a workaround. Contact your supervisor, and if necessary, initiate the Management of Change (MOC) process to review and update the procedure.
  • Feedback and Improvement:
    • Report Discrepancies: If you identify an error, omission, ambiguity, or inefficiency in a procedure, report it immediately through the established feedback mechanism. This is a critical part of continuous improvement.
    • Participate in Reviews: Actively participate in periodic procedure reviews or updates when requested. Your frontline experience is invaluable in ensuring procedures are practical, accurate, and safe.
  • Accessibility: Know where to find all relevant procedures, whether they’re hard copies in a control room, electronic files, or within a document management system. Ensure they’re readily accessible during operations.
  • Training and Assessment: Understand that procedures are core to your training and competency. Actively engage in training sessions that use procedures and be prepared for assessments of your understanding and ability to follow them.

Conclusion: The Operator’s Role in Process Integrity

Proficient manual valve operation isn’t just about turning a wheel; it’s about understanding the process, anticipating potential issues, and maintaining constant vigilance. By diligently practicing these key actions—verifying the correct valve every time, ensuring valves are fully seated when closing, and never walking away from an open valve—along with respecting safety devices as the last line of defense and strict procedure use and compliance, chemical plant operators significantly contribute to the safety, reliability, and efficiency of the entire facility. These practices are cornerstones of operational discipline and distinguish a proficient operator. Turning these crucial training points into ingrained habits is paramount, ensuring that safe and effective operation becomes second nature. Continuous training, unwavering adherence to procedures, and a steadfast commitment to safety are essential for mastering these fundamental skills.


You Can’t Manage What You Don’t Value: Reimagining Capital in Organizations

A New Model for Business Decision Making

The Capitals Model is a framework that helps organizations recognize that long-term value creation depends on more than just financial assets. Developed and promoted by the Capitals Coalition, the model encourages businesses and institutions to account for four interconnected forms of capital: natural, social, human, and produced (or financial) capital. Natural capital includes environmental resources like air, water, biodiversity, and ecosystems that provide essential services such as clean water and climate regulation. Social capital refers to the relationships, trust, and networks that enable societies and economies to function effectively. Human capital encompasses the skills, knowledge, health, safety, and wellbeing of people—core elements that drive productivity and innovation. Finally, produced capital consists of physical and financial assets such as infrastructure, tools, and investments used in production.

“Every time we ignore human capital, we gamble with resilience and call it cost savings.”

The purpose of the model is to help organizations understand their dependencies and impacts across all these capitals, leading to better-informed decisions that balance financial, environmental, and social outcomes. By incorporating these multiple dimensions into planning and reporting, the Capitals Model promotes long-term resilience, risk awareness, and integrated value creation. It is increasingly used in sustainability reporting frameworks such as the CSRD (Corporate Sustainability Reporting Directive) and TNFD (Taskforce on Nature-related Financial Disclosures), as well as in ESG investing and corporate risk assessments. Ultimately, the Capitals Model challenges traditional, siloed thinking and emphasizes that organizations cannot fully measure or manage value if they ignore the real sources of it—people, nature, and society.

How it Differs

The Capitals Model differs significantly from the traditional financial model used in most businesses today by broadening the definition of value and encouraging a more holistic approach to decision-making. While the traditional model focuses almost entirely on produced capital—such as financial assets, infrastructure, and short-term profitability—the Capitals Model recognizes four types of capital: natural, social, human, and produced. This expanded perspective acknowledges that ecosystems, people, and communities are not just costs or risks to be managed but are core contributors to long-term value creation.

Another key difference is how each model treats externalities. The traditional financial model often ignores or externalizes environmental and social impacts—such as pollution, resource depletion, or workforce health—unless they directly affect the bottom line. In contrast, the Capitals Model seeks to internalize these impacts, making them visible and measurable so that they can be incorporated into strategic planning. It also differs in its time horizon. Traditional models prioritize short-term financial returns and quarterly performance, whereas the Capitals Model emphasizes long-term sustainability, resilience, and the ability of all capitals to continue generating value over time.

In terms of decision-making, the traditional approach typically evaluates options based on narrow financial return metrics, while the Capitals Model encourages integrated thinking that considers broader risks and opportunities tied to natural systems, human wellbeing, and social cohesion. Finally, the Capitals Model promotes more transparent and integrated reporting by aligning financial and non-financial performance measures. Rather than replacing the traditional model, it enhances it by providing a fuller, more realistic view of how organizations create, preserve, or destroy value across multiple dimensions.

Focus on Human Capital

Human capital refers to the collective value of the knowledge, skills, experience, health, motivation, and wellbeing that individuals bring to an organization. Unlike machines or buildings, human capital is a dynamic and renewable resource that grows through education, training, experience, and engagement. It is essential not only for productivity and operational success, but also for innovation, adaptability, and organizational resilience. Employees are the ones who solve problems, improve processes, build relationships, and respond to crises. Their insights and performance often determine whether safety protocols are followed, quality standards are met, and customers are retained. Yet, in traditional financial models, people are usually treated as expenses (e.g., salaries, benefits) rather than as value-generating assets. This leads to underinvestment in workforce development, wellbeing, and safety—despite the fact that losses related to disengagement, burnout, or injuries often far exceed those related to equipment failures.

By recognizing human capital as a core asset, organizations can shift from a cost-control mindset to an investment mindset. This means prioritizing not just technical training, but also mental health support, inclusive work cultures, and leadership development. In the context of safety, for example, valuing human capital helps justify investments in better work design, human-centered controls, and robust incident prevention—not just because it avoids harm, but because it preserves the organization’s most critical and irreplaceable resource: its people. When human capital is properly valued, it becomes clear that protecting and developing the workforce is not just a moral obligation, but a strategic and financial imperative.

Example Application for Safety Improvement

Integrating human capital concepts into the justification of capital expenditures to abate prioritized risks fundamentally strengthens the risk assessment process by illuminating the often-overlooked economic value of the workforce. In traditional models, risk abatement is typically justified through cost-avoidance calculations—preventing equipment damage, production losses, or regulatory fines. However, this approach often undervalues or entirely omits the impact of risks on people, treating injuries, fatigue, or human error as incidental rather than as substantial threats to organizational performance and value. By applying human capital thinking, organizations begin to see the workforce not as a cost to be minimized, but as a key asset whose protection and enhancement is central to sustainable value creation.

When prioritized risks are identified—such as those involving high potential for human error, exposure to hazardous conditions, or excessive cognitive or physical demands—the decision to allocate capital should be informed by the potential loss or degradation of human capital if those risks go unaddressed. This includes quantifiable losses such as lost time from injuries, recruitment and retraining costs, and reduced productivity from disengagement, as well as harder-to-measure impacts like erosion of institutional knowledge, morale, and team effectiveness. By explicitly incorporating these human capital losses into the cost-benefit analysis, the financial justification for risk mitigation becomes more robust and realistic.

For example, a capital project to redesign a loading station prone to human error might not be justified if only equipment downtime and repair costs are considered. But if the analysis includes the value of preventing operator fatigue, preserving experienced personnel, and avoiding the downstream effects of injuries on morale and turnover, the return on investment becomes clear. In this way, human capital provides a powerful lens through which safety investments are reframed—not just as a means of avoiding harm, but as strategic initiatives to preserve the organization’s productive capacity, resilience, and long-term competitiveness. This shift ultimately leads to better-aligned decisions, more effective risk management, and stronger outcomes for both people and performance.

Here is a conceptual example of how to use human capital concepts in a spreadsheet-style cost-benefit analysis to justify capital for risk abatement. This example compares a traditional justification based on physical assets alone with an enhanced justification that includes human capital impacts.

Spreadsheet Example: Capital Investment Justification for Safer Loading Station

| Category | Traditional Model | Human Capital-Inclusive Model | Notes |
|---|---|---|---|
| A. Capital Investment Cost | $150,000 | $150,000 | Cost to redesign loading station (e.g., automation, ergonomic layout). |
| B. Annual Cost of Incidents (Before Control) | | | |
| Equipment Downtime (repairs, lost production) | $30,000 | $30,000 | From incident reports. |
| Material Loss / Spills | $10,000 | $10,000 | Spill cleanup, lost product. |
| Injury Costs (medical, claims) | $15,000 | $15,000 | Workers’ comp, etc. |
| Lost Work Time | — | $20,000 | Based on 400 hrs × $50/hr burdened labor rate. |
| Retraining Due to Turnover | — | $8,000 | Based on 1.5 FTEs lost per year due to injuries/fatigue. |
| Productivity Loss (disengagement, fatigue) | — | $25,000 | Estimated from human performance data and surveys. |
| Morale / Team Performance Impact | — | $10,000 | Proxy value based on estimated effect on throughput. |
| Total Annual Incident Cost | $55,000 | $118,000 | Total cost exposure per year. |
| C. Post-Investment Residual Cost (20%) | $11,000 | $23,600 | Assumes 80% risk reduction; reduced but not eliminated. |
| D. Annual Benefit (Cost Avoided, B − C) | $44,000 | $94,400 | Difference between pre- and post-control costs. |
| E. Payback Period (Capital / Annual Benefit) | 3.41 years | 1.59 years | Shorter payback with human capital included. |
| F. ROI over 5 Years | 147% | 315% | Much stronger return when accounting for workforce impact. |
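The spreadsheet arithmetic above is simple enough to check directly. A minimal sketch reproducing both columns (all figures come from the table; ROI here is total 5-year savings divided by capital, matching the table's convention):

```python
# Reproduce the cost-benefit table's arithmetic for both models.
CAPITAL = 150_000
RISK_REDUCTION = 0.80  # assumed 80% risk reduction after the project
YEARS = 5

def justify(annual_incident_cost):
    residual = annual_incident_cost * (1 - RISK_REDUCTION)       # row C
    annual_benefit = annual_incident_cost - residual             # row D
    payback_years = CAPITAL / annual_benefit                     # row E
    roi = annual_benefit * YEARS / CAPITAL                       # row F
    return annual_benefit, payback_years, roi

traditional = 30_000 + 10_000 + 15_000                           # $55,000
human_capital = traditional + 20_000 + 8_000 + 25_000 + 10_000   # $118,000

for name, cost in [("Traditional", traditional), ("Human-capital", human_capital)]:
    benefit, payback, roi = justify(cost)
    print(f"{name}: benefit ${benefit:,.0f}/yr, payback {payback:.2f} yr, ROI {roi:.0%}")
```

Running this yields the same payback periods (3.41 vs 1.59 years) and 5-year ROI figures (147% vs 315%) shown in the table, making it easy to test how sensitive the justification is to the assumed 80% risk reduction or to any single human capital line item.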

Key Insights

  • The traditional analysis shows a marginal ROI and long payback, which may lead decision-makers to delay or reject the investment.
  • By including human capital-related losses—such as lost productivity, turnover, and team disruption—the financial justification is dramatically strengthened.
  • This approach makes it easier to align safety investments with business value and gain executive support.

I have been collaborating with other stakeholders on the Capitals Coalition’s Valuing Human Capital in Occupational Health & Safety project. This project engages current and future occupational health & safety professionals around the importance of valuing the health, safety, and wellbeing of workers through a capitals approach as set out in the Social & Human Capital Protocol.

Learn more about implementing the Capitals model at: https://capitalscoalition.org/capitals-approach/frameworkintegrated/


From Ambiguity to Action: Turning Weak Signals into Strategic Safety Gains

As I stood reviewing yet another incident report, I found myself asking a question that’s become uncomfortably familiar: What could we have done differently—not after the fact, but before it happened? In high-risk, complex operations, it’s all too clear that control is never absolute, and even the most carefully written procedures or well-intentioned training programs don’t always prevent the unexpected. Despite our best efforts, relying on reaction after loss or injury often means we’re already too late. But what if the real opportunity lies not in tightening our response, but in shifting our mindset? When we proactively target the conditions that give rise to accidents—the weak signals, the subtle mismatches, the latent system flaws—we move closer to a performance model built not on avoiding failure, but on anticipating and outpacing it.

“Disasters don’t come without warning—they whisper. The smartest organizations are the ones that learn to listen before the shouting starts.”

This article explores how forward-looking strategies can help us reshape safety from a reactive posture to one rooted in resilience, foresight, and true operational control.

What is Weak Signal Theory?

The application of weak signal theory to managing operations in a chemical manufacturing environment involves several critical elements that together enhance an organization’s ability to anticipate, detect, and respond to early indicators of failure or degradation before they lead to safety incidents, process disruptions, or quality deviations. At its core, weak signal theory revolves around recognizing subtle, fragmented, and often ambiguous indicators—such as minor alarms, slight deviations in process data, uncharacteristic equipment behavior, or informal operator concerns—that may not, on their own, demand immediate attention but could signal the early onset of significant problems. One of the foundational elements is operational mindfulness, which requires frontline workers, supervisors, and technical personnel to maintain an acute awareness of normal operating conditions and a sensitivity to any deviations, however small. This form of attentiveness must be cultivated through training, cultural reinforcement, and leadership modeling. Closely linked to mindfulness is the need for psychological safety, where workers feel empowered to speak up about concerns that may lack hard evidence or fall outside of routine metrics. Without such a culture, weak signals often remain unreported, particularly in hierarchical or production-driven environments.

Another essential element is the establishment of multiple channels for signal detection and capture, including both formal mechanisms (like near-miss reporting systems, shift logs, and operator rounds) and informal methods (such as conversations during toolbox talks or anecdotal comments during control room handovers). The goal is to create low-friction opportunities for employees to surface weak signals without fear of being ignored or penalized. Once a signal is detected, cross-functional interpretation becomes critical. Weak signals are often ambiguous and require the collective expertise of operations, engineering, safety, maintenance, and quality teams to understand their potential significance. These teams apply systems thinking and historical knowledge to connect the dots and determine whether a pattern is emerging or if further investigation is warranted.

To institutionalize this process, weak signal detection must be integrated into daily operational routines. This includes embedding weak signal prompts into shift handovers, routine safety meetings, management of change (MOC) reviews, and even control room checklists. Organizations should also maintain a repository of precursor indicators, linking weak signals to known failure modes or previous incidents. This enables trend analysis and pattern recognition that can uncover hidden systemic risks. A key feature of high-functioning weak signal systems is the willingness to act on incomplete information—whether that means initiating a preventive maintenance check, adjusting a process parameter, or triggering a temporary operational control—based on a credible concern rather than waiting for confirmation through a failure.
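As an illustration of how a precursor-indicator repository might support the trend analysis described above, the short Python sketch below counts recurring weak signals by asset and flags any that recur above a threshold. The schema, asset names, and threshold are hypothetical, chosen only to make the idea concrete; a real EHS system would use its own fields and review-driven criteria.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class WeakSignal:
    """A single captured weak signal (hypothetical schema)."""
    source: str        # e.g. "shift log", "operator round", "near-miss report"
    equipment: str     # the asset the signal relates to
    description: str   # free-text observation

def flag_recurring(signals, threshold=3):
    """Return assets whose weak-signal count meets the threshold,
    suggesting a possible emerging pattern worth cross-functional review."""
    counts = Counter(s.equipment for s in signals)
    return {asset: n for asset, n in counts.items() if n >= threshold}

signals = [
    WeakSignal("shift log", "Pump P-101", "slight vibration at startup"),
    WeakSignal("operator round", "Pump P-101", "seal weeping"),
    WeakSignal("toolbox talk", "Reactor A valve", "handle feels stiff"),
    WeakSignal("near-miss report", "Pump P-101", "unexpected pressure dip"),
]
print(flag_recurring(signals))  # {'Pump P-101': 3}
```

Note that the flagged output is only a prompt for interpretation, not a conclusion: the point of the repository is to surface candidate patterns for the cross-functional team to make sense of, consistent with acting on credible but incomplete information.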

Additionally, the system must include feedback loops and learning mechanisms, so those who report weak signals see that their concerns are taken seriously and result in action or investigation. Feedback reinforces reporting behavior and contributes to a culture of trust and vigilance. The final critical element is ongoing evaluation and refinement of the weak signal processes themselves. This includes auditing the effectiveness of detection channels, assessing the organization’s responsiveness to weak signals, and ensuring that lessons learned from weak signals are shared across shifts and sites to strengthen the organizational memory. In sum, the critical elements of weak signal theory in chemical manufacturing encompass perceptual awareness, open communication, collaborative interpretation, proactive intervention, cultural support, and continuous learning—all of which are essential to achieving anticipatory safety and operational reliability in a complex, high-risk industrial setting.

Weak Signal Theory Applied to Operational Safety

Weak signal theory is critically important in maximizing the performance of safety systems in industrial operations because it shifts the organizational focus from reactive to proactive risk management. Traditional safety systems are often designed to respond to failures after they occur, relying on incident investigations and corrective actions. However, in complex, high-risk environments like industrial manufacturing, serious incidents are often preceded by small, ambiguous signs—what weak signal theory calls “weak signals.” These signals may include subtle equipment irregularities, near-misses, abnormal operating conditions, or informal operator concerns that do not fit neatly into established risk models. If ignored, these signals can represent missed opportunities to detect latent conditions, design flaws, or human factors vulnerabilities that could contribute to a major incident.

By enabling organizations to identify and act on these early indicators, weak signal theory enhances the agility and responsiveness of safety systems. It helps bridge the gap between what is known and what is emerging, allowing safety systems to evolve dynamically in response to real-world complexity. Additionally, it supports the principles of high reliability organizations (HROs) by fostering sensitivity to operations, a reluctance to simplify interpretations, and a commitment to resilience.

Weak signal theory also strengthens human performance by encouraging frontline workers to report what they sense, even when it lacks clear evidence, and by ensuring that the organization listens and learns from these observations. In doing so, it drives continual improvement in both technical controls and organizational processes, thereby maximizing the effectiveness and reliability of the entire safety management system. Ultimately, integrating weak signal detection into industrial operations can mean the difference between preventing a disaster and managing its aftermath.

The Connection Between Weak Signal Theory and Sensemaking

The connection between weak signal theory and sensemaking is both deep and essential, particularly in high-risk environments like chemical manufacturing where ambiguity, complexity, and time pressure are constant. Weak signal theory deals with the detection of early, often ambiguous indicators of potential problems—subtle signs such as unusual noises, small equipment irregularities, abnormal operator behavior, or inconsistent data trends that, while not yet clearly threatening, may be harbingers of larger failures. On its own, detecting these signals is not enough; their value lies in how an organization interprets and acts upon them. This is where sensemaking becomes vital.

Sensemaking, as defined by organizational theorist Karl Weick, is the social and cognitive process through which people interpret uncertain, incomplete, or ambiguous information to construct a coherent understanding of what is happening and decide what to do next. In the context of weak signals, sensemaking involves gathering fragmented pieces of information, questioning assumptions, and developing shared mental models among team members to assess whether the observed irregularities represent noise, a minor variation, or a precursor to a serious event. For example, a low-level alarm might be rationalized by one individual as insignificant, but during collective sensemaking—such as a multidisciplinary team discussion—it could be reframed as the early indication of a systemic failure or a control system degradation.

The link between the two concepts is especially important because weak signals are rarely clear-cut. They require contextualization—a blending of local knowledge, historical experience, technical expertise, and real-time observations. Sensemaking enables teams to transform weak signals into actionable insights by recognizing patterns, comparing current anomalies with past incidents, and asking critical questions like, “What are we missing?” or “What else could this mean?” In this way, sensemaking functions as the bridge between noticing weak signals and making risk-informed decisions. It shifts organizational focus from simplistic cause-and-effect thinking to dynamic interpretation and learning.

In high-reliability operations, the connection between weak signal theory and sensemaking also supports the principle of preoccupation with failure. Organizations that actively practice sensemaking in response to weak signals are more likely to anticipate emerging risks, adapt quickly, and intervene before an incident occurs. Moreover, sensemaking processes encourage distributed cognition—leveraging multiple perspectives across roles and departments—so that small cues are not dismissed due to cognitive biases or siloed thinking.

In summary, weak signal theory identifies the “what”—the subtle cues that something may be going wrong—while sensemaking provides the “how”—the interpretive process that gives these signals meaning, direction, and urgency. Together, they enable a proactive safety posture where early warnings are not only seen but understood, debated, and acted upon in ways that strengthen operational resilience and prevent harm.

Conclusion

In chemical manufacturing, where the stakes are high and systems are complex, weak signal theory provides a vital strategy for building foresight and resilience. By cultivating mindfulness, enabling open communication, interpreting signals collectively, and acting proactively, organizations can prevent small problems from growing into major incidents. Applying these steps consistently—and embedding them into the cultural and operational fabric of the plant—can transform how safety is managed, making it more anticipatory, adaptive, and effective.

Follow Up Discussion 5/11/25

A colleague of mine had this question after reading this post:

“I’ve been challenged when arguing for recognition of such patterns that the connection with recognized data analysis tactics is, well, weak. Can you be more specific about the prompts and questions you’re using to build detection processes like what you describe here?”

It gave me the opportunity to think further about the application of weak signal theory in the workplace. Below are the full thoughts behind the short answer I gave her:

That’s a great and nuanced question—recognizing weak signals often depends as much on culture and intentional listening as it does on hard data. One way to make the process more structured is to integrate specific prompts into tools like digital safety observations and post-job reviews. Questions such as, “Was anything unusual, harder than expected, or out of alignment with normal operations?” can help surface early indicators of failure. These qualitative responses could then be tagged and categorized in an EHS data system to identify emerging trends. Additionally, forming cross-functional review teams—including frontline operators, supervisors, and human factors or HPI professionals—can help interpret this data. Their role would focus on recognizing weak patterns like recurring workarounds, ambiguous feedback, or inconsistent practices that often signal deeper system vulnerabilities.

To further support this process, organizations could operationalize key principles from High Reliability Organizations (HROs)—especially “preoccupation with failure” and “deference to expertise.” These principles can be embedded into routine planning and debrief activities by encouraging teams to reflect on what nearly went wrong and to center the voices of those closest to the work. One idea is to dedicate time in monthly risk or operations meetings for a “Weak Signals Review,” where team leads bring forward seemingly minor concerns or gut instincts shared by staff. These discussions could be supported by visual tools like heat maps or storyboards that help connect dots across incidents. By formalizing both the tools and the cultural mindset, weak signals can evolve from anecdotal observations into early warning signs that drive proactive risk management. Below is a sample storyboard and signal taxonomy to get you started.

Weak Signals Tools Examples:

Weak Signals Storyboard:

Use this tool to organize and document the weak signals captured, and to explain their significance for future safe and stable operations.

Title: Recurring Setup Delay in Reactor A Feed Valve Operations

Source:
• 3 operator comments during post-job reviews over 2 weeks
• 1 maintenance ticket noting minor binding in valve handle rotation
• Informal note from shift lead: “Valve’s just… off lately—can’t explain it.”

Context:
• No immediate failure, but recurring 8–12 minute delays in loading sequence
• Newer operators report more difficulty than seasoned staff
• Maintenance backlog for valve inspection is growing due to limited parts

HRO Cues Identified:

  • Preoccupation with failure: Noticing the pattern despite no failure
  • Reluctance to simplify: Not dismissing it as “just operator error”
  • Sensitivity to operations: Operators sense “something’s not right”

Initial Hypotheses:
• Micro-warp in valve stem under thermal cycling
• Inadequate procedural clarity on manual override steps
• Early signs of ergonomic mismatch in redesigned work platform

Action Path:
• Short-term: Expedite valve inspection and rotate in backup
• Mid-term: Conduct ergonomic assessment with HPI team
• Long-term: Update observation prompts to include “small friction points”


Signal Taxonomy Snapshot (Used in Tableau/Excel):

Use this tool to comb feedback from employees involved in the operation to identify relevant weak signals for further analysis by the cross-functional team.

| Category | Subcategory | Examples |
| --- | --- | --- |
| Process Friction | Minor recurring delays | Setup lags, tool alignment issues |
| Procedural Drift | Unofficial workarounds | "We do it this way now" comments |
| Ambiguous Feedback | Gut feelings, tone shifts | "Doesn't feel right," tone in discussion |
| System Noise | Frequent resets/alerts | Alarm fatigue, nuisance interlocks |
| Role Strain | Task mismatch | Workarounds by less experienced workers |

Alone in the Air: Some Thoughts on Solitude

Michael Collins, June 1969. Photo: NASA

I just finished reading Carrying the Fire by Michael Collins, a deeply personal and candid account of what it really meant to be an astronaut during the golden age of space exploration. Unlike many official histories, this book isn’t just a celebration of Apollo 11’s triumph—it’s an unfiltered, often humorous, and sometimes unsettling look at the relentless training, the internal rivalries, and the staggering risks that defined NASA’s early missions. Collins, as the often-overlooked third man of the first moon landing, brings a unique perspective: while Armstrong and Aldrin left footprints on the lunar surface, he orbited above, utterly alone.

“Such solitude on a grand adventure reveals a truth about human experience—our deepest achievements and moments of transformation often come when we stand alone against the unknown.”

What was it like to be the sole human cut off from both Earth and the moon, at the farthest reaches of human isolation? What kind of mindset did it take to strap into a machine built by the lowest bidder, knowing a single failure could mean a silent death in space? Carrying the Fire doesn’t just tell the story of a mission—it compels you to consider what it took to be an astronaut in the Apollo era, what kind of person thrives in that environment, and whether the spirit of such daring exploration still burns as brightly today.

Floating Over Another Celestial Body Like a God of Space

After Neil Armstrong and Buzz Aldrin departed the command module on the fourth day of the mission to enter the lunar module, named Eagle, Michael Collins was alone in the command module Columbia. For over 23 hours, completing a lunar orbit every two hours and losing radio contact for 40 minutes each time the capsule passed behind the moon, Collins kept Columbia safe and on course. Occasionally he was able to monitor the progress of Armstrong and Aldrin below. During these far-side passes, he recognized he was the only human being on that side of the moon. Collins had this to say about his time in solitude:

“I am alone now, truly alone, and absolutely isolated from any known life… …I feel this powerfully–not as fear or loneliness–but as awareness, anticipation, satisfaction, confidence, almost exultation. I like the feeling.”

Recognition of Solitude from Charles Lindbergh

After the Apollo 11 crew returned from their historic mission, they spent 21 days in quarantine. This was thought necessary to ensure that no infectious material from the moon had been brought to Earth, where it could create an infection foreign to our biosphere. It was during this time that Michael Collins received a letter from Charles Lindbergh, who in 1927 had become the first aviator to fly solo across the Atlantic Ocean.

Here is the text of the letter from Charles Lindbergh:

“Dear Colonel Collins,

My congratulations to you on your fascinating, extraordinary, and beautifully executed mission; and my sincere thanks for the part you took in issuing the invitation that permitted me to watch your Apollo 11 launching from the location assigned to the Astronauts. (There would have been constant distractions for me in the area with the VIPs, among whom I refuse to class myself–what a terrible designation!)

I managed to intercept on television the critical portion of your mission during this orbit of my own around this world. Of course after you began orbiting the moon, television attention was concentrated on the actual landing and walk-out. I watched every minute of the walk-out, and certainly it was of indescribable interest. But it seems to me you had an experience of in some ways greater profundity–the hours you spent orbiting the moon alone, and with more time for contemplation.

What a fantastic experience it must have been–alone looking down on another celestial body, like a god of space! There is a quality of aloneness that those who have not experienced it can not know–to be alone and then to return to one’s fellow men once more. You have experienced an aloneness unknown to man before. I believe you will find that it lets you think and sense with greater clarity. Sometime in the future I would like to listen to your own conclusions in this respect.

As for me, in some ways I felt closer to you in orbit than to your fellow astronauts I watched walking on the surface of the moon.

We are about to start the descent to Manila, and I must end this letter.

My Admiration and best wishes,

Charles A. Lindbergh

Of course I feel sure that your sense of aloneness was regularly broken into by Mission Control at Houston; but there must have been intervals in between–I hope enough of them. In my flying, years ago, I didn’t have the problem of coping with radio communications.”

Letter postmarked Manila, July 28, 1969

This is an extraordinary discussion between two men who have experienced an incredible feeling of being utterly isolated from all other human beings. It is this insightful conversation that started me thinking about solitude and inspired this post. The experience of being profoundly alone in a great adventure carries an existential weight that few ever fully encounter. It is an immersion into the deepest recesses of the self, where one’s existence is stripped down to its most fundamental elements—courage, uncertainty, faith, and raw survival. For those who embark on such journeys, solitude is not just a physical state but a profound psychological and philosophical encounter with the unknown. It is a moment where the individual stands on the edge of human limits, facing both the vast external world and the vast internal universe within.

Charles Lindbergh’s 33-hour, radio-silent flight across the Atlantic is a prime example of such an experience. In those long hours, he was entirely alone with his thoughts, his aircraft, and the unbroken expanse of sky and sea beneath him. There was no safety net, no immediate help if things went wrong—only his skill, his faith in his machine, and his endurance to see him through. In these moments, a person is not only tested physically but also spiritually. The loneliness is not just the absence of others, but an overwhelming presence—the presence of uncertainty, of the vast forces of nature, and of one’s own fragile mortality. Yet within this solitude lies an incredible paradox: in being utterly alone, one often feels most deeply connected to existence itself.

Such solitude on a grand adventure reveals a truth about human experience—our deepest achievements and moments of transformation often come when we stand alone against the unknown. There is an existential purity to it, a stripping away of all distractions and superficialities, leaving only the individual and their will to push forward. This is where the great adventurers, explorers, and pioneers of history have found themselves, standing in the midst of something far larger than themselves yet still driven to navigate it. It is in this space—between fear and faith, between self-doubt and self-reliance—that the essence of human resilience and transcendence is revealed.

Some may feel that Michael Collins got the short end of the stick on the Apollo 11 moon mission: he was not able to walk on the surface of the moon with his crewmates. I agree, however, with his own view of the matter: he was the crucial link for returning the landing crew to Earth and to humanity. He was the shepherd of the command module Columbia; without him and that craft, there was no returning to Earth. While his task served that vital function, it also gave him a rare opportunity to be utterly alone, with all the emotions and revelations that come with it.

My Own Humble Experience with Solitude in the Air

I have not crossed the Atlantic piloting my own plane, or orbited another heavenly body in a fantastic space machine, but I have piloted my plane for hours without talking with anyone. Alone with my thoughts, my piloting skill, and my faith in my small plane to return me safely to earth when I am ready. It is in these times, up there piloting my craft through the atmosphere, that I feel a little of what Collins and Lindbergh may have felt. You are acutely aware that your piloting skills, your aeronautical decision-making, and the plane you are flying are all that allow you to enter this world and stay safe in it. Ironically, the solitude has a presence in the cockpit. It is intimidating yet exhilarating to be in that moment. With it comes the obligation to be competent, informed, and always respectful of the responsibility aviators have to be safe and efficient at all times. There is a low margin for error in the world of the sky, and you are often alone in that world, testing your mettle. But what a feeling it is each time you are rewarded with a perfect landing to match the peaceful passage over the earth below.

Advantages of Solitude for Professionals

Solitude can be a powerful asset for professionals, offering a range of benefits that enhance productivity, creativity, and overall well-being. One of the primary advantages is enhanced focus and concentration. Working alone minimizes distractions from colleagues and office chatter, allowing professionals to engage in deep work and make significant progress on complex tasks. It also fosters increased creativity and innovation by providing the mental space needed for reflection, brainstorming, and pursuing unconventional solutions without the pressure of social conformity.

Additionally, solitude improves productivity by enabling self-paced work and streamlined decision-making, free from unnecessary meetings or office politics. It also promotes self-awareness and personal growth, offering time for introspection, goal setting, and mindfulness practices that enhance emotional intelligence and stress management. Stress reduction is another key benefit, as quiet environments help professionals decompress, lower anxiety levels, and return to work with renewed energy and focus.

Moreover, solitude enhances problem-solving skills by allowing for uninterrupted thought processes and deeper analysis, which are essential for strategic planning and leadership. It also contributes to a healthier work-life balance by helping professionals establish personal boundaries and create a customized work environment that supports their individual needs and preferences.

While collaboration and social interaction remain important, intentional periods of solitude can lead to deeper focus, increased creativity, and greater career satisfaction. Finding the right balance between solitude and engagement can optimize both professional performance and personal well-being.
