
spacebands is a multi-sensor wearable that monitors external, environmental hazards, anticipates potential accidents, and gives real-time data on stress in hazardous environments.
My journey started in traditional EHS roles, learning the fundamentals such as risk assessments, compliance, and program implementation. The real shift happened when I was tasked with evaluating the safety implications of autonomous vehicles. That was my first deep dive into an AI-driven system in the workplace, and it opened my eyes both to how profoundly this technology would transform our field and to the new risks it presented.
That experience led me to found AI 4 EHS. As my Substack articles show, I saw a huge gap: my peers knew AI was important, but they were overwhelmed by the hype and didn't know where to start.
My role now is to be a practical, vendor-neutral guide. I help EHS leaders cut through the noise, providing everything from strategic advisory and AI tool selection to hands-on training. Essentially, I translate the world of AI into the practical language of safety, helping organizations move from being reactive to being predictive and proactive.
This is a critical question because, as I wrote in my series with Arianna Howard, we have to be realistic about AI's capabilities.
Exceeded Expectations: A use case that has truly exceeded expectations is Computer Vision (CV) for real-time workplace monitoring. The ability of AI to "see" and interpret images from existing cameras to verify PPE compliance, detect unsafe conditions like blocked exits, or identify poor ergonomics has been transformative. It unlocks new objective and actionable insights for EHS leaders. It enables automated, continuous monitoring that was previously impossible, allowing us to move from a reactive to a preventative safety posture.
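To make this concrete, here is a minimal sketch of what such a monitoring loop might look like, assuming a YOLO-family detection model fine-tuned on PPE classes. The weights file, camera URL, and class names are illustrative placeholders rather than references to any specific product:

```python
# Minimal sketch: flag frames where a person appears without a hard hat.
# Assumes a YOLO model fine-tuned on PPE classes; the weights file,
# stream URL, and class names below are illustrative placeholders.
from ultralytics import YOLO
import cv2

model = YOLO("ppe_yolov8n.pt")  # hypothetical fine-tuned weights

cap = cv2.VideoCapture("rtsp://camera-01/stream")  # an existing site camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    labels = {model.names[int(c)] for c in result.boxes.cls}
    # The alert describes a condition in a zone, not a named individual.
    if "person" in labels and "hard_hat" not in labels:
        print("PPE check: possible missing hard hat in camera-01 zone")
cap.release()
```

Note that the alert is tied to a camera zone rather than a person, which anticipates the 'monitor the process, not the person' principle discussed later.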
More Promise than Payoff: A use case that is still more promise than payoff is Predictive Analytics (PA) for forecasting future safety events with pinpoint accuracy. My articles highlight predictive analytics as a key application, but its limitation is handling novelty: AI struggles to predict risks it hasn't seen in historical data. It's getting better at forecasting equipment failures or flagging high-risk scenarios based on past data, but predicting the complex, unique chain of events that leads to a Serious Injury or Fatality (SIF) is still a major challenge. As I note, AI augments human judgment rather than replacing it, and this is the area where human expertise remains absolutely critical.
The key is to reframe the goal, a principle Arianna and I cover extensively in our article on data governance. We don't use analytics to predict an individual's behavior. The ethical line we draw is to monitor the process, not the person.
Predictive analytics are used to understand the systemic conditions that influence behavior. For example, our analytics might show that a certain area has a higher risk of incidents on a hot day when a new crew is working overtime. The model isn't predicting what a specific worker will do; it's flagging a confluence of systemic risks. This allows us to make targeted interventions (like adding extra hydration breaks or providing more supervision) that make the environment safer for everyone.
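As a sketch of how that kind of systemic flagging could work, the snippet below scores planned shifts, not individuals, with a simple logistic regression over historical shift records. The file names, feature columns, and alert threshold are illustrative assumptions:

```python
# Sketch: score upcoming shifts (not people) for systemic risk.
# The CSV schemas and feature names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

shifts = pd.read_csv("shift_history.csv")  # one row per area-shift
features = ["max_temp_c", "pct_crew_under_90_days", "overtime_hours"]
X, y = shifts[features], shifts["had_recordable_incident"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Flag tomorrow's shifts where systemic conditions converge.
planned = pd.read_csv("planned_shifts.csv")  # includes an 'area' column
planned["risk"] = model.predict_proba(planned[features])[:, 1]
for _, row in planned[planned["risk"] > 0.7].iterrows():
    print(f"{row['area']}: elevated systemic risk -- consider hydration "
          f"breaks or extra supervision")
```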
It's about identifying and mitigating risk in the system, which is a core EHS function, just supercharged with better data.
The biggest misconception is what I call the 'Technology-First Approach,' which I discuss in my article on the ROI of AI. Leaders often think they can just buy a cool AI tool and it will magically solve a problem. This leads to what I term 'safety innovation theater': impressive demos that don't translate to operational improvement.
The reality, and the core of my advice, is that you must Target Outcomes and Problems First, Technology Second. The conversation shouldn't be 'We need an AI.' It should be, 'We need to reduce the time it takes to complete a Job Hazard Analysis by 50%.' Once you define that specific, measurable task, you can find the right tool. AI is not a strategy; it's an enabler for a strategy you already have. Without that, you end up stuck in 'pilot purgatory' with no demonstrable ROI.
The key is to tailor the message and the metrics for each audience, a concept I laid out in the 'Tailoring the EHS AI Value Proposition' section of my ROI article.
For Engineers: I talk about system design and data. They respond to the challenge of building a more resilient system, not just adding a safety feature.
For Execs (like the CFO or COO): I speak their language. For the CFO, it's ROI, payback period, and reduced insurance costs. For the COO, it's operational efficiency, reduced downtime, and streamlined processes. I present safety as a strategic value driver, not a cost center.
For Frontline Teams: The message is about worker engagement and trust. As I stress in my governance article, transparency is crucial. We involve them in the process, showing them how a tool helps them, not how it surveils them. When they see the benefit to their own well-being, they become champions.
You have to connect the risk directly to a business outcome. In my ROI framework, I distinguish between tangible and intangible benefits. Instead of just presenting a high ergonomic risk score, I'll frame it as a business case: 'The high-risk manual tasks on this line are contributing to a 15% employee turnover rate and costing us X dollars in workers' comp claims. By investing in this AI-driven ergonomic monitoring and assist tool, we project a reduction in claims and an improvement in retention, with a payback period of 20 months.'
This approach translates a safety metric into the language of finance (claims cost, payback) and HR (turnover), making it immediately relevant to leaders outside the EHS silo.
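The arithmetic behind a payback figure like the one above is straightforward. A worked sketch, using illustrative numbers rather than figures from any real engagement:

```python
# Back-of-envelope payback calculation for the business case above.
# All figures are illustrative placeholders.
tool_cost = 120_000                # up-front cost of the monitoring tool
annual_claims_savings = 55_000     # projected workers' comp reduction
annual_turnover_savings = 17_000   # projected retention-related savings

annual_benefit = annual_claims_savings + annual_turnover_savings
payback_months = tool_cost / annual_benefit * 12
print(f"Payback period: {payback_months:.0f} months")  # -> 20 months
```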
The hardest message is always telling leadership that we need to slow down and manage expectations, especially when they're excited about a new technology. I had to do this recently for a major predictive analytics pilot.
To get it to land, I used the 'J-Curve' of AI productivity concept from my ROI article. I showed them the chart and explained that, like all major technologies, we should expect an initial dip in productivity as we implement, learn, and adapt. I explained that we wouldn't see a positive ROI overnight. By setting this realistic expectation from the start and framing it with a proven economic model, I shifted the conversation from 'Why isn't this working yet?' to 'What do we need to do to get to the upward part of the curve?' It turned a potential point of failure into a managed, strategic process.
This is the central theme of Part 2 of my and Arianna’s series, 'How Do I Do This Safely and Legally?' The key is building a robust governance framework from the start.
Effective AI governance in EHS ensures data security with layered controls, protects individual privacy through transparency and minimal data collection, maintains algorithmic integrity with bias checks and human oversight, and is managed by a cross-functional team.
We do this by obtaining what I call an 'AI Model Safety Data Sheet' or 'model cards' for each system and performing a risk assessment using this information. This documentation details the data sources, the model's purpose and limitations, and its performance metrics.
This ensures explainability. If a regulator asks why a decision was made, we can show our work. We also establish clear thresholds for mandatory human oversight, ensuring that a qualified person is always in the loop for safety-critical decisions.
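As an illustration of what such a document might capture in structured form, here is a skeleton; the schema and example values are assumptions made for the sketch, not a published standard:

```python
# Sketch of fields an 'AI Model Safety Data Sheet' / model card might hold.
# The schema and values are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class ModelSafetyDataSheet:
    model_name: str
    purpose: str                       # the task the model is approved for
    data_sources: list[str]            # provenance of the training data
    known_limitations: list[str]       # conditions where performance degrades
    performance_metrics: dict[str, float]
    human_oversight_threshold: str     # when a qualified person must review

sheet = ModelSafetyDataSheet(
    model_name="ppe-detector-v2",
    purpose="Detect missing hard hats in fixed-camera footage",
    data_sources=["vendor dataset v3", "site A annotated frames"],
    known_limitations=["low light", "non-standard headwear"],
    performance_metrics={"precision": 0.94, "recall": 0.89},
    human_oversight_threshold="any automated stop-work recommendation",
)
```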
The specific rulebook is lagging, but the foundational principles are not. Most modern safety management systems are performance-based; they tell you what to achieve (e.g., identify and control hazards), not how. This gives us room to innovate.
However, as I discuss in my governance article, the challenge is that new AI-specific legislation like the EU AI Act is emerging and will likely classify many EHS applications as 'high-risk.' Our job as EHS leaders is to build adaptable governance structures now that align with the four pillars I outlined—Data Security, Data Privacy, Algorithmic Integrity, and a Governance Framework. By doing this, we can demonstrate that our new methods meet the intent of existing standards and are ready for future regulations.
The line is bright and clear, and it's the most important principle in my work: We monitor the process, not the person. This is Pillar 2, Data Privacy, in our governance framework.
For example, we can use computer vision to analyze ergonomic data. The ethical application is to aggregate that data to say, 'Workstation C has a high rate of risky postures, we need to redesign it.'
The unethical application is to send an alert to a manager saying, 'Dan just bent over incorrectly.'
The first approach fixes the system and builds trust. The second creates a culture of surveillance and fear. We enforce this line with strict policies on transparency, purpose limitation, and data anonymization whenever possible.
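At the data layer, that line can be enforced by aggregating before anything reaches a report. A minimal sketch, assuming an event log keyed by workstation rather than by worker (the column names are illustrative):

```python
# Sketch: enforce 'monitor the process, not the person' at the data layer.
# Events are aggregated per workstation; no worker identifier is used.
import pandas as pd

events = pd.read_csv("posture_events.csv")  # workstation, timestamp, risk_score
by_station = (
    events.groupby("workstation")["risk_score"]
          .agg(["mean", "count"])
          .sort_values("mean", ascending=False)
)
# Output names workstations to redesign, never individuals to discipline.
print(by_station.head())
```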
We use a 'tight on the what, loose on the how' model.
The 'Tight' part is the global governance framework I detail in my articles. This includes our non-negotiable principles for data security, privacy, risk tolerance, and the use of an 'AI Model Safety Data Sheet.' This ensures a consistent, high standard of safety and ethical conduct everywhere.
The 'Loose' part is the implementation of specific use cases. A computer vision model trained to detect PPE in a North American construction site might need to be retrained for a manufacturing plant in Southeast Asia with different equipment and cultural norms. We empower local teams to adapt the tools to solve their most significant local problems, as long as they operate within the global ethical and security guardrails.
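One way to express that model in practice is as configuration: global guardrails that cannot be overridden, merged over site-level settings. The keys and values below are illustrative assumptions, not an actual policy file:

```python
# Sketch: 'tight on the what, loose on the how' as layered configuration.
# All keys and values are illustrative assumptions.
GLOBAL_GUARDRAILS = {
    "data_anonymization": True,                 # non-negotiable
    "purpose_limitation": "process-level monitoring only",
    "model_safety_data_sheet_required": True,
}

SITE_OVERRIDES = {
    "na_construction": {"ppe_classes": ["hard_hat", "hi_vis_vest"]},
    "sea_manufacturing": {"ppe_classes": ["bump_cap", "gloves"]},
}

def site_config(site: str) -> dict:
    # Local teams adapt the 'how'; the global 'what' always wins the merge.
    return {**SITE_OVERRIDES.get(site, {}), **GLOBAL_GUARDRAILS}
```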
A new safety technology is worth scaling only after a successful pilot project has demonstrated its value and the organization is prepared for enterprise-wide orchestration. Significant productivity improvements only come from achieving scale. The litmus test involves asking these 4 questions:
1. Did the Pilot Solve a Real Business Problem?
Success isn't about the tech itself, but about solving a real EHS problem and demonstrating a measurable impact on clear KPIs. The pilot must move beyond being an impressive demo to showing actual operational improvement.
2. Is the Data Foundation Solid?
You cannot scale without high-quality, harmonized data and the infrastructure to support it. If the pilot revealed data disharmony or inconsistent terminology, those issues must be resolved before scaling.
3. Is there a Plan for Enterprise Orchestration?
Scaling requires moving beyond individual use to coordinated applications across the enterprise. This means having a cross-functional steering committee and a plan to integrate the technology with existing systems.
4. Are the People and Processes Ready?
The biggest barriers are often human, not technical. A technology is ready to scale only if there is a robust change management plan to address worker fears, provide upskilling, and redesign workflows around the new capabilities (an AI+ approach).
The single most important capability is data fluency combined with what we call "AI Value Creator" thinking. This isn't about becoming data scientists, but developing the ability to think systemically about how data flows through safety decisions and understanding how to ask the right questions of AI systems.
In ten years, every safety professional will work with AI systems daily. Success won't depend on understanding algorithms, but on knowing how to spot when AI insights don't match ground truth, how to translate between technical capabilities and safety outcomes, and how to leverage proprietary EHS data as a competitive advantage.
The professionals who develop this capability now will lead the field by becoming strategic business partners rather than cost centers. They'll understand how to demonstrate quantifiable improvements in efficiency, risk reduction, and cost savings that showcase EHS contribution to business resilience and operational excellence.
This means investing in training for a blend of skills: data science and statistics knowledge, AI fluency, business acumen, and change management. This is the shift that will separate the EHS functions that are merely cost centers from those that are true strategic partners who add value to the business.
You can reach Dan on LinkedIn here - https://www.linkedin.com/in/danielgrinnell/