Ethical AI Solutions and Their Impact on Regulated Industries
Across regulated and asset-intensive industries, AI adoption is shaped by factors beyond technical feasibility. Organizations now seek to deploy AI systems that deliver measurable operational improvements. At the same time, they must keep these systems safe, explainable, and aligned with human judgment. Errors in these contexts carry real human and economic consequences.
In manufacturing, these risks are tangible. The U.S. Bureau of Labor Statistics, for example, reported 391 fatal occupational injuries in the manufacturing sector in 2023, underscoring how decisions in industrial environments can quickly become safety-critical. In agriculture, the operational stakes are comparable, though on a global scale.
The Food and Agriculture Organization of the United Nations estimates that up to 40 percent of global crop production is lost each year to plant pests and diseases, resulting in economic losses of more than $220 billion USD. These pressures are driving organizations to seek AI systems for decision support, while also demanding that these systems remain transparent, reliable, and aligned with human oversight.
In a recent conversation with Emerj Editorial Director Matthew DeMello, Dr. Steffen Hoffmann, Managing Director of Bosch UK, discussed how the organization approaches this balance in practice. Drawing on Bosch’s experience applying AI in deterministic production systems and on the selective internal introduction of generative AI (GenAI), Hoffmann outlined how ethical guardrails, human oversight, and business-aligned use case design guide Bosch’s AI strategy.
To illustrate these principles, this article examines how Bosch applies AI earlier in production workflows to reduce defects, how the company differentiates oversight based on use-case risk, and why GenAI is positioned as decision support rather than an autonomous authority.
Key Insights
- Move AI upstream to reduce quality risk: Applying intelligent systems earlier in production workflows can significantly reduce defect rates by addressing root causes instead of detecting failures late in the process.
- Match AI oversight to use-case risk: Deterministic systems can be automated with limited human involvement, while people-facing or probabilistic AI requires structured human review.
- Use GenAI as decision support, not authority: Deploying GenAI in supervised internal workflows allows organizations to benefit from probabilistic insights without surrendering accountability.
Listen to the full episode below:
Guest: Dr. Steffen Hoffmann, Managing Director of Bosch UK, Bosch
Expertise: AI strategy, manufacturing AI, ethical and responsible AI
Brief Recognition: Dr. Steffen Hoffmann is a senior leader at Bosch. He guides the company’s AI strategy for manufacturing, agriculture, and internal enterprise systems. In this conversation, Hoffmann discusses Bosch’s use of AI in deterministic production, the organization’s human-in-the-loop oversight, and the role of a formal AI code of ethics in responsible AI development.
Move AI Upstream to Reduce Quality Risk
Hoffmann shares an example involving Bosch’s work with a manufacturing partner producing alloy wheels. In this production environment, defects were traditionally detected late in the process using X-ray inspections to identify internal flaws in wheel rims.
Through AI-enabled analysis, Bosch’s team identified that these defects were closely tied to upstream production parameters. Variables such as aluminum melting conditions, flow velocity, cooling temperature, and pressure all directly influenced final product quality.
Instead of focusing only on inspection at the end of the line, Bosch applied AI earlier in the workflow, during the aluminum melting phase. This earlier intervention let the manufacturer monitor and adjust the parameters linked to defect formation before flaws were introduced.
In this case, defect rates were typically about 10%. Applying AI earlier reduced defects to roughly 1% to 2%. Hoffmann presented this as an example of how repositioning AI in a workflow can improve quality without changing inspection technology.
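The upstream monitoring Hoffmann describes can be sketched as a simple control-limit check on casting parameters. This is a minimal illustration of the general technique; the parameter names and limits below are hypothetical, not Bosch's actual process values.

```python
# Illustrative control limits for upstream casting parameters.
# Names and ranges are assumptions for the sketch, not real process data.
CONTROL_LIMITS = {
    "melt_temp_c": (660.0, 720.0),      # aluminum melt temperature
    "flow_velocity_m_s": (0.4, 0.9),    # melt flow velocity
    "cooling_temp_c": (180.0, 260.0),   # mold cooling temperature
    "pressure_bar": (55.0, 75.0),       # casting pressure
}

def check_parameters(reading: dict) -> list[str]:
    """Return the parameters drifting outside their control limits."""
    alerts = []
    for name, (low, high) in CONTROL_LIMITS.items():
        value = reading.get(name)
        if value is None or not (low <= value <= high):
            alerts.append(name)
    return alerts

reading = {"melt_temp_c": 731.5, "flow_velocity_m_s": 0.6,
           "cooling_temp_c": 240.0, "pressure_bar": 62.0}
print(check_parameters(reading))  # melt temperature is out of range
```

The point of the sketch is the placement, not the math: flagging drift at the melting phase lets operators correct conditions before a flawed wheel is ever cast, rather than discovering the flaw at X-ray inspection.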
For business leaders, the takeaway is clear: AI often delivers greater ROI when applied upstream. By addressing root causes earlier, organizations reduce waste, rework, and variability, shifting AI from a reactive inspection tool to a proactive process-optimization tool.
Match AI Oversight to Use-Case Risk
Hoffmann goes on to describe a practical, use-case-driven approach to oversight rather than a uniform governance model. He emphasizes that much of Bosch’s AI deployment remains deterministic by design. In manufacturing and precision agriculture, AI systems are commonly used to monitor physical processes, identify deviations, or automate routine tasks that carry limited risk.
These systems operate within clearly defined parameters, making their outputs predictable and measurable. As a result, they can be deployed with minimal human intervention while still delivering operational efficiencies.
However, Hoffmann is clear that this level of automation does not suit every AI use case. Where systems influence people or involve ambiguity, Bosch evaluates the need for human oversight on a case-by-case basis, escalating scrutiny for higher-risk deployments rather than imposing a uniform governance model on every situation.
The distinction Hoffmann describes here allows Bosch to avoid overgoverning low-risk automation. It also ensures higher-impact systems receive appropriate scrutiny. For executives, this approach offers a practical framework to align AI governance with real-world risk, not just abstract policy.
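One way to picture this risk-tiered approach is as a simple gating rule that maps use-case attributes to an oversight level. The tiers and rules below are an interpretation of the approach described in the conversation, not Bosch policy.

```python
# Sketch of risk-tiered AI oversight; tiers and rules are illustrative.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_people: bool   # does the output influence individuals?
    probabilistic: bool    # is the model output non-deterministic?

def oversight_level(case: UseCase) -> str:
    """Map a use case to a governance tier based on its risk attributes."""
    if case.affects_people:
        return "human-in-the-loop"   # structured human review required
    if case.probabilistic:
        return "human-on-the-loop"   # monitored, with escalation paths
    return "automated"               # deterministic, low-risk automation

print(oversight_level(UseCase("X-ray defect flagging", False, False)))  # automated
print(oversight_level(UseCase("HR assistant", True, True)))             # human-in-the-loop
```

The design choice here mirrors the article's framework: deterministic process automation passes through with minimal intervention, while anything touching people is routed to mandatory human review regardless of how well the model performs.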
Use GenAI as Decision Support, Not Authority
Bosch’s introduction of GenAI took place through an internal HR assistant known as ROB. Hoffmann described HR as a context that can involve situations of potential legal relevance, making it unsuitable for fully automated decision-making.
“AI decisions that affect people should not be made without a human arbiter. That means there always needs to be a human in between. We want to develop AI products that are safe, robust, explainable, and, most of all, trustworthy. And when we develop AI products, we observe legal requirements and wider ethical principles. That’s the book by which we play.”
— Dr. Steffen Hoffmann, Managing Director of Bosch UK
ROB provides suggested responses or solutions, but HR professionals must review any outcome that affects employees. Human staff check whether the output makes sense, taking all relevant factors into account, before acting on it.
Bosch deployed this generative system internally, not in customer-facing settings. Hoffmann tells the podcast audience that doing so allowed the organization to explore probabilistic AI in a controlled environment. Clear human accountability was always maintained.
This internal deployment approach reflects Bosch's broader ethical framework, which includes principles such as requiring human arbiters for AI decisions that affect people and prioritizing safety, robustness, explainability, and trustworthiness. By positioning GenAI as decision support rather than authority, Bosch can experiment with new capabilities while managing risk.
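The review gate Hoffmann describes, where no AI suggestion becomes an action without a human arbiter, can be sketched as a small approval workflow. Function and field names here are hypothetical and do not describe Bosch's actual ROB system.

```python
# Minimal human-in-the-loop review gate; names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    case_id: str
    draft_response: str
    approved: bool = False
    reviewer: Optional[str] = None

def review(suggestion: Suggestion, reviewer: str, approve: bool) -> Suggestion:
    """A named human reviewer explicitly records an approval decision."""
    suggestion.reviewer = reviewer
    suggestion.approved = approve
    return suggestion

def act_on(suggestion: Suggestion) -> str:
    """Refuse to act unless a human has approved the suggestion."""
    if not suggestion.approved:
        raise PermissionError("No human approval recorded; action blocked.")
    return f"Sent response for case {suggestion.case_id}"

s = Suggestion("HR-1042", "Draft reply about leave policy")
s = review(s, reviewer="hr_specialist", approve=True)
print(act_on(s))
```

The essential property is that the action path fails closed: the system cannot execute an unreviewed suggestion, which keeps accountability with the human reviewer rather than the model.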
