Yvette Schmitter is a thought leader in ethical AI and the co-founder and managing partner at Fusion Collective, a firm that specializes in growth marketing services and fractional C-suite expertise.
Schmitter is a former digital architecture partner at PwC and has previously spoken at TEDx.
In this exclusive interview with IEN, Schmitter discusses how ethical AI is transforming the manufacturing industry.
IEN: Will ethical AI play any role in the manufacturing industry?
Yvette Schmitter: Absolutely! Ethical AI will play a transformative role in manufacturing by addressing critical challenges while driving efficiency and accelerating innovation. Why? Because it is already playing a crucial role in the industry, particularly in quality control and defect detection. By integrating machine learning algorithms and AI-powered vision systems, manufacturers can ensure product safety, reduce waste, and improve efficiency. However, the ethical implications of these advancements cannot be overlooked—AI must be designed and deployed responsibly to enhance human expertise, prevent harm, and ensure fairness.
Let’s take Toyota Research Institute (TRI)’s ethical AI application in manufacturing: Toyota’s AI-powered design optimization tools. Traditionally, automotive designers relied on generative AI for early-stage designs, but these tools often lacked real-world engineering constraints. To address this, Toyota integrated AI tools that optimized designs based on crucial engineering factors such as aerodynamics, fuel efficiency, handling, and safety. The benefit? By ensuring AI-generated designs adhere to real-world constraints, Toyota not only improved vehicle performance but also reinforced safety and sustainability ethics—reducing energy waste, improving fuel efficiency, and ultimately designing safer cars with longer ranges. This approach highlights the ethical responsibility of using AI not just for innovation’s sake but to enhance safety, efficiency, and environmental impact.
Another compelling example is Siemens Gamesa’s use of AI in wind turbine manufacturing. Wind turbines, often seen as symbols of sustainability, rely on handmade, made-to-order blades—a process that, while artisanal in nature, also increases the risk of human error. A defect in a wind turbine blade isn’t just a minor inconvenience; it can have serious consequences, from reduced efficiency to potential structural failures. Recognizing this challenge, Siemens Gamesa implemented AI-driven machine learning and computer vision systems to detect defects in real-time. The result? A 25% reduction in defects and a projected return on investment within 2.5 years.
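The core idea behind this kind of defect detection can be shown with a toy sketch. To be clear, this is an illustrative statistical outlier check, not Siemens Gamesa's actual system: learn what a defect-free surface looks like from known-good scans, then flag readings that deviate sharply from that baseline.

```python
# Toy sketch of AI-assisted defect detection (illustrative only, NOT
# Siemens Gamesa's actual system). "Scans" are grids of intensity values.

def train_baseline(good_scans):
    """Learn the mean and spread of pixel intensities from defect-free scans."""
    pixels = [p for scan in good_scans for row in scan for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var ** 0.5

def flag_defects(scan, mean, std, z_thresh=3.0):
    """Return (row, col) positions whose intensity is a statistical outlier."""
    return [(r, c)
            for r, row in enumerate(scan)
            for c, p in enumerate(row)
            if std > 0 and abs(p - mean) / std > z_thresh]

# Five defect-free reference scans, then one scan with an anomalous bright spot.
good = [[[100, 101, 99], [100, 102, 98]] for _ in range(5)]
mean, std = train_baseline(good)
suspect = [[100, 101, 250], [99, 100, 101]]
print(flag_defects(suspect, mean, std))  # → [(0, 2)]
```

Production systems use trained vision models rather than a single z-score, but the principle is the same: codify "normal," then surface deviations for human review in real time.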
Again, this is another prime example of ethical AI in action—enhancing human craftsmanship rather than replacing it, improving safety, and ensuring that renewable energy infrastructure remains reliable and sustainable. By minimizing defects, Siemens not only reduced material waste but also contributed to the long-term success of clean energy solutions.
This is why ethics in AI is a critical imperative: it positions AI as an enabler, not a replacement. These two examples alone underscore the broader ethical responsibilities of AI-driven manufacturing:
- Enhancing human expertise, not replacing it. AI should and must augment human capabilities, improving safety and efficiency while ensuring skilled workers remain at the heart of production.
- Ensuring both safety and sustainability. AI-powered systems must be designed to not only reduce risks and prevent harm, but also promote sustainable practices rather than prioritizing cost-cutting at the expense of quality and ethical considerations. This is where true ethical leadership takes the lead—making the right decision even when it’s the hardest one to make.
- Maintaining transparency and accountability. AI-driven decision-making in manufacturing can’t be a black box; it must not only be understandable to the average layperson but explainable and fair, ensuring that automated quality checks and design optimizations do not introduce biases or unforeseen risks.
So, in the end, ethical AI isn’t just about efficiency (i.e., reducing costs) and productivity (i.e., increasing output)—it’s about taking responsibility to do what’s right for all. When implemented thoughtfully, it really can make manufacturing safer, more sustainable, and more inclusive—ensuring that technological progress serves both businesses and society as a whole.
IEN: How can AI innovation help manufacturers be more inclusive?
YS: By making inclusion in AI manufacturing a responsibility and not a nice-to-have or if-someone-remembers afterthought. For far too long, entire demographics have been excluded from the rooms where innovation happens. Between 2021 and 2024, women and non-binary individuals held only 15% of C-suite roles within NASDAQ-100 tech companies. Notably, in 2022, this figure briefly rose to 17% before declining again. Manufacturing is no exception. More than 60% of manufacturers reported an increase in the representation of women within their companies over the past five years; however, despite this progress, challenges persist, with 50% of companies struggling to hire diverse candidates and 40% facing difficulties in retaining them. Decisions about what gets built, how it’s built, and who benefits from it are still shaped by a narrow set of voices—leaving gaps that AI can either amplify or eliminate.
The persistent and continued underrepresentation of women and people of color in technical development and decision-making roles within the tech and manufacturing industries is a significant concern. This abject lack of diversity not only perpetuates systemic inequalities but also hinders innovation and inclusivity in product development. AI is not the solution on its own.
Diversity, Equity & Inclusion (DE&I) has been weaponized as an initiative and is currently being systematically dismantled, but the not-so-secret of our success is that diversity is the future of this country. Sticking to math, because it doesn’t have an opinion: diverse markets and women in America account for $16.3 trillion of the $27 trillion American GDP—that’s 60% of the US economy. That’s why I like math. Rejecting the business case for diversity is just bad business.
If you purely look at the math, do you want more customers or do you want them to go someplace else? Do you want employees who can understand the customer base? Do you want products on the shelf that reflect that?
When AI is used with intentionality and accountability, it can help manufacturers correct systemic exclusions that have shaped industries for decades. Here are a few ways how:
AI for Inclusive Design: Who’s Included?
Most products today are not designed for everyone—they’re designed for the “default” user, who is often the developer and their preferences. And for decades, the default has been male, able-bodied, white, and Western-centric. AI can help fix that—but only if manufacturers demand that their AI tools analyze diverse datasets and challenge exclusionary norms. This requires leaders who challenge the status quo and resist the knee-jerk, easy path of reusing the same datasets, because millions of people live in the shadow of the rinse-and-repeat datasets used today.
Example: Products ranging from hands-free sinks to medical devices, including pulse oximeters and prosthetics, often fail people with darker skin tones because they were never tested on diverse skin pigments. AI-driven diagnostic tools must be trained on inclusive datasets—or they will continue reinforcing disparities in healthcare.
AI in Quality Control: Who’s Left Out?
Manufacturers blindly trust AI to automate quality control, but biased AI makes biased decisions. If AI systems aren’t trained on diverse data, they will flag differences as defects, reinforcing exclusion rather than eliminating it. For example, Clearview AI has stated in pitches to potential investors that 3,100 police departments use its tools (more than one-sixth of law enforcement agencies across the US). But here’s the thing about the fine print—the software performs nearly perfectly in lab testing using clear comparison photos, yet there has been no real-world independent testing of the technology’s accuracy in how police typically use it: with lower-quality, grainy, and obscured surveillance images. Near perfection is expected when it’s trained on perfect pictures. Federal testing in 2019 showed Asian and Black people were up to 100x as likely to be misidentified by some AI software as White men. It’s not conjecture; it’s been proven that AI-powered facial recognition in security systems has higher failure rates for Black and Asian faces, yet it’s still deployed in workplaces and law enforcement agencies.
The Washington Post reported on January 13, 2025, that law enforcement agencies are using AI tools in a way they were never intended to be used—as a shortcut to finding and arresting suspects without other evidence. While most police departments are not required to report that they use facial recognition, those that do keep very few records of their use of the technology. But this isn’t just limited to law enforcement; most people using AI tools succumb to “automation bias,” which is the tendency to blindly trust decisions made by powerful software, ignorant of its risks and limitations.
Here’s the kicker: if manufacturers don’t demand rigorous bias testing, they risk making products hostile to entire demographics, and it’s more than being inconvenienced; it’s the difference between life and death.
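In its simplest form, the rigorous bias testing described above means disaggregating a model's error rate by demographic group before deployment, rather than trusting one aggregate accuracy number. The group names and records below are purely illustrative, not data from any real system:

```python
# Hypothetical bias audit: compare a model's error rate across groups.
# Each record is (group, predicted_label, actual_label) — illustrative only.

def error_rates_by_group(records):
    """Return a mapping of group -> fraction of predictions that were wrong."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
rates = error_rates_by_group(results)
print(rates)  # group_b's error rate is far higher — a signal not to ship as-is
```

An aggregate accuracy of 75% would hide the fact that every error falls on one group; per-group disaggregation is what makes the disparity visible.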
AI Won’t Fix Exclusion—Unless We Make It
Manufacturers can’t claim progress while continuing to exclude entire demographics from decision-making. AI is already shaping the next era of manufacturing—but whether it drives inclusion or cements exclusion is a choice. As leaders, your responsibility is beyond the balance sheet and profits; it’s about a future that is beneficial for all.
Manufacturers must demand diverse datasets, unbiased testing, and transparency in AI decision-making. This can no longer be voluntary; we need ethical use and development regulations covering ALL industries and companies using AI. Manufacturing leadership must hold every AI developer accountable for building systems that serve all people—not just their own preferences and experiences, or a privileged few.
And most importantly, the manufacturing industry must put diverse human voices in the rooms where AI is built, trained, and deployed. Because if AI is designed only by those who have always had power, it will only reinforce the world they built for themselves—not the world we all deserve.
IEN: How should manufacturing companies be using AI right now? Any specific tools?
YS: Manufacturing companies should be leveraging AI-based tools to optimize per-piece workflows. There are significant opportunities to forecast demand and raw material pressures, making manufacturing floor processes more predictable and stable. Additionally, AI-driven tools play a crucial role in quality assurance and quality control (QA/QC). Essentially, any area where a statistical process control model—such as Six Sigma—is applied presents a ripe opportunity for AI tooling.
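To make the SPC connection concrete, here is a minimal Shewhart-style control-limit check, the kind of rule that AI tooling can automate on the floor. This is a generic sketch of the statistical technique, not any specific vendor's implementation:

```python
# Minimal Shewhart-style control chart check (generic SPC sketch).
# Learn control limits from in-control history, then flag new measurements
# that fall outside them.

def control_limits(samples, sigmas=3.0):
    """Compute lower/upper control limits as mean +/- sigmas * std dev."""
    mean = sum(samples) / len(samples)
    std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    return mean - sigmas * std, mean + sigmas * std

def out_of_control(samples, lcl, ucl):
    """Return indices of measurements falling outside the control limits."""
    return [i for i, x in enumerate(samples) if not (lcl <= x <= ucl)]

# Historical in-control measurements of some part dimension (illustrative).
history = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9]
lcl, ucl = control_limits(history)

new_run = [10.0, 10.1, 11.5, 9.9]  # one measurement drifts out of spec
print(out_of_control(new_run, lcl, ucl))  # → [2]
```

AI tooling layers forecasting and multivariate models on top of this, but the underlying logic—establish a statistical baseline, detect departures from it—is the same one Six Sigma practitioners already use.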
IEN: What best practices can manufacturers follow to make sure they are using AI tools to the fullest extent?
YS: Manufacturing, like every other industry, is at a crossroads. AI is here—not as a replacement but as a powerful enabler of human expertise, innovation, and inclusivity. But here’s the truth: technology will only be as fair, as just, and as inclusive as we make it. If we don’t train it right, if we don’t guide it wisely, it will inherit the same blind spots that have excluded entire demographics from the rooms where decisions are made.
So, how do manufacturers ensure AI isn’t just another IT tool of efficiency but a force for good? Here are three principles to lead by:
1. Train AI on the Full Story—Not Just One Chapter
AI learns from the past, but if the past has been narrowed, biased, or incomplete (history is written by the victors), it will keep telling the same old one-sided story. So, it’s up to us to expand its knowledge—to ensure it reflects the full spectrum of people, perspectives, and possibilities.
Lesson: AI must be trained not just on history, but on the future we want to create. Audit your data. Have the courage to ask whose voice is missing. And make sure your AI sees the whole picture that is larger than the viewpoints and experiences sitting in the conference room.
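One simple, concrete way to start the data audit described above is to compare each group's share of the training set against its share of the population the product is meant to serve. The group names, counts, and tolerance below are illustrative assumptions, not a standard:

```python
# Hedged sketch of a training-data representation audit (illustrative only).
# Flags groups whose share of the dataset falls well below their share of
# the intended user population.

def representation_gaps(dataset_counts, population_shares, tolerance=0.5):
    """Return groups whose dataset share < tolerance * population share,
    mapped to (dataset_share, population_share)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        if data_share < tolerance * pop_share:
            gaps[group] = (data_share, pop_share)
    return gaps

# Hypothetical dataset composition vs. intended-population shares.
counts = {"group_a": 900, "group_b": 80, "group_c": 20}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(representation_gaps(counts, population))
```

A check this simple won't catch every bias, but it forces the question the text poses—whose voice is missing?—to be answered with numbers before training begins.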
2. Keep People in the Process—Because AI Doesn’t Have a Conscience
AI is fast.
It is precise.
But it does not have intuition, compassion, accountability, or curiosity. And when it comes to critical areas around safety, ethics, and quality? That’s where human wisdom must always have the final say.
Lesson: AI should support human judgment, not replace it. The best decisions come from collaboration between machine intelligence and human wisdom. That’s the future: human + machine, and the future is NOW!
Not to sound overly cliché, but the future really is in our hands. AI is a tool. A mirror. A reflection of the choices we make or don’t make. So, we all face a choice: will we use it to repeat the past—or to build a future that includes us all?
Manufacturers who train AI with intention, keep people at the center, and use it to open doors rather than close them will lead not just in efficiency but in impact. True innovation isn’t just about building better products; it’s about building a better world—for everyone. And that is the kind of progress worth making.