New AI technology from Neurala is said to improve quality control in manufacturing by identifying inconsistencies and anomalies in vision inspection datasets. (Neurala/Business Wire photo)

BOSTON—A new AI explainability technology is reported to help manufacturers improve quality inspections by accurately identifying objects in an image that are causing a specific problem or presenting an anomaly. The technology, developed by the vision AI software company Neurala, is purpose-built for industrial and manufacturing applications and is aimed at addressing the digitization challenges of Industry 4.0, Neurala said in a release.

“Explainability is widely recognized as a key feature for AI systems, especially when it comes to identifying bias or ethical issues. But this capability has immense potential and value in industrial use cases as well, where manufacturers demand not only accurate AI, but also need to understand why a particular decision was made,” said Max Versace, CEO and co-founder of Neurala, in the release. “We’re excited to launch this new technology to empower manufacturers to do more with the massive amounts of data collected by IIoT systems, and act with the precision required to meet the demands of the Industry 4.0 era.”

Industrial IoT systems constantly collect massive amounts of anomaly data, in the form of images, that are used in the quality inspection process. Neurala’s explainability feature is said to enable manufacturers to derive more actionable insights from these datasets by identifying whether an image truly shows an anomaly or whether the flag is a false positive caused by other conditions in the environment, such as lighting. According to Neurala, this gives manufacturers a more precise understanding of what went wrong, and where, in the production process. It also allows them to take the proper action, whether that is fixing an issue in the production flow or improving image quality, the company said.
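Neurala has not published how its feature makes that distinction, but the general idea can be illustrated with a simple, hypothetical triage heuristic: check whether the image region an explainability heatmap highlights (described further below) actually falls on the inspected part, or on the surrounding background where glare and shadows tend to appear. Everything in the sketch that follows, including the function name, threshold, and inputs, is an assumption made for illustration, not Neurala’s product or API.

```python
# Illustration only: a hypothetical way to triage flagged inspection images.
# The heatmap, part mask, threshold, and function name below are assumptions
# made for this example; they are not part of Neurala's product or API.
import numpy as np

def triage_flagged_image(heatmap: np.ndarray, part_mask: np.ndarray,
                         attention_quantile: float = 0.95) -> str:
    """Judge whether an anomaly flag looks genuine or environmental.

    heatmap   -- 2-D array of per-pixel importance scores from any
                 explainability method (higher = more influence on the decision)
    part_mask -- 2-D boolean array, True where the inspected part is
    """
    # Keep only the most influential pixels (the top few percent of the heatmap).
    cutoff = np.quantile(heatmap, attention_quantile)
    hot = heatmap >= cutoff

    # How much of that high-attention region actually lies on the part?
    on_part = np.logical_and(hot, part_mask).sum() / max(hot.sum(), 1)

    if on_part >= 0.5:
        return "likely genuine defect (model is looking at the part)"
    return "possible false positive (model is reacting to background or lighting)"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    heatmap = rng.random((64, 64))
    heatmap[10:20, 44:60] += 2.0             # model fixates on a bright background corner
    part_mask = np.zeros((64, 64), dtype=bool)
    part_mask[16:48, 8:40] = True            # the part sits left of center in the frame
    print(triage_flagged_image(heatmap, part_mask))
```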

Manufacturers can use Neurala’s explainability feature with either Classification or Anomaly Recognition models. Explainability highlights the area of an image causing the vision AI model to make a specific decision about a defect.

For Classification models, that decision is which class an object belongs to; for Anomaly Recognition models, it is whether an object is normal or anomalous. Armed with this detailed understanding of how the AI model reaches its decisions, manufacturers can build better-performing models that continuously improve processes and efficiencies, according to the release.
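Neurala has not disclosed how these highlights are computed, but occlusion sensitivity is one generic, model-agnostic technique that produces this kind of area-level explanation: hide one region of the image at a time and measure how much the model’s confidence drops. The sketch below is only an illustration of that general idea; the toy classifier and all names in it are hypothetical and are not Neurala’s implementation.

```python
# Illustration only: occlusion sensitivity, a generic way to highlight the image
# region driving a classifier's decision. The toy "defect classifier" below is
# purely hypothetical and has nothing to do with Neurala's models.
import numpy as np

def occlusion_heatmap(image: np.ndarray, predict, patch: int = 8) -> np.ndarray:
    """Slide a neutral patch over the image and record how much the predicted
    defect probability drops when each region is hidden."""
    h, w = image.shape
    baseline = predict(image)
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # hide this region
            # A large drop means this region mattered to the decision.
            heat[y:y + patch, x:x + patch] = baseline - predict(occluded)
    return heat

if __name__ == "__main__":
    def predict(img):
        # Toy classifier: "defect score" rises with bright pixels in the top-left quadrant.
        return float(img[:16, :16].mean())

    img = np.zeros((32, 32))
    img[4:12, 4:12] = 1.0                    # simulated scratch
    heat = occlusion_heatmap(img, predict)
    row, col = np.unravel_index(int(heat.argmax()), heat.shape)
    print("most influential region starts near pixel", (int(row), int(col)))
```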

The technology is said to be simple to implement, supporting Neurala’s mission to make AI accessible to all manufacturers, regardless of their level of expertise or familiarity with AI. No custom code is required. As a result, “anyone can leverage explainability to gain a deeper understanding of image features that matter for vision AI applications,” the company said in the release.

Explainability is available as part of Neurala’s cloud offering, Brain Builder, and will soon be available with Neurala’s on-premises software, VIA (Vision Inspection Automation), according to Neurala.

Neurala (https://www.neurala.com) was founded in 2006. The company’s research team invented Lifelong-DNN™ (L-DNN) technology, which is reported to lower the data requirements for AI model development and enable continuous learning in the cloud or at the edge.
