As the digital age moves forward, it’s becoming impossible to avoid interacting with artificial intelligence (AI) systems. Computer assistants and AIs perform an ever-growing range of tasks that are broadly intended to improve our quality of life. This extends to industry as well.
But first, what do we mean by artificial intelligence? In simple terms, it’s any machine (usually a computer) that does things normally associated with human intelligence, such as reasoning, learning and self-improvement.
AI systems in industry are the same technologies you use in daily life, applied to industrial problems. The same kind of AI that makes our phone calls clearer can listen for bad blades in a sawmill. AI programs like the ones that recommend movies and music suited to our unique tastes can guide designers toward the right mix of materials for the perfect concrete for the job. The same math behind teaching a toy robot dog to walk helps manufacturing facilities plan and schedule maintenance well into the future.
When these tools and algorithms target problems in physical (non-digital) industries, they fall into the special realm of industrial AI, or IAI. The many unique needs and challenges of industry set these algorithms apart from their more broadly used counterparts. Specific industries even have special names for the adoption of IAI technologies. For example, manufacturing engineers use terms such as Industry 4.0 and "smart manufacturing." These all reflect the growing adoption and application of AI to problems previously thought impossible to automate.
So how can the same technologies be applied to such vastly different problems and still get good results? By understanding not only the tool, but also the problem faced and the environment where it will be used.
Generally, IAI is applied to tasks that are tedious, time-consuming or simply too difficult for humans to accomplish. The goal of IAI, like any tool, is making both worker and facility more productive. As part of this, ongoing efforts at the National Institute of Standards and Technology (NIST) aim to educate and guide users towards selecting the right IAI tool for the right job.
In broadest terms, IAI tools fall into two categories: predefined rules-based tools and machine learning tools. Some tools use combinations or hybrids of these two groups, such as reinforcement learning, but most IAI tools fit one of these descriptions.
Following the rules
Rules-based AI operates strictly on pre-defined rules and requirements set during its creation. These AI tools are generally easier for humans to understand, both during creation and operation. These rely on equations or sets of “if-then”-type rules that tell the machine what to do.
In their purest form, these AI tools tend not to change after creation. This makes them very stable and makes it easier to know why they did what they did during operations. This type of IAI is often so simple in its creation and execution that some people forget it counts as AI. However, the seemingly simple ideas and methods of rules-based IAI can build up to incredibly complex and sophisticated systems.
Rules-based IAI tools are ideal for well-understood processes or environments that allow a small set of possible outcomes. Simple decision-making processes or systems that can be sufficiently modeled with simple equations represent typical applications of these tools. A simple rules-based decision engine could measure and reject machined shafts that are too long or short with very basic “if-then” rules. Another example of rules-based IAI uses equations about the physical properties of spinning equipment to identify tiny cracks in bearings.
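To make the shaft example concrete, here is a minimal sketch of what such a rules-based check might look like in code. The nominal length and tolerance values are hypothetical, chosen purely for illustration:

```python
# A toy rules-based inspection check. The designer writes the rules up front;
# the program never changes its behavior after creation.
NOMINAL_LENGTH_MM = 250.0   # target shaft length (illustrative value)
TOLERANCE_MM = 0.5          # allowed deviation (illustrative value)

def inspect_shaft(measured_length_mm: float) -> str:
    """Apply simple if-then rules: accept or reject based on measured length."""
    if measured_length_mm > NOMINAL_LENGTH_MM + TOLERANCE_MM:
        return "reject: too long"
    if measured_length_mm < NOMINAL_LENGTH_MM - TOLERANCE_MM:
        return "reject: too short"
    return "accept"

print(inspect_shaft(250.2))  # within tolerance -> "accept"
print(inspect_shaft(251.0))  # out of tolerance -> "reject: too long"
```

Because every rule is written out explicitly, anyone can read the code and see exactly why a given shaft was accepted or rejected, which is precisely the stability and explainability advantage described above.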
This ease of understanding and stability comes with one unavoidable drawback: the designer must know and anticipate the system where the tool will operate. Because of this, the IAI is ultimately limited by the knowledge and capabilities of the team that made it.
Learning from mistakes
This brings us to the second major category of AI, machine learning algorithms. This is what most people think of when they hear the term AI.
Machine learning algorithms are the class of AI that learn and adapt from the inputs they receive from the environment. A user does not need to directly dictate the behavior of the program. Instead, the information it receives "teaches" the algorithm the correct output based on some reward scheme that helps it distinguish good responses from bad.
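This reward-driven learning can be sketched with a deliberately simplified toy: a program that learns which of two machine settings yields better output purely from trial and reward, with no pre-programmed rules about which setting is best. The settings, success rates, and exploration strategy here are all hypothetical illustrations, not any particular industrial algorithm:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# The "true" behavior of the environment, hidden from the learner:
true_success_rates = {"setting_a": 0.3, "setting_b": 0.8}

estimates = {"setting_a": 0.0, "setting_b": 0.0}  # learned reward estimates
counts = {"setting_a": 0, "setting_b": 0}

for trial in range(1000):
    # Explore a random setting occasionally; otherwise exploit the current best.
    if random.random() < 0.1:
        choice = random.choice(list(estimates))
    else:
        choice = max(estimates, key=estimates.get)
    # The environment hands back a reward; good outcomes score 1, bad score 0.
    reward = 1.0 if random.random() < true_success_rates[choice] else 0.0
    counts[choice] += 1
    # Nudge the running estimate toward the observed reward (incremental average).
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(estimates, key=estimates.get))  # the learner settles on the better setting
```

Notice that nobody told the program that "setting_b" was better; it discovered that from rewards alone. That is the core difference from the rules-based tools above.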
Machine learning often finds use in situations and problems with lots of data because it needs examples and trials to determine correct behavior. The more versatile machine learning tools do not always need to know the specific equipment or system they will be applied to during development. Many developers and end-users assume that the tool can learn to perform its job either in the field or from historic observations.
In industry, many equipment condition monitoring systems use machine learning to learn and recognize patterns of equipment behavior, then alert if this pattern changes. Many IAI tools are ultimately fancy “pattern learning” devices.
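As a toy illustration of this "pattern learning" idea, the sketch below learns the normal range of a vibration reading from historical data, then alerts when a new reading drifts outside that range. The mean-and-spread statistics, the three-sigma threshold, and the example readings are all hypothetical simplifications, not a production condition-monitoring method:

```python
import statistics

def learn_baseline(history):
    """Learn the 'normal pattern' as a mean and spread from historical readings."""
    return statistics.fmean(history), statistics.stdev(history)

def is_anomalous(reading, mean, stdev, n_sigmas=3.0):
    """Alert when a reading falls outside the learned band (a simple z-score rule)."""
    return abs(reading - mean) > n_sigmas * stdev

# Hypothetical vibration amplitudes recorded while the machine was healthy:
history = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.08, 0.92]
mean, stdev = learn_baseline(history)

print(is_anomalous(1.02, mean, stdev))  # reading matches the learned pattern -> False
print(is_anomalous(2.5, mean, stdev))   # pattern has changed -> True (alert)
```

Real monitoring systems learn far richer patterns than a single mean and spread, but the principle is the same: learn what "normal" looks like from data, then flag departures from it.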
Not every job can be solved by machine learning. Often, machine learning tools are misapplied, misinterpreted, or simply do not work within the limitations of the job. Good performance of any IAI tool requires certain conditions, especially for those built with machine learning. Data problems, job misspecification, lack of computing power, and even operator error can all cause poor results.
Even though this may sound intimidating, remember that AI systems are made of mathematical models that perform many of the same operations we learned about in high-school math and science. The general principles that govern all good science and math still apply when dealing with AI. The AI tools should behave in repeatable, consistent ways that are independently verifiable.
Testing industrial AI
NIST, along with external partners, is developing testing methods and metrics to help industry separate useful AI tools from bad ones. We are working with these groups to further the science of metrology for IAI by refining how to test an AI in ways relevant to both the environment and the intended users.
When it comes to IAI, sometimes knowing what to measure is just as important as knowing how to measure it. For example, one ongoing effort helps companies measure the return on investment from using AI-based tools to evaluate production process performance based on product quality. This work looks at the risks and rewards of AI in terms of direct impact on safety and earnings. Measuring its value in relatable terms can help decision makers better understand the impact of the AI system before investing.
Good AI also needs good data. Data quality during training, testing, and operations has an enormous impact on the performance of any AI system. Few qualified industrial datasets exist publicly. NIST provides open-access factory simulators and workshop test bed data, but as new tools develop, the need for more data grows as well.
Other work concerning data aims to help companies properly collect and curate their data. How you collect data has a strong effect on what can be done with it. Many experiments directed by NIST are exploring the possibilities and limits of industrial data, including sources traditionally underused. A major effort at NIST examines the gathering and use of natural language from industrial documents, such as maintenance logs or reports. Documents full of free-form written text are difficult for standard AI tools to process, so specialized IAI tools for processing natural language are also in development at NIST.
Much of this work hinges on the participation of public stakeholders. Employees at NIST love doing outreach and giving educational talks about AI and its uses. But it is only with community feedback and cooperation that we can continue to adapt and provide the most needed answers to your most pressing problems. If you have data to share, are interested in collaborating, or just want to learn more about IAI and what NIST is doing, feel free to contact us!
About the author
Michael Sharp is a reliability engineer in the Systems Integration Division at NIST. He currently researches methods and metrology to integrate and qualify artificial intelligence for practical use in industrial and manufacturing settings. He received his Ph.D. in nuclear engineering from the University of Tennessee in 2012, and has been an active participant in state-of-the-art research communities for data modeling, sensing capabilities, and artificial intelligence.