• By David Immerman
  • July 31, 2020
  • Features
  • Operations & Management

The evolving era of AI in all its forms is driving industrial transformation today

There is no escaping the term artificial intelligence (AI) today. It permeates the media, pop culture, and industry. We broadly define AI as a discipline that uses computer science and statistics to create systems that perceive, understand, and act in a manner similar to human intelligence. The field of AI covers a variety of technologies leveraging a multitude of data science techniques capable of “learning” to enable this intelligence. In this article we will cover several types of AI, dispel common misconceptions, and dive into more specific real-world industrial applications with future guidance.

ANI versus AGI

At the highest level, there are two forms of AI: artificial narrow intelligence (ANI) and artificial general intelligence (AGI).

Artificial narrow intelligence (also known as specialized AI or weak AI) refers to systems programmed to do a single task, whether it is playing chess, identifying early stages of a disease on MRI scans, or autonomously driving through an environment. Although these tasks differ significantly in complexity and in the level of advancement of their AI techniques, they still fall under specific operational domains.

Artificial general intelligence (also known as strong AI) is the machine equivalent of human intelligence, where the AI appears to be conscious and sentient. This is typically the popular sci-fi representation of AI, as shown in Ex Machina, Her, I, Robot, and Westworld.

When presented with a novel task, the AGI can use prior knowledge and apply previously learned skills, similar to how a human would solve problems. There are more speculative levels of AI, such as artificial super intelligence, where the robot outperforms human intelligence in multiple domains and tasks, such as autonomously driving to a hospital to detect patient diseases and later defeating a human in chess.

Dispelling AI misconceptions

Due to this pop culture representation of AGI, there are several common misconceptions and misleading connotations with the technology.

Myth: Artificial general intelligence will be here tomorrow. While much of pop culture enjoys running with the AGI utopian narrative, it does not yet exist. Research has progressed immensely in the past few years with technological advancements in high-performance computing and development techniques like neural networks, among others. However, we are still in uncharted waters. Researchers estimate an AGI breakthrough could be anywhere from 10 to more than 100 years away.

Myth: Using artificial intelligence for automation will soon replace all jobs. The market confusion around automation lies in whether AI can replace a job’s individual tasks versus the job itself. As we know with ANI, a single task can be automated, but people underestimate how many different and constantly changing tasks a typical worker performs. McKinsey estimates that fewer than 5 percent of jobs consist of activities that are 100 percent automatable. Even with increasingly autonomous machines entering industrial environments, 72 percent of factory tasks are still performed by humans.

However, there may be some workforce displacement in areas where the strengths of AI and automation align. These include tasks that involve repetition and precision, require heavy lifting, or are executed in hazardous environments. This will create some workforce reskilling and a shift in labor resources.

Assemblers and fabricators, who may be affected by automation disruption (–11 percent job growth for 2018–2028), can transfer to roles that service machines, such as general maintenance and repair workers (+6 percent) or mobile equipment service technicians (+4 percent). Organizations facing worker shortage and skills gaps will recognize that managing worker “displacement” through skills development programs is more cost effective than worker “replacement” by only using autonomous machines or recruiting new employees.

Myth: AI is a silver-bullet technology to solve all business needs. Artificial intelligence is a means to an end for businesses; it cannot be simply thrown at a pressing problem and resolve it. AI is another tool in the digital transformation toolbox that organizations use for enterprise-wide initiatives.

About 30 percent of industrial companies are evaluating or leveraging AI as part of their digital transformation initiative, and AI in manufacturing is anticipated to grow from $1 billion in 2019 to $17 billion by 2025. AI will have a massive impact in the industrial sector, but only when tied to strategically aligned and value-oriented digital transformation programs.

Types of ANI

Under the artificial narrow intelligence umbrella there are two primary classifications: machine learning and deep learning.

Machine learning (ML) is a branch of AI specific to systems, models, and algorithms that can learn without explicit programming and can recognize patterns to predict outcomes. ML cuts out much of the strenuous human preprocessing of massive datasets that would otherwise be required before analysis. The user can more simply build and calibrate models with desired inputs, labeled outputs, and other variables for the model to process and gain insights from. For example, ML can use input data from Internet of Things (IoT) sensors (e.g., temperature, vibrations) to provide an output for the asset’s estimated remaining useful life or when it will fail.
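To make the idea concrete, here is a minimal, purely illustrative sketch: a one-feature linear model fit to invented vibration readings and observed hours to failure. Real predictive models draw on many sensors and far richer algorithms; the data and the `predict_rul` helper here are hypothetical.

```python
# Hypothetical sketch: fit a linear model mapping an IoT vibration reading
# to an asset's estimated remaining useful life (RUL). All data is invented.

def fit_linear(xs, ys):
    """Ordinary least squares for a single feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Labeled training data: vibration amplitude (mm/s) -> observed hours to failure
vibration = [1.0, 2.0, 3.0, 4.0, 5.0]
hours_to_failure = [900.0, 700.0, 500.0, 300.0, 100.0]

slope, intercept = fit_linear(vibration, hours_to_failure)

def predict_rul(reading):
    """Estimated hours of useful life remaining at a given vibration level."""
    return slope * reading + intercept

print(round(predict_rul(3.5)))  # prints 400
```

The same inputs-to-labeled-outputs shape carries over directly when the single feature is replaced by a full sensor feed and the linear fit by a production-grade model.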

Deep learning (DL) is a subset of ML but is more purpose-built for applications based on insights from unstructured data, such as images or audio files. DL uses interconnected artificial neurons that form neural networks, mimicking activity in the human brain. These networks can consist of many layers containing millions of neurons, which are essentially calculations and algorithms trained to recognize a specific feature or pattern within the data input to generate an output. Autonomous vehicles rely on DL to train massive neural networks and create inferenced models that can determine the difference between trees, stop signs, and pedestrians in real time.
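The neuron-and-layer idea can be sketched in a few lines. The weights below are hand-picked for illustration, not learned; a real deep learning framework trains millions of such weights from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """One layer: every neuron sees the same inputs but has its own weights."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, 0.8]  # e.g., two pixel intensities from an input image
hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])  # hidden layer, 2 neurons
output = layer(hidden, [[2.0, -2.0]], [0.0])              # output layer, 1 neuron
print(output[0])  # a score between 0 and 1
```

Stacking many such layers, and training the weights against labeled examples, is what lets a network learn to tell a stop sign from a pedestrian.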

Both of these types of learning require different forms of training data to feed the AI model, which could be anything from customer orders to machine telemetry to images. Once trained, an inferenced model is used in practice for AI-driven applications.

AI applications for the industrial enterprise

AI is increasingly integrating with and creating innovative industrial applications to improve key Industry 4.0–related financial metrics. AI can improve a range of industrial-oriented key performance indicators (KPIs), including asset efficiency, throughput, quality, new product introductions, and worker productivity.

Predictive maintenance is a form of condition-based monitoring that tracks and analyzes an asset’s performance, status, and health in real time using a variety of technologies. Even incremental predictability improvements of heavy industrial assets can drastically reduce downtime, which can cost an average of $260,000 per hour or even millions for some mission-critical machines.

Using generative design, the displayed bracket was developed with a set of system requirements and engineer-driven goals.

There are myriad data sources relevant to effective predictive maintenance, including the asset’s configurations, historical systems of record, and increasingly real-time industrial IoT data. ML can aggregate these disparate and massive datasets across fleets of assets and products to create inferenced models that can further predict the asset or product’s future state. More accurate predictions into future failures across an asset’s life cycle improve not only its uptime but can also better optimize service interventions and generate performance efficiencies, improving its useful life. Nearly 50 percent of manufacturers who use ML today use it or plan to implement it for predictive maintenance use cases.

Demand forecasting estimates optimal supply rates for fluctuating future customer and supply chain demands, and inventory optimization aims to have the optimal stocking to meet service level targets. They are both tools manufacturers and service teams use to lessen unpredictability from shifting market conditions, gain flexibility and agility within their operations, and improve customer satisfaction.

ML provides additional predictability into these applications to lessen intermittent demand across operations. ML identifies real-time patterns from causal (oil prices, market fluctuations, etc.), connected (IoT-enabled assets), and other business system and supply chain data. Of manufacturers with AI strategies, 55 percent are using or plan to use ML for intelligent inventory monitoring and/or supply demand forecasting.
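As a baseline for what ML augments, one classic demand-forecasting technique, simple exponential smoothing, blends each new observation with the running forecast. The monthly demand figures below are hypothetical; ML approaches layer causal and IoT signals on top of baselines like this.

```python
def exponential_smoothing(demand, alpha=0.3):
    """Blend each new observation with the running forecast.
    alpha near 1 reacts quickly to change; alpha near 0 smooths heavily."""
    forecast = demand[0]
    for observed in demand[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

monthly_units = [120, 130, 125, 140, 150, 145]  # hypothetical demand history
next_month = exponential_smoothing(monthly_units)
print(round(next_month, 1))  # next-month estimate
```

The single `alpha` parameter is the hand-tuned knob that ML-driven forecasters effectively replace with patterns learned from many data sources at once.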

Generative design autonomously creates an optimal design from a set of system requirements and engineer-driven goals. It enables faster product development and higher engineering productivity, yielding lower-cost yet high-quality, innovative products. AI is increasingly embedded throughout this development process, presenting design alternatives for consideration and linking in preferred materials, purchasing decisions, manufacturing capacity, product variances, and supply-chain status, among other possible inputs. Additive manufacturing’s flexible production framework will bring many of these generative design–optimized products to life, in a market expected to reach nearly $45 billion by 2030.

Computer and machine vision cover how artificial systems perceive or “see” the world around them. Computers and machines are increasingly equipped with cameras and other sensors, which are being embedded with object detection and image recognition deep learning models. It is simpler to separate these two “vision” segments as they pertain to use cases for connected workers and intelligent machines/robotics in industrial enterprises.

Most front-line workers in industrial settings have not yet realized the benefits of digital technologies that their machine counterparts have, even though worker productivity is a key metric on statements of operations. Augmented reality (AR) is enabling powerful connected worker applications, and AI plays a major role within industrial use cases. AI enables computer vision in AR through perception via native sensors (camera, GPS) on the hardware itself and software interpreting the user’s movements (hand gesturing, eye tracking) in the context of the surrounding environment.

For example, consider how unique and complex a typical industrial environment is. There are innumerable dated, newer, and constantly changing items, parts, products, machines, and processes spanning operations. The computer-aided design (CAD) data, or digital definition, of these different objects, consisting of unique parameters and configurations, can feed into neural networks to train AI models. The deep learning inferenced model could then automatically recognize the object in the real world along with its real-time characteristics. A service technician could have a machine recognized in his or her field of view, trigger the unique object’s work instructions in AR to service it, and even order new parts for repairs.

Intelligent machines similarly leverage computer vision for perception, but it is primarily used for repetitive and precision-oriented tasks such as welding. While AI plays an important role in an industrial robot’s orientation and movements, we will focus on the machine vision aspect. Intelligent machines can use computer vision–based AI to inform their actions as well as evaluate results. By feeding deep learning models training data for what a product “should” look like, machines can quickly recognize anomalies on a production line.

Machines can quickly inspect the quality of products and check for nuanced defects that the human eye might not perceive. This can be extremely valuable when there are large volumes of different and quickly moving supplies, materials, and components, such as with a process manufacturer’s batch production line. With machine vision, manufacturers can spot a defect among these thousands of many different moving parts on the line, some that look nearly identical. Sixty-four percent of manufacturers leveraging machine learning cite using or planning to use it for quality assurance use cases.
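A heavily simplified sketch of the inspection idea: compare each product “image” (here, a tiny grid of brightness values) against a reference of what a good product should look like, and flag large deviations as defects. Production systems use trained deep learning models rather than a fixed golden image and threshold; everything below is hypothetical.

```python
# Hypothetical reference-based quality inspection on a 3x3 "image".

GOOD_REFERENCE = [
    [10, 10, 10],
    [10, 50, 10],   # the bright center is an expected product feature
    [10, 10, 10],
]

def is_defective(image, threshold=15):
    """True if any pixel deviates from the reference by more than threshold."""
    diffs = [abs(p - r)
             for row, ref_row in zip(image, GOOD_REFERENCE)
             for p, r in zip(row, ref_row)]
    return max(diffs) > threshold

good_part    = [[11, 9, 10], [10, 52, 11], [9, 10, 10]]
chipped_part = [[11, 9, 10], [10, 12, 11], [9, 10, 10]]  # center feature missing

print(is_defective(good_part))     # prints False
print(is_defective(chipped_part))  # prints True
```

A trained model plays the role of both the reference and the threshold, generalizing across lighting, positioning, and the many near-identical parts moving down a line.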

These are a few among several forms of AI applications that will interface with the industrial enterprise. The initial form, application, and use case that AI will be used for will be unique to each industrial company and its strategic goals.

Using augmented reality, which relies on computer and machine vision, a front-line worker can reference safety instructions in real time while conducting maintenance on a machine.

AI strategic guidance

The endless possibilities of AI are exciting. It will continue to alter the world and the many factories within it. But industrial companies are facing disruptive forces today and need to leverage technologies for operational efficiencies, strategic differentiation, and competitive advantage. Below are a few key considerations for forming an AI strategy to navigate these global headwinds.

  1. Define the breadth of use cases across the company. Artificial intelligence can create tremendous value across the organizational hierarchy and functions within the company. Use cases across the value chain can span from AI enabling a new smart, connected product to optimizing intelligence across a factory. Understanding the universe of opportunities will excite internal stakeholders, provide scope for current and future initiatives, and help align with technology partners for this transformation.
  2. Prioritize use cases that will drive business value. For any digital transformation initiative, it is critical to align strategic business goals with use cases. Prioritizing a few targeted, high-value, AI-driven use cases that can quickly produce meaningful wins is more advisable than attempting to simultaneously roll out dozens. Weighing which AI use cases to pick will be subject to a few unique parameters, such as the internal skill set required to create and expand the use case and the project’s scope for future investment. Sourcing and analyzing both internal and external data to train and create inferenced models will also be key components of most artificial intelligence strategies.
  3. Measure success to propel future growth. Measuring what worked through targeted key performance indicators and metrics will validate the use case’s success and provide lessons learned for future transformation involving additional stakeholders. Successful artificial intelligence projects establish these metrics up front, and the remaining adopters are quickly recognizing the need: 46 percent of organizations with an AI strategy have defined KPIs to measure success, while 42 percent plan to determine them in the next year.

The inaugural industrial artificial intelligence use case will be unique to each company in form and in how it propels growth. AI could be the centerpiece or a single component in a broader industrial digital transformation; whichever brings the greatest business value to your organization should be prioritized.

Final thoughts for the near future

The time is ripe for AI, with increasingly ubiquitous computing and growing investment from industry, university, and government entities. Coupled with the growing prevalence of open-source software and powerful AI development tools, this provides realistic jumping-off points for industrial enterprises.

The growth of the cloud and constantly improving AI hardware are providing the infrastructure and compute power to quickly build lower-cost and innovative AI applications. Investments from industry and governmental organizations are spurring research activities, including the next wave of collegiate talent to foster AI growth. Open-source AI software platforms, annotated datasets, and prebuilt digital solutions will lower barriers to AI entry for many industrial incumbents. With this converging ecosystem, all signs point to forming an AI strategy today to drive industrial transformation tomorrow.

Reader Feedback

We want to hear from you! Please send us your comments and questions about this topic to InTechmagazine@isa.org.

About The Author

David Immerman is a senior research analyst for PTC, providing thought leadership and market research on industrial technologies, trends, markets, and other topics. Previously Immerman was an industry analyst in 451 Research’s Internet of Things channel, primarily covering smart transportation and automotive technology, including fleet telematics, connected cars, and autonomous vehicles. Prior to 451 Research, Immerman conducted market research at IDC.