Components of Artificial Intelligence: AI Machine Learning and Inferencing

Also referred to as machine intelligence, artificial intelligence (AI) enables a machine to “think” like humans. AI can perceive, learn, analyze, and deduce while it solves a problem or performs a task.

Most researchers agree on four types, or levels, of AI. Type I (reactive machines) has no memory and can react only to current input. Type II (limited memory) can draw on stored past experience to inform decisions. Type III (theory of mind) would understand the emotions and intentions of others, and Type IV (self-awareness) would also be aware of itself and its own internal state. For now, Type I and Type II exist in abundance, while Types III and IV exist in concept only.

All four types share the same two components, or phases, of AI: learning and inferencing (decision-making).

What is Learning in AI?

Like humans, AI needs to learn a task before doing it. The human brain organizes information so that it can use that data to make rapid decisions when next encountering the same or similar information.

Similarly, a computer algorithm, or problem-solving protocol, generates a model whose parameters are tuned from external input data. The process is designed to solve future problems efficiently and with minimal error.

There are numerous AI machine-learning categories. This article introduces and addresses the application of a few basic categories. Among these are supervised, unsupervised, and reinforcement learning. Hybrid learning, such as semi-supervised learning, and learning techniques based on fundamental learning types, such as transfer and ensemble learning, are also discussed.

Different Types of AI Learning and Their Applications

Supervised learning

Supervised learning is the most commonly used type of machine learning.

Supervised learning’s goal is to train an algorithm to describe input data so that it can produce data output with the fewest errors. The “supervised” aspect of the learning refers to the use of labeled datasets with corresponding and known output. Similar to the way a teacher guides students to the right answer, labeled data guides the algorithm toward an acceptable level of correctness.

Supervised learning problems fall into two categories: regression and classification. The output of a classification problem is a category, while the output of a regression problem is a numerical value in a designated unit.

The application of supervised learning includes identifying discrepancies in financial systems (regression) and recognizing faces, objects, speech, gestures, or handwriting (classification).
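To make the regression/classification distinction concrete, here is a minimal sketch in Python. All data, function names, and thresholds are illustrative assumptions, not from the article: a least-squares line fit stands in for regression (numeric output), and a simple threshold rule stands in for classification (categorical output).

```python
# Minimal sketch contrasting regression and classification on labeled data.
# All data here is synthetic and purely illustrative.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b -- a regression model."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Regression: the output is a numerical value (e.g. a predicted price).
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
price = a * 5 + b

# Classification: the output is a category (e.g. "spam" vs. "ham").
def classify(score, threshold=0.5):
    return "spam" if score > threshold else "ham"

label = classify(0.72)   # -> "spam"
```

In both cases the labeled training pairs play the teacher's role: they define what a correct output looks like.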

Supervised learning is typically implemented with neural networks. A neural network consists of an input layer that converts different types of data into numbers, one or several intermediate layers that analyze the data, and an output layer that produces the results. In the process, the layers of analysis form a hierarchy that can identify trends or patterns.
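The layered structure described above can be sketched in a few lines of Python. This is an illustrative toy, not a trained network: the weights are hard-coded assumptions, whereas in practice they would be learned from labeled data.

```python
import math

# Sketch of a feed-forward pass: input layer -> one hidden layer -> output
# layer. Weights and inputs are made up for illustration only.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.2, 0.3]                          # input layer: numeric features
hidden = dense(x, [[0.2, -0.4, 0.1],          # hidden layer: 2 neurons
                   [-0.3, 0.8, 0.5]], [0.0, 0.1])
output = dense(hidden, [[1.0, -1.0]], [0.0])  # output layer: 1 score in (0, 1)
```

Stacking more `dense` layers is what creates the hierarchy of analysis the article describes.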

A convolutional neural network (CNN) extracts, preserves, and analyzes the spatial relationship between signals, such as image, voice, or text, for further analysis. Facial and speech recognition and autonomous vehicle operation rely on these CNN abilities.

In a recurrent neural network (RNN), the output is fed back to the network for correction so that the system can learn from its mistakes and increase the percentage of correct predictions. An RNN is highly suitable for text-to-speech conversion, wherein the input is usually long and context-rich. For example, an RNN can distinguish different words that have identical pronunciations, like “here” and “hear,” or words with multiple meanings, such as “crane,” “date,” “leaves,” and “point.”
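The recurrence at the heart of an RNN can be sketched as a single update rule: each step's state depends on the current input and on the state carried over from previous steps, which is how the network retains context (for instance, enough context to tell "here" from "hear"). The weights below are illustrative assumptions, not learned values.

```python
import math

# Sketch of one recurrent step: h_t = tanh(w_x * x_t + w_h * h_{t-1} + b).
# The hidden state h threads context through the sequence.

def rnn_step(x, h, w_x=0.6, w_h=0.4, b=0.0):
    return math.tanh(w_x * x + w_h * h + b)

h = 0.0                        # initial state: no context yet
for x in [1.0, 0.5, -0.3]:     # a short input sequence
    h = rnn_step(x, h)         # each step sees the accumulated context
```

During training, the errors at the output are propagated back through these steps so that the weights improve with each pass.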

In addition to image, audio, and handwriting recognition, supervised learning sees application in bioinformatics, email spam detection, and pattern recognition. In particular, its capacity for pattern recognition will enable smart factories and grids to devise cost-cutting measures.

Unsupervised learning

While labeled datasets give supervised learning powerful control, they also require substantial time and effort to prepare, because most of the data collected starts out unlabeled.

Unlike supervised learning, unsupervised learning is trained on unlabeled data, with no corresponding or expected output. This type of learning has no teacher and no correct answers.

As a result, unsupervised learning may reveal previously unknown data patterns or features for categorization. While unsupervised learning enables the solving of more complex problems than supervised learning, its output features a higher degree of uncertainty.

The main types of unsupervised learning include clustering and association problems.

Grouping voters by their past voting history is an example of a clustering application. Other examples are grouping shoppers by their previous purchases or medical patients by their genetic characteristics or lifestyle patterns. A related approach is the nearest-neighbor method. Here, the algorithm does not produce a model; it stores all available cases and classifies new instances by their similarity to the stored ones.
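The nearest-neighbor idea fits in a few lines of Python. Everything here is an illustrative assumption (the shopper data, the two category names, the squared-distance measure): no model is built; a new instance simply takes the label of the most similar stored case.

```python
# Sketch of nearest-neighbor classification: store all cases, then label a
# new instance by its closest stored case. Data is synthetic.

def nearest_neighbor(stored, new_point):
    """stored: list of (features, label); returns the closest case's label."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(stored, key=lambda case: dist(case[0], new_point))
    return label

shoppers = [((2, 30), "bargain hunter"),   # (purchases/month, avg. spend)
            ((9, 250), "big spender")]
group = nearest_neighbor(shoppers, (8, 220))   # -> "big spender"
```

Note that because every stored case must be compared at query time, this approach trades training cost for lookup cost.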

By asking the right question, asking a question the right way, or asking questions from different angles, one can model various data organizations that may reveal anomalies. For example, with credit-card data, asking why 99 out of 100 transactions occur in the United States while only one occurs in China may uncover a fraudulent transaction.

Association could be applied to predict tendencies based on newly discovered relationships among variables in a large database. An example of an association is predicting a shopper’s future purchases based on his or her previous purchases.
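A minimal sketch of the association idea, under the usual market-basket framing (the baskets and item names below are made-up examples): count how often items co-occur across purchases, and treat frequent pairs as candidate rules such as "shoppers who buy bread also buy butter."

```python
from itertools import combinations
from collections import Counter

# Sketch of association mining: count co-occurring item pairs in baskets.
# Frequent pairs suggest purchase-prediction rules. Data is illustrative.

baskets = [{"bread", "butter", "milk"},
           {"bread", "butter"},
           {"milk", "eggs"},
           {"bread", "butter", "eggs"}]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# "Support" = fraction of baskets containing the pair.
support = {p: c / len(baskets) for p, c in pair_counts.items()}
# ("bread", "butter") appears in 3 of 4 baskets -> support 0.75
```

Real association-rule miners (e.g. the Apriori family) build on exactly this support counting, pruning rare itemsets as they go.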

Other types of learning with different flavors from the above include reinforcement, semi-supervised, transfer, and ensemble.

AI Inferencing and How it Works

During the training phase, the algorithm generates a new model or repurposes a pre-trained model with optimized parameters. After the model has been tested, it is ready to be deployed for inferencing tasks. During the inferencing phase, predictions and decisions on new data are made based on the learned parameters. Because inferencing involves computation within designated parameters, it takes place much more quickly than the learning phase.
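The learn-once, infer-many split can be sketched in Python. The weights below are stand-ins for parameters produced by an earlier (not shown) training phase; the point is that inference is only an evaluation within those fixed parameters, far cheaper than the fitting loop that produced them.

```python
# Sketch: inference is computation within fixed, pre-trained parameters.
# These values are illustrative stand-ins for a trained model's output.

LEARNED_W = [0.8, -0.3]   # weights from a (hypothetical) training phase
LEARNED_B = 0.1

def infer(features):
    """Prediction on new data: a single cheap weighted sum."""
    return sum(w * f for w, f in zip(LEARNED_W, features)) + LEARNED_B

prediction = infer([2.0, 1.0])
```

No parameter updates happen here, which is why inference can run quickly on far more modest hardware than training requires.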

Learning and Inferencing: Processing, Storage, Energy Needs, and Networking

AI’s learning and inferencing phases differ in energy consumption and in their requirements for processing power and data storage. In addition, networking is a pain point.

Learning requires a significant amount of time, computational power, and electrical energy. Training with datasets and building models demand thousands of hours of computation and, as a result, consume large amounts of electricity. In contrast, the inferencing phase involves much less computation, so it requires less processing power and consumes less energy.

The time frame for different tasks also matters. Some tasks, such as analyzing airline maintenance data, have turnaround times of hours. On the other hand, facial- or image-recognition tasks, such as border control or insurance claims, respectively, have much shorter turnaround times. A task with a shorter time frame requires more processing power and, hence, more power consumption and potentially more data storage. There is also the problem of scale: because processing exponentially more data yields only linear improvements, businesses need to collect, analyze, and store more data each day to increase accuracy significantly.

Storage needs also differ. Image- or video-recognition tasks in the medical, scientific, or security sectors involve large files and therefore need far more data storage. On the other hand, tasks like financial fraud detection or supply-chain analytics need much less. In any case, data cannot simply be stored; it must also be warehoused effectively to enable efficient retrieval and analytics. Throughput, or the mass delivery of data from storage to analytics, is therefore another pain point.

Networking is another concern. In cloud computing, data that has been collected away from the central server heads to the central cloud for analysis. Due to the limit on the speed of data transfer and the amount of data that needs to be transferred, there will be a considerable time lag (latency). When data is read and re-read, such latency will only get worse.

Advantages of Moving AI Learning and Inferencing to the Edge

IoT cloud computing, in which computation occurs at a central physical data center, offers several advantages for the processing, energy-consumption, and storage problems mentioned above.

The central cloud usually has powerful processors; a business can always add more capacity for processing and storage as needed. The central servers are connected to the utility grid, so energy consumption usually is not a concern.

However, the latency and networking bottlenecks of cloud computing can be a problem.

Scaling up processing and storage will not solve the problem of latency. In fact, the scaling up of processing and storage will mean that more data needs to be transferred from the IoT edge to the cloud, thus exacerbating the congestion. The increasing latency will make applications such as autonomous vehicles, utility grids, or military drones that demand quick or even real-time inferencing impossible.

From a cost perspective, data transfer is already expensive, and transferring more data only adds to the total cost. Because most of the collected data sent to the cloud is not relevant to the analytics, data transfer is hugely inefficient in both time and cost. In addition, data transfer poses a risk to data security. Lastly, it may compromise data integrity to the point of hindering data retrieval or analytics.

Challenges in Edge Computing: Energy Efficiency, Data Storage, Processing Needs, Data Security

One way to solve the latency problem in cloud computing is to move machine learning closer to the edge. In edge computing, data is analyzed close to where it is collected, so an inference can be reached quickly while circumventing the need to transfer a large amount of data. Afterward, only the relevant data, a fraction of the entire dataset, needs to move to the central cloud for further analysis. This way, both the expense of data transfer and the congestion it causes are addressed in one fell swoop.
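The edge-filtering idea can be sketched in a few lines. The readings, thresholds, and function name are illustrative assumptions: analyze sensor readings where they are collected, and forward only the relevant fraction (here, out-of-range anomalies) to the cloud.

```python
# Sketch of filtering at the edge: only anomalous readings are forwarded to
# the central cloud; normal readings never leave the edge device.
# Thresholds and data are made up for illustration.

def filter_at_edge(readings, low=10.0, high=90.0):
    """Keep only out-of-range readings for upload; discard the rest locally."""
    return [r for r in readings if r < low or r > high]

readings = [42.0, 97.5, 55.1, 3.2, 60.0]
to_cloud = filter_at_edge(readings)   # -> [97.5, 3.2]
```

In this toy case only two of five readings are transferred, which is exactly the bandwidth and cost saving the edge model is after.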

However, the edge computing model is not without weakness. As the bulk of data processing moves away from the central cloud, the system can no longer depend on powerful central servers for data analytics and storage, creating higher demand for edge servers’ performance and storage capabilities.

First, the computational power of edge IoT servers must keep up with large-scale data processing. High processing power goes hand in hand with high energy consumption, which pressures the energy supply at the edge. Edge IoT servers are sometimes battery-powered and installed in remote locations, making battery replacement challenging. Smart IoT sensors consume less energy but may lack the needed processing power. Balancing the demand for processing performance against low energy consumption will therefore be a challenge for edge computing.

Therefore, for edge computing to succeed, solutions that increase the energy efficiency of edge servers are needed. For example, the learning phase of AI can take place in the cloud, and the trained model can then be deployed to the edge for inferencing. Also, some simpler computational tasks can be moved from the edge server to more energy-efficient devices, such as field-programmable gate arrays.
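One common technique for shrinking a cloud-trained model before deploying it to the edge (an assumption on my part, not something the article specifies) is weight quantization: mapping 32-bit float weights to 8-bit integers, trading a little accuracy for roughly a quarter of the storage and much cheaper arithmetic. A minimal sketch:

```python
# Sketch of symmetric 8-bit quantization of trained weights for edge
# deployment. The weight values are illustrative.

def quantize(weights, bits=8):
    """Map floats to signed ints in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [v * scale for v in q]

w = [0.12, -0.98, 0.45, 0.003]
q, s = quantize(w)            # small ints plus one scale factor
approx = dequantize(q, s)     # close to w, at a fraction of the storage
```

The reconstruction error is bounded by half the scale factor per weight, which is usually tolerable for inference even when it would be unacceptable during training.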


Edge computing can alleviate two cloud computing problems by reducing latency and the cost of data transfer. However, edge computing faces the challenges of reducing energy consumption and achieving efficiency, expanding data storage and processing performance, and increasing data security. Therefore, there is a need to develop more powerful chips for edge servers, expand computation power with quantum computing, improve the learning and inferencing algorithms, and strengthen data security at the edge. In upcoming articles, we will explore various solutions and how to apply them in different vertical segments and study the tradeoffs.
