AI: Edge Computing

Then, Now and Next

Sponsor: Micron

Take a bite-size look at the tech landscape of AI - where we have been, where we are, and where we are going. Then explore more with our partner Micron »

TensorFlow Lite for MCUs is AI on the Edge

By Michael Parks for Mouser Electronics

Sponsor: Intel, Xilinx

The history of technological progress is full of examples of technologies evolving independently before converging to change the world. Atomic energy and jet engines converged to give rise to the nuclear aircraft carriers that redefined warfare for much of the 20th century. Computer and radio frequency communications converged to give us the smartphone, and in doing so, redefined how we all interacted with technology and each other. Today, the convergence of embedded electronics and artificial intelligence (AI) is increasingly poised to be one of the next game-changing technical unions. Let’s take a look at the evolution of this convergence.

Welcome to The Edge

The notion of AI can be found in writings dating as far back as the ancient Greeks, although it was not until the first half of the 20th century that the initial concerted efforts to develop AI as an actual technology emerged. Fundamentally, AI enables digital technology to interact with the analog world efficiently and responsively, akin to the human brain. For practical applications of AI to have real-world utility (think autonomous vehicles), the interaction between the electronics and the physical world must be nearly instantaneous while processing multiple dynamic inputs. Thankfully, embedded electronics systems have continued to evolve alongside the development of machine-learning algorithms. Their marriage is giving rise to the concept of edge computing.

Edge computing takes the processing power that has historically been achievable only with powerful hardware in the cloud and brings it to local devices that sit at the edge of the physical-digital interface. Combine that with the ubiquity of inexpensive yet robust embedded components such as microcontrollers and sensors, and the result is a revolution in automation, both in scale and capability.

TensorFlow Lite: Big ML Algorithms on Tiny Hardware

a screenshot of the TensorFlow website getting started guide
Figure 1: Google's TensorFlow Lite for Microcontroller website. (Source: Google)

TensorFlow, a Google-led effort, is a set of open-source software libraries that enable developers to easily integrate complex numerical-computation and machine learning (ML) algorithms into their projects (Figure 1). According to Google, these libraries provide stable application programming interfaces (APIs) for Python (Python 3.7+ across all platforms) and C. They also provide APIs without backward-compatibility guarantees for C++, Go, Java, and JavaScript. Additionally, an alpha release is available for Apple's Swift language.

TensorFlow offers so-called end-to-end machine-learning support for the development and utilization of deep neural networks (DNNs). DNNs are an implementation of ML that is particularly adept at pattern recognition and at object detection and classification. The TensorFlow libraries support both phases of the machine-learning process: training and inferencing. The first phase, training deep neural networks, requires significant computing horsepower, typically found in server-grade hardware and graphics processing units (GPUs). More recently, application-specific integrated circuits known as Tensor Processing Units (TPUs) have been developed to support training efforts. The second phase, inferencing, uses the trained DNNs in the real world to respond to new inputs and make recommendations based on analysis of those inputs against the trained models. This is the phase that should be of keen interest to embedded product developers.

The release of TensorFlow Lite for Microcontrollers (a subset of the TensorFlow libraries) is specifically geared for performing inferencing on memory-constrained devices typically found in most embedded systems applications. It does not allow you to train new networks. That still requires the higher-end hardware.

Practically Speaking: ML Application Use Cases

Terms such as artificial intelligence, neural networks, and machine learning can come across as either science fiction or jargon. So, what are the practical implications of these emerging technologies?

One massive advantage of devices that leverage both machine-learning algorithms and internet connectivity, such as IoT devices, is that products can integrate new or better-trained models over time with a simple over-the-air firmware update. This means products can get smarter with time and are not limited to those functionalities that were possible at the time of their manufacturing–so long as the new models and firmware still fit within the physical memory and processing capacity of the hardware.

a wall with multiple screens on it showing different security camera feeds
Using AI, security feeds can be monitored automatically to recognize certain individuals (Source: Monopoly919 - stock.adobe.com)

The goal of AI-based algorithms running on embedded systems is to process real-world data collected by sensors more efficiently than traditional procedural or object-oriented programming methodologies allow. Perhaps the most visible use case in our collective consciousness is the progression from legacy automobiles, to cars with automation assistance—such as lane-departure warnings and collision-avoidance systems—to the ultimate goal of self-driving cars with no human in the control loop. However, many other less-conspicuous uses of deep learning are already in service, even if you did not know it. Voice recognition in your smartphone and virtual assistants such as Amazon Alexa leverage deep-learning algorithms. Other uses include facial detection for security applications and background replacement, sans green screen, in remote-meeting software such as Zoom.

An illustrative chart showing a TensorFlow model
Figure 2: Translating a TensorFlow model to a version that can be used aboard a memory-constrained device such as a microcontroller. (Source: NXP)

The Workflow

According to the documentation provided for TensorFlow Lite for Microcontrollers, the developer workflow can be broken down into five key steps (Figure 2). These steps are:

  1. Create or Obtain a TensorFlow Model: The model must be small enough to fit on your target device after conversion, and it can only use supported operations. If you want to use operations that are not currently supported, you can provide your custom implementation.
  2. Convert the Model to a TensorFlow Lite FlatBuffer: You will convert your model into the standard TensorFlow Lite format using the TensorFlow Lite converter. You might wish to output a quantized model since these are smaller in size and more efficient to execute.
  3. Convert the FlatBuffer to a C byte array: Models are kept in read-only program memory and provided in the form of a simple C file. Standard tools can be used to convert the FlatBuffer into a C array.
  4. Integrate the TensorFlow Lite for Microcontrollers C++ Library: Write your microcontroller code to collect data, perform inference using the C++ library, and make use of the results.
  5. Deploy to your Device: Build and deploy the program to your device.
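
Step 3 of the workflow above can be sketched in a few lines of Python. The helper below (a hypothetical name, not part of the TensorFlow API) turns a TensorFlow Lite FlatBuffer into a C byte array, mimicking what the standard `xxd -i` tool produces; in practice, the FlatBuffer bytes would come from `tf.lite.TFLiteConverter` in step 2.

```python
# Sketch of step 3: turn a TensorFlow Lite FlatBuffer into a C byte array.
# In a real workflow the FlatBuffer comes from tf.lite.TFLiteConverter;
# a placeholder byte string is used here so the helper is self-contained.

def flatbuffer_to_c_array(model_bytes: bytes, var_name: str = "g_model") -> str:
    """Emit a C source snippet equivalent to `xxd -i model.tflite`."""
    hex_bytes = [f"0x{b:02x}" for b in model_bytes]
    # Wrap at 12 bytes per line for readability, mirroring xxd's output style.
    lines = [", ".join(hex_bytes[i:i + 12]) for i in range(0, len(hex_bytes), 12)]
    body = ",\n  ".join(lines)
    return (
        f"const unsigned char {var_name}[] = {{\n  {body}\n}};\n"
        f"const unsigned int {var_name}_len = {len(model_bytes)};\n"
    )

# Example with stand-in bytes (a real model would be read from disk):
# model_bytes = open("model.tflite", "rb").read()
model_bytes = bytes(range(16))
print(flatbuffer_to_c_array(model_bytes))
```

The resulting C file is compiled into the firmware image, which is why the model must fit in the device's read-only program memory.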

Some caveats that a developer should be aware of when selecting a compatible embedded platform for use with the TensorFlow Lite libraries include:

  1. The library targets 32-bit architectures such as Arm Cortex-M processors and ESP32-based systems.
  2. It can run on systems where memory is measured in the tens of kilobytes.
  3. It is written in C++11.
  4. It is available as an Arduino library, and the framework can also generate projects for other development environments such as Mbed.
  5. It requires no operating system support, no dynamic memory allocation, and none of the C/C++ standard libraries.

Next Steps

Google offers four pre-trained models as examples that can run on embedded platforms. With a few slight modifications, they can be used on a variety of development boards.

Visit the additional articles below for a series of step-by-step guides that will show you how to get these models working on various microcontroller platforms.

Photo/imagery credits (in order of display)
pinkeyes - stock.adobe.com, Monopoly919 - stock.adobe.com, proindustrial2 - stock.adobe.com

The Tech Between Us Podcast

Sponsor: Advantech, Microchip

Full Podcast (39:43)

Introduction (01:00)

Condensed Podcast (09:45)

Join us in our technology conversation with Dr. Sriraam Natarajan, professor of computer science at the University of Texas at Dallas. After listening, explore more from our sponsored partners, Advantech and Microchip Technology.

Podcast Host

Raymond Yin

Director of Technical Content, Mouser Electronics

Podcast Guest

Dr. Sriraam Natarajan

Professor of Computer Science, University of Texas at Dallas

Edge Security in an Insecure World

By M. Tim Jones for Mouser Electronics

Sponsor: NXP

As the cost of embedded networked devices falls—consider the Raspberry Pi as one example—they become ubiquitous. But, a hidden cost in this proliferation is that these devices can lack security and therefore be exploited. Without the investment in security, devices can leak private information—such as video, images, or audio—or become part of a botnet that wreaks havoc around the world.

Edge Computing in a Nutshell

Edge computing is a paradigm of shifting centralized compute resources closer to the source of data. This produces a number of benefits including:

  • Disconnected operation
  • Faster response time
  • Improved balance of compute needs across the spectrum

As shown in Figure 1, the cloud infrastructure manages the devices at the edge. The Internet of Things (IoT) devices connect to the cloud through an edge device such as an edge gateway to minimize global communication.

An illustrative chart showing a box with cloud infrastructure on the left connected to the internet inside a cloud in the middle. Then Edge Device connected to the internet on the right with 3 iot devices connected to the edge device.
Figure 1: The Edge Computing Architecture diagram shows the cloud infrastructure’s relationship to the connected devices at the edge. (Source: Author)

Statista, a German-based statistics database company, estimated that there were 23 billion connected IoT devices worldwide in 2018, and experts expect that number to grow to 75 billion by 2025. The Mirai malware, which targeted IoT devices and disrupted internet access for millions of people in 2016, illustrated the need for better security in these devices. In fact, when an attacker finds an exploit for a particular device, the attacker can then apply the exploit en masse to other identical devices.

As more and more devices proliferate at the edge, so do the risks to those devices. Connected devices are a common target for attackers, who may exploit them for notoriety or, more commonly, to expand botnets. Let's explore the ways to secure edge-computing devices.

Securing a Device

To look at a device and understand how it can be exploited, we look at what’s called the attack surface. The attack surface for a device represents all of the points where an attacker can attempt to exploit or extract data from a device. This attack surface could include:

  • The network ports that interface to the device
  • The serial port
  • The firmware update process used to upgrade the device
  • The physical device itself

Attack Vectors

The attack surface defines the device’s exposure to the world and becomes the focus of defense for security. Securing a device is then a process of understanding the possible attack vectors for a device and protecting them to reduce the surface.

Common attack vectors typically include:

  • Interfaces
  • Protocols
  • Services

Figure 2 shows some of the attack vectors: the interfaces (network or local), various surfaces around the firmware running on the device, and even the physical package itself. Let's now explore some of these vectors and how to secure them.

An illustrative chart showing a box with the title Package / Enclosure and inside of it is 4 different boxes each with the labels Firmware, Interfaces, Processor and Local Storage.
Figure 2: The image shows the potential attack vectors for a simple edge device. (Source: Author)


Attacking an interface or protocol is a multi-layered issue. There is the security of the communication itself with the cloud—including data security—as well as access security to the device through one or more protocols such as HTTP.

Transport Layer Security (TLS) should protect all communication to and from the device. This cryptographic protocol covers authentication—ensuring that both sides can be certain about who they are communicating with—as well as encryption of all data to prevent eavesdropping attacks. This makes it ideal for an edge device that communicates with a remote cloud over public networks like the internet.
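
As a minimal sketch of the client side of such a channel, the snippet below uses Python's standard ssl module; the endpoint name is hypothetical, and on an actual microcontroller the same roles would be played by an embedded TLS stack.

```python
# Sketch: wrapping an edge device's cloud connection in TLS using Python's
# standard ssl module. The host name is a hypothetical example.
import socket
import ssl

def make_tls_context() -> ssl.SSLContext:
    # create_default_context() enables certificate verification and hostname
    # checking, which provides the authentication half of TLS.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return ctx

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = make_tls_context()
    raw = socket.create_connection((host, port))
    # server_hostname drives SNI and certificate hostname verification.
    return ctx.wrap_socket(raw, server_hostname=host)

# Usage (requires network access to a real endpoint):
# with open_secure_channel("iot.example.com") as s:
#     s.sendall(b"sensor reading")
```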

Given the speeds at which data moves over IP networks, hardware acceleration is a must in order to efficiently manage authentication and data encryption and decryption. Processors with hardware encryption acceleration like the TI EK-TM4C129EXL include on-chip crypto acceleration for TLS, ensuring secure communication with remote systems.

Using protocols like Kerberos for authentication can ensure that a client and server securely identify themselves. Kerberos relies on symmetric-key cryptography or public-key cryptography, both of which can be accelerated using processors that include cryptographic engines.

Protocol Ports

The protocol ports used with a network interface form one of the largest attack vectors on an internet-connected device. These ports expose protocol access to the device—for example, a web interface is typically exposed through port 80—and therefore give the attacker information about which exploits to attempt.

One of the simplest ways of protecting these ports is with a firewall. A firewall is an application on a device that you can configure to limit access to ports in order to protect them. For example, a firewall can include a rule that prohibits access to a given port except for a predefined trusted host. This limits access to the port and helps to avoid common attacks using protocol exploits such as buffer overflows.
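
The rule set a firewall applies can be illustrated in miniature. The function below is a toy model of the policy just described, not a real firewall; the hosts and port numbers are hypothetical examples.

```python
# Miniature illustration of a firewall rule set: allow a sensitive port only
# for a predefined trusted host, allow public ports for anyone, and drop
# everything else by default. Hosts and ports are hypothetical examples.
TRUSTED_HOSTS = {"203.0.113.10"}   # e.g., the management station
PUBLIC_PORTS = {80, 443}           # e.g., the device's web interface
RESTRICTED_PORTS = {22}            # e.g., remote maintenance access

def allow_connection(src_host: str, dst_port: int) -> bool:
    if dst_port in RESTRICTED_PORTS:
        return src_host in TRUSTED_HOSTS   # trusted-host exception
    if dst_port in PUBLIC_PORTS:
        return True                        # open to all
    return False                           # default-deny everything else
```

The default-deny final rule is the key design choice: any port not explicitly opened is removed from the attack surface.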

Firmware Updates

Edge devices are becoming increasingly complex, performing more advanced functions than prior generations—including machine-learning applications. With this complexity comes the requirement to fix issues and release updates to devices. But, the firmware update process creates an attack vector. By implementing security measures for firmware updates in your edge security plan, you can mitigate the risks posed by attackers.

Code signing is a common security method used to prevent malicious code from entering a device. It entails digitally signing the firmware image with a cryptographic hash, which the device can verify prior to the firmware-update process to ensure the code is authentic and has not been altered since it was signed.

The signed code can also be verified at boot time to ensure that the firmware in local storage has not been altered. This covers two attack vectors: attempts to update the device with an exploited image through the device's update process, and images forced directly into the local storage device.
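
The verify-before-flash and verify-at-boot flow can be sketched as follows. Production code signing normally uses an asymmetric signature (so the device holds only a public key); to keep the sketch self-contained, it uses HMAC-SHA-256 with a shared key, and the key and image bytes are hypothetical.

```python
# Sketch of firmware-image verification. Real code signing typically uses an
# asymmetric signature; this example uses HMAC-SHA-256 with a shared key
# purely to show the sign/verify flow. Key and image are hypothetical.
import hashlib
import hmac

DEVICE_KEY = b"example-shared-key"  # hypothetical; provisioned at manufacture

def sign_firmware(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Run on the build server: produce the tag shipped with the image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Run on the device before flashing, and again at boot time."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # compare_digest avoids timing side channels during the comparison.
    return hmac.compare_digest(expected, tag)

firmware = b"new-firmware-image"
tag = sign_firmware(firmware)
assert verify_firmware(firmware, tag)              # authentic image accepted
assert not verify_firmware(firmware + b"x", tag)   # tampered image rejected
```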

The processor used within a device can be helpful here particularly if it implements a secure cryptographic engine for hash generation and checking. One example is the Microchip CEC1302 that includes a cryptographic Advanced Encryption Standard (AES) and hash engine.

The use of a Trusted Platform Module (TPM) is also beneficial. The TPM is a secure crypto-processor dedicated to security features and typically includes hash generation, key-storage, hash and encryption acceleration, and a variety of other features. One example is the Microchip AT97SC3205T, which implements a TPM in the context of an 8-bit microcontroller.

Physical Security Measures

Creating tamper-proof designs can help detect if the device has been physically opened or compromised in some way. This also includes minimizing external signals whenever possible to limit the ways that an attacker can monitor a device in their possession and identify exploits. Attackers may attempt to monitor bus signals to identify secure information and, in extreme cases, may apply temperature changes to the device, change clock signals, and even induce errors through the use of radiation. Understanding the methods that a motivated attacker will use to understand your device will help in building more secure products.

Where to Learn More

With the state of cyber warfare today and the plethora of motivations that individuals and states seek to exploit devices, edge security is an uphill battle. But, implementing modern security practices and considering security at the start of product development will go a long way to keep your device safe. Early analysis of the attack surface of a device helps to identify where to focus attention in order to create more secure devices. You can learn more at the Mouser Security blog.  

Photo/imagery credits (in order of display)
Archreactor - shutterstock.com, Graf Vishenka - shutterstock.com, AndSus - stock.adobe.com

Distributed Analytics Beyond the Cloud

Sponsor: Intel

A close up of a man holding a laptop with data visualized coming out of the laptop

BLOG: Distributed Analytics Beyond the Cloud

Analytics, AI, and ML are typically implemented as centralized functions in networks, often residing in the cloud. There is a growing trend to distribute these learning and inference functions between cloud, edge, and endpoint computer capabilities. Here, we’ll look at some of the tradeoffs involved.

Read more »

Why the Edge Is Central to IoT Success

By Stephen Evanczuk for Mouser Electronics

Sponsor: Maxim, TE Connectivity

In the Internet of Things (IoT), edge devices seem almost an afterthought, assigned to a minor position at the boundary between IoT devices in the periphery and the sophisticated IoT software applications in the cloud. Yet, as IoT developers tackle emerging IoT requirements, edge devices will play a central role in addressing the many challenges that lie ahead in large-scale IoT systems.

In their most basic role, IoT edge devices connect IoT terminal devices with remote resources—not unlike how a telecom wire center, industrial I/O controller, and Wi-Fi router respectively connect telephones, factory automation devices, and home computers. By supporting diverse wireless technologies and protocols, an edge device can greatly simplify requirements for IoT device design. Developers can focus on the application in their IoT device designs rather than work through the limited options for wireless connectivity. Yet, the nature of IoT drives the need for edge devices able to support functionality beyond basic connectivity.

IoT applications thrive on the mass effect of hundreds or thousands of wireless sensor nodes pouring out streams of data. Deploying and maintaining those nodes in their large numbers represents a significant logistical challenge that can surely impede overall IoT success. Edge devices offer a natural solution by providing a local host for initial commissioning of massive numbers of devices onto IoT networks and handling subsequent over-the-air updates of those devices. In addition, edge devices can provide local versions of cloud-based services to maintain operations when cloud connections are down. For time-sensitive operations, edge devices can perform local processing essential to support short-latency feedback loops unable to tolerate the extra delay imposed by cloud access.

At the same time, edge devices can help ensure that deployment, maintenance, and ongoing operations remain secure from device to cloud. In partitioning off a subnet of sensor nodes, edge devices inherently offer a degree of protection unavailable in shallower IoT topologies that allow hackers to reach through the cloud to directly attack a large set of terminal devices. With their combination of isolation and greater performance capabilities, edge devices can provide more robust security necessary to mitigate all but the most determined attacks.

A chart displaying the connection from devices through a fog/edge layer up to the cloud
Figure 1: An illustration of how data flows through the fog and edge layers (Source: elenabsl / shutterstock.com)

Edge devices also provide developers with a means to address emerging requirements more effectively. One such requirement, privacy, stands to rise as a critical issue driven by regulation and consumer demand. Scheduled to take effect in 2018, the European Union's General Data Protection Regulation will impose privacy regulations not just on EU companies, but on any organization that processes data from EU residents. Concepts, such as privacy-by-default, and privacy techniques, such as data minimization, will add IoT requirements that are likely to find resolution at the edge. At the same time, as data scientists find more constraints on pushing detailed data to the cloud, IoT solutions will need to pull data-intensive algorithms, such as anonymized machine learning and advanced pattern matching, into the edge.
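
A minimal sketch of what data minimization at the edge might look like: raw readings stay on the device, identifiers are pseudonymized with a salted hash, and only an aggregate is sent upstream. The field names, salt, and truncation length are all illustrative assumptions, not a prescribed scheme.

```python
# Sketch of privacy-preserving preprocessing at the edge: raw samples stay
# local, identifiers are pseudonymized with a salted hash, and only an
# aggregate leaves the device. Field names are hypothetical.
import hashlib
import statistics

EDGE_SALT = b"per-deployment-secret"  # hypothetical; unique per site

def pseudonymize(user_id: str) -> str:
    # A salted hash prevents trivially reversing the identifier in the cloud.
    return hashlib.sha256(EDGE_SALT + user_id.encode()).hexdigest()[:16]

def minimize(records: list) -> dict:
    # Ship one aggregate per batch instead of every raw sample.
    return {
        "subject": pseudonymize(records[0]["user_id"]),
        "mean_reading": statistics.fmean(r["value"] for r in records),
        "count": len(records),
    }

batch = [{"user_id": "alice", "value": v} for v in (20.0, 21.0, 22.0)]
summary = minimize(batch)
print(summary)  # raw values and the real identifier never leave the device
```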

To meet their diverse requirements, edge devices will build on a hardware foundation that combines the features of host platforms and real-time systems, using both application processors and microcontrollers (Figure 2). As IoT intelligence expands from the cloud to encompass the edge, these devices will offer greater performance and processing specialization capabilities necessary to support algorithm sophistication in security, privacy, analytics, and more.

an illustrated chart showing the i.MX6UL from NXP
Figure 2: Using both application processors and wireless microcontrollers, IoT edge devices combine the performance of host systems with the connectivity of I/O controllers to support their unique role in the IoT systems hierarchy. (Source: NXP Semiconductors)

Developers can expect to see a growing emphasis on edge systems at all levels of the IoT solution chain—from chips to modules and boards as well as through software specialization. In fact, Arm has already identified edge devices as a key element in meeting emerging requirements in the IoT. As IoT applications evolve, edge devices will play a pivotal role in meeting more complex requirements for effective IoT solutions. 

Photo/imagery credits (in order of display)
Liu zishana - shutterstock.com, Nadya_C - stock.adobe.com

At the Edge

Enter Edge Computing

As the tech evolution continues, the shift of some analytics and processing out of the cloud for real-time results is reshaping the landscape.

A chart showing that connected devices started 14 Billion in 2015 and is projected to be at 35 Billion by 2025
An illustration of the world map with cloud and edge icons above it, with lines going from them to the different continents on the map


Edge computing moves the processing of data closer to the point of creation – the edge – at the device itself.

An illustration showing various types of applications, including a laptop, car, and server, and the types of sensor measurements, including temperature, gyroscopic, fuel, pressure, and magnetism
An illustration of the side of a head with circuitry as the brain
The AI Factor

Edge AI lives at the local level, where it processes data and makes decisions in real time, with tools rolled out as Software-as-a-Service (SaaS).

  • Allows for swifter event analysis and detection
  • Streamlines data storage and management
  • Creates better, independent processes through analytics
  • Automates core workflows

Why make the move?
  • Doesn't rely on connectivity
  • Reduced traffic for reduced costs
  • Sensitive info stays local
  • Lower latency & high throughput

Applications at the edge

Xilinx speeds the development of embedded vision applications for highly differentiated, extremely responsive and instantly adaptive systems.

An illustration of an eye with the following text on it: AR/VR, emotion analysis, face recognition, object detection/identification, medical image analysis, image recognition and classification, security camera, and other.


Optimal AI training

Up to 3X throughput at low latency

Out-of-the-box ready for application development

More AI

AI lettering on multi-colored background

Artificial Intelligence Resources

Access additional Artificial Intelligence technical content from Mouser including articles, videos, eBooks and more.

Check it out »

Phone showing the sign-in screen for the Tech Quotient game

What's your Tech Quotient?

Got answers? Well, we've got lots of technical questions and brainiac trivia to put your knowledge to the test. Beat the clock and see how you stack up against your friends, colleagues and other engineers around the globe.

Do you have the technical chops to triumph over all? Play now!

Get it on Google Play Download at the App Store

Google Play and the Google Play logo are trademarks of Google LLC.

Thank you to our supplier partners for their participation in helping us bring this content to our valued customers.