Reduce Risks: How to Protect Data from the Edge to the Cloud

Data needs to be persistently protected as it goes from edge devices to a number of interim devices, then to the cloud and back.

A combination of increasingly powerful semiconductor technology and networking innovation led by 5G promises a renaissance in AI and edge computing. These technologies will allow companies to process more and more of their data in devices at the edge of their networks. That said, the cloud is not going away. For trust and security professionals, this trend means an even more complicated environment: they will have to ensure that data remains trusted and secure throughout a greatly expanded data ecosystem. And, just to add to the pressure, with IoT and data becoming ever more intertwined with critical infrastructure, their job is more important than ever.

Given the expected exponential growth in the number of edge devices where sensitive data can reside and be processed, maintaining the flow of trusted data is not a simple process. As an example, let’s look at a refrigerated delivery truck. The truck can have a number of systems that can collect and process data, such as the battery and electric drivetrain, the refrigeration system, the delivery tracking system, and apps on the driver’s personal device. In addition to the vehicle-mounted sensors, the refrigeration system itself will typically have multiple temperature and humidity sensors positioned throughout the trailer. All of these will gather and transmit large amounts of data, quite often in real time. 

As the truck travels through its route, each of these systems will be sending and receiving data through any number of networks, ranging from the Bluetooth connection on the driver’s phone, to the self-organizing temperature sensor mesh, to the Wi-Fi network at the truck yard, to the cellular and satellite networks the truck connects to on the road. As Vehicle-to-Vehicle and Vehicle-to-Infrastructure (V2X) communications become more common, our truck will increasingly rely on these connections for critical safety functions.

Each of these devices and network configurations has its own characteristics that a security architect needs to take into consideration. In addition, the data needs to be secured when it reaches and is processed in the cloud. Let’s also not forget about protecting the data as it is sent back to the systems in the truck from the cloud. Multiply this by the number of trucks in a fleet, and you can see why edge security is a conundrum.


You also need to consider the speed at which a cyberattack can take effect. In the past, the impact of many attacks was not felt immediately. In our refrigerated truck example, an attack could put the life of the driver, and those around her, in immediate danger.

While the edge is now a very exciting place for computing technology, not all edge networks are equal when it comes to security. In many cases, network security currently in place is inadequate. And since the Internet is a network of networks, the security supporting trusted data flows is only as strong as the weakest network link. The threat is very real. Weak network security often has gaps in it that can be exploited, and devices can be compromised with disastrous consequences. For example, hackers were able to obtain sensitive data from a Las Vegas casino through an Internet-connected fish tank. Also, hackers can take advantage of leaked data to imitate a trusted connection and issue spoofed orders to devices.

One major issue with widely used network security protocols such as TLS or VPNs is that they only protect data from one network endpoint to another. Once the data leaves an endpoint, its safety depends on the security of the device it is now traversing. That device could be an IoT gateway, a class of hardware known for inadequate security and yet another weak point for a hacker to exploit.

Adopting a zero-trust network architecture is a well-regarded approach to this issue, but the architecture needs to be designed to persistently protect the data throughout its lifecycle. This means not only from a device to the cloud but also from the cloud to the device. It also means protecting data while at rest and in use, as data can be exposed to a hostile environment as it exits a TLS or VPN tunnel. Companies need to invest in security systems that persistently protect data across any and all gaps that they may encounter in their journey in order to make the next generation of IoT technology safe, scalable, and reliable.

Taking these sorts of measures is even more important since companies now rely more than ever on data to do such things as reduce operational costs, enhance user experience, support their customers, and create new products and services. A successful cyberattack can be very costly to the corporate bottom line. According to a 2022 Ponemon Institute study, between March 2021 and March 2022, the average total cost of a data breach was $4.35 million, an all-time high.

Yet this figure reflects only the direct costs of a data breach, such as response costs, data migration, and regulatory fines. When data is stolen, or systems are made to malfunction, the broader impacts on a company can be enormous: loss of revenue from day-to-day operations; short-, medium-, or long-term productivity losses; loss of competitive advantage; reputational damage; and reduction in the value of assets. If the cyberattack affects OT systems, such as those involved in energy infrastructure, it could cause catastrophic failures that rise to the level of a national security issue.

As mentioned before, data needs to be persistently protected as it goes from edge devices to a number of interim devices, then to the cloud and back. One promising approach doesn’t involve new, untested technologies. Instead, it relies on technologies with a sound track record that are well understood in the IT industry. One of these is PKI-based digital signatures. Data packets can be digitally signed (and optionally encrypted) at the device the moment they are created. For this to be meaningful, the device itself must include a protected processing environment that safeguards the integrity of its software stack. The digital signature would not only attest to the contents of the data packet, but would also cover metadata such as the known secure state of the device and the time the data was created.

The data packet then makes its way over networks and devices – both trusted and untrusted – to a cloud server. This cloud server holds the public keys of the devices it trusts. When the server receives a packet, it verifies the signature against the packet’s contents using the device’s public key. If the signature checks out, the server confirms that the data can be trusted, decrypts it (if encrypted), and directs it to the proper cloud data repository.
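The sign-at-device, verify-at-server flow can be sketched with textbook RSA. Everything here is illustrative: the key parameters are toy values chosen for readability, and the packet field names (`sensor_id`, `device_state`, `created`) are hypothetical. A real deployment would use a vetted cryptography library and device keys provisioned through a PKI.

```python
import hashlib
import json

# Toy RSA key pair built from two known Mersenne primes -- for illustration
# only; production systems use 2048-bit+ keys from a hardware-backed keystore.
p = 2147483647             # 2^31 - 1
q = 2305843009213693951    # 2^61 - 1
n = p * q                  # public modulus
e = 65537                  # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def packet_digest(packet: dict) -> int:
    """Canonical SHA-256 digest of the packet, reduced mod n for toy RSA."""
    canonical = json.dumps(packet, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(canonical).digest(), "big") % n

def sign_at_device(packet: dict) -> int:
    """Sign inside the device's protected processing environment."""
    return pow(packet_digest(packet), d, n)

def verify_at_server(packet: dict, signature: int) -> bool:
    """Cloud server checks the signature with the device's public key (e, n)."""
    return pow(signature, e, n) == packet_digest(packet)

# A data packet carrying the reading plus the trust metadata described above.
packet = {
    "sensor_id": "trailer-temp-3",      # hypothetical identifier
    "temp_c": 4.2,
    "device_state": "attested",         # known secure state of the device
    "created": "2024-01-01T00:00:00Z",  # time the data was created
}

sig = sign_at_device(packet)
assert verify_at_server(packet, sig)        # untampered packet is accepted
assert not verify_at_server(dict(packet, temp_c=12.0), sig)   # tampering detected
```

Because the signature covers the canonical serialization of the whole packet, changing any field (the reading, the device state, or the timestamp) invalidates it, which is what lets the server trust data that crossed untrusted networks and gateways.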

The data still needs to be protected on the cloud, though. Modern cloud data systems are usually well maintained by security experts, but the algorithms that companies run there are often licensed from third-party developers. While most can be trusted, the algorithms in many ways represent a “black box,” so it’s prudent to run them in cloud “sandboxes” where their input and output are strictly controlled to avoid unauthorized access to data.
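As a sketch of that sandboxing idea, the snippet below runs a stand-in “third-party” algorithm in a separate process whose only channel in or out is JSON on stdin/stdout, then validates the result schema before accepting it. The algorithm and field names are hypothetical, and a real sandbox would add OS-level isolation (containers, seccomp, a restricted service account) on top of this pattern.

```python
import json
import subprocess
import sys

# Stand-in for a licensed black-box algorithm. It can only read JSON on stdin
# and write JSON on stdout -- it never touches the data store directly.
UNTRUSTED_ALGORITHM = """
import json, sys
readings = json.load(sys.stdin)                      # controlled input
result = {"mean_temp_c": sum(readings) / len(readings)}
json.dump(result, sys.stdout)                        # controlled output
"""

def run_sandboxed(readings: list[float]) -> dict:
    proc = subprocess.run(
        [sys.executable, "-c", UNTRUSTED_ALGORITHM],
        input=json.dumps(readings),   # the only data the algorithm sees
        capture_output=True,
        text=True,
        timeout=5,                    # bound the algorithm's run time
        check=True,
    )
    result = json.loads(proc.stdout)
    # Strictly validate the output before letting it back into the system.
    if set(result) != {"mean_temp_c"} or not isinstance(result["mean_temp_c"], float):
        raise ValueError("sandboxed algorithm returned an unexpected result")
    return result

stats = run_sandboxed([3.9, 4.1, 4.6])
print(stats["mean_temp_c"])   # mean of the three readings
```

The design choice here is that the parent process, not the black box, decides exactly what data goes in and what shape of result is allowed out; anything unexpected is rejected rather than stored.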

Maintaining trust in data through its journey is critical to corporate operations in today’s digital world. With the judicious use of well-understood data security technologies and techniques, it’s not an impossible goal.
