Edgeless Systems Brings Confidential Computing to AI

Edgeless Systems today launched Continuum, a platform that applies confidential computing to artificial intelligence (AI) workloads to better secure them.

Continuum uses encryption to ensure that user requests, known as prompts, and the corresponding replies are never exposed as clear text to the provider of the AI service or to a malicious actor attempting to compromise the IT environment.
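To illustrate the idea, here is a minimal sketch, in Python, of how client-side encryption keeps a prompt opaque to everyone except a trusted AI backend. This is not Continuum's actual API; the pre-established session key and the local round trip shown here are assumptions made purely for illustration.

```python
# Minimal sketch (not Continuum's API): the prompt is encrypted on the
# client with a key known only to the trusted AI backend, so neither the
# service operator nor an attacker on the host ever sees clear text.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical: assume a 256-bit session key was already negotiated with
# the attested backend (e.g., during a remote-attestation handshake).
session_key = os.urandom(32)  # stand-in for the negotiated key
aesgcm = AESGCM(session_key)

prompt = b"Summarize our Q3 financials"  # sensitive clear text stays client-side
nonce = os.urandom(12)                   # AES-GCM needs a unique nonce per message
ciphertext = aesgcm.encrypt(nonce, prompt, None)

# Only the nonce and ciphertext travel to the service. The trusted backend
# decrypts, runs inference and encrypts its reply the same way. Here we
# simply round-trip locally to show the decryption step.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == prompt
```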

Edgeless Systems CEO Felix Schuster said this level of confidential computing is now possible because the Continuum framework combines client code with an operating system that creates sandboxes supported on NVIDIA H100 graphics processing units (GPUs). In the second half of this year, Edgeless Systems plans to open source Continuum to foster wider adoption.

In general, confidential computing takes encryption to the next level by securing data while it is loaded in memory, not just while it is at rest or in transit. Before confidential computing arrived, any data being processed in memory was accessible as clear text. A range of processors now keeps data encrypted while it is in use; many cloud services support them, and they can also be deployed in an on-premises IT environment.
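For example, Linux guests running on such processors expose dedicated device nodes that attestation tooling uses to request hardware-signed reports. The short sketch below, which assumes a Linux guest, checks for the AMD SEV-SNP and Intel TDX guest devices; it illustrates detection only, not Continuum's attestation flow.

```python
# Illustrative check (assumes a Linux guest): confidential VMs expose a
# guest device node that attestation tools use to fetch signed reports.
from pathlib import Path

def detect_confidential_vm() -> str | None:
    """Return the detected TEE flavor, or None if no guest device is present."""
    devices = {
        "/dev/sev-guest": "AMD SEV-SNP",  # AMD SEV-SNP guest driver
        "/dev/tdx_guest": "Intel TDX",    # Intel TDX guest driver
    }
    for dev, name in devices.items():
        if Path(dev).exists():
            return name
    return None

if __name__ == "__main__":
    tee = detect_confidential_vm()
    print(f"Confidential VM: {tee}" if tee else "No TEE guest device found")
```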

In addition to providing better security, that approach eliminates compliance issues because the text used to prompt an AI model is never exposed in the clear, noted Schuster.

It’s not clear whether confidential computing will become the default option for deploying every type of workload. The sensitivity of the data used to prompt AI models, however, makes encrypting prompts a more pressing issue, added Schuster.

Cybersecurity teams, naturally, have a vested interest in encrypting data everywhere, including while it is being processed. Cybercriminals are becoming more adept at launching sophisticated attacks to exfiltrate data, so no organization should assume that any platform that processes data is inherently secure. The challenge in the cloud era is that the shared responsibility model advocated by cloud service providers (CSPs) often makes it difficult for organizations to determine which cybersecurity functions the CSP will handle and which remain their own responsibility. In the AI era, it is even less clear which entity is responsible for securing prompts that cybercriminals might later use to deliberately coax an AI model into generating malicious outputs.

Unfortunately, most end users don’t appreciate the security issues that can arise when sensitive data is included in prompts. Just as troubling, most of the data science teams that build and deploy AI models have little to no cybersecurity training, so the probability that sensitive data will be exposed when invoking AI models is high. It may take a few high-profile breaches involving unencrypted data that finds its way into a prompt before cybersecurity teams appreciate the full extent of the risk.

In the meantime, end users should assume that cybercriminals are paying close attention to the kinds of data being shared with AI models. After all, from their perspective, a new treasure trove of data that can be easily viewed as plain text is simply the latest in a long series of opportunities that arise because cybersecurity was once again an afterthought.

Michael Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
