19/11/2023

    Shoring Up Security For AIs: Don’t Leave The Barn Door Open

    Video caption: We can all think about how AI data gets protected, all the time.

    How do you think about privacy? Is it mostly an issue when you’re going through an airport, tempted to use a sketchy public wifi connection? Or are you more deeply concerned with how private data is handled elsewhere on the Internet, and thinking about the modern equivalent of a VPN?


    Bringing some new analysis to the future of generative AI models, Sacha Servan-Schreiber talks about some of the current constraints on these systems, and presents solutions to problems that some people might be forgiven for assuming AI has already solved. In essence, we’re talking about the ability of AI to go out and get answers with discretion, so that where privacy and cybersecurity are concerned, the AI is not the weakest link.

    He starts out with the propensity of large language models to hallucinate, and then suggests that retrieval-augmented generation can help.

    Walking through a two-stage process, Servan-Schreiber explains that the system first retrieves the relevant documents, and second, completes the task with a generative response.

    All of this happens in response to a user query that Servan-Schreiber suggests should ideally be kept private.
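
    As a rough sketch of that two-stage loop, here is a toy retrieve-then-generate pipeline in Python. Everything in it is an illustrative stand-in, not anything from the talk: the “retriever” is a simple word-overlap match where a real system would query a vector database, and the “generator” merely formats a reply where an LLM would respond.

        # Toy retrieve-then-generate loop. The retriever and generator are
        # deliberately trivial stand-ins for a vector database and an LLM.

        DOCUMENTS = [
            "Retrieval-augmented generation grounds model answers in documents.",
            "Locality-sensitive hashing speeds up approximate nearest neighbor search.",
            "Private information retrieval hides which record a client fetches.",
        ]

        def retrieve(query: str) -> str:
            """Stage 1: pick the document sharing the most words with the query."""
            words = set(query.lower().split())
            return max(DOCUMENTS, key=lambda d: len(words & set(d.lower().split())))

        def answer(query: str) -> str:
            """Stage 2: a real LLM would respond conditioned on the retrieved context."""
            context = retrieve(query)
            return f"Q: {query}\nGrounded in: {context}"

        print(answer("what does retrieval-augmented generation do?"))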

    This harkens back to some of the other things that we've heard about modern work on data privacy…

    To break this down into plain language, let’s put it this way: people are using AI locally to ask questions about their world. Even when the model itself runs on a private machine rather than out on the open Internet, it may still need to look things up in external databases, and users don’t want those lookups to compromise their privacy.

    So how do you square that circle?

    Servan-Schreiber describes one of the best modern attempts to keep effective querying private for generative AI models.

    “The general problem that Preco solves is the nearest neighbor search problem,” he says. “So if we think of documents as having some feature vectors describing the document, and we have a query, the problem will be defined. So if we have a vector database here, with documents associated with each of the points in this high-dimensional space, (unintelligible) to find the most relevant documents to a given query.”
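
    In plainer terms: every document is summarized as a vector of numbers, and the search ranks stored vectors by their distance to the query vector. A minimal, non-private version is just a brute-force scan (the vectors below are made up for illustration):

        from math import dist  # Euclidean distance (Python 3.8+)

        # Brute-force nearest neighbor search over a toy vector database.
        database = {
            "doc_a": (0.9, 0.1, 0.0),
            "doc_b": (0.1, 0.8, 0.2),
            "doc_c": (0.0, 0.2, 0.9),
        }

        def nearest(query, k=2):
            """Return the k document ids closest to the query vector."""
            return sorted(database, key=lambda d: dist(database[d], query))[:k]

        print(nearest((0.85, 0.15, 0.05)))  # ['doc_a', 'doc_b']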

    Using terminology like differential privacy and homomorphic encryption, he shows how Preco gets results from a vector database without revealing to that database what a user is interested in.
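
    To give a feel for the homomorphic-encryption half of that idea, here is a toy private lookup built on the textbook Paillier cryptosystem. To be clear, this is not Preco’s actual protocol, and the parameters are laughably insecure; it only demonstrates how a server can compute an answer from a query it cannot read. Real schemes also compress the query rather than sending one ciphertext per record.

        import random
        from math import gcd

        # Toy private retrieval using additively homomorphic (Paillier) encryption.
        # The client encrypts a one-hot selection vector; the server folds its
        # records into those ciphertexts without learning which one was chosen.
        # Demo-sized primes only: utterly insecure, for illustration.

        p, q = 293, 433
        n, n2 = p * q, (p * q) ** 2
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
        mu = pow(lam, -1, n)                          # valid since we use g = n + 1

        def encrypt(m: int) -> int:
            r = random.randrange(2, n)
            while gcd(r, n) != 1:
                r = random.randrange(2, n)
            return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

        def decrypt(c: int) -> int:
            return (pow(c, lam, n2) - 1) // n * mu % n

        DATABASE = [11, 22, 33, 44]  # the server's records

        # Client: encrypt a one-hot vector selecting index 2. Every ciphertext
        # is randomized, so the server cannot spot which slot holds the 1.
        query = [encrypt(1 if i == 2 else 0) for i in range(len(DATABASE))]

        # Server: enc(b)^v encrypts b*v, and multiplying ciphertexts adds
        # plaintexts, so this product encrypts exactly the selected record.
        response = 1
        for c, value in zip(query, DATABASE):
            response = response * pow(c, value, n2) % n2

        print(decrypt(response))  # 33, retrieved without revealing the index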

    Talking about applications in augmented AI, he contrasts the traditional nearest neighbor approach with what is known as “approximate nearest neighbor” search, and also suggests that locality-sensitive hashing can help, using hash tables and a specific encryption process between the stakeholders.
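
    The hashing piece can be sketched with the classic random-hyperplane construction, in which each hyperplane contributes one bit of a vector’s bucket id, so nearby vectors tend to share a bucket. The hyperplanes and vectors below are illustrative (and fixed rather than random, for reproducibility):

        # Locality-sensitive hashing sketch (random-hyperplane / SimHash style).
        # Similar vectors usually land on the same side of each hyperplane, so
        # they collide into the same hash-table bucket and become each other's
        # approximate-nearest-neighbor candidates.

        PLANES = [(1, -1, 0, 0), (0, 1, -1, 0), (1, 0, 0, -1)]

        def bucket(vec):
            """One bit per hyperplane: which side of it the vector falls on."""
            return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                         for plane in PLANES)

        vectors = {
            "doc_a":  (0.9, 0.1, 0.0, 0.0),
            "doc_a2": (0.8, 0.2, 0.1, 0.0),  # close to doc_a
            "doc_b":  (0.0, 0.1, 0.9, 0.4),  # far from both
        }

        # Index: bucket id -> documents hashed there.
        table = {}
        for doc_id, vec in vectors.items():
            table.setdefault(bucket(vec), []).append(doc_id)

        query = (0.85, 0.15, 0.05, 0.0)
        print(table.get(bucket(query), []))  # ['doc_a', 'doc_a2']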

    He talks about this as a “magical property” of the Preco system.

    His words:

    “The way it works is: we combine a locality-sensitive-hashing-based approximate nearest neighbor search with a private retrieval protocol. And the privacy guarantee that Preco (provides) is that the database will not learn anything about the user’s query. … and the accuracy guarantee that we want is: we want the accuracy of the protocol to be the same as (a) non-private approximate nearest neighbor search, so we don’t want to sacrifice on accuracy.”

    Walking through candidate sets and hash tables, Servan-Schreiber shows how encryption sits between the generative AI and the database.
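
    Putting the pieces together conceptually: the client computes its LSH bucket locally, so the hash itself never leaves the device, and then fetches that bucket’s candidate list without revealing which bucket it wanted. The sketch below conveys only that data flow, substituting a toy two-server XOR-based private retrieval for Preco’s real machinery:

        import secrets

        # Conceptual composition: hash locally, then fetch the bucket privately.
        # Two non-colluding servers each hold a copy of the LSH table; each
        # receives a random-looking bit vector, and XORing their two replies
        # recovers exactly one bucket. Illustrative only.

        BUCKETS = [b"doc3,doc7", b"doc1", b"", b"doc2,doc9"]  # LSH table contents
        PAD = 16  # buckets padded to a fixed size so reply lengths leak nothing

        def client_query(index, size):
            """Split a one-hot selector for `index` into two XOR shares."""
            share_a = [secrets.randbits(1) for _ in range(size)]
            share_b = [share_a[i] ^ (1 if i == index else 0) for i in range(size)]
            return share_a, share_b

        def server_reply(share, buckets):
            """Server: XOR together the padded buckets its share selects."""
            out = bytes(PAD)
            for bit, data in zip(share, buckets):
                if bit:
                    out = bytes(x ^ y for x, y in zip(out, data.ljust(PAD, b"\0")))
            return out

        bucket_id = 3  # computed locally from the query vector's LSH hash
        qa, qb = client_query(bucket_id, len(BUCKETS))
        ra, rb = server_reply(qa, BUCKETS), server_reply(qb, BUCKETS)
        print(bytes(x ^ y for x, y in zip(ra, rb)).rstrip(b"\0"))  # b'doc2,doc9'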

    So what are we learning from language models?

    Here’s how Servan-Schreiber describes one of the problems associated with traditional AI in business:

    “People in companies want privacy when interacting with LLMs,” he says. “This has recently been seen in the news, where companies have banned employees from using language models, for fear of them leaking information to external parties.”

    As for the potential here, he sums up the talk this way:

    “Preco is a step towards having private conversations with AI,” Servan-Schreiber says. “We can envision also in the future, when we have these models running locally on our computers, we would still need to find the relevant documents from external databases. And we might not want those external databases to learn what we're talking (about) in our conversations with large language models. There's a lot of exciting potential for future work in this area, and even real-world deployments.”

    Interesting stuff, to be sure, at a time when security is paramount and we are assessing so many aspects of AI in terms of risk. Be sure to watch this video and others at IIA to get more deep insights into what’s happening around general and specialized AI.

