
DATA+AI SECURITY SUMMIT 2024  •  KEYNOTE  •  DR. LOK YAN

AI Autonomous Imagineering for Security

DR. LOK YAN

Program Manager @ DARPA

Lok specializes in systems security and hardware/software integration. Previously with the Air Force Research Laboratory, he has a Ph.D. from Syracuse University and is known for his work in embedded systems and behavior analysis in national security.

All right, everyone. I’m sorry to disappoint you, because we’re not going to be talking much about specifications. Instead, what Mohit asked me to do is this: why don’t we just try to hallucinate together? Since this is all about AI, let’s just use our imaginations. So the spirit of today’s talk is to think beyond what we’re doing today in terms of AI and security and see where we go. Okay? And to explain why we do this: I don’t know if you’re familiar with an organization called DARPA, but here is a quick overview of some of the crazy things we do there. The tagline, basically, is that we bring imagination to reality. What you see here is a very quick vignette. On the upper left you see ARPANET, which revolutionized the idea of how we communicate. Everybody uses the Internet nowadays. Good. On the lower left you see VLSI design, which is the basis of every single chip we use today. We revolutionized that as well. So we’re talking about communications and processing design.

On the upper right, we also have stealth. Stealth technology is a little bit crazy if you think about the ingenuity that went into creating new materials and exploring the design space in order to create aircraft that are invisible to certain kinds of radar. That’s a materials problem that was also tackled. In the lower right, you see something even crazier: a microphysiological system. Think of it as a lab on a chip, or maybe an organ on a chip. If you put all these things together, you might see a future where, instead of a deck of cards, we have a deck of organs: in a trauma situation, perhaps we can have these very small, super important replacement organs, just for that trauma situation, that are really smart, that can communicate with each other, and that your body will not reject. These are the kinds of crazy things we think about. These are the kinds of things most of you might think are hallucinations. And maybe they are. But we love to think about these kinds of problems and what the future might hold. So for the rest of this talk, let’s just do this kind of experimentation together. Let’s do this thought experiment together, okay?

Let’s see where AI and security might come together in something that might be a little bit different. To get started, let’s ask a very simple question: if Elon Musk gets us to Mars and we’re all there, how does that change the nature of security and how we think about it? One thing we want to be clear about, of course, is what in the world we mean by security in the first place. Throughout this morning’s talks and panels, you’ve heard a lot about what you deal with in security today. But if we look backwards a little, security can be extremely simple. Over the past 60 years, this is how we have thought about security: security is just subjects accessing objects. Access control. Okay? If you go way back in time, what you see is discretionary access control. It’s my data, and it is at my discretion whether or not you can have a copy of it. Then of course we advance a little bit by saying, no, it’s not just your data; the organization itself also has a say. That’s mandatory access control: a higher-level kind of control over how we think about data. However, the subject is still there. Move a little further in time and you see role-based access control. Now this is really interesting, because we’re starting to separate the subject, this individual, me, from the role that I play. Okay?

Now even further in time, 2014 saw the establishment of attribute-based access control (there was research before that, of course). What this says is: fine, it’s not only the role you play, but the location you are at, the time, and whatever other metadata you want to process. What’s important to notice about this particular evolution in access control, and in what we mean by security, is that it follows the evolution of cheap processing.
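To make the evolution concrete, here is a minimal sketch, not tied to any real policy engine, contrasting the three subject-side models just described: discretionary (owner decides), role-based (role decides), and attribute-based (role plus location, time, and other metadata decide). All function and policy names are illustrative.

```python
# Minimal illustration of DAC vs. RBAC vs. ABAC decision logic.
# These are toy checks, not a real access-control system.

def dac_check(owner, requester, grants):
    # Discretionary: the owner decides who may have a copy of the data.
    return requester == owner or requester in grants

def rbac_check(role, allowed_roles):
    # Role-based: the role you play, not who you are, grants access.
    return role in allowed_roles

def abac_check(attrs, policy):
    # Attribute-based: every policy attribute (role, location, time, ...)
    # must be satisfied by the requester's metadata.
    return all(policy[key](attrs.get(key)) for key in policy)

policy = {
    "role": lambda r: r == "analyst",
    "location": lambda loc: loc == "HQ",
    "hour": lambda h: h is not None and 9 <= h < 17,  # business hours only
}

attrs = {"role": "analyst", "location": "HQ", "hour": 10}
print(abac_check(attrs, policy))  # True: all attributes satisfy the policy
```

Note how each step keeps the subject in the loop but attaches more context to the decision; the talk's point is that all three still revolve around a subject requesting an object.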

What we’re saying here, if you think about it, is that we have been bringing data to users because of limitations in processing. Where was the processing? At the beginning it was in mainframes. Then we started getting portable computers. And of course, wherever the processing is, we bring the data to it. When you bring data to the processing, you are making many, many copies. And when you make so many copies, you lose control of the data. Okay, so let’s juxtapose this with something that was extremely exciting just a little less than a decade ago: Zero Trust. Now, notice that we don’t hear much about Zero Trust these days, but Zero Trust almost gave birth to this idea of data-centric security.

Okay, one way we can think about this is that what Zero Trust became is the idea of, instead of moving the data to the user, keeping the data and the processing in the cloud and moving the user to the data and processing. If you think about it this way, this revolution is really following the trend of cheap communications. Processing is now cheap; communications is cheap. Because communications is cheap, I can leave users wherever they are around the world and keep the data in place in the cloud. Where this broke apart is that we forgot to actually keep the data where it really is, and we are still making many, many different copies of it.

Now the question for you to think about is: is this necessary? As a community, do we still have to create a compute environment where we make so many different copies of the data, copies that we wind up losing control of in the first place? That’s a question you might want to think about. But here’s something very interesting. Let’s bring it back to this idea of Mars. Does Mars throw a wrench into this whole trend that we’ve been seeing over the past six decades? Here is the average latency for communications between Earth and Mars. We’re talking about 12 minutes of delay, one way. Can you possibly have an interactive way of accessing your data? Can you keep your data on Earth while I’m on Mars? The answer is likely no. So how does this change the way we think about security in the first place?
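The quoted delay is easy to sanity-check: it is just the Earth–Mars distance divided by the speed of light. The sketch below uses commonly cited closest, average, and farthest separations (the actual distance varies continuously as the two planets orbit).

```python
# Back-of-the-envelope check of the one-way signal delay quoted above.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

# Approximate Earth-Mars separations in km; real values vary with the orbits.
distances_km = {
    "closest": 54.6e6,
    "average": 225.0e6,
    "farthest": 401.0e6,
}

for label, d in distances_km.items():
    minutes = d / C_KM_S / 60  # one-way light-travel time
    print(f"{label:>8}: {minutes:5.1f} min one-way, {2 * minutes:5.1f} min round trip")
```

At the average separation this works out to roughly 12.5 minutes one way, about 25 minutes round trip, which is why interactive access to data left on Earth stops being viable.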

Well, it turns out there’s an idea that we at the Air Force Research Laboratory, as Mohit mentioned, were playing around with. The idea is very simple. Unfortunately, we all die someday. What happens when you die? In other words, take this thought experiment of taking the subject completely out of the equation. What happens when we die? If you do it properly, according to all the advice we get, you will have a will describing how you’re going to split your assets, your data, among whichever people or organizations you want to pass them along to. Given this very simple idea, what we’re asking is: is there a way to create a computing environment where the subject no longer exists? All the subject does is define what processing you want done and what data you want it done to, and then it is done in the cloud. And not just anywhere in the cloud, because as we talked about earlier today, least privilege is very important. Having tight control over the data is important. You put it into a single container, and we have demonstrations where you can actually do this, where the container itself is created solely for the purpose of executing this particular combination of the will of the subject and the data. This is one way we might address the problem of extremely long latencies. Now, unfortunately, this is also not perfect. The reason it’s not perfect is this statistic.

It’s the same number we saw before, except now what we’re saying is: if I’m on Mars and the compute and the data are on Earth, and I make a single mistake, we’re not talking about 12 minutes, we’re talking about 24 minutes, nearly half an hour, of just suffering through that round trip. So this is where we are. Let’s take a second to think about what we might be able to do, maybe with AI. Here’s something I think you could do even today. One of the things we’ve talked about multiple times is the difference between hallucination and imagination. What if I could reduce the cost of making a mistake? What if, as a subject, as a user, I say I want you to do this to that particular piece of data, whether it’s creating a new account, accessing and editing a document, or retrieving medical records in order to make a particular diagnosis? What if we could take an AI and say: here is what I think I want to do; can you tell me all the other possible, related things I could do? If we can do this, then as long as the cost of this additional processing of all the alternate possibilities fits within that 24-minute timeframe, the cost is effectively zero. Now, what is also interesting about this particular thought experiment is the containers themselves.

Because of the containers, the jars, all the accesses are there. I can predetermine whether or not I would allow those accesses as I’m creating the individual jars, or even better, I can delay the access-control decision until I open them. In other words, for every one of these computations I have done in excess, as long as I don’t access the resulting data, I can just destroy it. And if I destroy it, I’m not actually violating the intent of what you wanted to do in the first place. So this is a different way of thinking about security, one that might be enabled by the ability we now have to hallucinate, or just imagine, using generative AI models. This is something to consider. And if we consider it, here is a very quick view of where we are.
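The "compute speculatively, discard unopened results" pattern can be sketched in a few lines. This is a hypothetical illustration of the idea, not the AFRL demonstration itself: each candidate computation runs in its own throwaway workspace (standing in for a single-purpose container), and only the result the user actually asks for is ever read; everything else is destroyed unread, so the intent of the original request is never violated. All names (`run_candidates`, `open_one`, the task labels) are invented for the sketch.

```python
import os
import shutil
import tempfile

def run_candidates(task, alternatives):
    # Execute every alternative speculatively, each in its own
    # throwaway workspace (a stand-in for a single-purpose container).
    workspaces = {}
    for name, fn in alternatives.items():
        ws = tempfile.mkdtemp(prefix=f"{task}-{name}-")
        with open(os.path.join(ws, "result.txt"), "w") as f:
            f.write(str(fn()))
        workspaces[name] = ws
    return workspaces

def open_one(workspaces, chosen):
    # Access control is applied at open time: only the chosen result
    # is ever read. Every workspace, including the unread ones, is then
    # destroyed, so speculative outputs are never accessed.
    with open(os.path.join(workspaces[chosen], "result.txt")) as f:
        result = f.read()
    for ws in workspaces.values():
        shutil.rmtree(ws)
    return result

workspaces = run_candidates("diagnosis", {
    "planned": lambda: "edit record 42",
    "alt-1": lambda: "create account",
    "alt-2": lambda: "retrieve history",
})
print(open_one(workspaces, "planned"))  # only this output is ever accessed
```

The design point mirrors the talk: the expensive part (running the alternatives) happens inside the long latency window, and the security decision collapses to a single open-or-destroy choice at the end.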

One way of viewing this: the past is user-centric. Who, who, who? And the security we worry about is limited to the lifetime of the user. Someone gets fired, they find a new job, and the access is gone. It’s the lifetime of the user. Move forward to what we’re talking about with Zero Trust, and that’s data-centric security. What we care about is less the user and more the actual pieces of data, and that was mentioned multiple times today. Okay? So it’s the data’s lifetime. But now, if we have this ability to say I want to do X, and the AI tells me, here’s X, but here’s the bounding box of all the other things you could do with it, I might be able to create a new kind of security called purpose-centric security. It’s not about who is accessing what data, but why you are doing it. Because fundamentally, think about a medical situation: I want a solution to this trauma I might have suffered, I want a therapeutic. It doesn’t matter if it’s the ambulance driver, the paramedic, or just some AI somewhere; I would like to be saved, and access to my medical records is perfectly okay. So purpose-centric is where we may be moving, if all this AI kind of makes sense. So, some ideas to think about: maybe it’s intent-based, maybe it’s value-based. If you are an organization, you can think about profit and loss, about what would happen if this particular data or these subjects were to leave.

Maybe you can think about windows. Window-based is actually something very interesting as well, because every single thing I mentioned there is about a lifetime. Is there a way to control accesses such that they are valid only within a specific window? This is not the same as attribute-based access control. This really hits on the user lifetime, the data lifetime, and the value lifetime, where there is a set time at which the value goes to zero. Okay, so these are the things we can imagine, and these are the kinds of things I would like to do while I’m at DARPA. Now, as I said before, I think all of this is possible today. You can go and try these things out, and I’m sure you’ll find some interesting results. But let’s move forward a little more. What else can this way of thinking change about the Mars situation? Instead of the delay for communications, let’s think about the delay for travel from Earth to Mars. The rule of thumb is about nine months on average. How does that completely change the logistical situation we have to deal with? One thing people have thought a lot about is how you build an in situ supply chain. And I use a chain literally as an example of how you actually create these links. If you look into it in a little more detail, it is super complex, with many, many different stages. You need blasting, right?

You need to transport all this raw material somewhere in order to separate it and crush it. Then you need to do smelting, then actual forging, and finally manufacture the chain. There are a lot of different steps, and every single one of them has its own supply chain. Now let’s think about this other approach, something we do every day: scrappy resourcefulness. Can you take that window, which is kind of round, cut it into links, and turn it into a chain? Yeah, you can probably do that. Can you take that window, which is kind of round, and turn it into a wheel? Yeah, I think you can do that. In fact, we do things like this every single day. They’re imperfect solutions. And because they’re imperfect solutions, I ask you again: is it really hallucinating, or is it just imagineering, imagining what else you can repurpose things for?

So what I would argue is that perhaps this ability to use an AI to give you other purposes for what you have, and to integrate that into, perhaps, a closed-loop manufacturing solution, might be another way we can leverage this new capability being built now. And we’re almost there. I had an opportunity to play with Copilot before this talk, and I asked a very simple question: how do I put an Accord engine into a Corolla? It had some really nice insights. Are they perfect? Absolutely not. But the process and the parts dependencies, it kind of got right. That’s part of the training data, and it’s able to retrieve it. Good. The hard part is when I asked: can you create a CAD drawing? In other words, when you take one engine and mount it into another car, the mounting holes and everything will be different. Give me a CAD drawing of how you would adapt it.

It said, well, sorry, I can’t generate CAD drawings. But does that mean we can’t generate them forever, or only right now? Maybe this is something we can have the AI automatically generate. The other thing it’s already able to do is trade-off analysis. Trade-off analysis is super important if you want to be scrappy and resourceful. In that situation I mentioned before, turning that round window into a wheel is something you might have to do in a super tight, resource-constrained environment. What you want the AI to do when you ask, is it possible? is to say yes, but also tell you the implications. Is it saying that if you do this, you can only drive at 20 or 40 miles an hour instead of 60? Are there limitations in the terrain? All of this is extremely useful information that we, as people, need in order to make smart decisions. That goes back to what the panel was talking about in terms of how to combine humans and AI. This is now possible. Finally, there are some crazy imagineering things which, in this case, I would agree are probably hallucinations, because I cannot imagine taking one of these car engines and building a motorcycle or go-kart out of it. But you know, there’s always room for improvement. I think the important thing here is that we’re close.

This is not too far off. So the last thing I will ask you is: how would all this change if I need to go to deep space, where there is no logistics supply chain? How would we use this example to change the way we think about security? Because everything that happens back on Earth might not be something we really, really care about. That purposeful way of thinking about security might be the way we reach deep space.
