Governing the Future: Federal Cybersecurity in the Age of Edge and AI

Show notes

In this episode of the Access Control podcast, host Ben Arent interviews Steve Orrin, Chief Technology Officer at Intel Federal, about the evolving landscape of federal cybersecurity in the age of edge computing and artificial intelligence.

Key Takeaways

  • Establishing a hardware-based Root of Trust is crucial for securing edge devices that may be physically accessible outside traditional network perimeters.
  • Protecting AI models requires governance throughout the entire lifecycle, from ensuring diverse training data to continuously monitoring models post-deployment.
  • Confidential computing, which uses hardware isolation and memory encryption, enables secure data sharing and analytics.
  • Engaging security, legal, and compliance stakeholders early is essential when developing and deploying AI solutions in the public sector.
  • Organizations should maintain an accurate inventory of all assets (including data) and leverage built-in platform security features.

Topics Covered

  1. Challenges of securing edge devices outside traditional network perimeters
  2. Importance of hardware-based Root of Trust for edge devices
  3. Protecting AI/ML models throughout their lifecycle
  4. Enabling privacy-preserving federated learning
  5. Transitioning from edge sensing to edge computing for real-time decision making
  6. Applying Zero Trust principles and confidential computing to protect data in use
  7. Engaging stakeholders early when developing AI solutions
  8. Tips for maintaining asset inventory and leveraging platform security features

Resources

Intel Federal Public Sector Solutions

About the Guest

Steve Orrin is the Chief Technology Officer and Senior Principal Engineer at Intel Federal. He leads technology strategy, architecture, and customer engagements for government and regulated industries. With over 20 years of experience in cybersecurity, AI, edge computing, and supply chain security, Steve brings deep expertise to help organizations navigate the evolving threat landscape.

Show transcript

S1: Welcome to Access Control, a podcast providing practical security advice for fast-growing organizations, advice from people who've been there. In each episode, we'll interview a leader in their field and learn best practices and practical tips for securing your org. Today's guest is Steve Orrin, Chief Technology Officer and Senior Principal Engineer at Intel Federal. Steve leads technology, architecture, strategy, and customer engagements for government and regulated industries at Intel. He brings deep expertise in cybersecurity, AI, edge computing, supply chain security, and rapid development. Steve, thanks for joining us today.

S2: Thank you, Ben. Pleasure to be here today.

S1: To start, can you share your background and role at Intel Federal?

S2: Sure. So for the first 20 years of my career, I did cybersecurity startups across a variety of domains, from edge and client security all the way through to mainframe and web security. After joining Intel in 2005, I ran security pathfinding for about nine years, looking at the forward leading edge of security capabilities and how to bring those to market quickly. And then about 10 years ago, I took on the role of CTO for Intel Federal and have been driving our technology engagements across the federal government and public sector, covering everything from AI to high-performance computing, security, edge, and everything in between as it applies to the enterprise and mission needs of the US government.

S1: Great. And can you talk a little bit about your prior startup experience and how it shaped your approach to cybersecurity?

S2: So when you're in a startup, you're really trying to understand what the customer needs and anticipate where their needs are going to be. And in the cybersecurity domain, it's often understanding the threat landscape and how that's going to evolve and affect the businesses and organizations that you're looking to serve. And so one of the ways that I've always brought that startup mentality is working with the customer, understanding what their challenges are, understanding the threat landscape, obviously, but also understanding how they adopt technology, what other technologies, infrastructures, processes, and procedures they have, because you can have the best security widget on the planet, but if it doesn't scale, or if it's not user-friendly, or if it doesn't meet the practical needs of the organization, you're not going to be successful.

S2: So oftentimes, I leverage the learnings I gained in my startup experience to best understand how to get successful technologies adopted by organizations, how to help them address the risks that they're dealing with today, and help anticipate what risks they're going to have to deal with in the future so that our technologies and capabilities in our ecosystem can better serve them. And so really, it's taking both an architectural and technology role, of course, but it's understanding the customer environment and the customer needs and mapping those two worlds together that has really helped shape how we bring security and broader technologies into the marketplace and service the customers in their domain and what they're trying to accomplish.

S1: And I guess in the world of startups or federal, we're all facing the same threats. It's not like a threat actor picks certain organizations. Everyone has the same ever-changing sort of landscape.

S2: Well, I think what we find is that the bigger targets obviously get the more complex and targeted attacks and threats. So the federal government is obviously a big target. It's got nationally sensitive information. It's got broad reach. But what we find is that the types of attacks and techniques that are brought to bear against federal and public sector entities make their way into other regulated industries, financial services, healthcare, and then into the broader market. So you can almost think of the kind of security threats that the government is dealing with today and in the future as trickling down to the broader industry over time, as the tools and techniques become more pervasive to not just nation-state actors, but to cyber criminals and hackers across the world. And so we do see that the juicier targets get the most complex and most elegant attacks and face the strongest threat, but those threats don't stay there. They become more commoditized and target the broader industries. And so one interesting thing about working with the public sector and understanding the threats they're dealing with is that it gives you an eye to the future of what regular industry and other commercial organizations will be facing in the not too distant future.

S1: And can you give an overview of what these sort of trends you're currently seeing in the cybersecurity landscape?

S2: So obviously you can't turn on the news without hearing about large data breaches and ransomware, and that's really pervasive. But I think when you look at the kind of attacks the federal government is worried about, things like supply chain attacks are top-of-mind right now. Also, more complex campaigns that combine phishing, advanced persistent threats, advanced malware, and targeted techniques to go after those government assets. And so what you find is a level of coordination when you're dealing with nation-state actors attacking nation-state systems. And so the threat landscape encompasses multiple areas. They're looking at various vectors, whether it be over-the-air, web, or targeted systems. But I think right now the hot button is still supply-chain-based attacks, whether it be software-based or hardware-based threats coming into legitimate systems or into the open source community, and leveraging vulnerabilities there to get a foothold.

S2: The other thing that we see as a trend is that much of the malware that's been employed against the federal government, as well as in regulated industries, is much stealthier. It's about long-term persistence, low-and-slow techniques, as opposed to just ransomware-ing an organization quickly. They want to get in there and be persistent and not be detected. And so they're looking at under-the-OS styles of techniques where your security tools often don't have visibility. And so the trends that the federal government is looking at are those deeper-into-the-system kinds of attacks, or attacks that scale across multiple systems, taking advantage of various vulnerabilities strung together into a cohesive malware or advanced persistent threat campaign.

S1: You know, I think one interesting thing about the government is, one, it's a very large organization with many different departments, and I think many of them are in different phases of their cybersecurity literacy. And I'm sure there's lots of digital transformation going on in between; the NSA is probably on the cutting edge, while Parks and Recreation is probably less sophisticated, but also probably less of a target. What have you seen as some practical strategies as people go about building cyber resilience while they transform and deal with the threats they encounter?

S2: So you bring up a good question, Ben. Like you said, different organizations have different levels of maturity as well as different levels of funding and staff to address the risks. And it really comes down to understanding your risk, understanding your assets. Broadly speaking, getting transparency or visibility into both your physical and digital systems and the data and assets that you're trying to protect or to leverage for your enterprise and mission systems. So whether you're the Department of the Interior or Forestry or you're the Department of Defense, you have different levels of risk, but you also have different kinds of assets that you're trying to protect. And so one of the key strategies, as we look at some of the mandates and executive orders and the strategies that have been published, is, at the very beginning, getting a good handle on your risk posture and risk management to understand where your risks are: "Where are the assets? Where are they deployed? Who's managing or who owns them? What are the interfaces?" And the term that everyone is using, that's in the executive order, is to get towards a Zero Trust architecture.

S2: And at the core of that is changing the dynamic from accepting a lot of risk, letting everyone in and, once they've authenticated, just letting them have access, to that sort of default deny and verification of every access point, of every transaction. And when you do that categorically, it doesn't matter if you've got millions of systems or 10 systems. Taking that approach allows you to get a better handle around your risk because it reduces the amount of risk you take in and allows you to only give access where it is needed, or to give limited access and then shut it down after a transaction has occurred. Taking that approach helps organizations get a better handle on the massive amount of risks and assets that they're trying to protect and trying to secure, especially when you look at some of the more complex systems.

S2: It was one thing when we had just a data center with a bunch of servers and some clients internally, but now we've got edge computing, we've got remote workers, we've got cloud services and other microservices all coming in, and data flowing all across the network. That old model of perimeter security just doesn't work. And so really it's about putting security and the risk mitigations where the data lives and having them follow the data, follow the transaction. And so what organizations are doing is taking a look not just at what their assets are and where the data is, but at the interactions: how things are coming together, where the data flows, what the transactions are, what the actual application workflows are, and applying the right risk rubrics, if you will, to controlling those accesses. And then taking best practices like segmentation and others to help isolate systems from each other.

S1: One interesting point that you touched on was that previously we had data centers, and you had people who were approved to go into the data center. But with edge, you have basically computers anywhere in the world that are connecting back to a central place. And I think this is an interesting aspect where Intel can help a lot: the hardware Root of Trust, making sure that those devices are actually the ones in the field. How can you trust this IoT device connecting back to your data center? I wonder if you can expand on how Intel is innovating around a hardware-level foundation of trust.

S2: So it's a really good point there. Edge computing is by definition outside the domain of the organization, whether that's a power distribution station sitting out in the field, or sensors on a light pole, to vehicles and everything in between. You don't have the guards with guns and the locked doors and all the security cameras watching those assets like you do in an enterprise data center. So at its core, it's exposed: anyone can come physically up to that system or can digitally attack that system outside the perimeter walls, if you will. And so that's why you can't rely on implicit trust in those systems; you have to be able to establish and verify that trust.

S2: Now, this is one of those things where hardware really is a key part of that story. Having an immutable Root of Trust built into the hardware lets you verify, "Is this the system that I expect to be talking to, and can I validate that? Has anything changed on that system since the last time we spoke?" As an edge computing device is gathering data, how do you verify that the hardware, the firmware, and the software and applications have not been tampered with? How do you verify that the communications links that you are using for that are valid and authenticated? And so, using a hardware Root of Trust, built into the Intel hardware are tools and techniques to validate everything up the stack, from the hardware all the way through the software and operating system into the application space, to give you keys that you can bind to those workloads and attest, so you can verify that information prior to accepting data or trusting that system.

S2: Then we've recently taken that a step further and provided secure containers in the form of secure enclaves, or confidential computing, to allow you to protect data even if the application or the runtime environment may have been compromised. And this can be in a cloud environment or at the edge: being able to use the hardware to protect that data and that application code in an inviolate, encrypted memory container allows you to trust the most important bits, the keys, your application code, your AI inferencing model. Even when adversaries could potentially get physical access to the system and try to probe it, or try to load malware or other adverse configurations onto the platform, it gives you a safe place to operate, even in light of targeted attacks.
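
To make that measure-and-verify pattern concrete, here is a minimal sketch in Python. It is not Intel's attestation API: the report format, the shared-key HMAC, and the known-good hash values are all assumptions made for illustration. A real deployment would verify a quote signed by the hardware Root of Trust (a TPM or TDX/SGX quote) rather than a pre-shared key, but the decision point is the same: compare measurements against known-good values before trusting the device's data.

```python
import hashlib
import hmac
import json

# Hypothetical "known good" measurements captured when the edge device was provisioned.
KNOWN_GOOD = {
    "firmware": "placeholder-firmware-hash",
    "bootloader": "placeholder-bootloader-hash",
    "application": "placeholder-app-hash",
}

def verify_report(report_json: str, device_key: bytes) -> bool:
    """Accept data from an edge node only if its authenticated measurement
    report matches what was provisioned; refuse on any mismatch."""
    report = json.loads(report_json)
    # 1. Check the report really came from the device's provisioned key.
    expected_mac = hmac.new(device_key, report["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_mac, report["mac"]):
        return False
    # 2. Compare every measured component against the known-good values.
    measured = json.loads(report["payload"])
    return all(measured.get(name) == good for name, good in KNOWN_GOOD.items())
```

The design point matches what Steve describes: the trust decision happens before the data is accepted, and it is anchored in values the device cannot silently change.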

S1: Yeah, so could you expand a little bit more on trustworthy AI, since it's a hot topic? It's also become ubiquitous across industries; there are hosted services you can use, you can run local models, and Apple came out with Private Cloud Compute, which will be released with the latest version of the iPhone and has an interesting secure enclave approach as well. Can you give some examples of how organizations and different companies are using private compute to keep personal information and data secure?

S2: Looking at AI, there are sort of two ways you have to look at, "How do I secure my AI?" A lot of focus, and it's an important focus, is, "How do I protect the AI where it's deployed," like on the iPhone, or in your laptop, or in a sensor that's doing inferencing and object recognition, and so forth. So protecting the AI where it's deployed is a key focus area. And things like confidential computing and trusted containers and other security tools are being leveraged to make sure that that AI model is not being tampered with, and to make sure that it's still operating within the parameters that you expect it to. But there's another aspect that has to be looked at when organizations are looking to adopt AI for whatever business application, and that is the life cycle. Because AI is not just a thing I deploy, it is a whole process, starting with the data, the sourcing, the modeling, the labeling and tuning; all of that goes into generating that model or algorithm and has to be protected.

S2: The key stepping stone to all this other talk about responsible use and securing AI is getting visibility into all those steps so that you can attest to what data sets went into driving that model. "What tuning was done? What optimizations were done? What labeling? Who did the labeling? Where did these models come from?" Especially in the more modern approach, where I take a large language model and then adapt it for a specific domain: "Who did the training of the original one? What changes happened when I did that miniaturization?" It's not saying that you have to lock down the security aspect, but you have to have visibility. Because at the end of the day, the question isn't, "Is that a secure AI?" The question is, "Can I trust that AI?" And trust is bigger than an encryption algorithm or a key. It's, "Do I have appropriate information to make a risk determination that I can trust the results that I'm getting from this particular AI model?" And that's something that has to look deeper into the life cycle.

S2: And so more mature organizations are looking at the whole process, understanding their sources, their, if you will, AI supply chain. Where are they getting the data sets? Where are they getting the model? Who's doing the model tuning for them? How are they adapting that model to their specific domain? And are they doing the right things as far as the configuration and infrastructure to support that AI? It sounds like a lot of work, and it can be. But if you build that in from the get-go in your AI development projects, you build in a governance framework that has the right controls, and you have visibility and transparency along the way, all of those little breadcrumbs and building blocks will lead to a better outcome. And it will become easier to make those trust decisions on the resulting AI if you have all those artifacts across the lifecycle. That gets you pretty far along the way towards trusted AI.
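
One lightweight way to keep the "breadcrumbs" Steve describes is a lineage manifest that every lifecycle step appends to. The sketch below is a generic illustration, not a standard governance format; the stage names, model names, and dataset identifiers are made up for the example.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class LineageEvent:
    """One breadcrumb in the model's lifecycle: data sourcing, labeling, tuning, etc."""
    stage: str      # e.g. "data-sourcing", "labeling", "fine-tuning"
    actor: str      # who performed the step
    inputs: list    # dataset or base-model identifiers
    notes: str = ""
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class ModelManifest:
    model_name: str
    base_model: str
    events: list = field(default_factory=list)

    def record(self, event: LineageEvent) -> None:
        self.events.append(event)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Usage: every lifecycle step appends an auditable record, so the deployed
# model arrives with the artifacts needed to make a trust decision.
manifest = ModelManifest("oncology-triage-v1", base_model="generic-llm-7b")
manifest.record(LineageEvent("data-sourcing", "hospital-consortium",
                             inputs=["imaging-set-2023"], notes="de-identified scans"))
manifest.record(LineageEvent("fine-tuning", "ml-team",
                             inputs=["generic-llm-7b", "imaging-set-2023"]))
print(manifest.to_json())
```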

S2: And then the last piece I want to mention is that AI isn't, "Well, I did it. I'm done. I can go and do something else." In some respects, it's a living thing. It's always evolving. It's learning from the prompts. It's learning from the interactions. It's constantly training itself. And so keeping an eye on your model as new inputs come in, as new data is presented to it, and making sure that it isn't being poisoned or starting to go astray and hallucinate, requires ongoing, continuous monitoring of the AI as well. So securing the lifecycle, protecting the model where it's deployed, and then monitoring it on an ongoing basis are really the strategies for getting better trust into our AI systems.
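
Continuous monitoring can start very simply: log a baseline of the model's inputs or confidence scores at deployment time and compare new windows against it. Below is a rough Population Stability Index check in plain Python; the 0.25 alert threshold is a common rule of thumb rather than a standard, and real monitoring would track many more signals than this.

```python
import math
from collections import Counter

def psi(baseline, recent, bins=10):
    """Population Stability Index between two score distributions."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucketize(values):
        counts = Counter(
            min(max(int((v - lo) / width), 0), bins - 1) for v in values
        )
        total = len(values)
        # Small floor so empty buckets don't blow up the log term.
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(bins)]

    p, q = bucketize(baseline), bucketize(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Confidence scores logged at deployment time vs. scores seen this week.
baseline_scores = [0.91, 0.88, 0.95, 0.90, 0.87, 0.93, 0.89, 0.92]
recent_scores = [0.55, 0.62, 0.58, 0.60, 0.57, 0.64, 0.59, 0.61]
if psi(baseline_scores, recent_scores) > 0.25:
    print("Score distribution has shifted; flag the model for review.")
```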

S1: And I can imagine this is very critical in, let's say, healthcare, for example. If you're making diagnoses based upon AI recommendations, you need to make sure that the foundational source of "what is a cancer" is coming from professionals and not from hallucinations.

S2: And it gets even more interesting when you look at healthcare as a great example. The data actually really matters here, because when you're making a diagnosis, a lot of times the result is based on what data was available to train the model. And oftentimes you find that the data sets, they call it bias, but it's more intrinsic than that. You may have a large data set of, let's say, white males aged 18 to 35 that trains the model on diagnosing this particular cancer. But when it gets deployed, it's being deployed on a larger population. And so you have to anticipate that the AI is going to be used on a broader population, across genders, across races, across ethnicities, even across country boundaries, and understand that if your data is too homogenous, you're not going to get good outcomes.

S2: And so even if it's not malicious, the data will potentially drive bad outcomes. And we've seen examples of this in a lot of AI use cases, misdiagnoses or misrecognitions that happened not because someone poisoned the data set, but because they used too limited, not diverse enough, a set of data to drive the initial model in its training. And so having visibility will help you at the end stage to make sure that you do deploy the right AI for the right questions. In the case of cancer diagnosis, if you only care about diagnosing 18 to 35-year-old white males, you're good. But if you're looking at a broader population, you want to make sure you have a really good, diverse data set that drives that model from the get-go. And the way you know that is by having visibility into the data sourcing that started this.
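
A simple homogeneity check along the lines Steve describes: before training, report how each demographic value is represented in the data set and flag anything below a floor. The field names, the 5% floor, and the toy records below are illustrative assumptions; this is a sanity check, not a substitute for a proper bias and fairness review.

```python
from collections import Counter

def coverage_report(records, attribute, min_share=0.05):
    """Return {value: (share, underrepresented?)} for one demographic attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: (n / total, n / total < min_share) for value, n in counts.items()}

# Toy training set: dominated by one age band, which is exactly the failure mode described above.
training_set = (
    [{"age_band": "18-35"}] * 900
    + [{"age_band": "36-65"}] * 80
    + [{"age_band": "65+"}] * 20
)
for value, (share, flagged) in coverage_report(training_set, "age_band").items():
    print(f"{value}: {share:.1%}" + ("  <-- underrepresented" if flagged else ""))
```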

S1: And then I guess, continuing on from healthcare, there's compliance. HIPAA is a big compliance regime around the use of your healthcare data and where it gets sent. This probably comes back to, "Is it going to an on-premise data center in the hospital? Is it going up to the cloud?" How do you think about architecting solutions that comply with these compliance regimes that different organizations have to take into account?

S2: It's a fascinating question, and there are a couple of strategies. I'll break it into two buckets. One is having data controls and pathways for sharing the data appropriately across organizations, whether that's being able to do multi-party analytics in a controlled way, using things like confidential computing for multi-party analytics, or private offline data centers to be able to do that multi-party training across multiple different data sets. The other is taking a different approach, which is the anonymization of the data. So one is you share the data so you get the richest AI. The other is, "Do I really need all that PII, or do I really just need the scans and the diagnosis? I don't necessarily need to know the name or the age."

S2: And so it comes down to putting in the right data controls, and this is where a governance framework really is powerful: by implementing those controls early, you can then help inform how your data or how the AI is used much further downstream. And in the case of healthcare, where there are strong controls around data sharing and need-to-know, it's important to build those controls in so that when you want to use this AI or you want to be able to share the data, you've already built in the right controls, whether it's the masking to strip out the PII, or it's the safe harbors that you're creating with encrypted data links and multi-party secure analytics to allow for collaborative sharing in a controlled way. And those are going to be driven by those compliance and governance frameworks. But you have to build that in. It's not something you can think about at the end. It's like, "Oh, we want to do this now, what do we do?" If it's not built in, it's going to be very hard to do it correctly later downstream when you're ready to actually take advantage of those systems.
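
As a rough sketch of the masking idea, the function below strips direct identifiers and replaces the record key with a salted hash so de-identified records can still be joined without exposing who they belong to. The field list and the salting scheme are assumptions for illustration; a real program would follow the applicable regime (for example HIPAA's Safe Harbor or expert-determination rules).

```python
import hashlib

# Fields treated as direct identifiers in this sketch only.
DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address"}

def mask_record(record: dict, salt: str) -> dict:
    """Strip direct identifiers and replace the patient ID with a salted hash
    so records can still be linked across datasets without revealing identity."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["patient_token"] = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    del cleaned["patient_id"]
    return cleaned

# Usage: only the scan reference, diagnosis code, and an unlinkable token leave the hospital.
raw = {"patient_id": "A-1001", "name": "Jane Doe", "ssn": "000-00-0000",
       "scan_ref": "ct-2024-0117.dcm", "diagnosis_code": "C34.1"}
print(mask_record(raw, salt="per-project-secret"))
```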

S2: What we've seen organizations do in the early stages, when it's about data sharing, is create those safe harbors, whether it's direct encrypted links, confidential computing, or offline air-gapped environments, to be able to do the model tuning in a protected way and then only retrieve the model. We've seen this in a lot of the federated models, where one approach, instead of having a central body where all the data comes together and you train there, which presents really hard challenges from a compliance perspective, is to distribute the training out to the data owners, each one training their piece of the puzzle. And the only thing that gets shared is the resulting weights, collectively. That way you help protect the PII or the regulated data from ever leaving the system, and the only thing that's quote "leaving" is the weights that were trained, and then that gets collected together into a federated model. And that approach is actually gaining traction to help deal with regulated industries, in finance, healthcare, and other organizations.
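
Here is a minimal federated-averaging sketch of the pattern Steve describes: each site trains locally on data that never leaves it, and only weights travel to the coordinator. It uses a toy logistic-regression update with NumPy; the two "hospital" datasets and all hyperparameters are synthetic stand-ins, and a production system would add secure aggregation and privacy protections on top.

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """Each data owner trains locally (one logistic-regression layer via plain
    gradient descent); the raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(weight_sets, sample_counts):
    """The coordinator only ever sees weights, weighted by each site's sample count."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(weight_sets, sample_counts))

# Two toy "hospitals" holding different data; only weights are exchanged each round.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
site_a = (rng.normal(size=(100, 3)), rng.integers(0, 2, 100))
site_b = (rng.normal(size=(40, 3)), rng.integers(0, 2, 40))

for _ in range(10):  # federation rounds
    w_a = local_update(global_w, *site_a)
    w_b = local_update(global_w, *site_b)
    global_w = federated_average([w_a, w_b], [100, 40])
```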

S1: This reminds me of a project I worked on very early in my career, which was monitoring people for independent living, where we deployed cameras in people's homes that would monitor different actions. So if they were dwelling under a door, if they were having trouble walking, we would do gait analysis. But what was interesting is that people always think of cameras as streaming all of their data back to some central hub, but it would just measure their gait and send us the gait data, so we would never get anything else. It really just acted as a sensor.

S2: Exactly. That's a great example.

S1: And I think that's kind of a good segue into my next thing. This is an example of an edge sensor capturing information. And from manufacturing to autonomous systems to in-care monitoring, there are all these challenges, plus the resource constraints and intermittent connectivity of these devices. What are some ways in which you've seen edge AI, and edge compute generally, deployed and be successful?

S2: So I think that's at the heart of your question. It's about edge computing. The transition and shift we've seen in the last several years is from edge sensing to edge computing. And really, whether it be because of regulations or just because of bandwidth and latency, the heart of it is pushing more of the intelligence out to the edge. Because the reality is you don't want 20 hours of streaming 4K video data shipped back to the cloud. Number one, it's very costly. It's a lot of storage, a lot of communications, and that assumes you have good storage and good communications. And then the round trip of waiting to get that information back to make a decision is often too long.

S2: So the trend has been to push more computing capability out to the edge, and that's where edge computing is really starting to shine. Combine that with other trends, like AI being able to do inferencing on small form factor computing devices, and 5G and other connectivity options that let a distributed edge communicate amongst itself, and it allows you to operate not just with real-time decision-making, but, like you said, in intermittent or denied communications environments. You don't want your autonomous vehicle to stop being able to tell you where to drive if you go through a tunnel. It operates as a self-contained entity. And the same is true of many of these edge computing applications, whether it be safety, autonomous systems, or regular sensing: you want to have a real-time response to an event. What you want on the back end is the trend, not all the raw data. You want to know what decision was made so that you can do more of the trend analysis, more of the advanced AI across multiple domains.

S2: But at the edge, in the intelligent sensing, you want to be able to know, "Did somebody come up to my door? Is the drone going to the right place? Is the robotic arm doing what it's supposed to do? Are there defects on my manufacturing line?" You need to know that right away. And that decision can be made right there. It doesn't need a massive large language model sitting in a cloud. It needs to be able to do the inferencing in that environment. The benefit of that, like I said, is you don't have to worry about the intermittent connectivity. The devices can be self-contained or autonomous on their own. They can self-organize in the case of autonomous systems. But it also means reducing the load, from both a cost and a latency perspective, by sending just the results and letting the edge do the work of identifying what matters. In the case of change detection for surveillance, you only care if something moved in front of the camera. You don't want to see the 20 hours of just nothing. And so being able to get to those real-time decisions and only sending those back reduces the bandwidth requirements and the amount of storage. And again, it allows you to get your alerting much quicker by having the edge computing.
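
A toy version of that change-detection idea: compare consecutive frames and emit a small event only when something actually changed, instead of shipping raw video upstream. The threshold, frame sizes, and event format below are arbitrary assumptions; a real deployment would use a proper vision pipeline, but the bandwidth argument is the same.

```python
import numpy as np

def changed(prev_frame: np.ndarray, frame: np.ndarray, threshold=12.0) -> bool:
    """Cheap change detection: mean absolute pixel difference between frames."""
    return float(np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)).mean()) > threshold

def run_edge_loop(frames):
    """Simulated edge loop: emit compact events instead of streaming raw video."""
    events = []
    prev = None
    for i, frame in enumerate(frames):
        if prev is not None and changed(prev, frame):
            events.append({"frame_index": i, "event": "motion"})  # what gets sent upstream
        prev = frame
    return events

# Synthetic 8-bit "camera" frames: a static scene, then something moves.
rng = np.random.default_rng(1)
static = rng.integers(0, 255, size=(24, 32), dtype=np.uint8)
moved = static.copy()
moved[5:15, 5:15] = 255
print(run_edge_loop([static, static, moved, moved]))
```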

S2: And so we're seeing this massive push towards the intelligent edge as opposed to just sensing at the edge. And that's been enabled by computing being able to do that at the edge: really powerful CPUs with memory and storage and connectivity deployed right up at the sensor, so you have smart cameras, smart systems. We've been talking about IoT for years, but the Internet of Things really started out as connecting your sensors. This next wave is making those sensors smart, intelligent, and actually having the AI deployed all the way out to the edge. And that transformation, along with things like 5G, is really enabling this next wave of transformation in the public and private sector.

S1: Yeah, a lot of shifting there. With all of these things, let's say we have devices on the edge, we have compute, there are all these security policies that people need to maintain and control, and it's a constant procedure to keep them updated. What are some examples that you've seen that have worked well within these Zero Trust environments to make sure that security policies and controls stay true?

S2: It's a never-ending battle of, "How do I make sure that I'm compliant, my applications are doing what they're supposed to do, and they're secure and protected?" And it goes back to a couple of key things we talked about. Number one is having proper risk management. So you're applying the right policies for the data and the applications consuming it, deploying those policies to the edge or to the systems, and understanding the risk profiles. Those activities help inform the mitigating controls that you need to implement. The challenge is, "How do we keep up to date with every new vulnerability that gets exposed, every new risk that gets introduced?" And it's a constant firefighting battle. What we're seeing is organizations looking at ways to automate, so they can, what I call, 'automate the stupid stuff': the everyday firefighting, the vulnerability management, the patch management activities, the everyday work of making sure my systems are configured correctly, have the latest updates and the latest patches, and that the security controls on the networks are implemented.

S2: Using automation allows your cybersecurity teams and your risk management people to focus on the harder problems, the 10% or 20%, the nation-state adversary or the new complex system that's being introduced. So one strategy is automating the things you can. The other is, again, taking a risk management approach and marrying that with Zero Trust principles. So if you start with a default deny and then validate [inaudible], with continuous authentication and continuous monitoring, you will reduce your overall risk. Now, there's an old saying I like to quote: "Sometimes it's okay for the CEO to lose email for 30 minutes if it blocks a data breach." And so we have to get out of the mode where certain people have to be super users and godlike, or where we just give certain applications a waiver because they're super important.
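
The default-deny idea reduces to a small amount of logic: nothing is granted unless an explicit policy entry and a healthy, attested device both say yes, and even then only for a short-lived window. The sketch below is a generic illustration, not any particular Zero Trust product's policy engine; the identities, resources, and TTL are made-up examples.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AccessRequest:
    identity: str
    device_attested: bool  # did the device pass its attestation/health check?
    resource: str
    action: str

# Explicit allow-list; anything not listed is denied by default.
POLICY = {
    ("analyst", "patient-scans", "read"),
    ("pipeline-svc", "model-registry", "write"),
}

def authorize(req: AccessRequest, ttl_minutes: int = 30):
    """Default deny: grant only on an explicit policy match and a healthy device,
    and even then only a short-lived grant that must be re-evaluated."""
    if not req.device_attested:
        return None
    if (req.identity, req.resource, req.action) not in POLICY:
        return None
    return {"granted": True,
            "expires": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)}

print(authorize(AccessRequest("analyst", True, "patient-scans", "read")))
print(authorize(AccessRequest("ceo", True, "patient-scans", "read")))  # denied: no standing privilege
```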

S2: Those days have to be gone, because the data breaches have gotten to the point where we're almost numb to the sheer numbers of credit cards and user accounts that have been exposed and the amount of ransomware that has happened. And a lot of that is because we have legacy policies that give inherent access, that give inherent trust. By shifting to the Zero Trust approach and implementing it from a risk management perspective, we're going to reduce our overall risk. We're not going to allow these long-lived, pervasive attacks to occur. Attackers will still get in. It's going to happen, but they're going to be isolated and contained so that it doesn't become catastrophic throughout the organization. It will break things. Like I said, the CEO may lose email for 20 minutes. And we've got to be okay with that as an organization when we look at the overall risk and ask what's more important: preventing a data breach, or limiting access to a particular application for a short period of time. Taking that different approach to how we secure is a fundamental way we get more comfortable with deploying cybersecurity at scale.

S1: Yeah. And from a technology perspective, I know we touched on it a few times, there's confidential computing. I don't know if you can expand a little bit more on how this, from a technology perspective, can help mitigate some of the issues that we brought up.

S2: So confidential computing, which is about five years old in the industry, is a concept of protecting data in what we call the last mile. For many years, we've had data-at-rest security. That's full disk encryption and file encryption. We've had data-in-transit security with TLS, IPsec, and secure network tunnels. The last mile was, how do I protect data while it's in use? How do I protect the application that's transacting on the data while it's doing its transaction? And that requires being able to protect the data in memory from the CPU, where it's actually happening. Confidential computing addresses that last mile. It's a combination of CPU-level controls that cordon off that application and its data and provide hardware-controlled access, and then encrypted memory. So the memory that is holding the data while it's being transacted on is encrypted from the CPU out. What that means is, from a physical attack perspective, if someone were to walk up to a system and try to drop a probe, or pull out the memory and try to read it on another system, it would be encrypted. And from an application security perspective, you could have a system with tons of malware reading everything in memory. When it tries to access the protected memory, the CPU will block it, because it's encrypted and it's controlled from the hardware to prevent access from any application that's outside that secure container, that secure enclave.

S2: Again, looking at it from a risk mitigation perspective, I can take the most important data or the most important applications, put them in that secure enclave, and worry about protecting that as a self-contained entity. This is like segmentation taken to the next level. And then if something else is running on that system, think about the cloud with co-tenancy, I don't have to have visibility into that other VM if I know my VM is protected from all forms of digital and physical attack. So it allows me to create different standards or different security controls based on the things I own or really care about, and then operate through the things that I can't control or that are going to have less security. And it goes back to the idea of resiliency: being able to protect my systems in the face of actual threat and attack, as opposed to just trying to recover, which is still an important part of your overall strategy. What confidential computing allows me to do is have that application be protected even when I'm under active attack, or from potential risks I don't know about today.

S2: And that really is one of the foundations of why we're seeing this broad adoption, both in the cloud, where you do have to deal with things like co-tenancy and microservices and remote access, so threats coming from all directions, as well as in edge computing, where you don't physically have tight controls on the system itself. People can come up, cut the wire on the gate, and walk through; there's one camera on that power distribution station, someone comes in the back side, and they physically access those systems. How do you protect the applications and data even from those physical access attacks? That's really where confidential computing helps fill that last gap in the puzzle around protecting the data and application while it's being transacted upon, from both digital and physical attacks.
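
One common pattern built on top of confidential computing is a key broker: the data key is released only to a workload whose attested enclave measurement matches policy, so a compromised host gets ciphertext and nothing else. The sketch below only illustrates that decision; the attestation dictionary stands in for a real, cryptographically verified SGX/TDX/SEV quote, and the measurement value is a placeholder.

```python
import secrets
from typing import Optional

# Policy: which enclave measurement is allowed to receive the data key.
RELEASE_POLICY = {"expected_measurement": "placeholder-enclave-measurement"}
DATA_KEY = secrets.token_bytes(32)

def release_key(attestation: dict) -> Optional[bytes]:
    """Release the data key only if the (already signature-verified) attestation
    says the requesting code is the enclave we expect; otherwise release nothing."""
    if attestation.get("quote_signature_valid") and \
       attestation.get("measurement") == RELEASE_POLICY["expected_measurement"]:
        return DATA_KEY
    return None

# The expected enclave gets the key; anything else, even with root on the host, does not.
print(release_key({"quote_signature_valid": True,
                   "measurement": "placeholder-enclave-measurement"}) is not None)  # True
print(release_key({"quote_signature_valid": True, "measurement": "tampered"}))      # None
```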

S1: And there's also this dynamic we see between security teams and developers, where security really locks things down and developers find a backdoor into systems. So it's about building systems that are resilient and deploying zero standing privilege so people can get their job done in the most secure way. I like that use of technology to get the goal done.

S2: Exactly. And it lets developers do what they do best, which is build code. And it helps the security operators stop being the "no" and instead say, "Okay, you want to deploy this application into this cloud. I'm going to put it here in this secure place so that we can protect it." And it allows you to be more of an enabler for the business and the DevOps folks, as opposed to a choke point for every application coming through.

S1: So with all of these things, we talk about securing and locking down systems, and we're balancing the need for a secure system with trying to be data-driven within all of these strict security and compliance frameworks. What are some strategies for engaging with security and compliance stakeholders early on, which I think we alluded to earlier, so that you build that good foundation and also build compliant data utilization for analytics and AI into your foundational system?

S2: So a really good strategy is getting the right stakeholders in the room early in the process. When we're talking about a lot of the AI and data-driven innovations that organizations are taking on, a really good best practice is, from the very beginning, when you're starting up that activity, having compliance, legal, security, and the business owners in the room in the requirements definition phase. It really serves two main goals. One is you get their input early, which means you are already building in the compliance requirements. You're building in the security controls that have to be put in. You've figured out the IP and rights and licensing part. The business owner is giving you their requirements of what they ultimately need. All of those requirements give you a richer set of capabilities, but it also gives them a level of ownership. And so as you move through the process, because you got them in at the beginning, when you start building the app and you're ready to go deploy, they have a sense of ownership. They were part of the process, so they're going to be with you on that deployment. They're going to help you throughout the process.

S2: And so instead of it being, "Well, here, we're going to throw this over the transom and let legal figure out what to do," legal now is a stakeholder. And they feel like they've been part of the process. It just culturally works better for them to help you get out the door. And it's the same thing with security: when you're ready to go deploy, they already knew what you were trying to do from the beginning. They know the requirements. They know what's going to be needed. They're already planning for the security controls, as opposed to having to stop everything and go figure it out. So it makes for a much more efficient way of deploying security and compliance for those applications, but it also means that what you get at the end is a much more secure, trusted, and compliant application or AI system, because you had the requirements from the get-go. You were already addressing them, whether it be making sure that you dealt with PII from the very beginning, or putting in the right security controls to prevent leakage. All of those things are absolutely critical to being successful and, just as important, scalable.

S2: I think the other thing is that a lot of these AI projects start in the lab. It's a really cool project. It's a great widget. And they say, "Great, let's go do something with it." And it breaks, because you didn't think about compliance. You didn't think about scale. You didn't think about the bandwidth. And it falls down on deployment. One of the ways we've seen companies get across that chasm of death is by planning for scale from the get-go, planning for compliance and security from the beginning, when they're still doing the experimentation. Even if they haven't fixed everything, if they already have a plan, it makes the transition to practice that much easier. And then the organizations that have to support it are ready. They've already planned and budgeted for what will be needed to deploy that application.

S2: One other thing that's important to know is that what you thought you were doing at the beginning oftentimes becomes something completely different when it gets into deployment. You were looking to solve cancer diagnosis for the state of Arizona, and that was going to be your project. Downstream, a year later, six other states and two other countries love what you've done and want to take it to their location. So think about, "How would I scale this beyond the use case I defined?" Whether that means building in the mechanisms to do additional training to be able to train for different, diverse populations. Or even taking something that was really good for one use case, like you talked earlier about gait detection, which was applied to a healthcare use case. What if I wanted to take that algorithm and apply it to athletes, see how they're running up and down the field, and make recommendations for their workouts? Understanding that your AI could be used in different ways allows you to think about it in a more modular, flexible way, and then you don't build yourself into a corner. And so those best practices help build a more robust, scalable system downstream by thinking a little bit outside the box from the beginning.

S1: Yeah. Yeah. And I like your answer, just bringing in the whole team. It's ultimately a people problem. Working in federal, I think people probably see government as a black box. But ultimately, even the person at the DMV is a real person dealing with their day-to-day problems.

S2: Yeah. And I think one thing people often think is that, "Oh, government is so different from everything else." And the reality is the federal government is actually a microcosm of every other industry. You want to talk about healthcare and insurance? The Veterans Administration is one of the largest healthcare providers in the world. Talk about finance? You've got the IRS, obviously, but Medicare and Medicaid are also among the largest payment systems in the world. And you have logistics and supply chain. Well, the DoD has to get people and vehicles all over the world. They have a lot of the same applications and problems that commercial industry and the private sector do, sometimes at a larger scale. But you find that a lot of the same best practices that are used in government can be used in industry and vice versa. They learn from each other. And there's a lot to be learned from what each one is attempting to do.

S1: Yeah. Which is a good segue into where can listeners go to learn more about your work at Intel Federal and what you guys do?

S2: So the best way to find out what we're doing is go to intel.com/publicsector, one word. That will show you all the innovations, the solutions, and the ecosystem we're bringing to bear to help the public sector achieve their mission goals.

S1: Great. And to close out, I always like to end with one practical tip for organizations that can help them secure their infrastructure. Do you have a tip for today's listeners?

S2: One tip is hard. I'll give you two. One is actually a simple one: asset management. You can't secure what you don't know. And it's not the asset management of yesteryear, where, "I just need to know all my systems and printers and network switches." Your data is an asset. Your transactions and applications and workflows are assets. Understanding what assets you have and what the interfaces are to those assets is critical for every step beyond that: the risk management process, Zero Trust. You can't secure what you don't know. So that's number one, know what you're protecting. And the second one is, leverage the security you already have. Most organizations don't even realize the amount of security that comes with the platform they've bought, inside the server, inside the laptop. There are a lot of security features that are there; they're part of the package, they came for free. You bought them already. Turn them on. Go into the BIOS and turn on Secure Boot. Go leverage the encryption acceleration. Turning on the features you already have will give you a really good baseline. And then you can go look at all the other security tools and mitigations you need. But if you're not taking advantage of what you already have, you're missing a whole lot of value and security that just comes with the platform.
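
Steve's first tip translates directly into a data structure: an inventory where data sets, applications, and workflows are first-class assets alongside servers, each with an owner and its interfaces. The sketch below is a deliberately simple illustration with made-up asset names; the point is being able to answer "who owns this, and what touches it?"

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str         # "server", "laptop", "dataset", "application", "workflow"
    owner: str
    interfaces: list  # what talks to it: APIs, network paths, data feeds

# "You can't secure what you don't know": data and workflows are assets too.
inventory = [
    Asset("edge-gateway-07", "server", "ops-team", ["mqtt://sensors", "https://hq"]),
    Asset("imaging-set-2023", "dataset", "data-office", ["s3://scans-bucket", "training-pipeline"]),
    Asset("triage-inference", "application", "ml-team", ["imaging-set-2023", "clinician-portal"]),
]

# A simple query the risk process needs: what touches a given data asset?
touching_dataset = [a.name for a in inventory if "imaging-set-2023" in a.interfaces]
print(touching_dataset)  # ['triage-inference']
```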

S1: Awesome. Those are some great tips.

S3: This podcast is brought to you by Teleport.

S1: All right, thank you, Steve.

S3: Teleport--

S2: Thank you, Ben. It was a pleasure being with you.

S3: -is the easiest and most secure way to access all your infrastructure. The open-source Teleport Access Planes consolidates connectivity, authentication, authorization, and auditing into a single platform. By consolidating all aspects of infrastructure access, teleport reduces attack surface area, cuts operational overhead, easily enforces compliance, and improves engineering productivity. Learn more at goteleport.com or find us on GitHub, github.com/gravitational/teleport.
