Let AI Drive Compliance and Provisioning in Your Identity Journey

WEBINAR ON-DEMAND

AI brings transformative possibilities, but also new security challenges that demand staying ahead of the curve on identity security. Watch us with our partners at intiGrow to learn how to strategically employ AI to safeguard against the very challenges it may introduce. We will unveil how Bravura Security is actively harnessing AI to help organizations keep pace with a constantly evolving security landscape and an unprecedented speed of change. This webinar explores the intersection of two powerful forces, artificial intelligence and identity security, and how organizations can use AI through Bravura Cloud to defend against the risks it creates.

Watch Let AI Drive Compliance and Provisioning in Your Identity Journey on demand, presented with our partners at intiGrow and recorded during our third annual Power of One Conference.

Key Highlights 

  • The Duality of AI in Identity Security: As AI becomes more sophisticated, it poses both an opportunity and a threat to identity security. Discover how AI can be a double-edged sword and what strategies can be adopted to harness its potential for bolstering security. 
  • Unveiling AI-Driven Threats: Delve into the evolving landscape of AI-driven identity security threats. From deepfake attacks to AI-powered phishing, we will uncover the novel ways in which cybercriminals leverage AI and how organizations can stay prepared. 
  • Leveraging AI for Defense: Explore innovative approaches to combating AI-driven identity security challenges. Learn how advanced AI techniques can be employed to detect, prevent, and mitigate threats, ensuring robust identity security. 
  • Adaptive Security Posture: Discover how AI can empower organizations to adapt their security posture in real-time. From anomaly detection to predictive analytics, explore how AI can enhance the agility and effectiveness of identity security strategies. 
  • Real-World Insights: Through real-world use cases, we will showcase how organizations can integrate AI into their identity security frameworks. We will walk through pervasive use cases and gain insights into best practices. 
  • The Human-AI Collaboration: Understand the importance of a collaborative approach between AI and human expertise. Explore how skilled identity security professionals can leverage AI tools to make informed decisions and respond to emerging threats effectively.

Presenters

 

Bryan Christ

Bravura Security

Senior Solutions Sales Engineer

Bryan specializes in security and access governance. For more than twenty years he has focused on open-source and software development with an emphasis on team leadership and executive oversight. Bryan is also an experienced Virtual Chief Information Officer in the Greater Houston area. 

 

Jim Skidmore

intiGrow

Vice President, Solutions Group 

Jim, a consultative Solutions Executive, helps clients implement on-prem and cloud-based SaaS solutions to achieve desired outcomes across cybersecurity, compliance and risk management, IoT, and AI. Jim has consulting experience in a variety of technical disciplines, including eradicating compliance issues.

What might once have seemed impossible is rapidly becoming reality thanks to growing developments in Artificial Intelligence (AI) technology. Developers are finding new use cases for AI in many different applications and industries, including Identity and Access Management (IAM) and cybersecurity. If you've been looking for a way to enhance your security posture and keep up with the ever-evolving threat landscape, we can help. Let's explore how AI and your identity journey can go hand in hand. 

The Technological Progression of AI — An Overview

To understand where AI is going, it's important to explore where it's been. Let's take a look at how AI has changed since its inception and where it could be headed in the next few years:

  1. Rule-based AI: One of the original iterations, a rule-based AI is extremely simple — it consists of a structured set of rules that lead to predetermined outcomes. These systems are immutable and unscalable, only capable of executing the tasks defined in their programming.
  2. Context awareness: Context-aware AI can retain information from previous interactions, which provides the context for future responses. This capability eliminates the need to continuously repeat yourself in your prompts, creating a smoother and more personalized user experience.
  3. Domain-specific mastery: This AI can understand and retain context, but it also has the tools to rapidly become an expert within a specific field. Google's DeepMind AlphaGo AI, for instance, was designed specifically to master the highly complex Chinese board game Go.
  4. Generative AI: Generative AI models — such as ChatGPT, DALL-E and Google's Bard — are a bit more human-like in their capacity for thinking and reasoning. Machine Learning (ML) and deep learning enable these models to comprehend complex concepts and use that information to generate creative solutions to new problems.

While we've only just entered Stage Four, with generative AI models exploding into the mainstream, experts are already forecasting what's next. It's uncertain exactly what the next stages of AI will look like. Its growing presence in our everyday lives, however, has the potential to benefit us in ways we may never have imagined.

Current Events and Developments in AI

While the Singularity is highly speculative and still just science fiction, it's impossible to deny the enormous impact AI will have on our world moving forward — not to mention the impact it's already made. Cautionary tales about rogue AI and privacy violations are abundant, but AI has so many promising applications in business, computing and more. 

In terms of cybersecurity, AI has applications for both malicious actors and companies looking to cover their vulnerabilities. For example, hackers could use generative AI programs to create convincing phishing emails that are more difficult to detect than those written by humans. To combat this kind of threat, a company could use ML algorithms to quickly identify threats once they enter their network, significantly reducing potential losses. 
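To make the defensive side concrete, here is a minimal, hypothetical sketch of anomaly flagging. A production deployment would use a trained ML model over many signals; this toy version only z-scores a user's login hour against their own history, and all data here is invented:

```python
from statistics import mean, stdev

def flag_anomalies(login_hours, new_events, z_threshold=3.0):
    """Flag login events whose hour-of-day deviates sharply from a
    user's historical baseline (a toy stand-in for an ML detector)."""
    mu = mean(login_hours)
    sigma = stdev(login_hours)
    flagged = []
    for event_hour in new_events:
        # z-score: how many standard deviations from the user's norm
        z = abs(event_hour - mu) / sigma if sigma else 0.0
        if z >= z_threshold:
            flagged.append(event_hour)
    return flagged

# Historical logins cluster around business hours; a 3 a.m. login stands out.
history = [8, 9, 9, 10, 8, 9, 10, 9, 8, 9]
print(flag_anomalies(history, [9, 3]))  # [3]
```

The same shape of check generalizes to other signals (source IP ranges, request volumes, access times), which is where real ML models add value over a single-feature baseline like this.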

However, we're sure to see things change rapidly over the coming years. 

How Leveraging AI for Defense Can Improve Your Security Posture

So how can AI enhance cybersecurity techniques like IAM and Privileged Access Management (PAM)? It all comes down to Natural Language Processing (NLP), which is a computer program's ability to understand human language and produce an appropriate response. NLP is a foundational element of generative AI like ChatGPT and DALL-E, which take written inputs and produce new content that matches what the user prompted.

Some possible IAM use cases for this technology include:

  • Generative policy creation: You could enforce strong password policies by prompting a generative AI model to create an expression for a policy that is in violation if a user fails to change their password after 30 days. Further prompting the AI to explain the policy would result in a thorough breakdown of each line of code, allowing you to check its work.
  • Flagging inactive accounts: To identify dormant or orphaned accounts, you could prompt a generative AI to create a policy that is in violation if a user's last Active Directory login was 45 or more days ago. 
  • Identifying entitlement outliers: The Principle of Least Privilege (PoLP) is critical for minimizing the risk of both malicious and unintentional insider threats. You could prompt an AI to review specific user accounts and flag any entitlements they possess that do not match their role, team or department.
  • Detecting rehires: When an employee leaves your company, their identity never truly gets deleted from your system — it simply changes state. The same goes for when they return to your company. An AI could create a policy expression that automatically reactivates and restores a rehire's access permissions to save time for your IT and HR departments.
  • Identifying unusual requests: An AI model could analyze typical behavior patterns for your users and flag anomalies if any are present. Additionally, it could automatically remediate these requests by activating access management techniques like step-up authentication and workflow approvals. 
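As one hedged illustration of the dormant-account idea above, the generated policy ultimately boils down to a date comparison. This Python sketch is not Bravura Cloud policy code (the webinar shows such policies expressed in Rego); the account names and dates are invented:

```python
from datetime import datetime, timedelta

DORMANT_AFTER_DAYS = 45  # threshold from the example above

def is_dormant(last_login: datetime, now: datetime) -> bool:
    """Return True when the policy is in violation: the account's
    last login was 45 or more days ago."""
    return (now - last_login) >= timedelta(days=DORMANT_AFTER_DAYS)

now = datetime(2024, 1, 1)
accounts = {
    "alice": datetime(2023, 12, 20),  # 12 days ago -> active
    "bob": datetime(2023, 10, 1),     # 92 days ago -> dormant
}
dormant = [name for name, last in accounts.items() if is_dormant(last, now)]
print(dormant)  # ['bob']
```

In practice the last-login timestamp would come from a directory such as Active Directory, and the flagged list would feed a review or deprovisioning workflow rather than a print statement.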

A professional can help you determine how AI might fit your organization's network, so you can maximize your investment.
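Similarly, the entitlement-outlier check described earlier reduces to a set difference between what a user holds and a baseline for their role. The roles and entitlement names below are hypothetical; a real baseline would come from your IAM system:

```python
# Hypothetical role -> entitlement baseline.
ROLE_BASELINE = {
    "sales": {"crm_read", "crm_write"},
    "engineering": {"repo_read", "repo_write", "ci_deploy"},
}

def entitlement_outliers(role: str, entitlements: set) -> set:
    """Return entitlements a user holds beyond their role's baseline,
    supporting a Principle of Least Privilege review."""
    return entitlements - ROLE_BASELINE.get(role, set())

# A salesperson holding a deployment entitlement is flagged for review.
print(sorted(entitlement_outliers("sales", {"crm_read", "crm_write", "ci_deploy"})))
```

An AI-assisted version would go further, inferring the baseline from peer behavior rather than a static table, but the flagging logic is the same.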

Planning Your AI Voyage — Start With Design

To implement a successful AI cybersecurity strategy, you need to start from the ground up. Data discovery is an essential first step because it unlocks the power of all your organization's data, including datasets you may not have known about. Once you are confident you know all your datasets, you can begin planning in earnest.

You'll first want to determine what decisions you'll allow your AI model to make from your data set. As we mentioned earlier, privacy is a real concern with unregulated AI, so you'll need to be careful if you intend to use it on datasets containing Personally Identifiable Information (PII), like HR records or data from your Customer Relationship Management (CRM) system.
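One common precaution, sketched here under assumed field names, is to redact PII before a record ever reaches an external model:

```python
# A minimal sketch: mask known PII fields before a record is handed
# to an external AI model. Field names here are hypothetical.
PII_FIELDS = {"name", "email", "ssn", "phone"}

def redact(record: dict) -> dict:
    """Replace known PII fields with a placeholder; pass the rest through."""
    return {k: ("[REDACTED]" if k in PII_FIELDS else v) for k, v in record.items()}

crm_row = {"name": "Jane Doe", "email": "jane@example.com", "department": "Sales"}
print(redact(crm_row))
```

Field-level masking like this is only a starting point; free-text fields need pattern- or model-based detection, which is where discovery tooling earns its keep.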

Additionally, consider what role Security Information and Event Management (SIEM) will play in your strategy.

Enhance Your Security Strategy With Advanced Solutions From Bravura Security

If your organization needs to step up its IAM or PAM strategies, you can count on us. The Bravura Security Fabric, our fully integrated access control solution, provides end-to-end protection for enterprises and large corporations. With fully configurable capabilities and services, you can tailor it to your organization's environment, creating an effective shield against would-be attackers from any angle. And advanced security automation helps combat automated attacks, so your system is protected from AI threats.

Schedule a live demo today to see our solutions in action, and feel free to contact us online with any questions. 

Review the Full Session Transcript

No time to watch the session? No problem. Take a read through the transcript.

Courtney Auchter (00:03):

All right, as people continue to join, my name is Courtney Auchter. I'll be the moderator. I'm with intiGrow, and today is all about letting AI drive compliance and provisioning in your identity journey. We have two great speakers who are leading this discussion today, and I'd like to introduce them. Jim Skidmore is the Vice President of the Solutions Group at intiGrow. He is a consultative solutions executive who helps clients implement on-prem and cloud-based SaaS solutions to achieve desired outcomes across cybersecurity, compliance and risk management, IoT, and AI. Jim has over 20 years of consulting experience in a variety of technical disciplines, including eradicating compliance issues. Bryan Christ is a Senior Solutions Sales Engineer at Bravura Security, and he specializes in security and access governance. For more than 20 years he has focused on open source and software development with an emphasis on team leadership and executive oversight. Bryan is also an experienced virtual chief information officer in the greater Houston area. So welcome Bryan, welcome Jim. We're going to go ahead and get into the discussion. Let's first talk about the technological progression of AI, and Bryan, I would love for you to speak on where we're at and where we're going.

Bryan Christ (01:43):

Hey, yeah, thanks Courtney. Appreciate the introductions. I do want to do kind of a quick level set for folks on the line. So first of all, I want to thank you for attending our Power of One summit today. I hope that you've had an opportunity to listen in to some of the previous sessions. In fact, we'll sort of springboard at points off of things that have already been brought up. This is actually a continuation of a series that started earlier this year. So once a quarter we've sort of been keeping tabs on what's going on in AI and how that might be relevant to the work that you do, and obviously as it relates to things that we're doing here at Bravura Security. So what you see up here on the screen is just kind of an overview of the historical milestones in the progression of AI. On that far left side, it might be a little bit difficult to see, but it's sort of a rules-based approach.

(02:45):

I think everybody gets that. Rules were early on used to make decisions, then context awareness, then domain mastery. Domain mastery, you can think of things like the ability to play chess, getting really good at playing chess. I think we all remember the Deep Blue chess matches and things like that. And then generative AI. So that's where we're at today, and these things are building on top of each other. Generative AI has its domain mastery, but it's your ChatGPT, it's your Midjourney, it's the ability to leverage AI to create things. Under the hood, it's really synthesizing lots of data. But then I want to talk real quickly about what's on the horizon, and I'm going to ask Jim to chime in here in a minute and add some color to some of these things. But the next thing that's on the horizon, and there's really no debate about whether this will happen.

(03:55):

So, artificial general intelligence. When we talk about generative AI, like Midjourney or ChatGPT, those are focused on specific tasks, creating works of art, writing a poem, but they're specialized, right? AGI would be another milestone where you have one AI to interact with and it can do it all, right? It can write a recipe, but it can also read a radiation chart and make a diagnosis. I like to tell people this is kind of like Star Trek, where they can talk to this thing they call Computer and it's just a vast wealth of information that can supply answers to them. The next two are a little bit speculative: ASI, which is basically artificial superintelligence, and the singularity. There's a lot of debate on whether these things will actually happen. You decide for yourself, but I'll just mention what they are.

(04:59):

Artificial superintelligence builds on AGI such that the amount of knowledge generated by AI is vastly superior to that of human beings. And then the singularity. This often gets misconstrued, but it's essentially a point in time where you can't imagine what the world was like before, where AI became so pervasive in our lives that it really changed the dynamics of human experience. So Jim, you and I talked a little bit about rules. It's at the bottom of the slide here, but there's a place for rule-based systems. Why don't we talk about that a little bit? Can you share your thoughts? I know we discussed it, but share your thoughts with the audience on that.

Jim Skidmore (05:49):

Yeah, it's really important. Thanks, Bryan. It's really important. I think we're going to go through a little bit of a timeframe where the world will look to see where AI can solve, excuse the expression, world hunger in their organization. At the same time, it's important to understand where it should play and where it shouldn't play, and we'll talk about that in the context of identity today. Some of the use cases I think will be good signposts for the kinds of things we can allow the new generation to automate and elevate, and then areas where the legacy, standard technology should really be handling things. From talking to a lot of people, it's kind of an age of discovery right now, because people are like, well, theoretically if I have this major issue here, I can have AI work on the automation side of that to solve my problem in conjunction with the other work that we're doing in a more legacy fashion. Practicality, security governance and a whole bunch of other areas are definitely going to come into play in that discussion. And it's going to evolve as we're looking at here, right? We're projecting a little bit, but at the same time there's kind of a logical sequence of events I think we can all see will occur. We've even seen that, and we'll talk about it, more recently in some of the governance, lawmaking and legislation that's starting to build as well.

Bryan Christ (07:41):

Yeah, it's interesting you bring that up, Jim. Courtney, if you want to just go ahead and jump onto the next slide, that'd be great. You talked a lot about positives in that previous slide, what AI can bring to the table, the decisions you make about whether you're going to use rules or AI, and the positive outcomes it can bring. But when you're talking to folks about AI, there is almost an alarming element to it, and folks are realizing, hey, with the good comes the bad. So what do we need to do to put guardrails on it? Jim, I'm going to let you largely talk about this, but there was kind of a compact or an accord, if you will, between some of the major players behind AI and the White House. The executive writeup is available if you go to the White House website. I cherry-picked three things out of that statement that I thought were interesting. You can see them up here on the screen. Jim, you were more apprised of all of this, so if you would, just unpack your observations for the folks on the line about what you've seen around this, specifically this agreement, but other things that are similar.

Jim Skidmore (09:05):

So some of these are standard security like controls that we would think about or risk management controls. And then I think other ethical areas that have not really even been delved into yet in the market I think are going to really come into play when we're talking about proprietary and unreleased model weights. I think governance is going to have a lot of say into how these things occur, and I'll put some context around these subjects also, we want to be able to obviously differentiate between AI generated audio and video. I know there's been a lot of challenge with that with misinformation and disinformation already, and I'm not sure we have our arms completely around it, so it's going to be a journey. And then the social risks such as discrimination and privacy. And we've seen all of this occur when people have a desire to accomplish a specific objective.

(10:09):

It could be political, it could be geopolitical, it could be from a vertical or lobbying perspective kind of making things happen. But there's some social responsibility and I think most of the providers out there now are starting to really gain an understanding of what's at risk to them from an internal perspective and the potential massive legal ramifications that exist out there. I do have a sample of what a couple of those might look like because now we're not only managing the AI inside out, we're also responsible for all the data. We're responsible for all the governance and the access governance to that data. And then obviously we have how are we going to administer this for the public face on behalf of our organizations to ensure that we're not putting ourselves at risk or our constituents, whether they're clients, ecosystem members, opt-in lists, whatever they might be.

(11:18):

And I think we can also learn from some of the things that have occurred in the past. We know recently, for example, that European lawmakers came together on much tighter, tougher draft and they've been pretty good as a bit of a beacon of reality, not just through GDPR, but through wanting to protect the privacy of their citizens, wanting to reduce risk, but also, for example, we know that they voted against facial recognition pretty recently and that kind of amplified a debate around the world. That's just one sample. But also as the data provider, there's a lot of issues here to be drawn out as well. So knowing what those data sources are useful to address in an AI question or problem is really important. And kind of being able to construct those or at times deconstruct those to make sure that they're meeting the objective.

(12:22):

That's part of an ethical and quality area of this. I think people are going to really have to be aware of how the data is actually being used in the algorithms because a lot of times people just kind of throw everything at a problem or an opportunity that they see, but there's going to be corporate and organizational responsibility in managing these data sets. And then there's obviously assessing the data quality because people ultimately will probably be rated in markets based on that capability. How are they cleaning and treating the data over a period of time? Because as we know, it will continue to grow and grow and grow. And eventually the goal is to not make this be unmanageable. I mean, if things get too large and out of control, we'll start to see inaccuracy, we'll start to see more fraud and people trying to curate data for their own purposes, having a focus on the detail is going to be very important.

(13:31):

I think it's going to also entail, this is a little bit of opinion, but I've got some backup from some thought leaders in the industry, but possessing the strength to push back on technical teams because everybody's trying to accomplish an objective and they're trying to get things done in a certain way to drive market share and for other reasons, which is important. And then I would say kind of a last bullet is knowing the typical ways to do data transformation. Many people don't really understand what this means yet for a lot of organizations, they're trying to figure out how to transform that as they go through different releases of product cycles and things like that. And they have a role in developing what I would call responsible ai, right? All the product makers now you're hearing are talking about vetting use cases, making sure customers want to use this for the right reasons because this responsible AI narrative will drive ethics just like it will in other markets that we all work in. They're capturing data from various sources, so they're going to have to validate that, transform it probably at times, match the data with other data to make sure to ensure conformity at times they'll want to enrich the data at times they'll want to filter it and understand what the ramifications are for doing that to the dataset. So, and obviously the overall quality and accuracy. So this is quite a mouthful, and I think I've probably boiled the ocean there, but these are some areas I think are going to be mission critical as we progress.

Bryan Christ (15:26):

Courtney, if you could do me a quick favor and just jump back to that previous slide for a minute. What I want to do is give the audience an opportunity, just real quick, to take a snapshot of those bullets in your mind's eye, just bookmark the key points. And then as we walk through, go ahead and go back to the next slide. But the interesting thing, and Jim, I messaged you offline about this as we were prepping, the interesting thing is, as I cherry-picked those out of that White House release, and as you and I were looking for the latest and greatest that's happening in the realm, the correlation between what those safeguards are supposed to do and what's happening in the world was just strikingly uncanny.

(16:20):

For example, there was on the previous slide a bit about protecting privacy, and then the new release of GPTBot, right? So ChatGPT has been trained on a large dataset, but I joked about this, it shows my age, but in Short Circuit there's Johnny Five, and the first thing he does is he starts rifling through books and saying, need more input, need more input. And that's kind of where we are with these systems. Now we're looking for ways to add more data to the dataset, but it comes with a privacy concern. So there are some rails, but your average Joe isn't necessarily going to understand how to safeguard their blog. I was making an illustration, to pick on Courtney again, but if Courtney's got a blog on her favorite travel destinations and she's not thinking, hey, I need to block a bot from consuming this data, then all of a sudden her data becomes part of the conversation. Jim, your thoughts on that? I know you touched on it a little bit already, but

Jim Skidmore (17:36):

Yeah, it's hard to really comment on the bot situation until they figure out how they're going to govern how far and how wide that can go. And hopefully things step up more quickly on a global basis, because we're just now at the advent of how we can govern this. I think a lot of legislative groups are trying to figure it out, but this has potentially enormous ramifications. As we were talking about before, the quality of the data that GPTBot discovers over the course of time is going to come back as good as what's been discovered, for better or for worse. So over the course of time, I think we're going to have to figure out how we're regulating that. And as the earth carves itself up into a series of oceans, lakes and ponds, what consumers really need to use versus everything that's out there is important.

(18:42):

The other thing that's important is there are a number of organizations that are seeking to be these global data provider standard bearers and capture markets that may or may not be guided for a little while. And there are some corporate responsibility parts of this that again, they're going to have to have, and I should say government responsibility, corporate responsibility, and just what I would call humane and ethical responsibility for this. So yeah, the global GPT bot crawler is something to really think about, and I think it's something that we're all going to have to get our arms around and play along with whenever governance comes forward.

Bryan Christ (19:37):

I mean, think about the ramifications to corporations when they consume data in their use of AI without really understanding where it comes from. And I think this headline that you see up here, where a lot of these big companies are now lobbying to say, well, we realize there's a problem here. And unfortunately the solution, and it seems to be from the big companies, is, well, we need to just change copyright law. We need to make sure that AI is immune to things like infringement. So it's a problem, it's your problem to worry about as an organization, but the fact that these big companies are worried about it really reinforces

Jim Skidmore (20:26):

It. Bryan, I think there's some very consumable parts of that too. How are we creating a process that can trace a document to the origin of the data models, what's the associated metadata, what are the pipelines for this, and how will you monitor those for audit? I think these things are all going to come into what I would call a standard data governance required model, and it's going to be quite a wild ride until we get there.

Bryan Christ (21:05):

Agreed, yes. We kind of glossed over it. I know it was up on the screen for a minute, but vulnerabilities are also kind of a big concern right now, in the sense that, well, okay, for example, these ransomware scans and vulnerabilities, the things that you see up here on the screen. In that charter was this idea of, hey, we need to put rails on weights and proprietary things. And Facebook, in fact, with their Meta Voicebox, it's really just amazing technology, but at the same time, it's quite scary that it can clone your voice with 96% accuracy given only a two-second sample. There was a real-world incident of a lady who heard her daughter on the phone and was told that she was kidnapped and she needed to wire a large sum of money. And the whole, wow, this is not really her daughter, it's AI, it's a scammer, and her daughter was really somewhere else with some friends.

(22:25):

That's an example of something that's very scary in the real world. Google engineers, and you see the article up here, but their AI learned Bengali. It had never been trained on this foreign language before, and it learned it. So there's a lot of Pandora's box on this. And again, the good and the bad come hand in hand. What we've been talking about in the previous sessions is this idea that you need to start thinking about your security posture in terms of classic attacks that are being enhanced by AI. We'll look at some constructive ways here in a minute that you can use AI, but just understand that for everything you're going to be doing positive with AI, there are going to be bad actors that want to invert that, flip it on its head, take these classic attacks and make them better with AI. Jim, you got any thoughts on that before we

Jim Skidmore (23:23):

No, I just try to think of all the attack vectors,

Bryan Christ (23:27):

Which

Jim Skidmore (23:28):

Are many, right? It's text, it's images, it's music, it's sounds, it's videos. These are all issues to be addressed through our discussion here,

Bryan Christ (23:40):

And it really can be overwhelming. Courtney, if you want to go on to the next slide. So now that we've scared you a little bit, we want to talk about, in the identity space, how can you leverage AI to improve your posture? There are bad actors out there. They're obviously going to be using technology in a bad way, so you want to be able to leverage the same technology and defend yourself. We'll talk about some practical applications of AI. There are some that are very generic and obvious, for example, natural language queries and reporting. I actually thought in hindsight, after putting this together, that natural language interaction would be an even better way to characterize this. But again, I go back to that Star Trek analogy where you're simply talking to the computer and you're looking for a set of information, and you're getting that back in a human-consumable format. Jim, you want to comment on that at all?

Jim Skidmore (24:49):

No, it's a very hot topic right now. I know folks are racing for funding of new organizations that are doing the large language model analysis and stuff like that, and I think it's going to be its own kind of pop-up vertical market now.

Bryan Christ (25:04):

Yeah, agreed. Courtney, if you want to go ahead and skip on to the next one. So I think this is where it starts to get interesting. I think everybody can appreciate the ability to use natural language, and that kind of runs the gamut in terms of what kind of technology vertical you're dealing with in terms of improving your defense. One of the wonderful things that AI can do is generative policy creation and explanation. I hope that many of you on the line got to listen in and watch our keynote with Bravura Cloud, the star of the summit this go-around. As policy was introduced in that demo, you see up here an example on the screen. I don't know anything about Rego. I'm not a Rego expression guy, but I simply asked the AI, I need to plug this into Bravura Cloud, can you create a Rego expression that matches this criteria, and it just spit it out. And that's a really nice thing to be able to do. Jim, your thoughts?

Jim Skidmore (26:26):

Yeah, I mean, I'm glad we're getting to the positives. For anybody that has to go through, especially, the compliance regimen that our identity clients have to go through, it's nice to be able to overlay tools that will enforce policy with rules we don't normally get to enforce with the technology we utilize. So this is just going to be one of many really positive items.

Bryan Christ (27:02):

Courtney, if you want to go ahead and flip to the next slide. So I'll just show you real quick the inverse of what happened here. Hopefully this is large enough for you to see on the screen, but as I said, I don't do Rego expressions. That's not my wheelhouse. But just like I used AI to create a Rego policy, I can turn that on its head: I can take a policy like that, which is an open standard, dump it into AI, and say, hey, explain this to me. Boil it down, walk me through what this policy is doing. Then I can make sure that what I've already got in place is going to do what I expect it to do, or, if I've crafted something new, spot-check that I've actually done my job in crafting it. Jim, anything you want to say about this?
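A lightweight way to do that same spot-checking yourself, before a generated expression goes anywhere near production, is to run it against known-good and known-bad samples. Here is a minimal Python sketch; the pattern and the sample inputs are illustrative, not taken from the demo:

```python
import re

# Suppose the AI generated this expression from the prompt
# "match corporate email addresses" (pattern is hypothetical).
generated = r"^[A-Za-z0-9._%+-]+@example\.com$"
pattern = re.compile(generated)

# Spot-check against samples you already know the answer for.
should_match = ["alice@example.com", "bob.smith@example.com"]
should_not_match = ["alice@gmail.com", "not-an-email"]

assert all(pattern.match(s) for s in should_match)
assert not any(pattern.match(s) for s in should_not_match)
print("generated pattern passes spot checks")
```

The same idea applies to a generated Rego policy: feed it inputs with known expected decisions before you trust it.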

Jim Skidmore (27:55):

No, no. This is the first of many great use cases that I think

Bryan Christ (27:58):

folks will gain benefit from.

(28:01):

Courtney. So what we're going to try to do in the next couple of slides is just sort of spark your creativity. The interesting thing that Jim and I have observed after doing this, now our third conversation on AI, is that it really is a blank canvas. Every time I start talking about this, I can think of new ways to do things, how AI can benefit me in certain areas and disciplines. So we're not giving you the kitchen sink here. The idea is just that we wanted to throw a few things in your direction so you can start thinking about how you could apply AI when you have the right data and the right tool set. So this is an example here of flagging orphan and dormant accounts. This is probably one of the things, Jim, would you agree? It's probably a top-five thing that people get dinged on?

Jim Skidmore (28:56):

Bryan Christ (28:57):

It's absolutely up there. And so I just simply said, hey, create a Rego expression for a policy that'll flag people who haven't logged in in a certain number of days. Again, I didn't have to know Rego expressions; it just created it for me, and then I could take this and apply it. But yeah, what do you think, top of your head? I know I'm putting you on the spot here, Jim, but orphan and dormant accounts, is that probably number one, number two?
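The rule that generated policy encodes is simple enough to sketch directly. Here is a hypothetical Python version of the same dormant-account check; the account records and the 90-day threshold are illustrative:

```python
from datetime import datetime, timedelta

DORMANT_THRESHOLD_DAYS = 90  # illustrative cutoff

def flag_dormant(accounts, now):
    """Return IDs of accounts whose last login is older than the
    threshold, plus accounts that have never logged in at all
    (orphan candidates)."""
    cutoff = now - timedelta(days=DORMANT_THRESHOLD_DAYS)
    flagged = []
    for acct in accounts:
        last = acct.get("last_login")
        if last is None or last < cutoff:
            flagged.append(acct["id"])
    return flagged

accounts = [
    {"id": "cauchter", "last_login": datetime(2023, 9, 1)},
    {"id": "jskidmore", "last_login": None},  # never logged in
    {"id": "bchrist", "last_login": datetime(2023, 9, 28)},
]
print(flag_dormant(accounts, now=datetime(2023, 10, 1)))  # -> ['jskidmore']
```

The point of the AI assist is that you describe this rule in plain English and get the equivalent Rego back without writing it yourself.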

Jim Skidmore (29:27):

It's very big for a cadre of compliance issues, and I'm sure anybody on here that's been involved in managing or facilitating identity and access policy attestation, or access recertification, whatever you want to call it, knows it's unbelievably difficult. I have been in cases where people are doing attestation, they come back, they flag things, and it turns out that they were actually looking for attestation from orphans that are in the LDAP, people that have recently left the organization or done some sort of mover change, even moved to a different group, and then it just makes the process that much harder. Obviously tools like Bravura help people to automate that process, but at the end of the day, this is usually a corporate or organizational requirement that we all go through. Some folks are able to automate it; some folks are still working on spreadsheets to account for attestation that's occurred. We feel your pain if you are. The good news is there's not really any need to do that. But yeah, the net is that this is definitely one of the top use cases, regardless of whether it's SOX or HIPAA or GLBA or whatever you're wrestling with. Unquestionably,

Bryan Christ (30:59):

For sure. Courtney, let's give them a little more food for thought here. So, identifying entitlement outliers. This happens in most organizations, flying under the radar. When I'm doing some sort of demonstration of our identity product, I typically make the illustration that it happens almost everywhere. It's very difficult sometimes to root out where this came from, but I think a lot of times it happens in the onboarding process, because when organizations don't have good tools in place, they'll do things like, oh, you know what? This new guy, Jim, is doing the same thing that Courtney's doing. So the person pulling the lever does a clone operation, which is a whole other conversation; we'd say don't do that, but a lot of organizations do. And the problem is that if Courtney's been around for a long time in the organization, she might have an entitlement due to her tenure that is extraordinary, not ordinary, for her to have, but there's rationale behind it.

(32:13):

But the guy behind a screen pulling a lever or turning a knob to do that copy doesn't necessarily know that backstory. And so all of a sudden Jim, a day-one employee, gets some entitlement that Courtney, a 10-year veteran, has had, and he has no business having it. So that's the scenario I set up here. I basically took AI and spoon-fed it some data that represented some users holding some roles. It was too large to see on the screen here, so if you're looking at it going, where's the rest of the data? It was two pages long, so I couldn't take a really good screenshot of it. But I took that data and I fed it into OpenAI and I said, hey, tell me what you see that's unusual here. And you can see what it did. It analyzed the fact that there was a marketing role and the standard users, and then it said, hey, you know what, I found it's really unusual for this entitlement to exist because of the department that John Doe is in. So this is another use case of empowering AI to find the things that are often very difficult to find without extreme cross-referencing and reporting. Jim, your thoughts?
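The shape of the check the model performed can be sketched with a crude frequency heuristic; an LLM goes further by reasoning about what an entitlement means, not just how often it appears. This Python example is hypothetical, with made-up users, departments, and entitlement names:

```python
from collections import Counter, defaultdict

def find_outliers(assignments, min_peers=2):
    """Flag (user, entitlement) pairs where the user is the only person
    in their department holding an entitlement that clearly 'belongs'
    to another department. A crude stand-in for what an LLM infers."""
    holders = defaultdict(list)  # entitlement -> [(user, dept), ...]
    for user, dept, ent in assignments:
        holders[ent].append((user, dept))
    outliers = []
    for ent, users in holders.items():
        dept_counts = Counter(dept for _, dept in users)
        majority_dept, majority_n = dept_counts.most_common(1)[0]
        if majority_n < min_peers:
            continue  # no clear "normal" department for this entitlement
        for user, dept in users:
            if dept != majority_dept and dept_counts[dept] == 1:
                outliers.append((user, ent))
    return outliers

data = [
    ("courtney", "Marketing", "campaign_admin"),
    ("alice",    "Marketing", "campaign_admin"),
    ("bob",      "Marketing", "campaign_admin"),
    ("john.doe", "Engineering", "campaign_admin"),  # day-one clone of a veteran
]
print(find_outliers(data))  # -> [('john.doe', 'campaign_admin')]
```

The cloned day-one employee stands out because everyone else holding the entitlement sits in a different department.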

Jim Skidmore (33:35):

Yeah, it's earth-shattering to have to go through what I would call an attribute workshop, where we're trying to reinvent the wheel and say, okay, organizationally, what are the attributes that we're going to put together? Do we even know, organizationally, every single job responsibility, title, and entitlement? Or maybe we're taking them from SAP or some sort of ERP solution, because that was kind of the easier way when we started out. To go back through that development for a large or midsize organization is a wholesale change, snowflake-schema-related kind of stuff. So a lot of people don't want to do stuff like that, nor will they, and sometimes people don't even have the knowledge base that developed it in the first place. So the easiest thing to do is to just press on. In cases like this, with the fix you were just talking about, we can press on intelligently and just look for the exceptions, which makes life much easier than having to go back to the drawing board and say, okay, let's map the entire organization by salary grade, by employee numbers, by all the other intelligent parts that go into that alphanumeric or numeric string, and kind of go from there.

(35:00):

So this definitely makes that whole problem much simpler to solve than reinventing the wheel, if you will.

Bryan Christ (35:08):

Yeah, absolutely. The power of it is astounding. On this slide here, we're giving you another couple of use cases. Again, the idea here is not to be comprehensive about what's possible, but just to spark your imagination. I was talking with our CTO a couple of days ago and the issue of detecting rehires came up, and I think this is a really good case where AI can be super beneficial, because in a rules space, and there's a place for rules, we'll talk about that again here in a little bit, rules are very wooden and very rigid. We were thinking about this idea where things like security groups aren't always named plainly. A vowel gets dropped out of a name intentionally, right? You're trying to shorthand it. So instead of spelling out SharePoint, you have S-H-R-P-R-T, and we understand that.

(36:08):

We look at that security group and go, well, that's a SharePoint group. A rules-based system isn't necessarily going to pick that up, but AI, on the other hand, can interpret it and say, that looks like SharePoint. Now take that same idea and apply it to a rehire, where your identity program and solution should be identifying somebody that left the organization five years ago and is coming back in. But when the human being is re-keying that person into your HR system, they spell that last name with an I instead of an E. A rules-based system isn't necessarily going to pick that up, but AI could, right? Jim, I'll let you talk about unusual requests. You brought this up when we were pulling this together, so why don't you take that one on?
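That fuzzy-matching idea can be sketched in a few lines of Python, using the standard library's difflib as a simple stand-in for the richer semantic matching an AI would do. The names and the similarity threshold here are illustrative:

```python
from difflib import SequenceMatcher

def likely_rehires(new_hire, former_employees, threshold=0.85):
    """Return former-employee names that are a near match for the
    new hire -- e.g. 'Marten' re-keyed as 'Martin' -- along with
    their similarity scores. Exact-match rules would miss these."""
    matches = []
    for former in former_employees:
        score = SequenceMatcher(None, new_hire.lower(), former.lower()).ratio()
        if score >= threshold:
            matches.append((former, round(score, 2)))
    return matches

former = ["Bob Marten", "Jim Skidmore", "Courtney Auchter"]
print(likely_rehires("Bob Martin", former))  # -> [('Bob Marten', 0.9)]
```

A rule looking for the literal string "Bob Marten" returns nothing; the similarity check surfaces the probable rehire despite the one-letter re-keying error.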

Jim Skidmore (37:00):

Yeah. First I want to delineate the difference between this and user and entity behavior analytics, a UEBA solution. We've seen those, right? Speed of typing, a lot of keyboard-level components that go into that. This can be anything with regard to time, anything that comes up as we're analyzing normal behavior patterns of the user out there. UEBA really accounts for, well, to write the word chrysanthemum, it took Bryan an average of 23 seconds. This is more like, Bryan's logging on at three o'clock in the morning on a Tuesday, which is an unusual pattern. And as we're looking at a number of these different patterns, he's also looked for access to the AP module. And this is not to be confused with privileged access management, because that's obviously monitoring and carries all the other components with it, the session recording and everything else.

(38:08):

This is really just enforcing the authentication side of this. So as these behavior patterns change in a person, again, very different than UEBA, it's an opportunity for us to understand these behavior patterns that we're collecting with the AI and then automate that back to our rules-based process: hey, based on these anomalies, it's time to step up authentication. So let's force whatever the action is that's desired, whether it's 2FA, okay, we emailed you a code, please verify, or additional active workflow approvals that basically say, I'm going to escalate this to your upline manager, or whatever's required there. These types of use cases are, I think, very valuable because 90% of organizations, probably much greater than that, don't use user and entity behavior information in the first place and need to get themselves normalized to a use case.

(39:24):

This can be extremely valuable, because the moment we even see something skittering around, before the red team is even saying, I have a question about this, I'm going to stop this packet, or I'm going to worry about what this potential payload is that hit the email box, we're able to step up authentication and prevent any of that process from even being an issue. So yeah, this is a very valuable use case, and every time we do something like this, it takes breach percentages down by factors of 10, even just for the average organization. I know people that just use standard MFA are about 87% less likely to be breached. And obviously, if we can step up authentication at unusual times during predetermined timeframes, that's going to have a tremendous impact.
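The anomaly-then-escalate loop Jim describes can be sketched very simply. This hypothetical Python version flags a login hour far outside a user's historical pattern; the z-score threshold and history are illustrative, and a real system would also handle hour wrap-around, weekday patterns, and many more signals:

```python
from statistics import mean, pstdev

def needs_step_up(login_hours, current_hour, z_threshold=2.0):
    """Decide whether to force extra authentication. If the current
    login hour deviates strongly from the user's historical pattern,
    escalate (e.g. send a 2FA code or require a manager approval)."""
    if len(login_hours) < 5:
        return True  # not enough history to trust the pattern
    mu, sigma = mean(login_hours), pstdev(login_hours)
    if sigma == 0:
        return current_hour != mu
    return abs(current_hour - mu) / sigma > z_threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]  # typical office-hours logins
print(needs_step_up(history, 9))  # normal hour -> False
print(needs_step_up(history, 3))  # 3 a.m. Tuesday -> True, escalate
```

The AI side is what builds and refines the behavioral baseline; the decision it feeds back into is still an ordinary rule like this one, which is what makes the pairing practical.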

Bryan Christ (40:31):

Yeah, thanks Jim. We kind of left this one for last because we wanted to talk about remediation. AI brings a lot of good analytics and the ability to understand the data and craft the data, but remediation, for the foreseeable future and maybe always, will involve some sort of human engagement, and I think Jim did a good job explaining that. If we could just move on to the next slide. I'm going to say just a couple of bits here about this, and then, Jim, I'm going to turn this over to you; I know you've been tracking this quite a bit in your world. If you think about the way these language models work, the way AI works, your data set becomes super important, and it becomes super important for a number of reasons. I'm going to touch on one of them.

(41:30):

If you had the chance to attend the earlier keynote, we talked about the data and the analytics that Bravura Cloud can bring into scope for you. That's really what we want to be: the staple for identity data. We're not trying to be the authoritative source on American history, but here at Bravura Security, especially with Bravura Cloud, we want to be the authority on your identity data. There's a lot of conversation going on these days about data lakes and data ponds. You can think of us, with Bravura Cloud, as that data pond that you can tap into and feed to your AI, and know that you're getting good data for your AI to make recommendations and decisions on, and that you can craft policies around. Jim, I know you wanted to talk about some of these other things here, so I'm going to turn it over to you.

Jim Skidmore (42:41):

Sure, absolutely. Yeah, this is a loaded question. I'll leave the SIEM one to last, right? I think all data providers, and I know Bravura is doing an excellent job of this as the Bravura Cloud components come out, also through analysis of the pond and what's going to go into, say, a client's larger data set, but I think it's really important for all AI data providers to understand they have a really significant role. The term I use again is responsible AI, and you're hearing that from a lot of the vendors in the market now. The bottom line is the larger ones are collecting data from a variety of sources. So what this means is that it's their responsibility, and I mentioned these steps earlier, to validate, transform, match, that's important, enrich, filter, and improve the quality of data.

(43:47):

And this is an overarching, ad infinitum kind of thing that will never end. We have to worry about the ethical use of the data, obviously. The other thing is, and from a legal perspective there's so much risk here, if those representative algorithms are unbiased, nobody will have much of an issue, and I think it's really important to keep that unbiased, overarching theme involved in your development of the data set. Transparency is really important too. I see the legal world kind of licking its chops over a lot of the issues that could potentially transpire. People should be able to explain their AI model and help others understand, across processes and functions, how it works and why it works, because this will build trust over the course of time with customers who are using the data sets, obviously, but also internal employees that are reliant upon it. I gave a little bit too large a blurb about data governance earlier, so I won't go completely into that.

(45:13):

But just suffice to say, data governance practices are really going to be important in the future. We need to minimize any bias or unforeseen consequences. And by the way, as we all know from DLP and other areas, protecting the privacy and security of the data is critical. In the US we now have coming toward us the cousin of GDPR, found in the EU and in other parts of the world; we are going to be responsible for constituent data and the privacy of that data by law. If you know the GDPR rules, you know you have to have a data privacy officer, who also has personal culpability, and in their world the penalty is 4% of the previous year's revenue. So we're not talking about 20 grand here, and we know those, we won't use their names, that do violate this and pay dearly every year. It's not easy to see what these data sets are going to look like from a compliance perspective, but there are a lot of pending regulations, and new ones taking shape right now, to ensure ethical AI practices. So if you're delivering high-quality, well-managed data sets to customers, things will work really well for you. Obviously we have to be very careful from an ethical perspective, and most providers around the world now, and certainly around the US, have sort of taken this ethical pledge.

(47:05):

Again, it's kind of a mouthful, but the data needs to be unbiased, to be ethical, to be carefully managed, and to go through, again, those processes: validate, transform, match, enrich, filter, and improve the quality continuously. If people are able to manage that, I think they'll be very effective in the market for providing this. So,

Bryan Christ (47:31):

Alright, thank you, Jim. You didn't get a chance to say your bit about SIEM there.

Jim Skidmore (47:39):

My apologies. My apologies.

Bryan Christ (47:42):

I'll just say this. SIEM is going to be part of your data lake or your data federation strategy. I have some running challenges with my colleagues that'll probably result in a bet over some beer or something. But the real challenge, I think, with SIEM today is that it's a lot of data, with a cost associated with it and some computational challenges associated with it. I didn't get a chance to talk about the nuclear arms race that folks like AMD and Nvidia are in right now, but I see the hardware side of the equation really solving that. And like I said, maybe I'll lose the bet, maybe I won't win the beer, but I think I will, because I think in 18 to 24 months we'll see that we're actually able, from a technology perspective, to consume a fair amount more data than people realize today. Courtney, if you want to go onto the next slide, I think we're,

Courtney Auchter (48:43):

Yep. And Jim and Bryan, we just want to leave a little bit of room for questions.

Bryan Christ (48:48):

Sure, absolutely, absolutely.

Courtney Auchter (48:50):

So we only have a few minutes.

Bryan Christ (48:51):

Yeah. So I want to wrap this up here from the Bravura side. Again, today one of the stars of the show is Bravura Cloud, and it's intended to be that really good data store for identity and for surfacing compliance issues. It's architected in such a way that it's friendly to rules, because, and we didn't really talk about this terribly much, rules do have a place. They're very, very quick, they're easy to implement, low friction in terms of integration. So they still have a place in the world. But we've also architected this such that it is AI friendly. I don't know if this will catch on, but bring-your-own-AI is kind of the way I'm characterizing it: bring your own AI and allow it to engage with the data. And again, I think that eventually SIEM will be part of that conversation, but we'll see.

(49:59):

But Bravura Cloud aims to be that central repository that you can apply rules to, you can focus your AI on, and you can get really meaningful, actionable compliance and audit engagements out of. Courtney, if you want to go ahead and go onto the next screen. If what we've covered today really gets the imagination running, we would invite you to contact us regarding a demo. We do have an early access program now for Bravura Cloud, and we would love to talk with you further about this, sharing greater insights and showcasing some of the stuff that we've talked about. And Jim, if you want to talk about what intiGrow's offering.

Jim Skidmore (51:01):

Yeah, yeah, it's funny you bring up zero trust; that's central to our world as well. We're getting hit up by a lot of people now talking about their overall AI use strategy before they go into things. So we offer a free, what we'll call AI pre-assessment, which helps people to understand where AI might be useful in their organization, or possibly not, and also what the security ramifications are. If somebody, for example, is just using generative AI for marketing content creation, they probably want to put in their security policy that this needs to be network-zoned away from X, Y, and Z. These are all things that we'll take into account and kind of help you to get a quick look at. The zero trust side of things, getting to least privilege, is something we've been doing for quite a long time as well. And these go hand in hand with the Bravura Security demo request, because as you're looking at the capabilities, understanding your design concerns, what use cases you can fulfill, and which ones you may want to let rules handle is an important part of the discussion as well. So we're happy to spend time on our dime to help people become successful there.

Courtney Auchter (52:36):

Thank you everyone for joining us for this session. We will now open it up to questions, so if you have any questions, please put them in the chat. And it looks like we have a question from Trevor: Ian and Nick talked earlier about Bravura Cloud and the intersection of data management with the capabilities of AI to analyze and surface things that existing tools or regular expressions don't do a great job of. Can you and/or Ian talk a bit more about leveraging AI within Bravura Cloud for things like rehire detection, out-of-band password changes, accounts with non-expiring passwords, et cetera?

Bryan Christ (53:20):

Yeah, so the thing that's nice about regular expressions, so we'll say regex, and likewise Rego and things like that: regular expressions are really great because they're fast, they're very well known, and most developers can craft a policy around a regular expression pretty quickly. So that's a benefit. The problem is they're also relatively wooden. I gave the example earlier about a rehire specifically: it's not necessarily going to pick up that Bob Martin, Martin spelled with an I versus Martin spelled with an E, is the same person. And that's where things like AI excel, right? It can make those sorts of interpretive inferences that we as human beings make. So when you want something to be lightweight and fast and low friction, then probably a rules-based approach, like a regular expression, is the way to go. But the beauty of AI is that it's got that holistic view of your data set, right?

(54:50):

So we talked a bit about your data, whether it's a pond, a lake, your federation; you've got data sources. In this webinar we've been talking about Bravura Cloud as a data source, and AI plugged into a good data source has a holistic view of all that data, just like you and I do when we're reading a book; we can get the conceptual things in what we're reading. So AI can pick up on that, and it can surface big-picture items. You might be able to get the same data the old-school way: we would run five different reports, each having a nugget of data, cross-reference one report with the four others, and eventually get to the answer we're looking for. That's a lot of manual labor and human time consumption. Instead, we can push that data set into AI and simply ask it the question. So there's a place for both, and really the decision has to be around things like speed, the data set in question, so on and so forth.
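To make the fast-but-wooden contrast concrete, here is what the rules side of one of Trevor's examples might look like: a quick regex check that flags accounts with non-expiring passwords. It runs instantly and exactly, but it only catches what it literally matches; the attribute name and directory entries are illustrative, not tied to any particular product:

```python
import re

# A rule-based check is fast and exact, but literal: this pattern flags
# directory entries whose flag marks the password as never expiring
# (the attribute name here is hypothetical).
NON_EXPIRING = re.compile(r"passwordNeverExpires\s*=\s*(true|1)", re.IGNORECASE)

entries = [
    "cn=svc-backup, passwordNeverExpires=TRUE",
    "cn=bchrist, passwordNeverExpires=false",
    "cn=jskidmore, passwordNeverExpires=1",
]
flagged = [e.split(",")[0] for e in entries if NON_EXPIRING.search(e)]
print(flagged)  # -> ['cn=svc-backup', 'cn=jskidmore']
```

Checks like this are the low-friction cases where rules win; the fuzzy, interpretive cases like rehire detection are where AI earns its keep.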

Courtney Auchter (56:09):

Bryan, thank you for that thorough answer. And Trevor, thank you for asking the question. I know we did not get to your question, but we will have somebody from the Bravura Security team reach out to you to address it. Unfortunately, we have run out of time. Please stick around for the next Power of One session. Bryan, Jim, thank you so much for your insights today. Everyone, enjoy the rest of the sessions this afternoon. Have a great one.

Bryan Christ (56:33):

Thank you all. Thanks.