Securing Your Data: Comprehensive Guidance to Governing AI Identities in IAM
Watch On-Demand
Contents
Read straight through, or jump to the section you want to read:
Watch Webinar On-Demand
Why Watch the AI Identity Governance Webinar?
Who Should Watch this Webinar?
Webinar Highlights
Free Download: 5 Point Checklist for Securing AI Assistant Identities
Schedule a Complimentary Personalized Demo or AI Policy Pre-Assessment to Evaluate Your Team's Needs
Read Full Webinar Transcript
Watch On-Demand
Securing Your Data: Comprehensive Guidance to Governing AI Identities in IAM
In an era of technological advancement where AI (Artificial Intelligence) is becoming a game-changer in many industries, it is crucial for organizations to understand how to effectively govern AI identities, including those acting as AI assistants, within their Identity and Access Management (IAM) strategies. This webinar is designed to help participants navigate the complexities, emerging pitfalls, and legal implications of AI identity governance and ensure compliance with industry standards and regulations.
During this interactive session, our IAM and InfoSec experts dive into the essentials of integrating AI identities into your IAM strategy, ensuring effective governance, and maintaining compliance. We will explore the challenges and unparalleled opportunities that AI presents to assist your organization within the IAM landscape, showcasing real-life scenarios and providing practical solutions to issues that could have detrimental consequences.
Key Topics include:
- Understanding the role of AI, data, and access controls in IAM and its implications for identity governance.
- Strategies for considering AI identities and integrating them into your existing IAM strategy.
- Practical steps to ensure regulatory compliance when governing AI identities.
- Insights into future trends in AI identity governance and IAM.
Whether you're an IT professional, a compliance officer, or a business leader looking to enhance your organization's IAM strategy, this webinar will provide valuable insights and practical guidance, plus an opportunity to learn how to factor the benefits of identity analytics into your IAM strategy. Stay ahead of the curve and ensure your organization's IAM strategy is robust, compliant, and future-proofed.
Why Watch the AI Identity Governance Webinar?
For professionals eager to deepen their understanding of AI’s role in IAM and to refine their strategies for AI identity governance, this webinar is an invaluable resource.
- Enhanced Understanding: The webinar provides a comprehensive look into the role of AI, data, and access controls in IAM. It addresses the implications for identity governance, building upon the principles discussed.
- Interactive Expertise: Our IAM and InfoSec experts discuss the intricacies of integrating AI identities into existing IAM frameworks.
- Real-World Applications: By showcasing actual scenarios, the webinar contextualizes the challenges and solutions in a tangible way. This practical approach helps bridge the gap between theoretical knowledge and actionable strategies.
- Forward-Looking Strategies: The session not only covers current best practices but also offers a glimpse into future trends in AI identity governance. This foresight can be crucial for organizations looking to stay ahead of the curve.
- Compliance Guidance: As regulatory landscapes evolve, the webinar offers up-to-date advice on maintaining compliance when governing AI identities. This is critical for organizations that must navigate complex legal requirements.
Who Will Find the Content Valuable?
This webinar is tailored for a diverse group of professionals, including:
- IT professionals tasked with securing their organization's digital assets.
- Compliance officers responsible for ensuring adherence to industry regulations.
- Business leaders seeking to fortify their IAM strategies in light of AI advancements.
- Security architects and engineers looking to integrate AI identities effectively.
Webinar Highlights
During the session, our speakers delve into key topics that are essential for any robust IAM strategy:
- Understanding AI in IAM: We explore the significance of AI and machine identities, examining their impact on identity governance frameworks.
- Integration Strategies: Learn how to consider AI identities within your IAM strategies, ensuring seamless integration.
- Regulatory Compliance: Get actionable steps to help ensure that your organization remains compliant with industry standards and regulations when governing AI identities.
- Future Trends: Insights into emerging trends will equip you with the knowledge to future-proof your IAM strategies.
- 5 Point Checklist: Get access to a summary checklist to help you govern AI identities in your strategy.
Presenters
Bryan Christ
Bravura Security
Sales Engineer
Bryan specializes in security and access governance. For more than twenty years, he has focused on open-source and software development opportunities with an emphasis on project management, team leadership, and executive oversight including experience as a VCIO in the Greater Houston area. He was recently published in Cyber Security: A Peer-Reviewed Journal.
Ian Reay
Bravura Security
Chief Technology Officer
Ian is a key member of the executive leadership team, accountable for leading a mature, scalable, and high-performing development organization, delivering all engineering-related activities. He started out as a developer with the company in 2006 and has taken on increasingly large roles in developing the company's identity, privilege, password, and passwordless strategies and solutions. Ian's responsibilities include setting technology and tooling direction, prioritizing feature requests, managing release scope and timelines, the build infrastructure, and UI/UX.
Jim Skidmore
intiGrow
VP, Solutions Group
Jim, a consultative Solutions Executive, helps clients implement on-prem and cloud-based SaaS solutions to achieve desired outcomes across cybersecurity, compliance and risk management, IoT, and AI. Jim has consulting experience in a variety of technical disciplines, including remediating compliance issues.
Schedule a Complimentary Personalized Demo or
AI Policy Pre-Assessment to Evaluate Your Team's Needs
What to Expect In a Demonstration
Our demos are comprehensive and highly interactive. Please plan to spend 45-60 minutes with us to view our solutions in action and have a conversation with our IAM specialists. We'll also briefly review our pricing structure with you and follow our demo with a written quote customized to your unique IGA and IAM needs.
What to Expect In an AI Policy Pre-Assessment
In a 60-minute complimentary advisory session, our IT Security experts will discuss your identity, privileged access, and password governance strategy with you to highlight policy considerations for robust access controls, secure authentication practices, and stringent password policies. We will also provide guidance on your organization's insurance renewal or other key areas of concern that are specific to your team's needs.
Review the Full Session Transcript
No time to watch the session? No problem. Take a read through the session transcript.
0:02
All right, welcome everyone.
0:04
Welcome to Comprehensive Guidance to Governing Your AI Identities in Identity and Access Management, brought to you by Bravura Security and intiGrow.
0:13
My name is Carolyn Evans and I'll be your host for today's session.
0:18
Before we get started, I just wanted to remind you that this webinar is being recorded, and we will share the recording after the session.
0:27
We have a fantastic lineup for you today with our expert speakers: Ian Reay, who is our CTO here at Bravura Security; Bryan Christ, who is our Senior Solutions Engineer at Bravura Security; and Jim Skidmore, who is the VP of the Solutions Group at intiGrow.
0:42
Ian, Bryan and Jim will be discussing how you can secure your data in the age of AI.
0:48
We want this to be an interactive session.
0:50
So please feel free to submit any questions that you have at any time in the chat function or the Q&A function of Zoom.
0:57
And we'll also have a dedicated Q&A session towards the end of the webinar.
1:02
Without further ado, let's give a warm welcome to our speakers.
1:05
Take it away.
1:08
Hey, Carolyn, thank you so much for the introductions.
1:11
I just want to reiterate that we're grateful for those of you that are attending today.
1:15
This is the fourth or fifth in a series that we have been conducting on the advancements of artificial intelligence, especially as it relates to the cybersecurity landscape.
1:31
One of the things that we want to do today before we get started here into the slide content is we do have a brief poll that we would like for folks on the line to participate in, if you could.
1:44
If the screen's popped up for you, we'll take about 60 seconds to give folks a chance to read the questions and provide their feedback.
1:55
So we'll do that right now.
2:24
OK.
2:24
I think we have closed the poll, so if you didn't get a chance to chime in, sorry about that.
2:30
But we do appreciate those who have participated.
2:35
Carolyn, if you want, can you share the poll results with folks, or is that already happening?
2:42
Maybe it's already happening.
2:44
That should be happening.
2:48
You should be able to see it.
2:49
OK, yeah, there we go.
2:50
OK.
2:51
So: how well are you keeping up with AI trends, and what kind of AI are you using?
2:56
This is good feedback.
2:57
We appreciate it.
2:59
Some mixed results here.
3:00
So it wasn't really clear to us going into this what we would see, but it looks like it's a pretty mixed bag.
3:08
And so hopefully today, as we go through some of this content, at least a little bit of what you know will shift.
3:18
So this first slide up here that we're looking at, we've consistently presented over the last several webinars.
3:25
Like I said, this is the 4th or 5th in a series that we've done on artificial intelligence.
3:30
We've consistently presented this slide.
3:32
It's sort of a "hey, here's where we are in the world of AI."
3:37
Generative AI is the thing that's making the most noise.
3:40
So folks hear about ChatGPT, or they hear about Midjourney or Stable Diffusion.
3:47
And so just a quick reminder of the difference here between generative AI and AGI: generative AI is singularly focused on a specific task.
4:06
So ChatGPT is obviously involved in dialogue, creative writing and those kinds of things.
4:06
AGI is really the next big milestone. I often give the analogy that it's like the computer on Star Trek, right?
4:17
You just ask a question, it doesn't really matter what discipline of knowledge that's in question, it just gives you the answer.
4:25
And so I think that piece is fast approaching. Jim,
4:31
I know you keep up with this landscape somewhat aggressively, at least as much as you can.
4:37
That's the trick, right.
4:40
But when you read the tea leaves, you know, we've placed the "you are here" bubble sort of in between, because it seems like it's probably imminent.
4:49
What are you seeing out there, Jim?
4:49
Yeah, and I think it's difficult to decide what innovation may come along that changes the pillars completely.
4:57
But we're basically finding that people are pretty new to this, because we're obviously building global-scale IAM solutions and doing planning, strategy and compliance planning for a lot of enterprise clients.
5:18
They're just kind of I guess at the tipping point, right.
5:22
With a lot of folks we're starting to get into some posture and policy development.
5:27
We're talking a little bit about, you know, some controls in place, but I don't think in a lot of cases folks have dove in as deep as we will today with regard to how this will affect policy.
5:45
You know, so I don't want to open Pandora's box and let the cow out of the barn already, but we're looking at the pools, the lakes, the ponds, understanding the data sets very well, looking at impact analysis and other things.
6:01
So we're sort of at the advent, I guess, and I think the polls sort of backed that up a little bit. People are getting into it, but they're not headlong into it yet, because of a lot of the other day-to-day issues we all have to deal with.
6:19
Yeah, I I'm glad you said that.
6:21
I was actually going to ask your opinion on what you thought.
6:24
I mean, you deal with a lot. Our focus here at Bravura Security is very narrowly focused on a particular segment of cybersecurity.
6:34
I know that at intiGrow you do a bit more in terms of breadth. And so what you're saying is that it's a mixed bag; folks are coming to you with varying levels of knowledge about AI.
6:49
Yeah.
6:50
I mean, it might be anything from "what should the generative ChatGPT generation mean to us" on out to "holy cow, some of my developers are getting (and Ian's smiling as I say this) AI assistants and other components that we're woefully worried about."
7:08
Yeah, everybody's in a little different place, so I'm glad you brought that up.
7:12
Let's not get ahead of ourselves, 'cause we are definitely going down there.
7:17
Carolyn, if you would, go ahead and move to the next slide real quick.
7:21
I'll talk to this, but at some point I'm gonna hand this over to Ian to wrap up a conversation on what we're seeing here in front of us.
7:28
So I would say at least four or five years ago I was talking to folks about the landscape of identities.
7:36
You know, 15 years ago, 20 years ago, organizations were really, I would say, not even managing their employee population very well.
7:47
There was a fraction of them that were doing it.
7:50
And really, things like contractor and contingent workforce were an afterthought, if they were a thought at all.
7:59
And so as time has evolved, it's my perception that organizations aren't really keeping up with the various kinds of identities that you should be concerned about.
8:14
And governing.
8:16
I am somewhat optimistic that we're seeing folks now start talking about non-human accounts, service accounts, things that applications tend to be powered by.
8:28
So that's good.
8:29
But I get the sense that they're still behind quite a bit.
8:34
And so Haley, Carolyn, if you'll jump over to the next slide. I want to ask Ian: if we see folks not dealing with these service and non-human accounts, what about an AI assistant?
8:51
So that's going to be largely the topic of conversation today.
8:53
How is an AI assistant different than something like a non-human service account? Can you maybe unpack that in just a few words here?
9:05
Yeah, for sure.
9:06
Because also this is where AI is being introduced to many different things and so some of them can be handled under, you know, our existing practices and whatnot.
9:17
But there's also a kind of a new distinct area that I'll get into here now.
9:22
So again, IAM platforms have historically tended to focus on your consumers, partners, employees, you know, the people that drive our businesses forward here.
9:36
But more and more, as people are looking at modern risks and modern attacks, they're seeing that service accounts are key weak points, and those are becoming a lot more visible.
9:49
You know, we're trying to improve good governance for the systems as a whole that they are tasked with supporting.
9:58
So service accounts have sort of been kind of murky, and now they're becoming a lot more visible.
10:04
And again, devices and endpoints: making sure the right people have access to those administrative interfaces, the ones that control all the devices that underpin how your company operates.
10:17
In the last number of years, those two have been getting a lot more attention due to the level of access that they have in your organizations.
10:28
But what really sets these apart is that AI is being introduced to your devices.
10:33
Pretty soon we're going to have AIs operating for us on our laptops, definitely on our phones.
10:39
You're already seeing that with the work that Google and Apple and Microsoft are doing.
10:43
They're pushing AIs onto your devices and onto your endpoints.
10:48
And many software applications are adding AI support to them.
10:50
So that's your services, and that's the stuff that often can be covered in many ways by your existing practices.
10:58
These tools tend to work with the data that is in those devices and for purposes specific to those devices or those applications.
11:07
Where AI assistants are a little different, though, is we as people we work across a range of different applications and a range of different devices.
11:18
What makes us valuable is that ability to bridge the information and bridge the tasks across those different systems and applications.
11:29
That's why we're needed; otherwise the applications would have just handled this stuff. But us as humans, we're feeling taxed.
11:36
We're feeling stretched.
11:38
We all want to do a few more things than you know what the day might allow for.
11:42
And also there's some areas where we might not be you know, all that comfortable and maybe it's something new to us.
11:48
And that's where many people are trying to see how assistants can help us fill in those gaps, fill in the weak spots that we have, or the things that we don't have time for.
12:02
And we're just starting to see this with some of the assistants that people are creating as chatbots for topics at work: helping to create marketing content, helping people to use your products better, helping staff to reference documentation easier.
12:19
Those are kind of people just putting their toe in the water, so to speak here.
12:23
But there's also a set of assistants that are just on the horizon, and some of us have started to use them, such as coding bots and troubleshooting bots, bots that really help us with our primary responsibility in a company.
12:41
And for those bots to excel, they need access to the information that spans the applications and devices.
12:50
In some ways, those bots need to have a memory that starts to look a lot like our memories, and that's where it becomes easier, as we don't have to remember quite so much and aren't tasked with being great across the board.
13:04
We can start to leverage our systems to help us out here more and more.
13:07
And that's where these assistants kind of are a distinct thing between people and software.
13:15
They're not like the software applications and devices that we've used to date.
13:19
They're a little different.
13:20
They're a little bit more like an artificial helper.
13:25
They're going to help us to do a number of different things.
13:28
Many, many people need a way of doing research, to be able to find out what is happening in their industry, or to research a topic or a problem at hand.
13:37
You know, people across industries have that kind of challenge, and certainly with, say, higher eds, that's a unique challenge, where they exist largely to do research.
13:49
And so again, how can those bots be responsible when doing research on both public data sets, but also, more and more, on proprietary data sets?
13:58
And just like an assistant, they're going to generate findings, they're going to help you summarize things, they're going to create a draft that you can review.
14:06
And that's something that we can really excel at.
14:10
We have expertise.
14:11
We know what we're trying to do as part of the institutions or the businesses that we work for.
14:17
We need that help, having those assistants to help us with some of the more mundane tasks, or at least the things that we might not have time for, or sometimes we're just not very good at.
14:27
And that's where one of the things that we'll be talking about through this webinar is this kind of new entity that needs to be able to see what we see and to access the data that we access.
14:40
That's just a little different from the applications and devices we've been using to date, because they start to approximate what our memories are.
14:50
And so it's kind of exciting times here, but also it starts to raise a whole lot of thorny questions.
14:56
I think we're going to get into a lot more of that in this presentation here, Bryan.
14:59
Yeah, let's hold the thorny questions for just a minute.
15:03
What I want to do is think back a little bit to the previous slide for just a minute with Jim here.
15:10
So, Ian, thanks for kind of unpacking.
15:13
You know what that AI assistant looks like in terms of capabilities.
15:19
Jim, I want to ask you this question: right before we introduced AI assistants, I was talking about the service accounts and the non-human accounts in the more traditional model. In your experience, how do you think organizations are handling that today?
15:37
In other words the question is, are they handling it well today?
15:41
And then the second question I want to springboard off and ask is a yes or no:
15:48
do you think they're prepared for this new identity type, this AI assistant?
15:55
So, go ahead; you've got the answer there.
15:58
That's kind of a loaded one.
16:00
Yeah.
16:00
Yeah.
16:01
So service accounts are often dealt with in what I would call kind of fragmented areas.
16:11
It's almost like people that manage multi-cloud environments.
16:15
We have the technology, obviously, to automate provisioning and deprovisioning, joiner-mover-leaver, across different cloud accounts, for example.
16:25
But oftentimes, let me just say most of the time, nearly all the time, people don't go the extra mile to still look at an authoritative source, you know, kind of a directory, to help manage all of that.
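To make that concrete, here is a minimal Python sketch of the kind of reconciliation an authoritative source enables: comparing the accounts found in an application against an authoritative identity feed to surface accounts with no owner of record. The data sources and names are hypothetical, purely for illustration.

```python
# Minimal sketch: reconcile application accounts against an authoritative
# source (e.g., an HR feed) to surface accounts with no owner of record.
def find_orphans(app_accounts: set[str], authoritative_ids: set[str]) -> set[str]:
    """Accounts that exist in an application but not in the authoritative source."""
    return app_accounts - authoritative_ids

hr_feed = {"jsmith", "akhan", "mlopez"}        # authoritative identities (example data)
app_accounts = {"jsmith", "akhan", "svc-etl"}  # accounts found in one application

print(find_orphans(app_accounts, hr_feed))  # {'svc-etl'}: no authoritative owner
```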
16:41
And it can get a little fragmented depending on what cloud service you're logging into. Unless, you know, sometimes the HR system, for example, say somebody's using SAP or an ERP, that may be a little bit more of an authoritative source.
16:57
But for a lot of the service accounts for standard applications like Workday or others, you know, the identity strategy is a little fragmented.
17:06
And that's concerning because it's about to get a whole lot more complex.
17:12
And you know, at the end of the day, if you go, let's say, all the way to the goal line and you do the automation so that you can automatically provision and deprovision users all the way through to the account stakeholder on the other side of the cloud,
17:33
you know, bravo, you've done well.
17:35
And if you're still using an authoritative source to do all of your governance within the organization, then that's an awesome thing, right?
17:42
You're doing access recertification, or attestation as we call it, and everything else.
17:47
But for most people, that is not the case.
17:50
And as I might have tipped off in the first snippet of information earlier, I am woefully afraid of these AI assistants.
18:00
And I think our clients and folks around the world really need to be thinking about the potential that they have because we are not governing them.
18:10
And I'm not aware of any customer that is governing them today.
18:14
And I don't even think people know where they all are, especially in the hands of a lot of technologists, developers and, you know, focused sysadmins and others out there that are starting to use the technology to automate their daily lives.
18:29
And that can be, as Ian stated earlier, anything from auto-generating code to upgrading your firmware or anything in your OS,
18:41
which is basically going to expose a lot of PII, from your machine to the network, from the company to your machine, to the external Internet, to any number of use cases that you can throw out there that are potentially very dangerous.
18:58
So, OK, let me summarize what I think I heard.
19:03
I think I heard that for service accounts and non-human accounts in the traditional sense, it's fragmented.
19:10
It would be fair to say most organizations still aren't handling that well.
19:15
And then to add to that, you're really concerned that folks are ill-prepared for things like AI assistants. Is that characterizing it fairly?
19:26
I think it, it depends on the company.
19:28
You know, if you have a new or newer organization, and you decide: we don't have legacy applications,
19:34
I don't have 45 AD forests, I don't have you know all of these existing legacy assets, then life becomes a lot more manageable and simpler.
19:45
But a lot of larger organizations do have a lot of the legacy components that they have to think about, which basically turns what I would call a star schema into a snowflake, right?
19:57
We have dependencies all over the place.
20:01
You know, we're not sure what we can even get rid of.
20:05
You know, when you're kind of in that mode, it makes things extremely difficult to manage, and it makes it difficult even just to keep, as we say, standard orphans out of the LDAP.
20:15
Because if I'm not sure if I need somebody, it may be questioned at the audit.
20:21
But I'm going to leave them there just in case, right.
20:24
Just so I don't, you know, take a hit on that. And we do see lots of behavior like that.
20:30
Also, it's an ongoing challenge to always continue to redo role modeling and to do attribute development if you have new categories and classifications in your organization, or stuff like that.
20:45
So it becomes very difficult to continually audit, analyze and recreate your strategy as you go along.
20:54
And for a lot of folks, their identity assets may be a decade old or so. And it's difficult: people leave organizations; they may have had a thought leader who was working on something, and they're not able to administer it to that degree anymore, or what have you.
21:13
But yeah, it's a challenge, and it's going to be an ongoing challenge as far as we see it currently.
21:21
And there are exceptions, right.
21:22
There are outstanding, best-practice-focused, best-of-breed organizations.
21:26
But for the larger part, it's extremely difficult to keep up, and they just keep growing new cloud share assets and keep signing up for new applications, widening the playing field instead of narrowing it to get control of it.
21:43
So I would definitely agree with that.
21:49
Haley, Carolyn, if you would just go ahead and jump us over onto slide seven.
21:54
Yeah, so Ian characterized it.
21:58
So Jim, I think you've made it clear that most folks aren't prepared for this.
22:03
Ian introduced us to kind of the concept of an AI assistant.
22:09
This is just a thought-provoking slide here that's intended to give you an idea of what those AI assistants look like, the different kinds of verticals where you're going to find this stuff.
22:19
And we're ultimately just wanting here to tease out the risks we'll talk about here in a minute.
22:26
But again, just start thinking about what it means when you have something like a legal assistant as an AI assistant: what kinds of things are they going to get access to that are really quite sensitive?
22:40
So I won't read the slide for you, but I think you get the idea that, in order for these AI assistants to be effective, they have to, just like a human being, be granted access to data that is oftentimes super confidential in nature.
22:58
If you want to go ahead and skip on over to the next slide, we'll talk about that a little bit.
23:06
So again, these AI assistants are going to need access to data sometimes broad in nature, sometimes sensitive in nature.
23:16
Ian, Jim, I'll let either one of y'all jump in.
23:18
I know we talked about this slide over the last couple of weeks pretty exhaustively.
23:24
And so if one of you wants to just jump in and sort of unpack, what's the danger here?
23:29
What's the real issue with all of this?
23:39
I can touch on it and add color.
23:41
So one of the things that I've seen from a number of the groups that I've been talking to here is that they start off with: OK, I can turn on the bot, and I need to give it some of my data, some of my documents, or access to an API.
23:58
And so then, you know, the natural question is: OK, I'm going to give it access to everything that I have access to.
24:06
But then rapidly they say: oh, I want to share this so that somebody else can use it, and I don't have access to that other area of the data.
24:14
So then they're like, well, why don't we just add more data to the AI?
24:18
We'll just give it more, and rapidly, if you're not careful here, your AIs start to have access to way more data than what your employees have access to.
24:29
There's an inversion that develops, and that's certainly far from a zero-trust or best-practice style approach, just giving more and more data. In effect, if you don't trust your employees with it, you really should not trust your AIs with it. And trust is probably the wrong word.
24:48
You want to ensure that people have access to the data that they need for the job that's at hand, so that they're not becoming a risk in terms of being exploited to divulge that data.
25:01
The AIs certainly can be tricked and exploited into divulging data.
25:06
Seems like every week there are new attacks that people frame about getting them to make content; we're at very, very early days there.
25:13
And so we really have to think carefully about the kind of data that we give those AIs.
25:19
And when you start thinking about the data and the amount of scope that they have, you start naturally thinking: wow, I kind of need to provision them and permission them like a person.
25:29
I need to apply the same rules that I apply to my employees.
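As a rough illustration of the inversion Ian describes, a governance check might assert that an assistant's entitlements never exceed its owner's. This is a hypothetical sketch with invented entitlement names, not a description of any product's behavior.

```python
# Sketch: an AI assistant should not hold entitlements its human owner lacks.
owner_entitlements = {"crm:read", "reports:read", "reports:write"}  # example data
assistant_entitlements = {"crm:read", "reports:read", "hr:read"}    # example data

excess = assistant_entitlements - owner_entitlements
if excess:
    # Flag for review or revocation: the assistant has drifted past its owner.
    print(f"Assistant exceeds owner's access: {sorted(excess)}")
```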
25:34
And then also, sometimes there's stuff that might be sensitive enough that the AI shouldn't have standing access to that documentation, in case it gets exploited.
25:43
It might only need it for, say, the quarterly report that it's assisting with creating, rather than having standing access.
25:51
So definitely think about that.
25:53
And then also, as institutions and organizations work more and more together on things, there's a natural need for sharing information, such as research institutions needing to share information with each other, or even sales departments sharing their customer contact lists for coordinated go-to-market strategies.
26:12
These are things that you definitely would like to have AI assistance with.
26:16
But what is the amount of data that you're giving them?
26:21
Because for the AIs, the standard kinds of controls around employment agreements and disclosure and acceptable use,
26:28
and, you know, the hammers that are criminal and civil liability, those don't apply to AIs.
26:35
And that's where we as people need to put in place suitable judgment,
26:39
because those penalties will apply to us as the people who are managing and deploying these AIs.
26:46
But again, the AIs, these don't affect them, and that is going to be a key challenge for us.
26:53
We have to make sure that we're not giving them too much data, because that liability transitions onto us as the people managing and deploying them.
27:02
Otherwise we're just creating a problem for ourselves. Thoughts, Jim?
Yeah, that's very well said.
27:11
The keyword here is accountability to me.
27:14
You know, we're looking at SoD and we're looking at compliance issues that we have to deal with, and anybody that's on here may have heard of GDPR or dealt with it in the past.
27:23
Now the US has taken on its own version.
27:26
India, Japan, many countries around the world are going through kind of the same PII accountability steps.
27:35
Unless we go forward and do something like Ian said, making an identity role manageable, right, and giving it true access governance privileges, then we're gonna have a problem.
27:54
Because AIs right now govern themselves, like it or not.
27:54
And they kind of give themselves access rights over time to things they think they need to fulfill their mission regardless of what we think.
28:04
So at the end of the day, turning those into identities and creating that accountability seems to be the only non-insane way to deal with this.
28:17
Because the people that create the AIs are also going to have to be responsible.
28:22
I brought it back to attestation before.
28:25
How are we going to attest, when I'm sending something to an AI, to verify that it is who it says it is, for Gramm-Leach-Bliley,
28:33
for HIPAA, for NERC and FERC, for whatever the issue might be?
28:38
How will it reply if a human is not responsible for making it reply, because that person is accountable for their AIs?
28:45
So you know, this is a tipping-point issue right now, and I think it's going to get more and more complex over the course of time, for sure.
28:55
But you know, this is part of the challenge I think that we face.
29:01
And I think, as we've all discussed, the AI assistant part of that is really scary, because the PII and corporate data or organizational data or compliance-related data may be on both sides.
29:17
It may be legible to anybody even when it needs to be masked legally.
29:21
It may be in any number of states that can be really difficult for us to kind of have to deal with.
29:27
So that's kind of my two cents, I guess.
29:31
Yeah.
29:31
Thanks Jim.
29:31
I'll just throw one more into the pail of this conversation here. I think one of the things that is tempting with these AI assistants, because the reality is, when they're used, they're extremely powerful and have sort of a multiplier effect on personal productivity.
29:53
And the temptation right, is to allow them to do more, feed them more.
30:00
And inherently, what you don't realize is you've violated the underlying segregation of data and access that a human being would normally think twice about, because of these policies and this accountability.
30:21
Haley, if you would, just drop us on to the next slide, 'cause we're gonna shift a bit.
30:27
So, with that temptation, and with the blurred lines of not being able to hold someone accountable, not being able to truly vet whether an AI is who they claim to be, the right answer tends to be to treat your AI assistants as if they are actual persons.
30:55
So I think we'll do paper, rock, scissors here sort of.
31:00
And I'll let Jim go first because I think Ian started on that.
31:03
But Jim, do you want to kind of talk to that concept here of right access, right people, right time?
31:09
Yeah.
31:09
I mean, I think it's probably a little redundant from what I was just saying, but it all comes down to: we need to create an entity, again, for accountability.
31:23
So, you know, we can even temporarily provision an AI; it's not unlike a seasonal employee perhaps, or a contractor.
31:33
But there are times that we will probably utilize that AI identity, and times we won't, right?
31:43
So this accountability, and I think people will hear us say this over and over again throughout this discussion, needs to be codified in a way that strongly identifies this personal administrative account, basically this AI being that's out there acting on our behalf, also because we have to look at what data it has access to, as you stated.
32:11
So Ian, sorry, I don't know if you wanted to add to that.
32:14
But yeah, a common pattern we tend to see with a number of our customers is they create personal administrative accounts for their system administrators, and that way they have clear accountability and clear auditability about when those administrative rights are being used.
32:34
But also, as we're moving into an age of assistants helping you with things, you definitely don't want to give the assistant your personal administrative account, or at least probably not.
32:45
You might want to give them a subset, like a personal assistant account that has the right permissions for the tasks that they're assisting you with, and that kind of inverts the problem space, and it definitely makes it a little harder.
33:05
And that's for, say, you as a person, but then also projects. Having an assistant that knows the documentation for a project could help greatly with answering questions, preparing material to allow a project to continue forward, assisting with tracking progress, or working through ideas with people.
33:30
If you're bringing somebody new onto the project, those kinds of things would be very valuable.
33:34
But then it would be more of a shared resource across that project.
33:38
But in either case, you also wouldn't want those assistants to know about other projects, or to have memory from other people that they've assisted.
33:51
Because then you have bleeding out of this data, bleeding out of this information.
33:55
And again, to Bryan's point today, they just keep eating the data up.
34:04
They love the data, because that helps them excel better.
34:08
But then you have a lot of follow-up challenges about how you put the genie back in the bottle, so to speak, once you realize that data has bled out through a wide range of assistants, and now you have this structural problem. Or, if one of those is compromised, they know all your projects; they might know all of your customer log files, if you gave them access to that to help with system maintenance and everything.
34:36
So we really need to think about that scope for the purpose and the task that is at hand.
34:43
And then ensure the accountability: the person who has this personal assistant is going to be responsible for that personal assistant.
34:52
Just like maybe the project manager for the project that has an assistant is going to be responsible for that assistant as well.
34:59
And making some of the decision points about how much data that they should have, just like the people who are involved in here.
35:05
So it's pretty thought provoking.
35:08
Yeah.
35:08
One of my biggest fears is that if automation works well enough for folks and they can get a lot of administrative tasks done, they'll give it some sort of administrative privilege, almost like a PAM kind of scenario, which would be insanity.
35:24
But I could definitely see it happening, and then, you know, bar the door, because we'll basically be giving things access rights at the HAL 9000, 2001: A Space Odyssey level.
35:38
It'd be very convenient to give them all of that access.
35:41
It would be: they'll do all my admin for me, and all the updates and patching. Convenience versus security, and that's going to be a very thought-provoking balance that we are going to have to strike in 2024 here.
35:53
But if you're listening, don't do it.
35:56
Yeah, I actually appreciate y'all saying this, and then we'll move on here.
36:00
But it goes back to that conversation a minute ago where I talked about the temptation, right?
36:07
And I think we all can understand this, because we've all said it: I wish I could clone myself, right?
36:12
That's really what we're looking to do: multiply our efforts. And if I could clone myself, that means there's another individual, and that same individual should be bounded by the same constraints that I'm bound by. In order to wrap your mind around how you should treat an AI assistant, you really have to think of them in terms of a human being.
36:36
If you will, Haley, Carolyn, advance us to the next slide.
36:40
With that in mind of, you know, hey, like we need to treat AI assistants like human beings.
36:45
Understand that in some ways maybe they're, I don't know, a dumber version of ourselves, if I could say it that way.
36:55
But with any kind of identity governance, human being or otherwise, a lot of that governing is always going to begin with this conversation of analyzing your risk.
37:13
And that's really kind of the intention of this slide is to talk about that.
37:17
I'll start with you, Jim, on this one.
37:19
You want to talk about this slide just a hair?
37:23
Yeah, I'm a big believer in DPIA, right.
37:28
I think if you look at ISACA, if you look at CISA, if you look at a lot of our governing and standard-bearing organizations out there, they say things like: anyone processing personal data has a duty to assess the risk involved, right?
37:44
And as Ian stated earlier, these AIs are just gonna keep seeking and swallowing more data all the time, ad infinitum, forever.
37:54
Amen.
37:55
Right.
37:56
So if an enterprise believes that a planned process is likely to pose a high risk to somebody's personal rights and freedoms, basically in every compliance area it has a duty to conduct a DPIA.
38:10
So if you're not familiar with that, we can certainly talk offline for anybody that might be confused by what a data protection impact assessment might look like.
38:19
But you know, there is a requirement to assess the impact on somebody's personal privacy by considering all personal details in cases where data are used to automate decision-making, or in other areas.
38:38
So this is going to be the case on a very large scale, and I think it's going to be very difficult for folks to monitor over the course of time.
38:52
And we're going to need root-cause understanding to get to a point where we can say: here's what we're giving access to, and it could potentially involve this.
39:03
But we need to be thinking about this, and the AI product companies are actually going through these types of discussions in some cases before customers are even allowed to buy,
39:16
depending on how large the use case is, or something like that. Because ethics is a keyword: if you do not understand the impact that this AI will potentially have in the use case in which you're giving it access, I suggest you think again, or maybe take another breath and go back through what the potential risks are that are out there.
39:41
Yeah, absolutely.
39:42
Thanks Jim.
39:44
I'm gonna move us along to the next slide here just for the sake of time.
39:48
I think we could probably talk about the data risk side of things for a while.
39:52
But Carolyn, if you want to move us on to the next slide, let's focus a little bit on that big, bad, scary word, compliance, and regulation, and kind of move the conversation over to: what do we do about this?
40:13
So we've kind of hinted at some of the things, but let's get a little more specific here.
40:18
Ian, I'm going to turn this one over to you to talk about some of these bullets. At a high level, we just floated the idea that you should be treating these AI assistants like human beings.
40:31
This is sort of the practical side of that, Ian.
40:33
If you want to go ahead and maybe cover this for us, that'd be great.
40:37
Yeah, for sure.
40:38
So with these AIs, obviously you're going to have to control the level of access that they have to your data and to your APIs.
40:48
Your APIs, those are the AI's eyes and ears to the world: APIs and the data that you feed it.
40:57
And so it's going to definitely require careful thought around this assistant, how much access should it have.
41:05
And we're going to have to make sure that that access is appropriate for the purpose and the role that it has.
41:11
And if you're uncertain, absolutely assess the data impact here.
41:16
And then these AIs, just like service accounts, if you don't track them, they're going to get very murky.
41:24
And then in 10-15 years you're going to be like, what is this?
41:27
Why is it here?
41:29
Do I still need it? Will I break anything?
41:31
That's where getting ahead of the curve, at least having a proper inventory of these items, being able to periodically review them, and making sure that they have a clear owner, somebody responsible for them, is going to, I think, be absolutely critical here, to make sure that you're not leaking access, or that you don't have these hidden little areas that an attacker could then ask questions of and exploit.
41:57
And as we get tempted by the convenience that these assistants can give, establishing clear separation of duties about what these AIs can actually act upon, what they can do as an assistant of ours, versus what needs us to approve and review before action is taken, is going to come really, really quickly.
42:19
There's tools right now that we're experimenting with that I'm personally using that have a rudimentary form of this where it's basically asking you, can I do this before doing this command?
42:31
And yes, right now, I say yes.
42:34
But its audit logging is nonexistent.
42:36
Its accountability is nonexistent.
42:39
How do you really tell the difference between the decisions I made versus the decisions that the assistant made?
42:44
Those are going to be key things here, and it's going to really test us to think through how we separate this.
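A rough sketch of closing that gap might look like the following: every proposed assistant action passes through a human approval gate, and the decision is written to an audit log that records who decided. This is illustrative only; the tools Ian mentions are not named, and the action format and log layout here are invented.

```python
# Sketch: gate each assistant action behind human approval and audit the decision.
import datetime
import json

AUDIT_LOG = "assistant_audit.jsonl"

def gated_execute(action: str, approver: str) -> bool:
    answer = input(f"Assistant wants to run {action!r}. Allow? [y/N] ")
    approved = answer.strip().lower() == "y"
    record = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "approver": approver,  # the accountable human
        "decision": "approved" if approved else "denied",
        "decided_by": "human",  # separates human decisions from assistant decisions
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return approved
```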
42:52
But also, separation of duty from a data perspective as well.
42:55
As people are creating these assistants, they're gonna wanna feed the assistants data, but you can't feed it potentially sensitive information, customer confidential stuff with PII data in it.
43:07
You might need to scrub that and you might want another assistant to do the scrubbing of that.
43:12
So that's where, again, it starts to look a lot like how people do things, where you have an assistant that is curating and creating an acceptable data set, then handing off to another assistant who will be applying that data set for a different purpose.
43:27
Those are things that I think are going to come pretty quickly for us here as we're all faced with this.
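To sketch that hand-off pattern, a scrubbing step might redact obvious PII before a data set ever reaches the downstream assistant. The two patterns below are examples only; real redaction needs far broader coverage than this.

```python
# Sketch: scrub obvious PII from a data set before handing it to another assistant.
import re

PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def scrub(text: str) -> str:
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

raw = "Contact jane.doe@example.com, SSN 123-45-6789, about the Q3 report."
print(scrub(raw))  # Contact [EMAIL], SSN [SSN], about the Q3 report.
```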
43:34
And that's where, again, coming back and thinking of them very similarly to how people do this, how we compartmentalize, and how we make sure that people remain responsible and accountable, will, I think, be some of those truisms that we fall back on here with the AIs and help us as a guiding light.
43:53
And then also, with these assistants, if they have standing access, that's a potential problem.
44:00
Just like for people like if they have access to a data set that they only need for one or two days out of the year, why do they have standing access to that data?
44:10
That could be exploited; they could be coerced, they could be socially engineered.
44:18
And that is something where if they just don't have access, then they can't be attacked like that.
44:24
And I think that's where these privileges are going to be a key topic.
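One way to picture the alternative to standing access is a time-bound grant that simply lapses unless re-approved. This is a toy sketch with invented entitlement names, not a reference to any specific product feature.

```python
# Sketch: replace standing access with grants that expire on their own.
from datetime import datetime, timedelta, timezone

grants: dict[str, datetime] = {}  # entitlement -> expiry time

def grant(entitlement: str, days: int) -> None:
    grants[entitlement] = datetime.now(timezone.utc) + timedelta(days=days)

def has_access(entitlement: str) -> bool:
    expiry = grants.get(entitlement)
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant("finance:quarterly-report", days=2)      # only for the task at hand
print(has_access("finance:quarterly-report"))  # True until the grant lapses
```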
44:31
And then in spades, as organizations, as higher eds, as enterprises need to share data with each other more and more, to drive our research, our capabilities, our products forward, and also our sales programs.
44:47
We're going to want to share information, and how do you make sure that the other side is responsibly handling that data? How do you redact the information so it's acceptable for somebody else to be given it, without putting you at risk?
45:02
All key questions that right now have been kind of murky, but I think are going to come to the forefront very, very quickly as people see the real business and organizational value that these assistants bring in terms of cost savings and efficiencies.
45:19
You can't ignore that value; it's just, how do you apply this at scale, securely? These are going to be fun questions for this next year.
45:30
Absolutely.
45:31
Thank you, Ian.
45:31
Jim, do you have any final comments on the slide before we move on?
45:35
No, I just wanted to call out one term that Ian used in the beginning of this that I think is very important.
45:40
Log event management will not be available, auditable, monitorable, or even capable of feeding into any kind of SIEM, or however you aggregate your data for compliance or risk management reasons.
45:57
So I think that's just an important punchline to remember.
46:02
Yep.
46:03
We're at the very early days, and there's some improvement these tools, these assistants, are going to require for us to really trust them and hold them accountable.
46:10
Yeah, absolutely.
46:11
In fact, having said that, Carolyn, Haley, if you want to move us on to the next slide.
46:16
So we've been sort of focusing as we go from slide to slide a little bit more granularly.
46:22
Ian just said the key thing that I wanted to hear him say, which was talking about tools to do exactly what we were just speaking to on the previous slide.
46:33
So Ian, you want to dive into it a little bit more and talk to folks about what we at Bravura can do and bring to bear on this?
46:42
Absolutely.
46:42
I think a lot of the same fundamentals that apply to your employees, your people, are exactly the same things we need to apply to these AI systems.
46:52
Where are they? Whose account is it? Who manages them? What level of access do they have? Do you still need them, and are they compliant with your policies?
47:03
Those fundamentals are going to be needed here.
47:08
And then the key question is, OK, how do I find them?
47:10
How do I visualize them?
47:12
How do I make good decisions, and how do I alert on problems?
47:15
That's where, for Bravura Cloud, we're introducing a lot of these fundamentals.
47:19
And so we just brought in people:
47:20
for example, in our December release we're introducing the concept of a person. And it's a great question how we really want to bring in AIs in the next quarter or two.
47:32
Should we be bringing them in as a type of non-human person, as much sense as that makes, or do we need to carve them out as their own independent thing, with their own attributes, their own descriptions, their own characteristics of self that you're going to need there?
47:52
And it's very timely too, because the SCIM standard, which is now being actively worked on again, they're actively having these conversations around how to bring in devices and IoT.
48:05
And again, it could be a very timely thing, depending on what people's thoughts are here, about whether we should be starting to think of AIs and service accounts as top-level entities that need to be represented in our identity standards, so that people can get the right tools in place quickly and cost-effectively,
48:25
so they can make sure that this is being handled now, rather than in three years when the cat's out of the bag and we're all struggling to get control again.
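To give a feel for what that standards conversation is about, the snippet below imagines an AI assistant expressed as a SCIM User with a made-up extension carrying an owner and a purpose. SCIM defines no agreed schema for AI identities today, so everything under the `urn:example` namespace is invented for illustration.

```python
# Hypothetical only: an AI assistant modeled as a SCIM User with an
# invented extension schema. SCIM defines no such schema today.
ai_assistant = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:example:params:scim:schemas:extension:ai:1.0:Assistant",  # invented
    ],
    "userName": "ai-legal-assistant-01",
    "active": True,
    "urn:example:params:scim:schemas:extension:ai:1.0:Assistant": {
        "owner": "jsmith",             # the accountable human
        "purpose": "contract review",  # scope of the assistant's task
        "reviewCycle": "quarterly",    # periodic recertification
    },
}
```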
48:36
And again, there's the concept of how you continually make sure that your AIs are compliant with your policies.
48:45
Again, a policy could be that every AI assistant has an owner.
48:49
If AI assistants use passwords to talk to systems, make sure that the passwords of those accounts are being rotated periodically, and make sure that you certify their existence periodically.
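As one example of such a policy check, a periodic job could flag assistant accounts whose credentials have aged past a rotation window. The account records and the 90-day window below are illustrative assumptions, not a description of any vendor's implementation.

```python
# Sketch: flag AI assistant accounts whose passwords are overdue for rotation.
from datetime import datetime, timedelta, timezone

MAX_PASSWORD_AGE = timedelta(days=90)  # illustrative policy window

accounts = [
    {"name": "ai-helpdesk-bot", "owner": "mlopez",
     "last_rotated": datetime(2023, 9, 1, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for acct in accounts:
    if now - acct["last_rotated"] > MAX_PASSWORD_AGE:
        print(f"Rotate credentials for {acct['name']} (owner: {acct['owner']})")
```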
48:59
And again, if it's a sensitive thing, like an AI that's assisting you with a production environment, as much as that might be a little scary (and I'm sure a few of us are going to push a few boundaries there, for better or for worse), again, treat them just like a person.
49:17
Would you entrust a person with that level of access?
49:19
If the answer to that is no, then remove that access from the AI.
49:23
And if the answer is that you would entrust a person with it, you still need to have a serious second thought about whether or not the AI can be trusted with that, due to the challenge around accountability, or who you transfer that accountability onto.
49:36
Who owns that AI? Because they would have to take on that accountability if they're to deploy this.
49:42
And so that's where I think there are some real good opportunities here, and something to think about.
49:50
Also, where should these AIs be tracked?
49:54
Should they be in your LDAP?
49:56
Should they be in a directory store?
49:58
Should they be tracked in a database?
50:02
And again, how do you give them accounts or access to the different systems that they're going to need to assist you with?
50:11
And how do you track third-party AIs that you've given access to, similar to a contractor or a supplier or a research partner? How do you track them as entities that are affiliated with your organization?
50:25
Those are going to be fun ones to talk through, I think quite a bit, in the next 6 to 12 months, because this is going to come pretty quickly, I think, for a number of us.
50:36
So absolutely, absolutely.
50:41
Jim, did you wanna chime in at all about tool sets or compliance, or should we move toward Q&A?
I think we could probably move toward Q&A.
50:52
The one thing I would say is, if anybody needs a second blush: this is a very complex process that Ian was just talking about, right?
51:01
It could involve relooking at your role modeling.
51:04
It could involve looking at how your attributes are attributed to your identities and your LDAP structures, right?
51:14
It could be, you know, your ACEs, your ACLs may be a bit challenging right now.
51:20
Your access control lists are kind of used to a certain kind of thinking.
51:24
This will require change, absolutely.
51:28
You know, I think that's the thing: it's not a situation where, as AI grows, we do nothing. We can't do nothing.
51:35
We have to do kind of whatever it takes to manage effectively.
51:40
And as Ian said, there's great power here for self-healing and to make us ever better, but there's also great power to get way out of control and cause extreme danger.
51:50
So, you know, that's what we do with the little posture and policy assessments and help in this realm.
51:59
So that's really all I wanted to add.
52:02
All right.
52:02
Thanks, Jim.
52:04
Haley, if you want to move on to the next slide.
52:06
So we started this conversation today by sort of nudging you in the direction of: what do you think you know about AI?
52:15
What do you think your current landscape in your environment looks like?
52:20
We certainly want to be there at wherever you're at in that journey.
52:24
And so one of the things that we've got for you today, this is common sense, but sometimes you know it helps to just sort of sit down and think about think about where your landscape is.
52:38
And so we've got a convenient checklist that you can use the resource tab here in in the webinar to download or you could request it via the QR code that you see on the screen.
52:50
This will capture everything that we've discussed in a bite-size format.
52:57
And again, just sort of nudge you to be thinking about where you are in your landscape.
53:03
Haley, Caroline, if you want to move on to the next slide. I'm going to do this in inverse order.
53:08
I want to talk about a demonstration.
53:11
So you saw a couple of screenshots of what we call Bravura Cloud, which is uniquely designed to surface identities in your organization and, at a glance, give you great insight into how those identities are connected to entitlements, group memberships, or affiliations in your organization.
53:37
And the punchline is that it's going to be really important to do that with these non-human AI assistants in your environment.
53:46
And that conversation about Bravura Cloud, in and of itself, would be about an hour-long conversation, and we'd love to have it with you.
53:56
So we would encourage you to reach out to us directly and ask us about how we're actually building capabilities into Bravura Cloud that are specifically designed to think through this AI assistant dilemma.
54:10
And so we would like to talk with you about that.
54:13
And then on the Integral side, Jim, if you want to wrap up and just explain to folks what it is that you're offering in terms of a pre-assessment. Yeah, it's a gratis pre-assessment, and people are starting to come out of the woodwork now. As we discuss other items and talk about how much of your technical team has AI assistant capability now, people say, "I have no idea," or, "I don't know what they're doing or what we're doing, because that's their own personal thing." But really, it's not their personal thing anymore, right?
54:45
But that data is all potentially going to be shareable on both sides.
54:49
So what we basically do is sit down and try to determine where folks are in the voyage, and help provide some guidance with regard to how they might look at posture initially.
55:04
And then if they want to take the leap of, "We're going to create enterprise policy; can you guys help us with that?", then that's a project that we do. And if people need help with things along the way after that, like the role modeling we talked about, the attribute development, or any of the strategy,
55:27
we can help them think through each individual technical to-do that's in the checklist.
55:34
So.
55:35
So that's kind of what that entails.
55:38
Again, it's complimentary to have the preliminary discussion, to, I guess, make their AI world finite and capture everything that's out there.
55:52
And it's not a bad idea to just get a grip on what you have versus what you think you have, because usually the answer is much different than you think. There are technical people doing things that they don't know are wrong or risky.
56:09
So absolutely.
56:12
Yeah, it looks like we have run a little bit long on this, Carol.
56:18
And I don't know, do we have a minute or two for Q&A? Absolutely.
56:23
Yeah.
56:24
We actually have two questions that have come through.
56:26
If anybody has any others, please send them in the chat or in the Q&A function and we will again send out this recording afterwards.
56:34
So if you have to drop, no worries; we will get you the recording to rewatch or share.
56:40
First question: what should I let our institutional resources have access to, or not?
56:48
Oh wow, that's a broad question. Jim, I'll punt that one to you.
56:54
Yeah, I think I was just going down this road: base it on the compliance requirements that you have, based on PII requirements.
57:06
You know, I'm not sure what industry you're in currently, but it's fair to assume that marketing folks may have access to ChatGPT or one of the 22 new generative AI tools that are out there.
57:24
You know, network zone segmentation isn't a bad idea, by the way, keeping that away from the rest of the house.
57:32
But I would say that if you would be willing to entertain a discussion, we can help you put context around that.
57:40
Based on your specific infrastructure topology, goals, and compliance requirements, you know, things that are managed well, including keeping a log of what people are using out there.
57:59
If you want to create, for example, an AI assistant standard that's traceable, that's not a bad idea.
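As a sketch of what a traceable AI assistant standard might look like in practice, here's a minimal Python example pairing an approved-tool allowlist with an audit log. The tool list and log fields are hypothetical placeholders.

```python
# Sketch: allowlist sanctioned AI tools and log every request, allowed or not.
import json
import logging
from datetime import datetime, timezone

APPROVED_AI_TOOLS = {"chat.openai.com"}  # your sanctioned tools would go here

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai-usage")

def record_ai_request(user: str, destination: str) -> bool:
    """Return whether the destination is approved; log the attempt either way."""
    allowed = destination in APPROVED_AI_TOOLS
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "destination": destination,
        "allowed": allowed,  # denied attempts are still logged for review
    }))
    return allowed

record_ai_request("jane.doe", "chat.openai.com")      # allowed, logged
record_ai_request("jane.doe", "unvetted-ai.example")  # denied, still logged
```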
58:07
It's difficult.
58:08
It's a difficult question.
58:11
I hate to say it, but it's kind of like the house construction question, right?
58:14
Build me a house.
58:15
Well, I'm not really sure where the bathrooms go or anything else. But if we knew a little more about the org, I think we could make very targeted, specific recommendations for you.
58:26
Yeah, that's fair.
58:28
One thing I might say here: if you're struggling to come up with this, think about the scope of the assistant as if it were a person. Name it, if that helps you relate to it, and think about the task that's at hand and whether you would be comfortable with a real employee having that level of access, just when you're gaming this out in your head about whether this could be safe and what kind of concern you would have here.
58:59
Think of it in terms of a new hire having that level of access and what degree of comfort you have there. And if it's a third party, again, think of it in terms of: what if that third party were to not have your best interests in mind?
59:12
What would happen with that data, those APIs, that you grant them access to? When in doubt, humanize the problem and see if that helps with a little bit of clarity.
59:23
Couldn't agree more.
59:25
Yep. Second question: we're already struggling to do this.
59:31
Sorry, I just closed that window.
59:32
We're already struggling to do this for identities.
59:34
How do we add AIs?
59:36
And I think your last answer probably applies here as well.
59:42
Yeah, that's where it's already a tough problem.
59:45
So if we could leverage some of the tools and practices we're trying to put into place here, that makes it a little bit more approachable.
59:53
It's just that, of course, they're not the same as people.
59:56
So there will be a few challenges here, but again, the less new stuff you introduce,
1:00:03
the less opportunity there is for mistakes or other challenges to develop here.
1:00:08
So if this is a strategy that can work here, it can help.
1:00:12
You're going to have more identities to manage, but hopefully you're still applying a lot of those consistent best practices that you've already been working towards.
1:00:21
And then adding a little bit of color and flavor when required.
1:00:26
When you're introducing something completely new to the organization, that's when it can get a little challenging at times.
1:00:33
And people need to be accountable for the AIs they create. Accountability is the
1:00:39
number one thing: if you can't name who's accountable for the assistant, then that assistant probably shouldn't exist yet.
1:00:46
So that may answer the previous question too.
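A minimal sketch of that accountability rule as a provisioning gate, assuming a simple hypothetical registry of assistants:

```python
# Sketch: no named accountable owner, no assistant.
ai_registry = [
    {"name": "deploy-helper", "owner": "jane.doe"},
    {"name": "crm-summarizer", "owner": None},  # would fail the gate
]

def provisioning_gate(assistant: dict) -> None:
    """Refuse to provision an AI assistant that has no accountable owner."""
    if not assistant.get("owner"):
        raise ValueError(
            f"AI assistant '{assistant['name']}' has no accountable owner; "
            "per the guidance above, it shouldn't exist yet."
        )

for assistant in ai_registry:
    try:
        provisioning_gate(assistant)
        print(f"{assistant['name']}: OK, owned by {assistant['owner']}")
    except ValueError as err:
        print(err)
```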
1:00:49
Actually, yeah, it definitely does.
1:00:52
OK.
1:00:52
Well, thank you for that very insightful presentation, Jim, Brian, and Ian. I know you're doing a ton of research in this area across the board.
1:01:02
So we certainly appreciate it and we appreciate the time that everybody took to join us today.
1:01:08
We welcome you to join us again in the future.
1:01:10
We will send out the recording and that checklist.
1:01:13
Please keep an eye on your inbox.
1:01:15
Thank you for joining today.
1:01:20
Thanks.