How ChatGPT & AI Affect Your Identity Strategy

ON-DEMAND WEBINAR

After the colossal LastPass data breach handed hackers a map of where to focus, organizations are starting to question the amount of trust they place in closed-source password management solutions, to ask what to look for in a new solution, and to consider what to do if their LastPass vault has been stolen.

Exposed URLs from LastPass make it easier for hackers to identify users and launch phishing attacks against them. With AI like ChatGPT accelerating, the specter of next-gen phishing looms. With more convincing phishing emails, increased speed and efficiency, and even polymorphic malware code, an uncharted danger awaits just beyond the horizon.

Who should attend this webinar?

  • Business and Cybersecurity Leaders
  • Organizations looking for LastPass alternatives

Join Bryan and Jim as they discuss:  

  • The most alarming aspects of the LastPass breach and the safeguards your organization should have already adopted or be considering
  • What your day one onboarding should look like, and how to ensure it's being done safely in light of new ChatGPT capabilities
  • How zero-knowledge password managers combined with strong phishing-resistant MFA are becoming critical to protecting your organization
 

Presenters


 

Bryan Christ

Bravura Security

Sales Engineer

Bryan specializes in security and access governance. For more than twenty years, he has focused on open-source and software development opportunities with an emphasis on project management, team leadership, and executive oversight, including experience as a VCIO in the Greater Houston area. He was recently published in Cyber Security: A Peer-Reviewed Journal.


 

Jim Skidmore

IntiGrow

Vice President, Solutions Group - Security & Cloud Consulting, Integration & Managed Services

Jim, a consultative solutions executive, helps clients implement on-premises and cloud-based SaaS solutions to achieve desired outcomes across cybersecurity, compliance and risk management, IoT, and AI. Jim has consulting experience in a variety of technical disciplines, including resolving compliance issues.

 

Review the Full Session Transcript

No time to watch the session? No problem. Take a read through the session transcript.

Carolyn Evans (00:06):

Good afternoon, everybody. Thank you for joining Bravura Security and IntiGrow today. We will just give ourselves a moment for a few more people to join and then we will get started.

(01:02):

All right, thanks again for joining Bravura Security and IntiGrow in today's webinar, How ChatGPT and AI Affect Your Identity Strategy. It's a topic that's been floating around the internet, and many people have been talking about it for a number of weeks. My name is Carolyn Evans and I'm the director of marketing here at Bravura Security. Today you will be hearing from two identity security experts: Bryan Christ, who is a senior sales engineer here with Bravura Security, and Jim Skidmore, who is the vice president of the Solutions Group of Security and Cloud Consulting at our partner IntiGrow. Today they will be talking about three key things: the most alarming aspects of the LastPass breach, with some very timely updates, and the safeguards that your organization should be considering or have adopted since learning about that breach in December and January; what your day one onboarding should look like and how to ensure it is being done safely in light of the evolution in breaches that ChatGPT and AI are causing; and finally, how zero-knowledge password managers combined with strong phishing-resistant MFA are becoming critical to protect your organization. Bryan and Jim will be available to talk about any of your questions as well, so please feel free to pop them in the chat here in this Zoom invite and we'll get to them right at the end of this webinar. Over to you, Bryan.

Bryan Christ (02:43):

Hey, thanks, Carolyn. I did want to talk about the giveaway at the Gartner Identity & Access Management Summit, which will be held March 20th through 22nd. Carolyn, did you want to add anything about the first 50 people who registered?

Carolyn Evans (03:07):

Yes, absolutely. As advertised, the first 50 people who registered were entered to win an all-access pass to the Gartner Identity & Access Management Summit in March in Grapevine, Texas. So we have a pass that we'll be giving away, we'll be doing that draw tomorrow, and we'll be notifying you via email if you are the lucky winner.

Bryan Christ (03:29):

Alright, wonderful. Thank you. If we go to the next slide: we're going to be talking about two kind of high-profile items that may seem unconnected at first blush, the LastPass breach and ChatGPT. ChatGPT is sort of stealing the limelight, but under the hood what we're really talking about is AI, a lot of it being OpenAI as the platform. As many folks know, LastPass did suffer a high-profile breach last year. Interestingly, hot off the press, there are some more details coming out about that breach. If you look at the press release that was issued by the CEO, there has been some opaqueness, a lack of clarity, around what the attackers were actually able to garner or not. Some of that we've identified here on the screen, but we've also taken the time to bring an analogy to the table: the game of Clue and how it relates to the LastPass breach. Jim, you've played Clue, right? We had a lot of fun putting this together, because some of the information on these bullets really helps an attacker. They don't get direct access to credentials, but if you think about the game of Clue, I only need a few things to be really effective at it. So Jim, what's your favorite character in Clue?

Jim Skidmore (05:28):

Gosh, that's a good question. Maybe Peacock, I would have to

Bryan Christ (05:35):

Go with, yeah. Peacock's a favorite. Colonel Mustard is a good one. But in the game of Clue, you're basically looking for a couple of items to help you narrow down how to make that educated guess, and really that's what the LastPass breach provides. Early on we learned that things like customer billing information, name, address, those kinds of things were revealed, but then it gets a little more granular: URLs associated with secrets that were stored in LastPass, and other bits of information that can be used to really laser-focus the attack. Jim, your thoughts on that?

Jim Skidmore (06:20):

Yeah, it was a lot more far-reaching, I think, than a lot of us thought originally. Turns out we have customers, friends, some very close friends actually, and industry colleagues who were involved in it. I think what we're now finding, even in the last couple of days, and I know you have some updated information on that, is that we know there were 25 million users involved. I don't think we understood the order of magnitude, or the fact that the attackers were potentially able to compromise identity roles to orchestrate any number of things, from provisioning or deprovisioning of users to call-home capabilities that are seldom seen in breaches of this type. So yeah, it was definitely really impactful.

Bryan Christ (07:22):

One of the things that I thought was really interesting, and I started to talk about this a minute ago, was the press release that the LastPass CEO issued. In the first one that I saw, the guidance was to be on high alert for phishing and social engineering. Interestingly enough, if you read the latest Verizon report, you'll find that phishing and pretexting still remain the number one form of attack. So there is quite a bit of exposure here on that front, because the data itself, again, while it doesn't directly give you the secrets themselves, really gives a would-be attacker that narrowed-down direction, just like in the game of Clue. You're just looking for that one last piece, and it really gives them all of the things they need to narrow down those phishing and pretexting types of attacks. And we'll talk a little bit more about that here in a minute. Jim, any final thoughts on this?

Jim Skidmore (08:32):

No, I think it's lessons learned, and I think we need to think about how we're securing everything. This was a development environment, obviously, this is not just a standard user, so we're getting it from all sides now. So I think it's important to be vigilant about all the attack surfaces that are out there now.

Bryan Christ (08:58):

Haley, if you want to move us on. Before we get into the content on the slide, I want to put a mental bookmark in your mind: as we were talking about the game of Clue, I'm going to come back to this gaming theme a little bit later. So I said that we're going to be looking at LastPass and then at ChatGPT, just as the darlings of the headlines right now. ChatGPT is getting a lot of attention. It's getting so much attention that when I tried to log into my account the other day, this message popped up: ChatGPT is at capacity. So folks are really starting to explore this tool. That's what it is, a tool. Just like anything, it can be great in the right hands and horrible in the wrong hands.

(09:55):

Jim and I had quite a bit of fun talking through this particular slide. I'm going to tell a little story, I think he's going to tell a little story, and I think it'll give you the idea of where this could go, both good and bad. So I was tinkering around with it the other day. I do rental properties on the side, and so I said, well, hey, what if I wanted to build a set of stair stringers? I've never done it before. Can I do it? I just asked ChatGPT to write me some instructions for building some stair stringers and gave it the rise and run, those kinds of things. And it spit out a beautiful set of instructions for building stair stringers. But then I brought it down into the technical world a little bit.

(10:44):

I'm an old C programmer, been around the block quite a bit on that front. And so I started to throw some curveballs at ChatGPT. I asked it to write me a little program that finds a square root. It did it, easy, no problem. I turned around and said, okay, let me see you do that, but I want you to allocate all the memory on the heap instead of the stack. And it did that just fine. And then I took it a little bit further and gave it something really obscure. I said, I want you to draw a circle, and I want you to use Duff's device for the iterations, and man, it knocked it out of the park. So those are some examples of some really good ways that I think tools like ChatGPT, built on top of OpenAI, are going to bring help to areas of vocation, carpentry could be one, or the programmer who's trying to figure out how to do something or needs a second check on their code. So you've got some really great productive use cases for it. But Jim, tell me about the bad stuff that you can do.
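
For readers curious what that last request looks like in practice, here is a minimal sketch, written for this page rather than taken from Bryan's actual ChatGPT session, of a circle-drawing routine in C that uses Duff's device for its iterations. The plot() function is a hypothetical stand-in for whatever graphics call a real answer might use.

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979323846

    /* plot() is a hypothetical stand-in for a real graphics call. */
    static void plot(double x, double y) {
        printf("%8.3f %8.3f\n", x, y);
    }

    /* Draw n points on a circle of radius r centered at (cx, cy),
     * using Duff's device to unroll the loop eight steps at a time. */
    static void draw_circle(double cx, double cy, double r, int n) {
        if (n <= 0)
            return;

        double step = 2.0 * PI / n;
        int i = 0;
        int rounds = (n + 7) / 8;

        /* One point per STEP; the switch jumps into the middle of the
         * do-while so the n % 8 leftover points are handled first. */
        #define STEP plot(cx + r * cos(i * step), cy + r * sin(i * step)); i++;
        switch (n % 8) {
        case 0: do { STEP
        case 7:      STEP
        case 6:      STEP
        case 5:      STEP
        case 4:      STEP
        case 3:      STEP
        case 2:      STEP
        case 1:      STEP
                } while (--rounds > 0);
        }
        #undef STEP
    }

    int main(void) {
        draw_circle(0.0, 0.0, 1.0, 20);
        return 0;
    }

Duff's device interleaves a switch with a do-while so the loop body is unrolled eight ways while still handling a count that isn't a multiple of eight; it's exactly the kind of obscure C idiom the anecdote uses to test the model.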

Jim Skidmore (12:03):

Yeah, that's a great question. So basically, when we started to get into this, we reached out to some of our partners and talked to some of their industry friends. I was kind of familiar with the OpenAI environment originally and was aware of potential spoofing that could go on with second factors of authentication, and I think that's going to be one of the bellwether discussions today: how to reduce risk along these lines now. But what we found from one of our partners was that they'd already been looking through underground chat rooms and things like that, and that with ChatGPT, and more specifically GPT-3, you don't necessarily need to create anything sophisticated to be effective. The net is they've already seen research mentioned in the forums that a lot of threat actors are discussing right now: the use of the chatbot to improve malware code, right?

(13:19):

Because we know it doesn't make syntax errors; it's kind of like a QA for the dark web. So a lot of the researchers tested the coding capabilities of the chatbot, basically to see how it could help or what it could potentially provide to hackers. And I think with each passing week, since we've been researching heavily, we're seeing a lot more capability. It's not so much that people need to create anything very sophisticated, it's mostly about how many more people, cybercriminals who were never developers, can now enter that circle and develop tools. And honestly, one of the funny things they were talking about in the forums was how interesting they thought it was from a cost-saving perspective: sophisticated threats can be created much less expensively now, because apparently savings are super important when you're writing malware or looking at ransomware attack vectors or other things.

(14:27):

So I think we're going to learn more over time, probably with each passing month since the release, about different use cases and different methodologies. But what we know, and what Bryan and I were talking about the other day, is that people who were never developers before and never created code, or malicious code for that matter, are able to now experiment and create potentially malicious payloads using simple scripts, using Excel attachments, using any number of items. So that is, right to your point, definitely one of the areas that we're seeing already, and it's already been kind of proven. I know IBM also did some research on AI-driven malware prior to this; I believe the tool they created was called DeepLocker. You might want to take a spin out there, or somebody may, but they've already kind of verified this as well, along with the underground activity that's been out there. So we keep thinking, wow, where could we be in six months? The bottom line is people are using ChatGPT right now to develop malicious code. It's underway. It's not anything anybody's waiting for. The capabilities are there, and regardless of what you sign up for when you're registering or licensing or anything else, that's not even a blocker, if you will. So that's a little of the experience that we've been seeing recently, and we're continuing to see more and more and gain more perspective from both sides.

(16:14):

I could probably throw a couple of other use cases out here, but I don't want to dominate all our time here. So

Bryan Christ (16:21):

I think we'll go on to the next slide. And Jim, you kind of hit on that. Oh, before we mentally move on, that picture of the genie that you saw: just for fun, I asked DALL·E, which is built on OpenAI, to give me a picture of a genie coming out of a lamp. So that's a piece of artwork in 10 seconds or less. You can keep going forward, I just wanted to throw that out there. So BlackBerry did a study basically covering a lot of the concerns around ChatGPT. I would expand it to say it's really AI in general, but it was all of the things that Jim was talking about with the malware. And one of the things that came out of the LastPass breach, in the most recent report, is that C2 software was deployed, along with C2 software downloaders.

(17:22):

If you look at the Verizon report, those are the second thing someone deploys when they get their foot in the door, right as they get their foot in the door and want to keep it open. And so look at the percentages of concern up here; I think these are all really, really valid: the ability for someone who's never written malware to just have ChatGPT write it for them so they can deliver those dangerous payloads. I want to talk a little bit about the phishing specifically. Think back to one of those first slides, where phishing and social engineering continue to be at the top of the list in terms of attack vectors, coupled with the fact that even LastPass themselves said, hey, the risk here is social engineering and phishing. And I think this is where ChatGPT gets really dangerous in this arena.

(18:20):

If you think about the game of Clue that we started with: if you have a set of known data about the folks you want to attack, you can feed that into ChatGPT and allow it to craft some almost personal, very believable phishing emails. Today's phishing emails, when you look at them, if you're relatively astute, you can sort of tell that they're boilerplate. But imagine if the message was personally tuned for each person it was delivering a payload to, and how powerful that can be. And again, you're putting this into the hands of those who really wouldn't normally know how to do any of this stuff. Jim, your thoughts?

Jim Skidmore (19:04):

Yeah, I agree, and I think there are some very top-of-mind use cases that come into play. Some of them are mentioned in here, but especially around two-factor authentication, I think this is going to change a lot of security policy and the way people look at their posture now. I see facial recognition being spoofable, geolocation, obviously, anything. So the bottom line is we're going to go back to our roots with regard to identity proofing and CAs and how we're looking at the world, because I don't think it's going to be very long at all before someone is going to be able to spoof their way into that 2FA arena also.

Bryan Christ (20:00):

Yeah, before we move on to the next slide, this just occurred to me yesterday. I was thinking about how ChatGPT could improve the landscape for the hacker in terms of phishing and social engineering, obviously better-crafted emails and a larger volume. But there's actually a really interesting video on YouTube from OpenAI where an AI is trained on a video game, a game of hide and seek. It took millions of rounds of iterations, but the AI figured out how to basically improve its score over time and break the physics barriers of the game. And so if you think about this conversation, this analogy of a game, I think there's a real potential here for these phishing and social engineering campaigns to be trained and weighted toward successful attacks, so they start to improve upon the kinds of messages they send, figuring out what's more effective and less effective and, over time, tailoring those based on results. I hadn't really intended to talk about that, but it just occurred to me the other day, so I figured I'd walk that into the conversation. If you want to go on to the next slide, Jim, I'm going to let you start off with this one. I think this one is a little bit near and dear to you. You had some stories as we were talking through all of this, especially with regard to the education landscape.

Jim Skidmore (21:41):

Yeah, yeah, I kind of provided some predecessors here. But when we look at the FIDO Alliance, when we look at all the standards that we utilize for identity proofing now, and these are relevant obviously for certain compliance issues as we know, other than fingerprint biometrics in some cases or other forms of 2FA, a lot of things have really been eliminated from a compliance perspective, from being purely identity-proof-worthy for these compliance issues. And by the way, even SMS now is up for grabs; we're looking at how breached that is, and I think everybody on this call probably understands why everybody's using encrypted services like Signal, and obviously WhatsApp is that way now, and with the early advent of Proton Mail and everything else. I think it's pretty easy to understand that some of the easy second factors we've become accustomed to are going to be extremely difficult now. Spoofing from a facial recognition perspective and the exploits along those lines are seven or eight years old.

(23:01):

Now this is going to make that easy, and it's going to make it easy not just for deep-seated miscreant behavior; I think we'll see a whole new level of average users wanting to get in the game and figuring out how to do that. As I mentioned earlier, we can also emulate geolocation, so that is another area of concern. I'll be able to basically spoof that I am somewhere I really am not, and there are definitely use cases that are going to impact industries there. The quality of phishing, and I talked a little about syntax and other issues before, is going to continue to get more and more believable, better and better. And interestingly, as I mentioned earlier as well, the syntax is almost perfect with AI now and with ChatGPT. So there will really be a quality assurance capability there to make sure that code gets executed and payloads get executed.

(24:15):

So again, as we talked about, it doesn't really require any kind of advanced technical knowledge. We see average desktop users being able to create some of these AI use cases without a whole lot of problem. And as I mentioned earlier, even simple scripting, cut and paste, probably will enable people to launch some things and not really be recognized. We may not find that person's domain or URL or anything else, because we're seeing other use cases where that's almost being hit too. Now, we know if we're collecting packet data or IP segment data, that can potentially tie back to a specific person, but who knows where that'll even go. So yeah, the identity lifecycle is in a way going to get more complex, but in a way it's going to get simpler. I think we're going to go back to our knitting.

(25:24):

I think we're going to realize that we can't go very far from a potential spoofing perspective. We may go to more AI-proof kinds of methodologies, but certainly in 2FA there are ways we need to think about this now, because again, all you have to do is look at your phone. There may eventually be exploits there that recognize your face and are able to hijack things. So it's a very different world that I see. But I do think if you adhere to standard identity principles, if things are properly encrypted, if you have attributes created correctly for users and roles, if things are built in a standard, best-practice way, that's really going to reduce the capability for a lot of this to happen. The more this changes, and I've been working in this realm for a couple of decades now, the more it becomes important to enforce the standards of identity and reverse proxy authentication, and now cloud also. So yeah, I would say that's kind of a perspective. It'll be interesting to see over time how this evolves and changes, and I think quarter by quarter we could almost do an update to say here's the new thing we're seeing as different factors of authentication get taken over or compromised, and we'll probably see other use cases that we're not even considering at this point.

Bryan Christ (27:17):

I couldn't agree with you more there, Jim. You and I kicked around the idea of maybe doing something three or four months from now and seeing where, unfortunately, the bad actors take this stuff. We had a lot of fun preparing for all of this. I think I sent you the link where they trained a voice on 30 seconds of audio and were completely able to clone it. Imagine if a bad actor uses that to leave a voicemail that pretends to be the CFO or the CTO or somebody of importance who has access. Most people are going to trust their ears, right? Oh, the CFO left me a voicemail, he's traveling, he needs access to this password, he needs me to send it to him right away. I mean, totally believable.

(28:06):

And so I think what you're going to see is that these different factors of authentication that have been trustworthy in the past, we're going to see them compromised, and a ratcheting back to things that are more tangible, more rooted in reality and grounded in things that are hard to spoof as we move forward. In fact, I think we'll touch upon that on the next slide, so if you want to go ahead and move us. Yeah, so we'll talk a little bit about zero trust. As we were putting this all together, I asked, is this old hat? Are people sick and tired of hearing about zero trust? Turns out they're not. I went and pulled some stats on zero trust, and the trajectory and the interest in it continue to grow. But I think what's really the variation on the conversation today is: what can you trust? So, never trust, always verify, but now, how are you verifying? Now we're talking about the quality of what you verify with: the something you are, the something you have, the something you know, and they're not all equal, right? Jim, you kind of alluded to this a minute ago, but I mean, they're not all equal, are they?

Jim Skidmore (29:21):

No, they're not. We're sort of almost getting rid of one of those factors at this point. Something you have: obviously some people, developers clearly, and others will use tokens. Tokens, as I mentioned before, are going to be difficult if SMS becomes a non-standard, unless they're using hard tokens. Something you are is what we're challenged with here today, to some degree. And it's interesting, because as you were talking about this, I thought about some friends of ours in pharma, in direct store delivery, retail, and others that use IVR-based solutions, voice response units, to basically report in from the road to all of their applications, and in some cases use them to unlock access.

(30:14):

It's going to be very interesting to see how those kinds of roadworthy people deal with the something-you-are factor. Because again, unless somebody compromises national fingerprint databases, and unless you've been a bonded employee of the government or a bank or something like that, your fingerprints probably aren't out there. But that's becoming more and more the single source, because something you know is clearly going to be in question here, and so is something you are. As for the zero-knowledge password manager part, I think from the outset of this discussion we pretty much understand why that's completely up for grabs as well. It's getting harder and harder to protect the vaults. It's getting harder and harder to do anything except basically log into an encrypted service, which is what banking has gone to with GLBA, and also with some of the other commerce requirements that are out on the web now for moving money. So yeah, I'm sure people on this call are hopefully thinking about that, but I think it's a consideration that we're going to have to think about as we go forward.

Bryan Christ (31:43):

Yeah, it ultimately comes down to what we can do today. I'm beginning to think that this Pandora's box being opened is going to cause a little bit of chaos for the next, I don't know, 12 months, until we really get a handle on what it's capable of and what it's not capable of. You and I talked about the crystal ball and how this was going to be an interesting set of conversations just because you don't know what you don't know. But I think one of the things that we can safely say is that with a zero-knowledge solution where you're storing your passwords, that something you know is one of the weak vectors, so you need to do the absolute best you can to protect it, along with the something you are, some sort of biometric, and the something you have. And I think something you have is probably, in the long term, going to be one of the most important.

(32:42):

It's going to be really difficult to spoof something you're holding or something that's with you, and I think we're already starting to see movement in that direction, which is why I want to talk about what's up here on the next slide. I know we're running long on time here. First of all, I want to thank everybody who attended today. We are going to carve out a few minutes for some question and answer, but I want to make you aware of a 90-day trial offer that we have. It's a no-obligation trial; we just want to get you exposed to a set of tools that couple exactly what we're talking about: strong zero knowledge, so that something you know is protected as best it can be, coupled with strong biometric, FIDO-certified authentication, the something you have, into the zero-knowledge solution. So we want to give everybody on the line an opportunity to try that out, put it through a test drive, and do that for 90 days with, again, no commitment. At this point I think we've run over time a little bit, but I think we can still afford to take a few questions if anybody's popped any into the chat channel.

Carolyn Evans (34:11):

Thanks, Bryan. Yeah, so if you have any questions, please pop them into the chat function here. We've had a couple come through. Bryan, I'll send this one to you. You've mentioned a zero-knowledge password manager a few times. Are you referring to a business password manager?

Bryan Christ (34:30):

Okay, great question. So zero knowledge simply means that only the user themselves has access to it. In a lot of shared-secret models of storage, there's sort of an omniscient view, where somebody at maybe the top of a hierarchy can peer down into it; that would not be zero knowledge. Whereas if my secrets are encrypted in such a way that I'm the only one who can get access to them, with something like a strong master password coupled with biometric authentication, that makes it really, really challenging for anyone else to gain access to those. Maybe they could even steal the data storage mechanism, and I'm being super generic when I say that, but if you don't have a valid means of decrypting that object, then you're dead in the water. And so that's what zero knowledge truly means. Great question though.
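
To make that answer a bit more concrete, here is a minimal sketch of the zero-knowledge principle Bryan describes, written in C with libsodium as an assumed library choice; it is illustrative only, not Bravura's actual implementation, and the master password and vault entry are made-up placeholders. The key is derived from the master password on the client, so the service only ever stores ciphertext, salt, and nonce; anyone who steals that stored blob without the master password is, as Bryan puts it, dead in the water.

    /* Sketch of client-side, zero-knowledge vault encryption using libsodium. */
    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (sodium_init() < 0)
            return 1;

        const char *master_password = "correct horse battery staple"; /* user-supplied, never leaves the client */
        const char *secret = "db-admin:S3cr3t!";                      /* example vault entry */

        unsigned char salt[crypto_pwhash_SALTBYTES];
        unsigned char nonce[crypto_secretbox_NONCEBYTES];
        unsigned char key[crypto_secretbox_KEYBYTES];
        randombytes_buf(salt, sizeof salt);
        randombytes_buf(nonce, sizeof nonce);

        /* Derive the encryption key from the master password on the client. */
        if (crypto_pwhash(key, sizeof key,
                          master_password, strlen(master_password),
                          salt,
                          crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE,
                          crypto_pwhash_ALG_DEFAULT) != 0)
            return 1; /* out of memory */

        /* Encrypt the vault entry. Only ciphertext, salt, and nonce get stored. */
        unsigned long long mlen = strlen(secret);
        unsigned char ciphertext[256 + crypto_secretbox_MACBYTES];
        crypto_secretbox_easy(ciphertext, (const unsigned char *)secret, mlen,
                              nonce, key);

        /* Decryption requires re-deriving the same key from the master password;
         * without it, the stored blob is useless to an attacker. */
        unsigned char decrypted[256];
        if (crypto_secretbox_open_easy(decrypted, ciphertext,
                                       mlen + crypto_secretbox_MACBYTES,
                                       nonce, key) != 0)
            return 1; /* wrong key or tampered data */
        decrypted[mlen] = '\0';
        printf("recovered: %s\n", decrypted);
        return 0;
    }

The point is architectural rather than cryptographic: key derivation and encryption happen entirely on the client, so a server-side breach of the kind discussed earlier yields only ciphertext.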

Carolyn Evans (35:42):

Okay, thank you for that. Somebody asked for the URL for the trial, so I just popped that in there. We'll also send out, sorry, we have one more question that just came through, but I wanted to mention that we will also send out a recording of this webinar and then a link to the trial as well. So if you don't get it in this chat, we will send it in an email right after this webinar. So the next question, and it looks like the last question unless any others come through. For Jim, what's the lowest hanging fruit for LastPass customers to protect themselves?

Jim Skidmore (36:18):

It's funny, we actually made a move in this direction ourselves. If you are literally working with the keys to the kingdom, whether you're a privileged user managing network access, applications, cloud assets, or what have you, we have gone to what we've been talking about: an encrypted service that enables us to pass information back and forth so it never travels unencrypted. I know that Bravura has a service like this, I think called Safe, which is great, and we really think that is a policy change that is absolutely required in an organization. People are sending things in chat, on Slack, obviously in email and other areas. And when you look at a lot of the behavior that's out there, a lot of the bots and more automated threats can basically read unstructured data and look for password-like components that are out there.

(37:37):

We've already seen this, so I would much rather get an email that says Carolyn or Bryan or whomever sent me that information, and then go log in and check it out; it takes about 20 seconds. Then I know I'm moving things in a very intelligent way, just because we've seen so many bad things happen when people are sharing this information back and forth, including insider threats. So that's kind of another thing. Even from a compliance perspective, if you look at people who are working in financial services or energy or healthcare, that's a very common one. We've seen situations where people are not even allowed to use faxes anymore, because if the fax is in a common area, not open to just one person, they can't do it. I think it's very important to think about an encrypted service or something like that to ensure that you're not going to have these types of challenges. A lot of the leading tools that we were relying on with the old methodology of ESSO, if you will, are no longer completely going to be viable or safe. I think we need to think about how we really deal with that going forward, and for us right now, the encrypted service is definitely the way to go.

Carolyn Evans (39:09):

That's good advice. Oh, we have one more question coming in. How open AI... oh no, sorry, it was just a comment: how open AI security and air-gap security work hand in hand.

Bryan Christ (39:23):

There's always something to be said for an air gap, right, Jim?

Jim Skidmore (39:27):

There sure is. Give me some scissors and I'll stop the problem right now. No, I think that's more of a statement from Ari there, and I would definitely agree with that statement. Yeah, an air gap is a good way to go when you have any questions about things, and so is taking best practices into the dev environments that you're working on. As Bryan talked about earlier when we were talking about LastPass, the reason things happened was that attackers gained access to a dev environment, right? At the end of the day, at least 25 million, probably more like 35 or 40 million, identities were compromised simply from getting to one desktop. So it's pretty important to reduce risk anywhere that you can and make your organization aware of this new wave of how we need to think about things. We used to run around telling people 2FA is really important. I still think zero trust is underused and underplayed in any way other than marketing. The bottom line is, if you think in terms of least privilege, or are getting there, you will ultimately be safer. There is no magic silver bullet for that, whatever you hear all over the market, other than going through the cultural change that you need to make it happen.

(40:58):

But yeah, good point by our watcher out there. To Bryan's point, there is no substitute for disconnectivity.

Carolyn Evans (41:11):

Okay, well, thank you very much, Bryan and Jim. That wraps our webinar discussion for today. We'll be hosting another webinar in two weeks, actually, on identity security with another one of our partners, our tech partner Elastic, so we'll be sending out some emails on that shortly. And somebody had asked how to get a recording of this: we'll be sending out a recording of this webinar and also a link to that 90-day trial this afternoon. You can click on either of those, and you can also respond if you have questions, and we will set up some time to talk with you or answer them via email, whichever you prefer. Thank you very much for your time. On behalf of Bravura Security and IntiGrow, have an awesome day.

The first 50 people are entered to win an all-access pass to the Gartner 2023 NA Identity & Access Management Summit!

Held March 20 - 22, 2023, at the Gaylord Texan Resort & Convention Center in Grapevine, Texas, attend the Gartner IAM Summit to network with peers and get the latest insights, tools, and strategies for your identity strategy. Full conference pass includes sessions, presentation materials, receptions, and meals listed as part of the conference agenda and is valued at USD 3,675.