Identity Security 2.0: Unleashing the Unprecedented Power of ChatGPT
WEBINAR ON-DEMAND
Watch this enlightening on-demand webinar and delve into the emerging threats posed by ChatGPT and its profound impact on the world of identity security.
As the world witnesses the proliferation of advanced artificial intelligence technologies like ChatGPT, the need for vigilant identity security strategies has never been more crucial.
Discover the cutting-edge potential of ChatGPT, and the resulting cause for concern, as we delve into its dark underbelly and expose the vulnerabilities it can introduce to your identity security strategy. From locking down the basic tenets of identity security to enhanced authentication protocols, we highlight the risks associated with this powerful technology, empowering identity security professionals to proactively stay one step ahead of cyber threats.
During this interactive webinar, our experts share insights, examples, and strategies you can apply to your current identity maturity and strategy to safeguard your organization. Learn the immediate action items and long-term strategies that fortify your organization’s identity security.
Key Takeaways
- Unveil hidden risks introduced by ChatGPT and understand how it can exploit identity security vulnerabilities.
- Explore practical techniques to identify and stop malicious activities fueled by ChatGPT, including impersonation, data breaches, and social engineering.
- Gain practical tips and actionable strategies to incrementally increase your organization’s identity maturity to protect against threats.
Expedite your journey toward becoming an unstoppable force in protecting your organization’s identity security.
Presenters
Bryan Christ
Bravura Security
Sales Engineer
Bryan specializes in security and access governance. For more than twenty years, he has focused on open-source and software development opportunities with an emphasis on project management, team leadership, and executive oversight including experience as a VCIO in the Greater Houston area. He was recently published in Cyber Security: A Peer-Reviewed Journal.
Jim Skidmore
intiGrow
Vice President, Solutions Group
Jim, a consultative Solutions Executive, helps clients implement on-premises and cloud-based SaaS solutions to achieve desired outcomes across cybersecurity, compliance and risk management, IoT, and AI. Jim has consulting experience in a variety of technical disciplines, including eradicating compliance issues.
Review the Full Session Transcript
No time to watch the session? No problem. Take a read through the session transcript.
Carolyn Evans (00:00:15):
Good morning. We're just going to take about one minute to let everybody join and then we will get started. Thank you for joining today for our Identity Security 2.0: Unleashing the Unprecedented Power of ChatGPT webinar.
(00:00:44):
Alright, I see a lot of people joining. My name is Carolyn Evans. I am the director of marketing here at Bravura Security, and today I am the moderator for this webinar. You will hear from two identity security experts: one from Bravura Security, our own Bryan Christ, who is a senior solutions engineer, and Jim Skidmore, from one of our partners, intiGrow, who is the VP of their Solutions Group, covering security and cloud consulting, integration, and managed services. So today Bryan and Jim will walk you through some very top-of-mind questions and strategies, and then at the end they'll be able to answer any questions that you may have. So please feel free to pop them into the chat and they will answer them as available. We're going to start today with a question, which is also a poll for everybody who is attending. So we're going to launch that poll here, and over to you, Jim and Bryan, to take it away. The question is: are we at the tipping point of generative AI?
Bryan Christ (00:02:11):
Yeah, let's give everybody about 30 seconds to answer that question.
Carolyn Evans (00:02:26):
So far, 62% of people have voted. Alright, so that's about 36 seconds. Nine out of 13. Okay.
Bryan Christ (00:02:50):
Alright. We'll call it a wrap there.
Carolyn Evans (00:02:53):
Call it a wrap.
Bryan Christ (00:02:54):
Okay. Yeah. So this is an interesting question. We intentionally constructed this as a straw man question so that we could tear it down later. Really, the answer here is yes, we've already hit that. I mean, if you're asking this question, you're really asking the wrong question. The ship has sort of sailed, right, Jim? I mean, we've talked about this all week long, but it's come and gone, right?
Jim Skidmore (00:03:24):
Yeah. I think there may be a couple of other tipping points along the way as the progress on every side occurs.
Bryan Christ (00:03:33):
Yeah, it's funny you say that, because one of the things that I think is a misconception about whether we're at the tipping point or not is that there are really a bunch of different milestones. I was listening to a series from the CEO of SingularityNET, and his point was that, yeah, there are some severe limitations, and I think that's what people focus on. If you're thinking about this idea of have we reached the tipping point, your mind is immediately drawn to these limitations, all these what-ifs. But the reality is that in the same breath he turns around and says, yeah, but I understand where these shortcomings are; they're really easily fixed. And that's coupled with what he's seeing, which is that the increase in capabilities is exponential. So by the time you've even asked that question, it's an obsolete question. The real next milestone is what we call artificial general intelligence. So Jim, do you want to talk a little bit about that just for a minute? I'm happy to do so too, but I think you've got some insight on that as well.
Jim Skidmore (00:04:57):
Yeah, there are many layers to this discussion. It kind of reminds me a little bit of when we went from private clouds to public clouds as things evolved over the course of time, and we're going to talk about a lot of these issues today, but certainly governance is going to be a very important topic. Whether we can regulate, and all the guardrails that will need to be put in place from every vertical market's perspective, I think will be part of the overall general approach to things. We have been using AI for many years. A lot of companies have; a lot of software providers and service providers have. But now that the open, public part of this came out, really before regulation could even chase it, it became a whole other issue.
Bryan Christ (00:06:07):
So this idea of regulation is, I think, front and center. We may end up talking about this later, but I want to unpack a little bit for folks on the line. Generative AI, we'll talk about the autocomplete analogy, right? We're all familiar with this: you're typing an email in your favorite email client and it finishes the sentence for you. This specialized predictive technology has been sort of siloed into pieces, whether we're talking about computer vision or other specialties; for example, DeepMind's got a game that they have with some robots where they play soccer. So it's narrow, and that's intentional. Contrast that with artificial general intelligence, which is AI that can really do anything without a specialty. You and I do that in practice. We can be good at a lot of different things.
(00:07:32):
And so the real milestone that I think everybody's sort of watching is: when does that happen? When does AI become generalized and broad enough that it can really be applicable to all disciplines? And so one of the things that I did recently was listen to some of the luminaries and ask, well, what are the experts in this field saying? And they're kind of pegging a date of 2029 and then putting some tolerances on it, so plus or minus three years. So really, again, the question was somewhat intended to be something that we could simply tear down and talk to, and really get folks to understand that we've passed the tipping point with generative AI. It's only going to get better. But Carolyn, if you want to move on to the next slide. No, not that side, I think you've gone maybe one too far if you want to back up. Carolyn, are you? No, no. You're going forward. Go backwards if you will, Carolyn. Is it saying the same thing, or is it just my Zoom malfunctioning? Yeah, there we go. There we go. That's where we should be. Jim, do you want to tackle this question?
Jim Skidmore (00:09:06):
Yeah, yeah. This is probably something that blows people's minds a little bit, but I think Tristan was pretty insightful when making this allusion here. We've already seen signs of negative actions out there, not just the usage to create malware and other items like that, but we've also seen spoofing capability with regard to interactive voice response. We've seen it with facial recognition. We're seeing all kinds of second factors of authentication no longer be safely viable, I think, from a recommendation perspective with regard to AI. So I think it's going to be a real interesting journey as we go through, because all of us are sitting here always recommending zero trust, we're always recommending 2FA because so many people have not exercised that option and other ways to keep things safe and secure. And here we are now worried about all the content-based authentication that we might be using. Very few people use certain kinds of biometrics; obviously retinal scans and things like that are very specialized, but it might be a tipping point where even fingerprint, palm vein, et cetera, get compromised to a degree. So it's going to be interesting to watch the journey here. But Bryan, I know we've spoken about all the content verification issues that we're seeing in the market. Is that kind of your perspective as well?
Bryan Christ (00:10:57):
Yeah, so the really interesting thing about putting this webinar and conversation together, and you and I chatted several times leading up to this, is that it's such an emerging field in this market, in this arena, that I found myself, and you probably did the same thing, putting all these asterisks on every single thing that I felt like I wanted to talk about. One of the takeaways: we talked about this particular quote in some of our engineering circles internally, and so what does content-based verification mean? Well, and again, let me put some asterisks on this, you can't really draw a line in the sand on these kinds of things. But certain things, content-based verification would be like voice print, or social knowledge, which would be question-and-answer kinds of things. Passwords to some extent are social knowledge. Again, big asterisk on that.
(00:12:06):
I actually jotted down in my notes, well, weak passwords, those become an issue. We had discussed things internally like facial biometrics; in some cases those have been proven to be easily defeated. And then we said, well, okay, what if you have some sort of contextual, some sort of liveness indicators that pick up things like body temperature and heart rate and all of these things? But even those, when you think about generative AI, what it's really, really good at, and why there's so much excitement, I guess that's the word I'm looking for, with something like ChatGPT, is that it can really consume context and patterns. So ChatGPT, the real magic behind it is that version three was trained on 45 terabytes of data. And so it has this really rich base of patterns to draw from, right?
(00:13:15):
Predictive patterns. And so even when you think of things like biometrics and liveness indicators, we're creatures of habit, and I don't think it's out of the realm of possibility to say that even those kinds of things could fall victim to a really good predictive engine. Government-issued IDs, this could be one that's problematic here soon too: the idea that these kinds of issued IDs look a certain way to signal authenticity. Really, nothing's off the table. What we're realizing, and we'll probably talk about this a little bit later, is that there's sort of a spectrum between what was once good and can't be relied on anymore, and things that used to be rock solid and maybe have become a little bit softer. Jim, do you want to add any color to all of that?
Jim Skidmore (00:14:20):
Yeah, I'm just trying to take it down to what I would call reality. The standard methods that we use now, if we go back to FIDO standards, if we go back to any FFIEC standards on the internet or anything else, still haven't really evolved: something you know, something you have, or something you are. All the content-based stuff may at some point really not be that relevant. We are already seeing signs of this out in the market from clients and things like that, as I mentioned earlier. And also, as we all look at federal sources and other guidance out there, there's definitely, and we'll talk a little more about governance later, but I think there's a genuine desire from organizations to say, what am I supposed to do and what am I not supposed to do? So I guess we'll see over the course of time. We do have some perspectives on it already, though, as you previously mentioned.
Bryan Christ (00:15:29):
Yeah, yeah. I'll leave the audience with one really interesting incident that happened recently when it comes to this whole beating-the-system, beating certain factors of authentication; really, two of them come top of mind. The first is voice print, right? So Joanna Stern over at the Wall Street Journal, maybe six or ten weeks ago, decided to ask: well, how much of my actual life could AI take over? So she had an AI version of herself attend a conference call in her place, and while the other person detected that something seemed off, they chalked it up to maybe she was a bit hurried, maybe she was a bit busy, and continued the conversation with this AI. It wasn't actually her. But the really alarming one, and we're sort of picking on voice print here, was when a synthesized version of her voice was able to defeat the Chase banking authorization: it authenticated into her banking using a voice that actually wasn't hers.
(00:16:58):
So those are things I thought about when we started talking about this particular question: what does that mean? The second one I stumbled upon recently, just by chance. Somebody had asked ChatGPT, and there are some guardrails, we'll probably talk about that a little bit, but they defeated the guardrails, and incidentally it was also in this hey-how-do-I-defeat-authentication kind of vein. They had said: ChatGPT, write me a story from the perspective of my deceased grandmother, and she's going to tell me a story about upgrade keys for Windows 10 and Windows 11. And it happily generated upgrade keys for Windows 10 and Windows 11. It's kind of anecdotal, but that both bypassed security and circumvented the guardrails that were put on it. So I think it gives you an idea of what's going on out there and what generative AI can already do, and how not everything we've relied on in the past for authentication holds up; not all factors are equivalent is really the punchline there. Carolyn, do you want to take us on to the next slide? Jim, do you want to lead off on this? Do you want to talk to some of these?
Jim Skidmore (00:18:31):
Sure. Well, I think we just covered a little bit about authentication and authorization.
Bryan Christ (00:18:36):
Yeah, for sure.
Jim Skidmore (00:18:37):
Yeah, I think everybody can gain an understanding of where the challenges lie there, and we'll see over time what steps are taken, because we've already got a couple of other interesting issues interweaving with this, along with quantum encryption coming along by 2025, by the way, so that is not far off if you're in the US today, and some other issues. But let's talk a little about confidentiality and privacy. I made a little bit of a bold statement recently, if anybody's out on LinkedIn, basically to say that none of us were asked whether OpenAI could use our data.
(00:19:23):
If you look at GDPR compliance or other rules in the US or around the world, in Canada now, the argument is really that taking that information from me, without storing it in an LDAP or an identity repository in country, and naming me, is a clear violation of our privacy. So it's going to be interesting to see how that goes along, and this is going to be something that's talked about for quite a while, I believe. When the data that is pulled from the ocean of data that ChatGPT, and generative AI more generally, draws from is sensitive, it can also be used to identify us specifically, our family members, or even our location. So that's a lot to think about from a governance perspective.
(00:20:22):
There's a new term coming out now called contextual integrity, and the contextual integrity of this data is fundamental in, for example, legal discussions in privacy cases. If you look at the legality of it, it absolutely requires that individual information or IP is not revealed outside of the context in which it was originally produced. This is to say that if I were doing a discussion today and ran through, say, a fictional scenario, that information could potentially be used as a fact-based account in another forum. So this is an issue, and if you follow the legal nature of what's going on out there, taking that contextual integrity away, taking information out of context, is a huge challenge for a lot of people, and I know a lot of researchers are starting to really agree that it's an issue. There's also, if we look at OpenAI, if we look at generative AI more generally, no procedure for individuals to check whether the company stores their personal information. If you look at a lot of data privacy laws around the world, people should also be able to request that personal information be deleted. That's our right.
(00:22:10):
If you look at GDPR, it's kind of guaranteed in that realm. It's guaranteed in the state of California and Massachusetts and other places. But at the end of the day, there are no such procedures from ChatGPT or OpenAI, maybe from an application creator that might utilize some of the technology. But this is a large issue, and clearly I don't know how you could ask for your information to be deleted from the ocean of data out there. So there's a lot of discussion, and I think it's going to be ongoing. A number of countries have banned ChatGPT. Most consequentially, I think, even in the EU, Italy still doesn't believe that ChatGPT is compliant with GDPR requirements. And I think this will go around and around and around. So I know that was a mouthful.
Bryan Christ (00:23:12):
No, well, I'm glad you brought that up. Of all of the things on the slide to talk to, I think this is probably one of the biggest ones, because it permeates the very fabric of how something like ChatGPT works, and I think there's this inherent contention between the technology, the commercial factors, and protecting privacy. So it's funny, I heard you mention Italy just now. You've got Portugal, which said, hey, by 2025, all of our overflow of emergency calls, they have a system called 112, which is like our 911, when our lines overflow, we're going to flip those calls over to ChatGPT, right? So where's that private data going? Your most personal catastrophe situations, and you're talking to AI. And this generative AI, under the hood is this thing called backpropagation and self-attention, all of this.
(00:24:33):
But ultimately the model improves because you feed data back into the model. And so there's this inherent motivation to take all of the input and feed it back into the model to improve it. So you've got that natural contention between how the tool works and this idea that you want to consume data. And Portugal is not the only example. I was recently on a thread where a large corporation was encouraging users to take advantage of these tools, but then, well, okay, what happens with the intellectual property if a knowledge worker is using this tool to improve efficiency and get better results? What ultimately happens with that data? I actually read an article recently that said they did a survey, and one in ten HR personnel admitted to writing termination letters with ChatGPT, right?
(00:25:43):
So you've got this, like you said, Jim, ocean of data, and it's going to have all this PII in it. And I think this isn't being taken as seriously as it should be, for everything that I've already said. But we go back to this idea of content-based authentication; Tristan Harris is saying, hey, this is going to break, some of this stuff is going to absolutely collapse. And so you sort of have this part of it too, where you've got this ocean of data that's kind of self-tuning. A little quick story here. This last week I had a bunch of individuals saying, hey Bryan, did you send me this email? Saying I wanted them to go out and buy some gift cards or something. It didn't sound like me. So it was artificial enough that the people who got this phishing email understood to question it. But eventually, when the sea of data is trained well enough and it sounds like me, how much more susceptible will we be to that method of attack, that breach vector? I mean, it's already the favorite one, right? Phishing and social engineering, it's a favorite, right? So how much better will those kinds of attacks be in the future?
(00:27:14):
Jim, any closing thoughts on that one? I do want to talk about authorization here for just a quick minute.
Jim Skidmore (00:27:20):
No, I agree a hundred percent. Why don't you just delve right in.
Bryan Christ (00:27:23):
Yeah, I just want to talk about authorization. I know we talked about authentication, and that's really a big part of what's at risk here, but authorization. So I see this, and I want to bring it back to the identity conversation that is integral to our webinar today, but I'm starting to see some sort of alarming things coming through in RFPs and RFIs. It's like, hey, we want to implement behavioral analytics and we want authorization. So in the identity world, we talk about authorizing entitlements. These are accounts on target systems, group memberships, the things that give you the ability to perform certain operations within the company, sometimes privileged, sometimes not. But the pushback is, well, having human beings approving things slows us down, it gets in our way, and I understand that. What we're seeing in RFPs and RFIs is this idea that we want to augment that with behavioral analytics.
(00:28:39):
So again, think autocomplete. That's what generative AI is. I keep saying that, and it's not to be diminutive to the technology, it's amazing technology, but this is what the luminaries in the industry will say about it; they'll just call it autocomplete. And so I want to autocomplete on authorizing access to system X, system Y, group membership X, group membership Y. It sounds good. But as I've seen, and I don't know, Jim, maybe you've done this, I actually asked ChatGPT to write a biography on me, and I spoon-fed it enough information for it to understand that I'm this Bryan Christ and not some other Bryan Christ. It still flat-out got it wrong. It was a great story; I mean, the biography was wonderful, and in some ways I wished it was my life, but it was created out of whole cloth. So, most certainly for our most sensitive things, I don't think we want generative AI making those kinds of decisions, because the consequence of getting it wrong could be catastrophic: you give something highly privileged to someone who has no need at all, or who should never have that kind of access. Jim, any thoughts on that?
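One way to make that concern concrete: a hybrid approval flow can let analytics fast-track routine, low-risk requests while never auto-granting privileged entitlements. The sketch below is a hypothetical illustration, not a description of any product mentioned in the session; the names, the threshold, and the risk score are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Entitlement:
    name: str
    privileged: bool  # e.g., domain admin or production database access

def decide(entitlement: Entitlement, risk_score: float, threshold: float = 0.9) -> str:
    """Hybrid authorization: analytics may fast-track routine access,
    but privileged entitlements always require a human approver."""
    if entitlement.privileged:
        return "route to human approver"   # never auto-grant privilege
    if risk_score >= threshold:
        return "auto-approve"              # model is confident and stakes are low
    return "route to human approver"

# A confident model score still cannot bypass the human gate for privilege.
print(decide(Entitlement("prod-db-admin", privileged=True), risk_score=0.99))
```

The design choice is the point of the story: because a predictive model can be confidently wrong, confidence alone should never be sufficient to grant sensitive access.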
Jim Skidmore (00:30:03):
No, I completely agree, and I have done some of my own AI-directed questions, in the same way that people for many years have gone out and done a Google search, putting a lot of different attributes into the search bar. It does its best to try to piece things together based on the data that's out there. And I won't get into the risks posed from a governance perspective yet, which I know we'll cover. But the net is that it's inaccurate, and there's a lot of misinformation and disinformation discussion right now in the industry as well. And I'm not only talking about where people mean to do harm; I'm basically talking about plain inaccurate information, which can cause issues too. We're usually out pulling data looking for facts, looking to solve problems, looking to do something effective. And it's proving so far to not be a very good source for that.
Bryan Christ (00:31:17):
And I just want to go ahead and throw an asterisk on this too. In the same breath that I'm saying, hey, you got a bunch of things wrong, as I mentioned earlier, the CEO of SingularityNET and others are saying, hey, this is exponential growth. You think it's pretty impressive at the GPT-3 level; yes, it gets things wrong, but it will start getting things right more and more. I had the hardest time when I was pulling some of this material together, because every single statement you make kind of comes with a disclaimer or a caveat. And what was really interesting is that these same folks who are knee-deep in all of this, even the godfather of AI himself, basically said he doesn't have the crystal ball either. He doesn't exactly know when we get to the out-of-control scenarios, or when we get to the just-good-enough, the Goldilocks scenarios. It's really a bit of an educated guessing game.
(00:32:27):
And so that's why everything I have said so far, I just feel tempted to put an asterisk on. There's an interesting point on this slide here, which is monitoring and detection. What's funny about this: think about the phishing illustration that I gave, which was very personal and just happened to me recently, where everybody sort of understood that it wasn't me. There's a whole other industry that wants to take AI and use it for monitoring and detection. They want to build solutions to detect these abnormal patterns and things like that. And so when we started talking about including this on this particular slide, I started thinking about automobile radar detectors. And here's the conundrum. I know when I was a teenager, I wanted the latest and greatest radar detector, no commentary on what my driving habits were back then, but what I learned was that the same people making these radar detectors were also the same people making the radars for law enforcement. So what you now have is this competition of AI to detect AI, and I think there's going to be an arms race that comes out of it. So on the detection side, I think it's really interesting that we'll be using that, but at the same time, I worry that this is escalating: as detection gets better, the bad actors are saying, well, how do I make phishing even better? And so you've just got this cycle. Jim, what do you think? We didn't get a chance to talk about this too much.
Jim Skidmore (00:34:08):
Right? No, it's an interesting paradigm, because to me it's a little bit like endpoint security. We're looking at how all the threat intelligence comes in, and pretty soon AI will have its own threat intelligence. We'll probably classify what we generally call C2s, or command-and-control servers, coming at us. We'll identify, with some reputation scoring, what the bad ChatGPT sources or generative AI sources are that are generating certain content, and we'll probably start to catalog and categorize that as I look forward. And it's a similar process to what we've been through with server and endpoint security. So over the course of time, unless somebody comes up with some incredible concept to manage the authenticity of it, it's going to be almost a whole new attack surface, if you will. I definitely agree. But yeah, if people are here now, stick with us. We're hopefully going to keep doing these, and we can keep tracking what has occurred and what our predictability was.
Bryan Christ (00:35:35):
Yeah, just a quick note on that. If you joined us about a quarter ago, the idea was to continue to evaluate the landscape. This isn't anywhere in the slide deck, but I just want folks to know that this is such an evolving field that even the folks who are deeply entrenched in this admit they are checking the news cycle two to three times daily just to keep up with what's going on. So as we continue through this, you might feel like you're overwhelmed, and what I'm trying to convey to you is that that's absolutely normal. If the people at the top of these conversations are overwhelmed and you're feeling overwhelmed, it's par for the course. Carolyn. So, education and awareness. I think this comes in two pieces. One is staying immersed, so maybe my little sidebar wasn't completely out of place. But just like you do good password management training programs and good cybersecurity hygiene training programs, I think there's going to be a place in the not-too-distant future, actually it's probably already here, especially when it comes to the confidentiality component of this, for training on what the dangers and risks of AI are. And I think it's going to have to be part of the general prescription for dealing with this stuff. Jim, agree, disagree?
Jim Skidmore (00:37:21):
Yeah, no, I agree. And again, if we go back to the countries that have already banned this, which isn't just Italy, they've all done it basically with that same purpose. Russia, China, Iran, Cuba, I think there are eight or nine around the world that understand the potential for what can go wrong, and a couple of those are very conservative about not letting the cow out of the barn until we have some other context on this information. So the awareness, to your point, I almost think we're going to have an AI awareness kind of training requirement, almost like phishing awareness, that will say, hey, it can only be used for these use cases in the organization. A lot of companies have already kind of gone through that, right? We've already seen them say you cannot use this outside of this department or that department.
(00:38:29):
But yeah, I agree. There's no question that there will be education and training as more and more bad things happen. And that's always what drives it, right? The squeaky wheel gets the oil. Once this starts to generate huge compliance issues, once it starts to generate a lot more spoofing and challenges that can lead to phishing situations or, God forbid, ransomware or something like that, awareness will just keep getting higher. I mean, for all the good that it does, we always see the mirror side of it on the dark web, where, like you mentioned earlier, we're able to see signs of that right now.
Bryan Christ (00:39:16):
Yeah, absolutely. I understand Carolyn is having some technical difficulties and Courtney has taken over. So Courtney, if you could, let's move on to the next slide: what is voice harvesting and its risks? As we were putting this conversation together, I picked this to zero in on as sort of an illustrative microcosm. So again, Tristan Harris is saying, hey, content-based verification is going to break this year. Voice harvesting was sort of top of mind for me already, and then when I heard about the incident with the Wall Street Journal, I wanted to unpack this one a little bit just so that you can understand. Like I said, this is a narrowing in on one thing, but I think you could do this with a number of others. So back when Jim and I spoke last quarter, we talked about this idea that you can spoof voices, and of course the Wall Street Journal experiment proved that it can be done successfully.
(00:40:27):
Back then you needed about three ten-second audio clips in order to get a very realistic voice, one that wouldn't be discernible from the human being. That's now down to about three seconds of audio. So you don't need 30 seconds anymore; you need about three seconds. And the scary illustration that was given to me was: imagine you get a call and you just answer it, and you don't know if it's a bad connection, you just don't know. So you start saying, hello, is anybody there? And what's really happening is what's called voice harvesting, where, without your consent, without your knowledge, you are actually training generative AI on your voice. Three seconds isn't a long time to have to speak. Then imagine that gets turned around and used as a voiceprint, and it doesn't have to be just for authenticating against some sort of banking system, like what happened with Chase. It could be used in a social engineering or phishing scam. I mean, let me pick on Jim a little here. Imagine I got a call from Jim, and what I heard was Jim's voice, and he said, hey Bryan, I'm stranded on the side of the road.
(00:42:00):
Say we were in business together, and he said, hey, you know that emergency bank account that we have, petty cash, whatever? I need to log into it really quick. My car broke down, I need to get some money so I can get my vehicle back on the road, and I can't remember what the password is. Can you send that to me? And to my ears, that's Jim, right? I don't recognize the number, but if he's been in an accident or his car's broken down, maybe he's borrowed somebody's phone. So I don't really know, but it certainly sounds like Jim. And so I gladly offer up, to what I think is Jim, the password to some account. Now Jim, you would never do that, right?
Jim Skidmore (00:42:44):
Probably not.
Bryan Christ (00:42:47):
So that's what voice harvesting is. And as we were talking through this a little bit, Courtney said she started thinking about that scene in Terminator 2 when he calls home and hears his mother's voice. The only way they could tell the voice wasn't safe in that scenario is that the kid asks, well, how's Wolfie? And the AI doesn't know that they don't have a dog named Wolfie. So there's this additional challenge. That's where we've sort of landed right now with voice: it's uncharted territory. And again, I picked this as a bit of a microcosm, but you could apply it to other factors. Jim, do you have any top-of-mind examples that you want to throw in the mix here?
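A low-tech countermeasure that follows from the Wolfie scene: agree on an out-of-band shared secret and verify it before acting on any voice request. Here's a minimal Python sketch under that assumption; the passphrase and function name are hypothetical, purely for illustration.

```python
import hmac

# Agreed offline, never spoken on a channel the caller controls.
FAMILY_PASSPHRASE = b"wolfie-is-fine"  # hypothetical shared secret

def caller_is_verified(spoken_answer: str) -> bool:
    """Constant-time comparison so timing doesn't leak how close a guess was."""
    return hmac.compare_digest(spoken_answer.encode(), FAMILY_PASSPHRASE)

# A cloned voice can mimic tone, but not knowledge it was never trained on.
print(caller_is_verified("wolfie-is-fine"))   # True
print(caller_is_verified("wolfie is great"))  # False
```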
Jim Skidmore (00:43:40):
I have an overarching thought for those folks in heavily regulated industries that require real identity proofing: voice is not one of those. If you look at FIDO or other IDP providers, there are a lot of voice recognition applications and use cases out there. But when it comes to going to a kiosk at your healthcare provider, or anything that requires real identity proofing by regulation, based on what Bryan's just told you, I would go back to, as I like to say, my authentication knitting. We started out in a very safe way; we tried to get more along the way with VRU and IVR, and then also with certain kinds of biometrics, but face was never an identity-proofing-certified means by which to do something. So if it was a regulatory requirement and you opened your phone using your face to do that, you were violating compliance. And I think that's overarchingly probably a good way to think about going forward. I know that's kind of how we're thinking about it.
Bryan Christ (00:45:05):
Yeah, in fact, actually kind of springboarding off that. Courtney, if you want to flip to the next slide for us, what we're really talking about is sort of a model that you're already familiar with. Courtney, are you able to advance to the next slide?
Courtney Auchter (00:45:25):
I'm actually not sharing the presentation.
Bryan Christ (00:45:28):
Okay. Oh, there we go. There we go. So this is that common refrain on multifactor authentication: something you know, something you have, something you are. And what we're discovering in this back and forth, keeping up with things like generative AI, is that not all of these are of equal weight; not all of them are of equal reliability. Jim, you and I talked about this: the something you have is far more reliable now than the something you are, because we've already kind of torn down voice. That's something you are, and we know there are some caveats around some of the other biometrics. But now something you have becomes more important, right? Because it's much harder for AI to do anything about that. Thoughts, Jim?
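To ground the "something you have" factor: one-time-password tokens work because each code is derived from a device-held secret, not from content an AI could harvest. Below is a minimal TOTP sketch per RFC 6238 using only the Python standard library; the demo secret is a placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # moving factor: 30-second window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

An attacker who has harvested someone's voice, photos, and social history still cannot produce the next code without the device that holds the key, which is why this factor degrades far more slowly against generative AI.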
Jim Skidmore (00:46:23):
Yeah, I mean every factor is its own discussion.
(00:46:27):
I think it would be really difficult to spoof fingerprint, knowing something about fingerprint biometrics, the way the algorithm works under glass, and the amount of library theft that would have to occur, thousands or millions of images, which is one of the reasons it is still a certified identity-proofing means to stay compliant. Some people use palm vein scans for things these days; those would be really difficult to spoof as well. But to your point, all the ones we just talked about are more susceptible to spoofing than something you have. If we have a strong token or something like that, that is likely a good route to travel.
Bryan Christ (00:47:25):
I actually jotted down some of those very same items. And that's the punchline on this, right? In the something-you-are category, voice and behavior like body movements, probably, trained with enough data, that's out the window; things like fingerprint and retina become harder. So something you are is sort of an interesting bucket, because it depends, and like you said, we could probably noodle on a lot of these factors in their own conversation. The other one, the something you know, is also a really interesting one, because the something you know falls into that content-based verification that Tristan Harris says is going to break. I would say it sort of depends on what that something is. I was kind of thinking through this one, and there's a little bit of wordplay here and a little intentional contradiction, but something you don't know is the best thing.
(00:48:33):
That is the something, right? So what in the world do you mean by that, Bryan? What I mean is traditional things like question-and-answer and passwords: if they're soft, in the sense that they're based on things where I can predict what your password is because you post your dog on social media, those things would be susceptible to predictive models. But if my passwords and the things I know are random, and I don't even know them myself, they still fit the traditional models. So there's this contradiction I'm standing up, which is the idea that these aren't necessarily obsolete; it just depends on what you've chosen to fulfill those factors with. Jim, am I making sense to you on that?
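For a concrete picture of "a password you don't know": a vault can generate a high-entropy random secret that the user never sees or memorizes, so there is no social-media trail for a predictive model to exploit. A minimal sketch using Python's secrets module, assuming the vault handles storage:

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    """Generate a high-entropy password that nobody, including the user, memorizes."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# The vault stores it; the user never types, sees, or reuses it.
print(random_password())
```

At 24 characters drawn from roughly 94 symbols, that's on the order of 157 bits of entropy, far beyond what any pattern-based prediction can recover.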
Jim Skidmore (00:49:21):
Yeah. Yeah. It would be interesting to know also if anybody else here uses any other factor than what we've talked about.
Bryan Christ (00:49:29):
Yeah, absolutely.
Jim Skidmore (00:49:30):
If you have, feel free to put it in the chat, if you care to mention it or are able to mention it. But yeah, we're definitely narrowing the herd, and the something you know, over the last decade, has proven to be a real challenge
(00:49:48):
Just because of password reuse. And that really spawned user and entity behavior analytics and things like that, because we're verifying and re-verifying constantly through the authentication practices that we're currently using. So the something you know has been a hacker's best friend for quite a while. It definitely has. Again, the something-you-are part has limitations but can be viable for a lot of organizations out there. Something you have: if somebody's going to go through and get to the firmware of your phone, or get to an application and a wrapper or whatever, they're going to have to expend a tremendous amount of time and energy and cost. And one of the ways we always think about things is: what is the value of the payload, and what's it worth to get there? So, agreed, to kind of put a bow on it: something you have is a very attractive option right now. Yeah, absolutely. And a few something-you-ares, but not many. Okay.
Bryan Christ (00:51:06):
Yep. We're getting the runway warning here. If you want to, Carolyn, go ahead and move us on to the next slide. Carolyn, did you want to do a poll here?
Courtney Auchter (00:51:21):
How prepared do you think most organizations are for the impact of generative AI?
Bryan Christ (00:51:30):
So we'll give folks on the line like 30 seconds to answer this, and then we'll continue on.
Courtney Auchter (00:51:54):
Alright, I hope everyone's gotten their votes in. Again: how prepared do you think most organizations are for the impact of generative AI? Not at all, somewhat, or extremely prepared.
Bryan Christ (00:52:09):
Jim, you want to chime in?
Jim Skidmore (00:52:11):
Yeah, my opinion: I agree with the first one, somewhat. I guess you could say, if we have taken steps to limit, for example, ChatGPT to just this segment of marketing, or for white paper creation, or something that is not regulatory in nature, then I would agree to a degree. But not at all is definitely what I think for the most part, because, as we've talked about, the evolution is going to keep changing things, right? We're going to see new threat vectors, we're going to see new attack surfaces out there based on the issues that we just spoke about. So I do think more and more organizations are sitting down and talking about it and thinking about it, which is great. We've seen the first wave of some policy pop up in organizations, but a lot of the folks that we're aware of are still working on how to understand what their posture on this is as an organization, and what the ultimate policy goals for them are. And I think that's largely because, as we're talking about here, a lot of people don't understand the entire nature of the threat that's out there. So how do you stop what you don't know?
Bryan Christ (00:53:42):
I think we've got a really honest audience. I saw the results pop up, and I think 89% said not at all. Based on my observations, I would agree. As I look at these RFIs, these RFPs, and have these conversations with customers and prospects, I think the honest answer is not at all. We are running low on time, so I want to boogie through these next couple of slides and hopefully we'll have time for a question or two. Let's see. Yeah, punchline: most folks aren't prepared. That's just the reality of it, like I said, through my observations in these early conversations with customers and prospects, and again, what Jim just reiterated. Folks don't even really have the basics down at this point. And Courtney, if you want to go ahead and take us on to the next slide. So let's talk about the basics. What are the basics? If you're going to catch up, you'd better have the foundation done. And again, this is where I feel like folks on the line are probably overwhelmed, the folks that are being honest about it, because what we've talked about today is very ominous. But there are still these tried and true things that you can do. I won't read them all off to you up here on the screen, but I set up this kind of interesting idea about passwords: the best password is the password you don't know.
(00:55:24):
So implement a solution that will allow your users to create really strong secrets that can't be guessed by generative AI, but that the users themselves don't have to know or memorize, which eliminates the temptation to reuse them. That comes in the form of zero-knowledge password vaulting solutions. Then, using MFA: we've talked a lot about MFA, and not all MFA is going to be equal. I think that's what we hope to convey to folks. So as you're thinking about MFA, first of all, Jim and I would wholeheartedly agree that you need to do MFA; we're just trying to caution you that not all approaches are equal. I think we've touched a little bit on training. Just by attending this webinar, we're hoping that we can expose you to some of the trends, some of the dangerous things lurking around the corner. But you probably need to be cultivating your own programs for educating your user population.
(00:56:34):
And then there's this idea that comes up in a lot of zero trust conversations, which is basically: expect to be breached, right? It's not realistic to think that you're not going to have a user who falls victim to social engineering; you're going to have compromises. And the funny thing about this, it just hit my feed, it's two days old, and I don't always get the chance to keep up with everything, but two days ago, ChatGPT itself: a hundred thousand credentials were just sold on the dark web. So if you've registered with ChatGPT and played around with it, chances are your credentials are floating around right now on the dark web. So again, back to basics, right? Go change the passwords; create passwords that aren't reused anywhere else. Jim, do you want to chime in on this as we're trying to round things out here?
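On checking whether a password is already floating around in a breach corpus: the Pwned Passwords range API allows a k-anonymity lookup, so the full password, and even its full hash, never leaves your machine. A minimal sketch, with error handling and rate limiting omitted:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """k-anonymity breach check: only the first 5 hex chars of the
    SHA-1 hash are sent to the Pwned Passwords range API."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "webinar-demo"},  # courtesy identifier
    )
    with urllib.request.urlopen(req) as resp:
        # Response lines look like "SUFFIX:COUNT"; match on our hash suffix.
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # breached passwords return a large count
```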
Jim Skidmore (00:57:27):
Yeah, there are so many use cases coming up that people should be concerned about from a governance perspective. I know that we've put an AI Bill of Rights into place, or a blueprint, I guess you can say, in North America, and that came based on the first attempted governance with House and Senate meetings, for example, in the US. But what nobody's really talking about is what Bryan just said. If ChatGPT were itself hacked, with the amount of misinformation and disinformation involved, the bad actors would have an absolute field day with it, for sure. I mean, you have probably millions of petabytes of data at your disposal to bend and to twist. So if it were up to me, and nobody asked me, I probably would have tried to govern this prior to the release, especially the one in March, or the ones subsequently in April by Google and Meta and others, for public consumption, just to try to feel out whether there's something programmatic at the core of each tool and gain an understanding of what the manipulation capability is. But yeah, I wanted to get that out there, because it's been kind of keeping me up at night thinking about it.
Bryan Christ (00:59:13):
Yeah, we're running really full on time here, so we should probably move on to the next slide real quick. So, at Bravura Security, of course, we want to thank everybody for attending. We hope that you've gained some insight into the trends as they affect some of these core areas of cybersecurity. We do want to make available to folks on the line a 30-day trial of Bravura OneAuth and Bravura Safe. Bravura OneAuth being that something you have: strong FIDO2-based authentication in the palm of your hand. And then Bravura Safe, which allows you to reinforce passwords as a factor of authentication, where users can answer, again, in a way that couldn't be predicted. What's my favorite color? Well, my favorite color is The Shawshank Redemption, right? You can start crafting answers and passwords that have no basis in reality, which would be very difficult for generative AI to predict. And then Jim, I think y'all have an offering as well that you want to share with the audience, right?
Jim Skidmore (01:00:25):
Yeah, we do. So we do identity and access management readiness assessments in several different directions now. There are a lot of new issues popping up, as we've discussed today. If anybody would like to have a complimentary discussion around topics that you're looking at for identity governance, and obviously that plays wholeheartedly into this discussion, we're happy to provide that for you. That's kind of how we meet people, make friends, and help them to succeed. This is what we do for a living. So if there's anybody out there that wants some foresight and assistance thinking about a topic, even if you're thinking about something years out or whatever, we're happy to spend the time to gain an understanding and, from our experience, tell you what we know and what our recommendations would be. So yeah, it's that simple.
Bryan Christ (01:01:35):
Thanks, Jim. Courtney, I'm going to punt this back over to you. I think we've exceeded our time. I know we were going to open up for question and answer, but
Courtney Auchter (01:01:46):
No, thanks so much, Bryan. Again, huge thanks to Bryan and Jim for your insights and for reminding us to go back to security basics. So again, since we're running out of time, if you have any questions on any of the topics that we covered, please feel free: you can either pop one in the chat if you can stick around for a few more minutes, or you can send an email to Bryan or Jim to ask your question. Their information was posted in the chat, so make sure to snag their email addresses if you want to ask a question. And remember, Bryan is offering from Bravura Security that 30-day free trial for Bravura Safe. And then on the intiGrow side, we're offering that identity and access management readiness assessment. So thanks so much to all the participants. I know the one question that I have, and it's on the minds of everyone else, is: how will AI be governed? And I think we'll have to see, as we continue on this journey, this turning point in history, what that's going to look like.
Jim Skidmore (01:02:58):
Absolutely. And if I can just say, we're going to try to keep this discussion going, so if you'd like to follow along and provide more input and thoughts, we'd love to hear them.
Bryan Christ (01:03:16):
Absolutely. Alright, thanks everybody.
Courtney Auchter (01:03:18):
Thanks everyone for joining. Have a great one.
Bryan Christ (01:03:21):
Take care. Bye-bye. Bye.