October 5, 2022

Cybersecurity Myths & Misconceptions with Josiah Dykstra

by Cyber Ranch

Show Notes

Josiah Dykstra, Cybersecurity Technical Fellow at the NSA and author, kicks up the dust on some topics previously discussed on the Ranch and deepens the conversation on cybersecurity myths and behavioral economics. Prior to the release of his latest book, Cybersecurity Myths and Misconceptions, Josiah breaks down some of the biases, fallacies, myths, and magical thinking that cybersecurity practitioners fall victim to. Josiah taps into cyber’s psyche and exposes the errors behind practitioners playing make-believe.

 

Timecoded Guide:

[00:00] Researching cybersecurity psychology & other exciting industry mashups

[09:22] Security logical fallacies: straw man, gambler’s, & ad hominem

[15:19] Cyber cognitive biases: confirmation, omission, and zero risk bias

[19:24] Perverse incentives & cobra effect: security vendors, bug bounties, & cyber insurance

[25:55] Creating an accurate measure of how secure we really are 

 

Sponsor Links:

Thank you to our sponsor Axonius for bringing this episode to life! The Axonius solution correlates asset data from existing solutions to provide an always up-to-date inventory, uncover gaps, and automate action — giving IT and security teams the confidence to control complexity. Learn more at axonius.com/hackervalley

 

In the context of cybersecurity, what are some examples of magical thinking? 

Magical thinking, or the belief that thoughts alone can influence the material world, underlies some of the most common assumptions in cyber, according to Josiah. Pointing to the harmful practice of cyber practitioners blaming users for bad decisions, Josiah finds that many security pros believe users will make the right choice without any additional training. Unfortunately, this magical thinking only leaves users unprepared and uneducated.

“We assume users will pick good passwords without providing them education. We can't just think in our heads that things will go right, that never happens. We need to make careful decisions, whether it’s how we configure systems, or develop software, or conduct training.”

 

Can you walk us through common fallacies in cybersecurity, like the gambler's fallacy?

While the straw man fallacy and ad hominem attacks are often easy to identify in the cyber industry, Josiah explains that the gambler’s fallacy is just as pervasive and detrimental. The gambler’s fallacy involves seeing trends and “hidden” meanings in independent events. In security, it most often shows up as the belief that a company that recently suffered a breach won’t be breached again for a while, even though the two events have nothing to do with each other.

“Imagine you’re flipping a fair coin, like a penny, and you get heads, heads, heads. Your brain starts to see an error, like, ‘I'm due for tails, if I had so many heads in a row.’ The fact is, the penny doesn't care about the last flip. These are all independent events.”
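(Editorial aside, not from the episode: the independence claim is easy to check with a quick simulation. Here is a minimal Python sketch, with illustrative function and parameter names; it estimates the chance of heads on the flip right after a streak of heads, which stays near 50% no matter how long the streak.)

import random

def prob_heads_after_streak(flips=1_000_000, streak=3):
    """Estimate P(heads) on the flip right after a run of `streak` heads."""
    seq = [random.random() < 0.5 for _ in range(flips)]  # True = heads
    run = heads_after = total_after = 0
    for i in range(len(seq) - 1):
        run = run + 1 if seq[i] else 0   # length of the current heads run
        if run >= streak:                # we just saw `streak` heads in a row
            total_after += 1
            heads_after += seq[i + 1]    # bool counts as 0 or 1
    return heads_after / total_after

# Prints roughly 0.5: the penny is not "due" for tails after three heads.
print(f"P(heads | 3 heads in a row) ~ {prob_heads_after_streak():.3f}")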

 

What about common cyber biases, such as zero risk, confirmation, and omission bias?

The cyber industry is rife with biases. In fact, over 180 cognitive biases exist. Josiah’s book tackles a select few that appear time and time again, including zero risk bias. Zero risk bias is extremely common in cybersecurity. Security is about risk: understanding it, preventing it, and reacting to it. Many cyber companies will put all their eggs in one expensive basket, such as encryption, believing this will create the impossible scenario of having “zero” risk.

“We talk in the book a little bit about how you can never get risk to zero, right? Cybersecurity is always about risk management. There is somewhere between more than zero and less than 100% chance that your computer will get infected today.”

 

“The goal of a security vendor is to keep you secure.” Why is that a misconception?

Just like biases and fallacies, cybersecurity misconceptions can be costly mindset mistakes that lead to easily preventable errors. Josiah wants us to consider that security vendors are not altruistic; they’re running a business and making a sale. While many vendors do aim to keep customers secure, that will never be their only goal. Josiah recommends taking precautions and never assuming that a vendor will always put security first.

“The goal of any business is to make money. That's why that business exists. You could argue with me that it isn't an ‘either or.’ They can make money and we can be secured, we can have both, but that's an ideal world. I think, in reality, it's a little bit bumpier than that.” 

----------

Links:

Learn more about Josiah Dykstra on his LinkedIn and his website

Check out Josiah’s book, Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls That Derail Us

Follow Allan Alford on LinkedIn and Twitter

Purchase a Cyber Ranch Podcast T-Shirt at the Hacker Valley Store 

Continue this conversation on our Discord

Listen to more from the Hacker Valley Studio and The Cyber Ranch Podcast



Transcript

Josiah 00:00
We haven't really talked about risk yet, but we talk in the book a little bit about how you can never get risk to zero. Cybersecurity is always about risk management. There is somewhere between more than zero and less than 100% chance that your computer will get infected today.
Allan 00:13
Howdy, y'all, and welcome to this Cyber Ranch podcast. That's Josiah Dykstra, technical fellow of cybersecurity at the NSA and co-author of the upcoming book, Cybersecurity Myths and Misconceptions: Avoiding Hazards and Pitfalls that Derail Us. Now, we've done a show with Adrian Sanabria on cybersecurity myths, where we busted a lot of vendor claims and a lot of even well-known statistics in cybersecurity. This show continues in that vein by taking a deeper dive into questioning the fundamental precepts that drive our industry and our practice. Josiah is obviously from the government, being in the NSA, but his two co-authors are from academia and the private sector, respectively. All three perspectives are there, and I'm very much looking forward to this book. Now, I've gotten a sneak peek at some of it, and it promises to be both relevant and compelling. I want to point out that Josiah is also a frequent collaborator with Kelly Shortridge, who you may recall was my very first guest, talking with us about behavioral economics and cybersecurity. Josiah, as you can imagine from that relationship, is also a very smart person in his own right. Josiah, thank you so much for coming on down to the Ranch.
Josiah 01:14
It's a pleasure to be here. Thanks for having me.
Cyber Ranch Intro 01:19
Welcome to the Cyber Ranch podcast, recorded under the big blue skies of Texas, where one CISO explores the cybersecurity landscape with the help of friends and experts. Here's your host, Allan Alford.
Allan 01:41
Alright. So, why don't we— Since I've already given a bit of a background for you with the NSA and the book and everything else, why don't we just dive into: What's your favorite high point in your cyber career? And then, tell us a little bit about your day job today.
Josiah 01:51
So, my high point actually is a little bit maybe atypical, and it was my stumbling into the intersection of cybersecurity with other disciplines about eight years ago. So, I have a PhD in computer science; I know a lot about how the computer works. I know the hardware and software, the networking, sort of inside and out, but I discovered that the intersection of cybersecurity with things like economics and psychology was this whole new field, this whole new sort of opening that I had never thought about before. So, I was working as a cyber operator at NSA and, like a lot of people, I was ready for a change of pace, looking for something else. I had this great opportunity to start doing research, to study the causes and implications of things like burnout. This was, honestly, a real high point in my career. I was ready for a change in the work, but I was so excited to have my mind opened to this new area of interest. It launched me in a whole new kind of direction to study people in cybersecurity, and quite honestly, that was the high point of my career so far. I'm looking forward to more.
Allan 02:51
Awesome. So, the new day job, then, you've gotten to do these things. You've gotten to roll these disciplines together, and you're getting to do cool stuff with cyber mashups, we'll call it.
Josiah 02:59
Yeah, which is very exciting, and I encourage people to think about this. I'm glad we have deep technical experts, who spend all day every day thinking about memory security and software development practices. We absolutely need those, but the multidisciplinary part, yes, it's so cool.
Allan 03:14
Okay, so, before we dive into the details then, I was going to ask you, "What was your motivation for writing the book?" But I think you've already characterized that, because this book is exactly what you're describing. It's a wonderful mashup of cyber and all these other disciplines all brought together. Was that the compelling drive? Was there anything more to it, specifically about the book?
Josiah 03:30
Yeah, a few other things. So, first, the book is written in my personal capacity. It's not written by me as a government person, even though that does happen to be my experience. My day job is collaborating between government and industry, which I love to do. I think that's an important thing, but it's exciting to see both the big and the small in cybersecurity. So, I spend my day job looking at national security, nation-state style, and I have a small consulting business that does this for private health care. So, being able to see the big and the small is really exciting to me. And then, about 18 months ago, Leigh Metcalf, who works at Carnegie Mellon's SEI, was writing a book on cybersecurity science. I'd done a similar book a few years before that, and I was reviewing Leigh's book for her. She had this really interesting small section that touched on this idea of pitfalls, things that people have misconceptions about in cybersecurity. I got to talking to her about that, and we decided that could be a whole book on its own. That was a really exciting day because it started my brain just churning and churning. What are all these things we take for granted? What are all the myths that we've heard? I happened to mention this book to Professor Gene Spafford at Purdue, and he had actually been teaching a class precisely on this topic. He wanted to jump in, too, so the three of us got to brainstorming about: What might this look like? What might we include? You had mentioned talking to Adrian earlier in the year about how to navigate myths in the product ecosystem, from vendors and things like that. I'm so glad that he talked about that. In my first book, which was an O'Reilly book called Essential Cybersecurity Science, I had this little appendix tacked on to the back about: What are questions you could ask vendors? And I'm so glad that Adrian sort of dived into that. In the new book, we also have one chapter devoted to tools, tool myths, but in general, we took a broader view, and we're gonna dive into that a little bit, I think, here in a second.
Allan 05:19
Definitely. I've got some questions for you. I ripped through the sneak peek I got of the book, I got a million questions for you. Let's dive in. Let's start with it. So, one thing you talk about is faulty assumptions and magical thinking, and for those who don't know, magical thinking is an actual real term from the psychology world. It means the belief that one's thoughts can influence the material world, the idea that you can cast spells, or you can will the car in front of you into changing lanes, or whatever. Magical thinking, like, your thoughts alone are enough to move the universe. In the context of cybersecurity, what are some examples of this magical thinking? What are we guilty of?
Josiah 05:51
These are two concepts that we mash together into one chapter. As you say, assumptions are sort of errors in how we decide to do things. Magical thinking is the sort of complement, which is like, the number 13 brings me bad luck, or I have a pair of socks to help me win the game. Cyber very clearly has both, and assumptions are sort of an underlying current in the whole book. We can't assume, for example, if you're a web developer, that everybody has Firefox. If you assume that everybody has Firefox, you end up with a bunch of people who run Edge or Chrome who are awfully upset when the website doesn't look right. But let me give you a sort of hugely popular example of this, which is: Should we blame users for making bad choices? How often do we hear things like, "The user is the weakest link"? That phrase comes up so often.
Allan 06:36
A phrase we have historically been adamantly against on this show. I think every time this comes up, most of my guests agree that we shouldn't blame the users and that it's not really giving them a fair shake.
Josiah 06:46
Yeah, Danny Kahneman's book, Thinking, Fast and Slow, really helped open my mind to this, and that book isn't about cyber at all, but it is about behavioral economics, something Kelly talked to you about. In that sense, a user would never pick the password "12345"; it isn't a rational choice. If they care about their bank account, or they care about the privacy of their email, they would pick a strong password because it's worth it. But this reasoning has several faulty assumptions built in. One is that users actually understand cyber threats. I think it's pretty clear that's a difficult thing to communicate, but if we just assume that everybody on the planet is going to make the right choice, because, "Of course, they understand the risk," that is a poor assumption, and it leads to all kinds of problems. It also assumes that the system is designed by perfect developers. If the developer, like, lets the user pick 12345, why are we only blaming the user? We should also hold the developer accountable for letting that happen. So, it's just unfair and unhelpful to blame users.
Allan 07:49
And then, I think there's one last piece that is often overlooked, too. What other incentives and motives does the user have? Cybersecurity is just one interest of 10,000, and maybe they are judged by how quickly they can log into the system on a regular basis. That's part and parcel of their actual work life. So, to me, there's a bigger picture of other influences that we in cyber put our blinders on and ignore. If somebody's sharing a terminal with a coworker, and it's a point-of-sale device, they're going to act differently on that device than if it were their own dedicated device, and so on. So, to me, there are more assumptions, too, assumptions of neglect, almost. We assume that they have no other interests besides the ones we brought to the table.
Josiah 08:26
Because this is our profession, we assume that security is their goal. And security is not their goal; they're trying to do something else: send an email or photos, or do their job. Security is supposed to help them out. Security is not the goal.
Allan 08:38
That's exactly it. That's exactly it. Alright, so we've covered assumptions, let's talk about magical thinking. What is the pivot to magical thinking, once we've gotten past our faulty assumptions, or rather, while we're still enmeshed in them?
Josiah 08:47
The magical thinking is that we can just believe that the user will make the right choice, or that users automatically know these things. There is no basis for our thoughts just becoming reality. So, if we assume that users will pick good passwords without providing them education, that is an example of magical thinking. We can't just think in our heads that things will go right; that never happens. We need to make careful decisions, whether that's how we configure our systems, or develop software, or conduct training. All of those things help eliminate that sort of misconception.
Allan 09:22
Hope is not a strategy. Precisely. Let's see, magical thinking, faulty assumptions. What next do we get into? Ah, popular logical fallacies. You get into these in the book. I'm going to pick three at random. You had a huge list, but I'm just gonna pick three at random that we can sort of dive into. One is the straw man fallacy. One is the ad hominem fallacy. And then, the third one, which is the one I'm not familiar with, is the gambler's fallacy. So, can you walk us through those three?
Josiah 09:47
Those are great choices. Some people might think that talking about logical thinking has an odd place here, but if you understand the context, it actually becomes quite useful. We have a lot of discussion, argumentation, even debate in cybersecurity about, "Should we do A or B? Should we make choice A or B?" So, it's not about fighting, but it is about sort of real, legitimate discussion, and that's why we included this. It might be obvious that vendors are trying to persuade you to do something, or to buy something, using some sort of poor logic, but this happens in everyday kinds of conversation. I'll give you examples from the three that you picked.
Josiah 10:26
So, let's start with the straw man fallacy. Imagine somebody comes in and says, "We're a software company, we should upgrade our software, and we should switch out the encryption to use quantum-resistant cryptography." And somebody at the table opposes this and makes the straw man argument: "No, we can't switch to quantum resistant. If we change the software all the time, consumers won't like it." What they're doing there is misconstruing the proposal: instead of addressing the one-time change, they're switching the argument on you to, "I'm going to argue against constant change," when that wasn't the argument at all. So, that's what the straw man fallacy looks like.
Josiah 11:06
Ad hominem, this is a Latin phrase, that means "to the person," and it's about skipping argumentation altogether and going to personal attacks. I'm a little bit sad that we see this more often than we should, maybe it's just a human tendency, but let's say you meet somebody at a conference, you have a nice conversation, they seem like a really smart cybersecurity person, and you go on LinkedIn, and their degree is in international relations. You say, "Oh, they can't possibly know what they're talking about," and dismiss their argument. That's a personal attack, without considering that they might be a very experienced professional. So, that's what ad hominem looks like.
Allan 11:41
I'm thinking of an exact and specific case, where there was a major breach and the press got into it, and discovered that the CISO had a music degree, and absolutely lost their minds. Everybody was like, "Oh, she's not qualified," there was all this ranting and all this stupidity. I went and looked at her LinkedIn before it vanished; she actually ended up taking her LinkedIn down at one point and, in fact, I think it's still gone. But there was plenty of room and space in what I saw to indicate a healthy and wealthy career in cyber, regardless of this music degree. Like, just that one, you triggered me with that one, that one got to me, because I came up with a liberal arts degree undergrad myself. Yes, I got my MS in InfoSec, but still, liberal arts degree for the undergrad. Okay, so, finally, we have the gambler's fallacy.
Josiah 12:22
This is a good one and an important one. It's this sort of human tendency to really love to see trends and meanings in independent events. The typical example is flipping coins. So, imagine you have a fair coin, a penny, and you get heads, heads, heads. Your brain starts to see an error, like, "I'm due for tails, if I had so many heads in a row." The fact is, the penny doesn't care about the last flip. And so, imagining that it has fairness built in is the gambler's fallacy, which is, "I'm due for red, I'm due for a particular card." So, what does that mean in cybersecurity? We just had a data breach, so that means we're not likely to have another one for a while, because we're not, quote unquote, due. This pops up all the time, and it just leads us down a rabbit hole. So, that's what the gambler's fallacy looks like in cybersecurity. These are all independent events.
Allan 13:16
Did you ever see Robin Williams in The World According to Garp, the John Irving novel? The airplane crashes into the house that they're thinking about buying. They're looking at a house, they want to buy, the airplane crashes into the house live as they're sitting there in the driveway, and he says, "I'll take it. What are the odds this will ever happen again?"
Josiah 13:31
Totally independent events. Yeah, this shows up in the lottery all the time, where people say, "Oh, the number 25 got selected yesterday, it's not going to get selected again," but these events are totally independent.
Allan 13:43
Let's pause right there and hear a brief word from our sponsor.
Axonius Ad 13:46
Hey, everyone. It's me, Simone Biles. You might be wondering why you're hearing my voice on a cybersecurity podcast ad. Well, it's because I'm partnering with Axonius. Whether you're a gymnast like me, or an IT or security pro, complexity is inevitable. I've learned that the key to success is focusing on what you can control. Go check out my video at Axonius.com/Simone.
Allan 14:21
Before we leave the fallacies, any other heavy hitters from the fallacy camp? I picked those three because those three seemed interesting to me. Any others that really deserve mention here?
Josiah 14:28
Yeah, maybe just one, which is called the sunk cost fallacy. Humans really don't like to lose, we don't like pain. Danny Kahneman and lots of other sort of psychologists and economists have looked at this. The trouble is, when we make a choice, and it turns out to be wrong, or it's producing pain for us, we stick with it for some reason. That's what the sunk cost fallacy is. We've spent time, money, effort on this cloud provider, on the security solution, and so, we don't move away from it because we've invested these resources, we have to, quote unquote, recoup our cost, when in fact, it would be a much more logical thing to just abandon that and move to something else. But we have these emotions of loss and guilt and pain, when really it would be better to think about: What do you control? What can you change in the future?
Allan 15:19
You've got to know when to fold the cards; don't throw good money after bad, as grandpa would say. Okay, that covers our fallacies. Let's see here. Oh, cognitive biases. Let's get into these. You've got a whole list of these. I picked, again, some random ones. I picked zero risk bias, confirmation bias, and omission bias. I know I've got one good example in my mind for confirmation bias, but the other two I drew a blank on. Walk me through zero risk, confirmation, and omission.
Josiah 15:46
Yeah, this was a really interesting chapter, because we had trouble narrowing down which ones to include. If you look online, you'll see something like 180 or more known biases. We do not include them all in the book, but we try and pick the relevant ones. Just as a reminder, this is when we put too much weight for or against an idea. So, let's start with zero risk bias, as an example. We haven't really talked about risk yet, but we talk in the book a little bit about how you can never get risk to zero, right? Cybersecurity is always about risk management. There is somewhere between more than zero and less than 100% chance that your computer will get infected today. So, imagine a company says, "We are going to have amazing cybersecurity if we have strong encryption," and what strong encryption usually means is big keys, long encryption keys, because they're harder to crack. But you could have beautiful cryptography and still be very vulnerable, so imagining that your giant, very powerful cryptography is going to keep you secure, that it's gonna get you to zero risk, that is a total bias. Those attackers are very clever. They're going to find some other way in.
Allan 16:49
Okay, so this one reminds me of when budget season comes around, and the CISO is going upstairs to the CFO, and the CFO always has some argument back along the lines of, "Didn't we pay for that? What was that EDR thing you bought last time? Aren't we good?" And you have to explain that, yeah, layered defense and defense in depth, the castle, the moat, and all the other cheesy metaphors. It's oftentimes questioned. "I thought we already paid for the cyber thing."
Josiah 17:12
Yeah, we gotta be careful about that.
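(Editorial aside, not from the episode: the layered-defense point Allan raises can be made concrete with a toy calculation. The control names and miss rates below are hypothetical, and the layers are assumed to fail independently.)

# Hypothetical probability that each control fails to stop a given attack.
controls = {"EDR": 0.30, "email filtering": 0.40, "MFA": 0.20, "backups": 0.50}

residual = 1.0
for name, miss_rate in controls.items():
    residual *= miss_rate  # the attack succeeds only if every layer misses it
    print(f"after {name:<16} residual risk = {residual:.4f}")

# Each layer shrinks the residual risk (here to 0.012), but it never hits zero.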
Allan 17:14
Alright, so the next one is the confirmation bias.
Josiah 17:18
Yeah, people might be a little bit familiar with this one. It's a tendency to look for evidence that supports what you want to be true, and to ignore evidence to the contrary. When I was in graduate school, we had to remind ourselves about this in research, but quite honestly, it happens in practical life, too. So, say that the CISO wants to buy a new firewall, looks at the past year's worth of attacks, and writes this beautiful proposal that says, "Here are all the attacks that the firewall would have stopped if we had it a year ago," as a convincing case to buy it. That is confirmation bias. It ignores all the things that the firewall couldn't have stopped, like insider threats and phishing. It's just cherry-picking the things that support what the person wants to be true.
Josiah 18:01
So, the last one on your list was omission bias. This is one that might be a little bit more unfamiliar, but it's when we as humans tend to favor inaction over acting, again, because a painful action feels worse to us than painful inaction. So, we don't act when we should. You can think of examples of companies who covered up bugs or insecurities. You can think about users who don't enable two-factor authentication because it would be, quote unquote, painful if they lost their phone, because they'd lose all their codes. And so, instead of enabling it, they just do nothing. They fall for this omission bias.
Allan 18:39
Okay, that's a great example. You said there were 180 total and you guys had to pick and choose your stack. So, I'm sure I missed a good one in my list of three. What's another one of your favorite biases that you guys talk about in the book?
Josiah 18:51
Yeah, maybe just a nod to something called knowledge bias. As an expert, somebody who has studied cybersecurity, I have to check myself all the time, to see if I'm assuming that the person I'm talking to or communicating with knows what I'm talking about, because I am guilty of knowing things that maybe I shouldn't assume others know. With this one, we have to be a little careful not to be condescending; that's taking it too far. So, I try and ask gentle questions up front, just to set the stage. Are you familiar with password managers? I don't want to assume that somebody knows what that is, if that's what we're talking about.
Allan 19:24
If their answer is yes, then you can take it to the next level. And if it's no, yeah, I like that. Avoid being condescending, while making sure you're not having your own knowledge bias. Excellent. Okay, so, we have covered fallacies, we have covered biases. Let's see here, perverse incentives and the cobra effect. These were two that I was super keen to learn more about. The cobra effect, for those who don't know, is an incentive that rewards people for making the issue worse. So, of these perverse incentives and cobra effects, my favorite selections were, let's see here: the goal of a security vendor is to keep you secure. Another one I found that I loved in your book was: bug bounties eliminate bugs from the offensive market. And then, the third one I picked was: cyber insurance causes people to take less risk. So, how do you want to tackle those?
Josiah 20:09
Sure, let's do them in order. Let me start by saying this chapter was really interesting for me to think about as we were writing it together, in part because it isn't my natural way of thinking in cybersecurity to ask, "How might this backfire?" That is a really important way to think. In the same way that we have to teach people to think offensively to be good defenders, to do good defense, I think we also need to think about, "How might this backfire?" The first one that you mentioned was this misconception that the goal of a security vendor is to keep you secure. Yes, vendors absolutely need to persuade whoever is purchasing their service that they have something we need. However, it's important to remember that the goal of any business, in cyber and otherwise, is to make money, right? That's why that business exists. Certainly, you could argue with me that it isn't an "either or." It's not that we couldn't have both; they can make money and we can be secured, we can have it both ways, but that's an ideal world. I think, in reality, it's a little bit bumpier than that. So, take, for example, auto-renewal. If you sign up for a service and it auto-renews every month or every year, I think that benefits the vendor much more than it benefits you, because they are incentivized to just release more software, even if it has bugs or flaws, because, again, they make money. I think consumers sometimes care more about security than the companies that they work with. So, we shouldn't assume that the vendor is putting security first and foremost, and as a result, we should take precautions; we should take care of the things we can control.
Allan 21:36
Okay, so we're getting into that hard reality of actual incentives. This ties into, within a cyber practice, not even talking about the vendors, but within the building, within the enterprise, the organization, this idea that you have to keep in mind that there are different goals on the table for all parties involved. Incentives are the way to get mutual goals met. It ties back to what we talked about at the beginning of the show: if their mission is to log in fast, because they've got this thing to do with the point-of-sale terminal, you've got to figure out a way to incentivize them to want security. And so, you have to assume that's not their incentive, and you have to look honestly at what the incentives are. So, I think this one works, though it's not a bias, on the interior landscape, not just the external landscape.
Josiah 22:15
Yeah, it definitely does. And I will say it goes to a sort of theme in the book, which is the overstatement of myths, the definitiveness with which we say, "Oh, security vendors must be keeping our security first." Or the next one that you picked, about bug bounties. Like, "Bug bounties are going to eliminate all the bugs on the market." That's a very strong, definitive stance, and that's where it goes awry. There is a grain of truth to it. Bug bounties absolutely can get bugs patched; they can take bugs off the market. But to say that it is 100% true is misleading. Depending on the context of conversations, I think we need to be really careful, because if we misrepresent what that means, it can lead to really bad outcomes. On that one, in particular, certainly there are ways that statement could be false. It could be that somebody finds a bug, they sell it to an exploit vendor, and then they later sell it to the bug bounty, which means it still exists in the wild for a while and we're all worse off for that. I think there could even be vendors who say, "You know what? We're going to have a strong bug bounty program, but we're not going to hire any software security team. We're just going to pay the bug bounties when they come in and not focus on software assurance the way that we should." So, the overstatements are something we caution about; that's a meta-myth.
Allan 23:28
Yeah, even with the best of intentions and the best design and the best effort and hiring the best pen testers on planet Earth, I'm reminded of the infinite room full of monkeys with typewriters: eventually you get Shakespeare. Let's imagine you've got the infinite room of the best pen testers on planet Earth; one of them could be having a bad day. There is no perfect state. You can go all the way to, "I literally have somehow magically snapped my fingers and gotten all the best hackers on planet Earth to attack this app." Tomorrow, there's another exploit. Somebody finds a bug that you didn't find today.
Josiah 23:58
Yeah, again, back to the "we will never get risk to zero," which leads to, "we should just be prepared." There will be another data breach, there will be another insider, be prepared for that.
Allan 24:10
Yeah. Cyber insurance causes people to take less risk. I've got my hot take on this one, but I want to hear from you. You guys are the experts, you wrote the book, you tell me.
Josiah 24:17
There certainly has been conversation about this, and it gets into something called moral hazard, which is definitely not unique to cyber. This shows up in all kinds of insurance, and what it means is that we actually have an incentive to accept a risk, because we're not responsible for the full cost. So, why would you spend a million dollars on cybersecurity for your company if you can get an insurance policy for $100,000, a tenth of the cost? If that were the whole story, you should spend no money on cybersecurity, buy insurance, just let bad things happen, and then invoke your insurance. Now, insurance companies know about this, this is not a surprise, and so they try and cap their payments, or they lower premiums if you show evidence of having good security. Cyber insurance, I think, has evolved a lot in the last 10 years. It's certainly not perfect, and it's still newer than many actuarial sciences, but I would like to see more incentives that reduce the likelihood of claims. I think that would be a better outcome.
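(Editorial aside, not from the episode: a toy expected-cost comparison shows how the moral-hazard incentive Josiah describes can arise. Every number below is hypothetical, chosen only to illustrate the shape of the tradeoff.)

security_spend = 1_000_000    # annual cost of a serious security program
premium = 100_000             # annual cyber insurance premium, a tenth the cost
breach_loss = 5_000_000       # total loss if a breach occurs
p_breach_secured = 0.02       # breach probability with the security program
p_breach_bare = 0.20          # breach probability with insurance alone
retained_loss = 250_000       # deductible the insured firm still pays per breach

expected_secure = security_spend + p_breach_secured * breach_loss
expected_insure = premium + p_breach_bare * retained_loss  # insurer absorbs the rest
print(f"Invest in security:  ${expected_secure:,.0f} expected per year")
print(f"Insure, accept risk: ${expected_insure:,.0f} expected per year")

Because the insurer absorbs most of the loss here, the insure-only option comes out far cheaper, which is exactly the incentive insurers push back on with payment caps and premium discounts for demonstrably good security.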
Allan 25:16
Yep. I like that for sure. And I've seen it. I've seen it live in real time. Again, I hate to pick on the non-cyber keepers of the purses, the holders of the purse strings, but I've heard that argument from upstairs: "Well, didn't we insure for that one? Why are we also layering on this extra cost? We insured against that one. I thought that's when we transferred the risk." Even in enterprise risk terms: "That's a risk we transferred." Yeah, but we still have to do our part. You don't want to just— Ugh, anyhow. How about other perverse incentives? Same thing, I picked a few of my favorites. I'll bet you've got a favorite I missed.
Josiah 25:47
Actually, you picked the highlights. I think those are the really important ones. We have a few more in the book that go into depth on a couple other items, but those were good highlights.
Allan 25:55
Right on. I chose well, that time. Okay. Finally, the question I ask every guest on the show, and we're going to use some magical thinking of our own now, because I am giving you a magic wand and you are going to wave it. With a wave, you can change any one thing in the entire world of cybersecurity that you want to change; people, process, technology, the ecosystem, the tech stack, the vendors, the community, anything. Wave your magic wand and change one thing, what's the one thing you change?
Josiah 26:23
I'm going to use my magic wand to create an accurate measure of how secure we are. We don't actually have that metric today, and it leads to all of these— well, every conceivable problem. We fight fires, and that's really exhausting. We're treading water on cyber incidents, and we don't know how to measure whether any action, any product, any choice is going to help things. So, if I could fix one thing with my magic wand, I want an accurate measure of security.
Allan 26:50
Oh, my goodness, that is literally the best answer we've gotten to that question yet. As soon as you said that, the light bulb went off for us. That's the one thing we're all missing. That's brilliant. Josiah, thank you so much, that was a fantastic answer. Great review of these concepts from the book, and I'm grateful that I picked some good ones here. I really am looking forward to reading the whole thing when it comes out. Thank you so much for coming on down to the Ranch. Thank you, listeners. Y'all be good now.
