July 5, 2022

The Future of Artificial Intelligence with Jeff Gardner

by Hacker Valley Studio

Show Notes

Jeff Gardner, CISO at Germantown Technologies, comes to Hacker Valley Studio this week to talk about the future of cybersecurity and what up-and-coming hackers may encounter on their journey into an ever-evolving industry. With a particular focus and interest in artificial intelligence, or AI, Jeff's discussion in this episode covers the current perception of AI in tech, the timeline for when we may see highly intelligent AI come into play, and what the future of AI looks like from a cybersecurity standpoint.

 

Timecoded Guide:

[03:54] Focusing on numerous areas during his day job as CISO and understanding the necessity of a strong team of trusted cyber professionals

[09:00] Getting excited about current and upcoming technology in cyber while remaining realistic about present day limitations and needs 

[15:53] Automating security analyst tasks and finding the quality control balance between machine knowledge and human intuition

[22:50] Breaking down the concept of “bad AI” and understanding how to address the issues that may arise if AI is used for nefarious purposes

[28:22] Addressing the future of unique thought and creativity for computers and for human beings 

 

Sponsor Links:

Thank you to our sponsors Axonius and AttackIQ for bringing this episode to life! 

Want to learn more about how Mindbody enhanced their asset visibility and increased their cybersecurity maturity rating with Axonius? Check out axonius.com/mindbody 

AttackIQ - better insights, better decisions, and real security outcomes. Be sure to check out the AttackIQ Academy for free cybersecurity training, featuring Ron and Chris of Hacker Valley Studio, at academy.attackiq.com

 

What are some of the things that you are expecting the next generation to be doing when it comes to bypassing security in a way that they won't get caught?

Jeff, like many hackers and security pros in the industry, started his journey in cyber as a kid, hacking different systems from his own computer just because he could get away with it. While that type of hacking still exists, there are now new ways for systems to manage and counteract these threats and attacks, as well as expose who is behind them. The next generation of hackers will learn in different ways on different technology, and Jeff is confident their path will be shaped by where the security industry is already going: devices that use machine learning and pattern recognition, and the continuing development of AI.

“When it comes to artificial intelligence and all the myriad of models and neurons and all that, we're still pretty much at single neuron, maybe double neuron systems. But, as things evolve, it's gonna be harder and harder to bypass those defenses.” 

 

What is your perspective of AI not being here and available for us yet?

In Jeff's opinion, the biggest thing missing from current AI, the thing that would make it the intelligence we claim it is, is creativity. We have smart technology, technology that can automate tasks and be told very easily what to do, all through feeding in data and processes. However, Jeff points out that most of what we call artificial intelligence in the cyber and tech industries doesn't have the creativity or the intuition to match the human brain. We're in an exciting escalation of technology and intelligence, but we aren't at true AI yet.

“I think one of the things that's missing from AI, and it's being solved rapidly, is creativity. We train it through models, but those models are only the data that we give it. How smart is the system if you just give it a plethora of data and have it come to its own conclusions?” 

 

How far away do you think we are from highly intelligent AI?

Although the futuristic AI that appears in science fiction movies and books isn't here yet, Jeff believes we aren't far off from a level of computer technology we have never seen before. We've continued to see quantum leaps in technology, with computers starting to solve math problems we've never even thought of and engage with art in ways we never dreamed possible. What we see now is the tip of the iceberg; the future holds massive potential for what AI and the automation of certain tasks will look like, with analysis technology continuing to close in on 99.9% accuracy.

“When you can get to that level of processing speed, you can do things we can't even dream of, and that's what they're doing now. They're solving math problems in ways that humans have never thought of, they're creating art in ways that humans couldn't imagine.”

 

How do we create AI for good? 

The fear of "evil" or "bad" artificial intelligence comes up frequently when we discuss what the future of AI may look like from a security standpoint. However, Jeff is confident that the issue is not as black and white as our fears make it. For starters, when we understand the purpose behind what "bad" AI might be programmed to do, we can put other measures in place to combat it. On the other hand, the struggle of good vs. bad, right vs. wrong, has been a problem in hacking and in cyber since the first white hats and black hats came into existence. The fear of bad AI is as much a philosophical discussion as a technical one.

“I think it all comes down to, like you said, purpose. What's the purpose of the bad AI? What's it trying to do? Is it trying to hack our systems and steal the data? Is it trying to cause physical harm?”

---------------

Links:

Stay in touch with Jeff Gardner on LinkedIn

Connect with Ron Eddings on LinkedIn and Twitter

Connect with Chris Cochran on LinkedIn and Twitter

Purchase an HVS t-shirt at our shop

Continue the conversation by joining our Discord

Check out Hacker Valley Media and Hacker Valley Studio



Transcript

Jeff 00:10
What's the purpose of the bad AI? What's it trying to do? Is it trying to hack our systems and steal the data? Is it trying to cause physical harm? Is it trying to destabilize an economy? Is it trying to shut down hospital systems? What is that AI trying to do? Is it the ultimate evil, where it wants to eliminate all humans?
Hacker Valley Studio 00:30
Welcome to the Hacker Valley Studio podcast.
Axonius Ad 00:38
Hey, everyone. It's me, Simone Biles. You might be wondering why you're hearing my voice on a cybersecurity podcast ad. Well, it's because I'm partnering with Axonius. Whether you're a gymnast, like me, or an IT or security pro, complexity is inevitable, and I've learned that the key to success is focusing on what you can control. Go check out my video at Axonius.com/Simone.
Chris 01:04
What's going on everybody? You're in the Hacker Valley Studio with your hosts, Ron and Chris.
Ron 01:17
Yes, sir.
Chris 01:20
Welcome back to the show.
Ron 01:24
Glad to be back again. In the studio today, we have someone that's done a lot of things, been a lot of places, all the way from incident response to offensive security. In the studio today, we have Jeff Gardner. Jeff is the Chief Information Security Officer at Germantown Technologies, and he's going to be schooling us on some topics of his choice today. Jeff, welcome to the show.
Jeff 01:50
What's up, guys? I was watching the little audio bar of the recording. I think that was the longest held, "I'm here," that I've ever seen. Bravo, you must have some stage experience, Ron.
Chris 02:04
You don't even want to know how long he can hold those notes. Jeff, beyond excited to have this conversation with you today, but for the folks that want to know who you are, and get a little bit more information about you: What is your background and what are you doing today?
Jeff 02:24
Yeah, sure. So, I've been doing this for— I stopped counting at 20 years, you know, there's no point in counting after that, but I've done everything from system administration, network administration, help desk, analyst, engineer, architect, up through CISO. I've worked in startups, healthcare, manufacturing, the military, federal service, like you said, renaissance man. I've been here, there, everywhere, but right here now at Germantown, it's an interesting setup. So, we are the IT and security services arm of Rubicon Founders, our parent company. So, basically, they have a portfolio of companies, and we help them with all their security and IT needs, and I'm in charge of the security aspects of all of our subsidiary companies. So, it's a very different role than CISO roles I've held in the past, because normally, it's just like one place. Here, it's like, "Well, congratulations, you're now going to be CISO of five, six, however many companies we have." Not complicated at all, no pressure.
Ron 03:24
It sounds like you've essentially signed up to do 3, 10, 20 jobs all at one place, which is what a lot of CISOs describe as being a CISO, they're doing many jobs at once. But what got you into wanting to do that? It sounds a bit crazy when you say all the functions out loud, and all the places that you have your hands in, but what compels you to focus on so many areas at once?
Jeff 03:54
I mean, it's a very unique opportunity and I've never been presented with it. I mean, I can do the one to one all day, every day. Give me a single company to run the program for and that's, I'm not gonna say it's easy, but it's something I've done. This is, I'm like the Neil Armstrong, going into this new situation, I'm stepping foot on the moon and there are no footsteps here. How this is going to shake out, I have no clue. Is it gonna work? I certainly hope so. You know, I'm gonna do my best, but it's uncharted territory and that's kind of exciting, being able to be a pathfinder. So, I brought on a couple of my most experienced and trusted guys from positions past. So, it's not like I'm going this alone, and that was one of the key things when coming over here, I was like, "I need to have my people, and if you can get A and B and C to join, then I'll come over and we'll do the thing." And they came over and now we're doing the thing.
Chris 04:50
So, paths unknown, being an explorer, stepping into the unknown, the uncharted. Is that something that you had a habit of doing throughout your life? And when did that start? I would love to hear a story about that.
Jeff 05:01
Oh, God, absolutely. Much to my parents' chagrin, I've kind of always been that way. I mean, it's even the way I got started in this industry in the first place is, back in the, I'll say late 80s, early 90s, you know, when payphones were still a thing? I had a cousin who was much older than I was, he was into phreaking, frequency generators, all that fun stuff. So, he kind of taught me how to build one of the boxes, I'd go home from school every day on my bike, cash out the payphone, go to the local Lamppost Pizza and start playing Super Street Fighter, which I also hacked that machine, too. So, it just kind of went on from there, and then, I actually started to go down the path of the dark side, which is, "Hey, the library BBS is actually hooked up to their system, which has my late fees. I can just erase my late fees, that's pretty cool. Let me do that." And then, my parents found out and they're like, "We should probably divert your interests into some more productive means." And then, it went on from there, so I've always kind of had a knack for figuring out loopholes and going into the unknown and just figuring out how things work. I think it's how we all are, at the end of the day, you know, in security, we like to take things apart, figure out how it works. We don't know when we're going in, but we just kind of learn as we go.
Ron 06:18
Yep, and we don't like putting them back together a lot of times, especially when you're on the offensive side of the house. I know, when I got into my first bit of malware, which was sent to me by someone I did not know, I had to figure out how to get rid of it, just because I knew I would be in so much trouble if my parents ever found out.
Jeff 06:36
Yeah, and that was back in— I'm gonna just assume and say that was back in the day before virtualization was a thing. So, you're probably executing that malware on your actual system that you use every day, which is all kinds of fun.
Ron 06:53
You know, that makes me think about the future a bit, just because we were able to get away with so much, especially when it came to IT security. You were able to look at these late fees, even view them without a consequence, let alone change them. What are some of the things that you are expecting the next generation to be doing when it comes to bypassing security in a way that they won't get caught?
Jeff 07:17
I mean, a lot of that's going to come down to where the industry goes with security. I mean, AI, artificial intelligence, machine learning, the way that it's defined, I don't like it, because it's not actually what it really is. I mean, most of the systems that we're actually working on right now are machine learning, yeah, pattern recognition, data sets, and all that. But when it comes to artificial intelligence and all the myriad of models and neurons and all that, we're still pretty much at single neuron, maybe double neuron systems. So, as things evolve, it's gonna be harder and harder to bypass those defenses.
Jeff 07:54
We've seen a lot of the AI/ML on the defensive side. I'm curious about when that's going to start happening on the offensive side, you know, the Skynet scenario where we're gonna have Skynets battling each other, like blue AI versus red AI. And then, what are we going to do at the end of the day? That's probably not in our lifetimes, well, maybe not, I don't know, but I just see these things happening. It's not like back in the day, where there's all these unknown attack vectors and we're kind of being pioneers. New attack vectors, new TTPs, they're not really happening at the pace that they were, but when they do happen, the consequences are more severe because we think we're good. Like, we've got these systems in place, yeah, we might get breached, but we can respond. Eh, not always, especially if the systems that you're using to protect are the ones that are actually getting breached, like your perimeter defenses, your firewalls. If that's the thing that's been breached, and they're using that as a beachhead: Are you really monitoring that? And are you really able to detect something that's coming in from that attack vector?
Chris 09:00
Oh, you just said so much to unpack, but I want to go back to what you were saying about AI. I think you're looking for something that passes the Turing test, right? So, do you find yourself reminiscing more about the old days in the BBS times? Or, are you thinking about some of the technology of the future more? What is that technology that really gets you excited today?
Jeff 09:00
It's this weird scenario of somewhat overconfidence in tools and forgetting that we all, at one time or another, were command line jockeys. That skill set, even in younger analysts, they're used to tools, but it's like, "Alright, I'm gonna take away everything from you. Go, do incident response." How? You start looking at logs. "Okay, which logs? How do I get those logs?" Well, you need PowerShell. Like, you can do a lot through PowerShell if you're on Windows, or Bash if you're on Linux, things like that, and it's like, "I haven't really done that before." Okay, okay. It's almost like starting a martial art, you start at a white belt, you go yellow, red, whatever, green, then black. It's like, we all tend to just jump into the tools and think we're black belts. Alright, but you haven't earned your stripes. You haven't gone up through the ranks yet. I don't know if that's just a sign of the times, we have all these advanced tools, we kind of forget— I don't want to say the purer path, but the way we all came up. We all came up using basically DOS, most of us who are older, and it's like, "What do you mean there were no icons?" There were no icons, there was a little blinking cursor, and that's what you looked at. If I mention autoexec.bat or config.sys, people are just like, "What's that?" I'm like, "Oh, God. Really, you don't know what that is? Oh, we're in for a treat."
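For readers who want a concrete feel for the no-tools log triage Jeff describes, here is a minimal sketch in Python. The log format, field names, and sample entries are hypothetical illustrations, not anything from the episode; real Windows or Linux auth logs look different, and in practice an analyst would pull them with something like PowerShell's Get-WinEvent or by reading /var/log/auth.log directly.

```python
# A minimal sketch of hands-on log triage: read raw auth log lines and
# count failed logons per source address. The log format below is a
# hypothetical stand-in for real Windows/Linux authentication logs.
from collections import Counter

SAMPLE_LOG = """\
2022-07-05 09:12:01 FAILED_LOGON user=admin src=203.0.113.7
2022-07-05 09:12:03 FAILED_LOGON user=admin src=203.0.113.7
2022-07-05 09:12:10 LOGON_OK user=jsmith src=198.51.100.4
2022-07-05 09:12:14 FAILED_LOGON user=root src=203.0.113.7
"""

def failed_logons_by_source(log_text: str) -> Counter:
    """Count FAILED_LOGON events per source address."""
    counts = Counter()
    for line in log_text.splitlines():
        if "FAILED_LOGON" in line:
            src = line.split("src=")[-1].strip()
            counts[src] += 1
    return counts

if __name__ == "__main__":
    # A burst of failures from one source is the kind of pattern an
    # analyst spots by eye before any tool flags it.
    for src, n in failed_logons_by_source(SAMPLE_LOG).most_common():
        print(f"{src}: {n} failed logons")
```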
Jeff 10:49
I mean, I'm stuck in this middle ground of living with one foot in each. I look forward to the future, but at the same time, I don't want that to take away from the basic knowledge that I think we all should have. And it's kind of like, how do you convince those people who go, "Well, if this doesn't work, then there's these 20 other things that we can use that could work." And it's like, "No, no, no, no, no." What if, absolute worst-case scenario, nothing works? It's like, well, we're never going to be in that scenario. Absolute terms are the enemy of everybody. Never? Never, 100%, we're never going to be there? You're totally sure of that? Okay, but in terms of technology, I mean, there's a lot that's being done now in the email security space, which is phishing emails, ransomware, all of that, that's coming down the pipe, where it's actually using the machine learning. I'm not gonna say AI, because it's not AI yet, but I've seen the way some of these systems look at emails now and it is looking at it like a junior analyst would and that's pretty cool. I mean, looking at word frequency, looking at devices, looking at context, and it's like, Jesus, this, 10 years ago, would never even have been a thought. Like, yeah, I can scan for malware in an attachment, or does this attachment have a macro? Okay, that's fantastic. Is it pointing to a bad link? Cool, but the stuff that's coming out is looking at it like I would. That's good from one side, but on the other side, it's like, "All right, that's kind of scary." Like, it's getting pretty freakin' accurate at doing some of that stuff.
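To make the word-frequency idea tangible, here is a toy sketch of scoring an email the crude way, far simpler than the machine learning systems Jeff is describing. The term list, weights, and threshold are illustrative assumptions, not any product's actual model.

```python
# A toy word-frequency phishing scorer: weight suspicious terms the way
# a junior analyst might eyeball them. Terms and weights are made up
# for illustration; real systems learn these from labeled data.
import re

SUSPICIOUS_TERMS = {
    "urgent": 2.0, "verify": 1.5, "password": 1.5,
    "invoice": 1.0, "wire": 2.0, "click": 1.0,
}

def phishing_score(email_body: str) -> float:
    """Sum weights of suspicious terms, normalized by message length."""
    words = re.findall(r"[a-z']+", email_body.lower())
    if not words:
        return 0.0
    raw = sum(SUSPICIOUS_TERMS.get(w, 0.0) for w in words)
    return raw / len(words)

msg = "URGENT: verify your password now, click the link to avoid suspension"
print(f"score={phishing_score(msg):.3f}")  # higher = more phishing-like
```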
Ron 12:23
You know, you're not the first person that has come on the podcast and said, "AI is not here yet." And I've always been curious, like, why is it not here? Like, we are advertising it, we're talking about it. We've even named programming packages and libraries "AI models," and whatnot. So, I guess, what is your perspective of AI not being here and available for us yet?
Jeff 12:46
There's a lot of different definitions of what AI is and what constitutes AI, versus human thinking. I think one of the things that's missing from AI, and it's being solved rapidly, is creativity. We train it through models, but those models are only the data that we give it. How smart is the system if you just give it a plethora of data and say, "Do your own thing, come up with your own conclusions"? What's it going to do? And how creatively can it solve those problems? I was literally reading an article today about how researchers are finally utilizing AI to come up with new ways of looking at complex math problems that humans have never come up with. So, they're starting to do this research with AI, but it's not very prevalent yet. And I think, primarily, because going back to the neurons, it's like one neuron or two neurons. How many neurons are in an actual functioning human brain? We're not quite to the point where AI can think and have those— I don't want to call it gut feelings about things, but it only knows what it knows and it only knows what we've shown it to this point. True AI in my head is when you turn it on, it's like, "Hello, Dave."
Ron 14:03
Wow, how did you know I was Dave?
Jeff 14:05
Those, "What is happening?" moments, but it's all those tenants of AI, too. Like, Asimov's laws, and blah, blah, blah, blah, blah. It's an exciting time, but it's not there yet. Even most AI researchers that I've spoken to agree that we're not there yet, but we're rapidly approaching the horizon where we need to start being a little more cautious. I forget who did it— Was it Google or Microsoft, or somebody else? They created an AI system, and it started talking in a language that the programmers couldn't understand. That it was getting close, but that's also one of those scary things where it's smart enough to invent its own language that we can't figure out and we don't know what it's doing. So, where do we draw these lines? How do we put boundaries around these systems when it actually gets to the point where it's smarter than us? It's gonna figure out a way around the boundaries, because it's smarter than us and it can very easily and rapidly become smarter than us.
Chris 15:09
Is AI an interest of yours? Like, outside of cybersecurity and technology. Or, is that something that you're trying to utilize for your day job right now?
Jeff 15:18
I'm interested in any kind of science and technology, and I was a physics and chemistry nerd coming up, that's how I ended up in information security, through being that kind of a nerd. But no, it's just fascinating how they're programming these autonomous systems to be autonomous. But what is that going to mean? Like I said. What's that going to mean for our future, when these things do become ultra-highly intelligent, and you have AIs battling each other? What's going to determine a superior AI? Is it going to be processing power? Is it going to be datasets? It's an unknown. It is exciting, slightly terrifying, but exciting.
Ron 15:53
You're definitely getting into a bit of a philosophical topic, right? And I like it, because I'm a huge fan of all things automation. If I can get a machine to do it, then I am a happy camper. So, I'm just gonna go ahead and throw this bomb in there because Chris didn't throw it in there yet. We typically get into— We used to argue about: Can machines do everything that an analyst could do? Can we automate the task of an analyst? Or, do we need that human creativity side along with us today? I'm talking about the tools that we have at our disposal. If you were to have unlimited budget, be able to put all these tools together: Do you think that we could get close to automating all the things that we typically do as security professionals?
Jeff 16:42
If you're talking like, if I had a blank check in my hands today, what can I do tomorrow? I don't think we're quite there yet. I personally don't trust the systems that well, because we're not to the point where it's actually thinking, it's still just recognizing patterns. So, even if I were able to automate all the things to a certain confidence level, let's say 99%, there's still going to be an analyst in there for that 1%, or even to just make sure, in that 99% of things, it's not making wrong decisions, because one wrong decision can cost lots of money, or put users out, and once the users start getting mad, then they start figuring out ways to get around controls, and then, it comes up to the CEO, and then things roll downhill, and then everything just goes sideways. Until I can get to that 99.9999999%, the infinite nines afterwards, confidence that this thing is going to be right in its decision making all the time, I'm still going to want that analyst there, just to do random spot checks, just to make sure. I'm a firm believer in the human gut and we can't quantify what that gut feeling is. At least, I can't, I'm not smart enough to, but there is that feeling like, you can look at this data and you just get that feeling something's not right. Even though all the tools are telling you, "No, this is good. Everything's fine." You put it through everything known to man and it's like, "Nah, it's good." You still look at it and go, "I'm not sure," and it ends up meaning that you go off on this tangent that you couldn't even imagine you went on, looking through logs and data sources, you're like, "Oh, my God, this was bad." How did you arrive there? You don't even know how you got there. So, if you can't figure out how you got there, how are you going to program a system? How are you going to train a dataset to get to the point where even you're just kind of Easter egging around and you stumble upon that?
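Jeff's point about confidence thresholds and random spot checks maps naturally onto a simple routing rule. The sketch below is a hypothetical illustration of that workflow, assuming a model that emits a confidence score; the threshold, sampling rate, and names are all made up for the example, not any particular product's behavior.

```python
# A minimal sketch of confidence-gated alert triage: auto-handle only
# above a threshold, and still sample a fraction of "confident" verdicts
# for human spot checks. All numbers here are illustrative assumptions.
import random

CONFIDENCE_THRESHOLD = 0.99  # "infinite nines" is the ideal; this is not it
SPOT_CHECK_RATE = 0.05       # send 5% of auto-closed alerts to an analyst

def route_alert(alert_id: str, model_confidence: float) -> str:
    """Decide whether an alert is auto-closed or queued for an analyst."""
    if model_confidence < CONFIDENCE_THRESHOLD:
        return f"{alert_id}: analyst queue (confidence {model_confidence:.2f})"
    if random.random() < SPOT_CHECK_RATE:
        return f"{alert_id}: auto-closed, flagged for random spot check"
    return f"{alert_id}: auto-closed"

for aid, conf in [("A-101", 0.999), ("A-102", 0.42), ("A-103", 0.995)]:
    print(route_alert(aid, conf))
```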
Ron 18:44
Security controls fail everywhere. They fail constantly and, worst of all, they fail silently. That's why you need AttackIQ, the leading automated insights platform to continually validate your defenses. Better insights, better decisions, and real security outcomes. Get it all with AttackIQ. Plus, check out the AttackIQ Academy for free cybersecurity training, featuring the good people here at Hacker Valley Studio. Register today at academy.attackiq.com, and let them know Hacker Valley Studio sent you.
Chris 19:24
So, you know, I'm so glad that you mentioned that, because there's a book called Blink by Malcolm Gladwell and it's all about the processing of data that you can't even explain. I remember when I was doing threat intelligence, we had a certain threshold for what we would report on. But I had been in the game for so long, there was something that just seemed like it was important enough to mention, but it didn't meet any of our criteria. And I was like, "You know what? I'm going to push it out anyways. I'll push it out and send it to the entire company." I remember my boss was like, "Hey, you know that didn't meet any of our thresholds, but you're 100% correct. How did you know to even push it out?" I said, "I don't know, I just knew." He was like, "Oh, you just did a blink." And I was like, "What the heck is that?" And he was like, "You gotta read Blink, it's a book by Malcolm Gladwell." And so, I read it and it's really incredible. They even talk about a story where there was a sculpture that was selling for millions and millions of dollars, and somebody was like, "That's not the real sculpture." How did they know? They just had all of this information in the back of their mind, this experience, these different things. I'm going to side with Ron for a second. Just for a second, Ron, don't get too excited, because I do think you'd need people to make those decisions. But even if you were to look at it, at its base level, a human being is like a giant computer. They're taking information in, storing information, and connecting it in very weird ways. So, I think, eventually, we could get to a structure in which a computer can make that risk assessment, it can make that risk acceptance, but I do think we're quite a long way away from being able to do that. If you had to put a time limit on when we could have something that was that intelligent to make that level of decision: How far away do you think we are? And how do we get there?
Jeff 21:08
I think we'll get there rather rapidly with the increases in quantum computing, because some of the processing power that I've seen in some of the research papers coming out of Google's efforts and others, it's like, alright, this is starting to become spookily intelligent. That's what they've used, I think it was in conjunction with Amazon, that these researchers have started coming up with these new creative ways to start solving problems that humans hadn't even thought of. So, I would say within the next, I'll be conservative, 10 to 15 years. We could be there if the quantum revolution keeps going the way it is. Just because then, it's like, they're starting to talk of systems that could process the entirety of information on the internet within seconds, and it's like, "Oh, my God." When you can get to that level of processing speed, then the data matching capability becomes incredible, you can do things we can't even dream of, and that's what they're doing now. They're solving math problems in ways that humans have never thought of, they're creating art in ways that humans couldn't imagine. I mean, they just came out with, I think it's the Xenobot. It's not technically an artificial life form, but it is made out of frog cells, and it created a new means of reproduction all on its own. But then, they took that, and they fed it through an AI algorithm, and it came up with the optimal shape for these cells to help with that reproductive process. So, it's like Skynet's thinking of better ways to reproduce. That is getting a little weird.
Ron 22:50
That's a good point. And you know what? We're already down the philosophical rabbit hole, we might as well keep going a little bit further. I think the big difference with that is, computers have the ability to change dynamically. They can reprogram themselves and just change if something happens, but for us as humans, it could take hundreds of thousands, or millions, of years for us to adapt to the changes of our surroundings and the opportunities and threats around us. When you look at this new technology, and when you look at the way security is going from a defensive and offensive perspective, how do we make things harmonious? How do we drop this layer of needing to attack organizations, with that monetary aspect? What can we do as a community to reduce the need to attack companies?
Jeff 23:46
Let's see, here's the interesting thing, and it's just my take on the interesting aspect of that question. We're somewhat limiting it in scope to our community, when there's always going to be those individuals out there in the universe who are going to get a hold of this technology, and are not bound by our moral codes, or our ethics, and they don't care if it wreaks havoc. Like in Batman, they just want to watch the world burn, right? What do we do with that? Like, our AI systems that we're creating may have bounds, but the ones that someone out there who just doesn't care creates, they're gonna program the thing that has no bounds. How do we combat that? So, do we start loosening our restrictions on our AI so it can combat their AI? And it's this rabbit hole, it's just an escalating arms race between AIs. Where does it begin? And at what point are we unable to hold things back? Because there's going to be a certain point with these AIs where we can no longer put bounds on them. Do we segment our important systems, like our banking? What do we do as a society to prevent these systems from even being interfaced with this AI if everything's now globally connected? How do we put bounds on these systems? Like I said, it's an interesting philosophical argument. It's one that I'm just like, "Oh, God. Oh, God, it's time for a beer."
Chris 25:15
Yeah, so, the guardrails are completely off this conversation. We might as well just keep it going. When I think about the ultimate, negative, bad algorithm versus a good algorithm, I'm like, "Who would win?" Who would win in this case? I had the honor of interviewing Steven Kotler, and Steven Kotler is the foremost expert in flow states and flow state research. And I asked Steven, I said: Since you have all these bad hackers that want to enter flow just as badly as the good hackers want to enter flow, who wins in that case? How do we ensure that the right side enters flow better and comes out on top? And he said that he had done a little bit of research into that, but whenever you're doing something for good, you tend to enter flow state more readily. Now, bad folks can enter flow still, but for some reason, that good, that purposeful intent gives a slight advantage to the good person. When you're looking at things like AI, if we had a complete monster villain that wanted to create AI for bad: How do we create AI for good that could beat that? Have you even thought about anything in that realm?
Jeff 26:32
I mean, it's an interesting discussion. I think it all comes down to, like you said, purpose. What's the purpose of the bad AI? What's it trying to do? Is it trying to hack our systems and steal the data? Is it trying to cause physical harm? Is it trying to destabilize an economy? Is it trying to shut down hospital systems? What is that AI trying to do? Is it like, the ultimate evil where it wants to eliminate all humans? Or, is it just this AI that's designed to analyze defenses, figure out a way to get through it, and then penetrate and turn it over to an analyst to be like, "Okay, I don't know what data you want, but I'm in. You tell me what you want me to exfiltrate, and I'll do the things."? It's almost like an intertwined system. So, I mean, in those cases, I think, because of what you just said, blue might have the advantage, since the good guys tend to enter the flow more readily than the bad guys will. So, if we're in that scenario, I think blue would eventually win. It's not going to be a quick resolution, it's going to be a long, drawn-out thing, because it's bad versus good, if they're both equal. How do you determine if they're equal? That's another question. How do we determine capabilities? Is it: this side is utilizing Google's quantum computer and this side is utilizing Amazon's quantum computer? They can process this many correlations, they can do this many qubits, or whatever it is. How do you determine what capabilities exist in an AI to even have an advantage? Is it interconnection? The blue AI has access to all the interconnected systems, the other AI only has access to this one entry point. Does that mean that blue team has an advantage? Again, it's a theoretical question. Maybe the red AI is literal Skynet. It's a super ultra-genius level AI, all it needs is one hole and it can own you.
Chris 28:22
So, I have a bit of a philosophical question for you, and I can't remember what piece of sci-fi I was watching, but I guess the whole idea was that this AI basically came from the internet: all of our thoughts, our concerns, advertising, purchases, all this stuff. And so, it comes from all of us and it created this thing that I think ended up being almost like a horror movie. But that makes me wonder, is there such a thing as a unique thought? If you take humanity from the beginning of time until now and you put that entirety into an algorithm, could you still assimilate new and different knowledge? Is there any uniqueness to anything going forward? Because it does seem like a lot of things are derivatives of things we've already done; that's where a lot of creativity comes from. Is there such a thing as unique thought going forward?
Jeff 29:17
Absolutely. And then, I keep coming back to the example of these computers working on these mathematical problems, that are coming up with relationships between systems that humans never thought of, or the AI systems that are creating art that we didn't even imagine was possible. It's because it's able to process information so rapidly, it's able to see things that we're not and come up with these novel questions. It's still in its infancy, but I absolutely believe there is still the capability for original thought. I mean, we haven't been where we are now for a very long time. Modern civilization, what? Maybe, if you want to call it, like, you know, the 1800s to now? So, 220-some-odd years of actual technology and higher-level thinking? Look at the last 50 years, I mean, things just keep going and going, and AI wasn't even a thing 80 years ago, 100 years ago, now it is. Spaceflight certainly wasn't a thing 200 years ago, but there's hyperspace engines, new means of propulsion. These are all original thoughts that we're coming up with. Do we leave this to computers to come up with all the original ideas for us once they are able to think on their own? Or, are we going to be working in tandem with them? Because there is something about the human mind, as much as you can simulate things, we are products of chaos and there isn't a system that I know of today, or that's being developed, that can simulate the kind of chaos and just random collisions of neurons and electrical signals in the human brain. Until it gets to that point, but I don't know if it ever will. So, there's always going to be that element that humans will have of just being the oddball in the universe. There's going to be things that we can think of that are just so completely random. How did we get there? I don't know, these two neurons just rub each other a certain way and I came up with this thought. How? No frickin' idea, because it's pure chaos.
Chris 31:24
Pure chaos. Love it. Jeff, appreciate the time and the attention in this incredible conversation. For the folks that want to stay up to date with you, and all the incredible things that you have going on in your world, what are the best ways that people can do that?
Jeff 31:37
You know, I can give you my LinkedIn profile, so you can post that whenever this podcast goes live, but that's the easiest way to get ahold of me. Just, you know, hit me up on LinkedIn. I will respond to anybody. It may not be that day, or that week, but I will get back to you because I love talking about this kind of stuff. It's the way we learn. It's the way we grow. Everybody thinks of things differently, so, the more we can start involving people, even outside our own profession, in these conversations, I think the better off we'll be.
Ron 32:03
Awesome. Well, we'll be sure to drop your LinkedIn profile in the show notes for everyone to stay up to date with you and all the things that are going on in your world. And with that, we'll see everyone next time. Take care, bye.
Hacker Valley Studio 32:15
If you found value in this content, it would mean the world to us if you shared it on social media, sent it to a friend, or talked about it over coffee.
