BlackBerry Blog

Wisdom, AI, Intelligence and Being Left of Bang

FEATURE / 02.21.18 / Scott Scheferman

When we say #ThinkBeyond, what we really mean is: Think anew. Outside of the box. From a new angle. From a new perspective. From a new hope and conviction. It means no longer allowing this self-perpetuating industry, the noisy vendor space, and the brutal media to close our minds and resign us to phrases like these:

•    “Prevention is Dead”
•    “It’s a matter of when, not if”
•    “The best way to address the ransomware plague is to make sure you have good backups”
•    “The vulnerable end-user is the problem, we must train them to not click things”
•    “You need more alerts, more events, more visibility, more IOCs, more static signatures, more intelligence to be able to find the threats that have already compromised your environment”

The irony of the word ‘intelligence’ in our industry is that it has come to mean little more than quickly expiring, low-confidence, weakly attributed, slow-to-operationalize, cloud-dependent noise. Even so, it has become a virtual currency between C-level intelligence communities, traded almost like challenge coins in a quid-pro-quo manner.

This dynamic defeats the whole purpose of ‘intelligence’ to begin with, much like when monetized search results killed the dream of free information for the world in the early 2000s. What we need to do as an industry is shift focus to a different kind of intelligence, one that has wisdom as its goal.

Wisdom is the ability to see through the noise and understand the core problems we face as leaders in an organization. Wisdom is also the ability to truly democratize intelligence. By the end of this blog I hope to convey nothing if not the simple notion that in the year 2018, Artificial Intelligence (AI) provides us with a new hope, and allows us to disrupt this ‘currency of malware intelligence’ by subsuming and classifying the entire universe of binaries - those that already exist and those yet to be written.

A bold statement, yes, but let’s explore…

Industry Core Challenges

Surely, confidently, emphatically, the Number One problem we are facing is quite simply a criminal market leveraging ransom to extort our organizations out of cash. The second most obvious problem we face is a severe lack of talent and resources to help us deal with the first. This was echoed at the WEF summit in Davos, and at the recent US Senate Intelligence hearings.

Everything else we deal with has pretty much always been with us: AAA, exploitable bugs, theft of IP, governance, compliance, policy, audits, cyber risk, patch management, device inventory, firewalls… the list goes on, but there is nothing new under the sun. Sure, we have more Things in the Internet of Things (IoT) realm. Sure, the cloud. Yet, fundamentally, our roles as leaders have always carried these cyber challenges, and our security controls are designed to mitigate the risks associated with them.

But ransomware - and the exponentially widening skills gap to address it - embodies, more than anything else, the core challenge we face when we look at our risk curve today.

Like any evolution, sometimes it takes a revolution to change the tide. For nearly every issue the human race is challenged with, we have recently revolutionized the way we attack the problem. That revolution is, without a doubt, Data Science. And while there is many a buzzword being thrown about in our cyber industry, the truth is we have been slow to adapt, yet eager to perpetuate the problem, increase the noise, and throw more dollars at it instead of more wisdom.

What would wisdom tell us to do when dealing with ransomware (indeed all malware) in the middle of this fourth industrial revolution? The answer is almost deceptive in its simplicity: Wisdom would ask, “can I prevent ransomware from ever executing in the first place?” Wisdom would ask, “can I leverage a predictive AI in order to fundamentally understand the difference between good and bad software better than the entire human race will ever be able to do?”

If yes, then wisdom would ask, “can I do so with a high enough confidence to allow that predictive AI to make an autonomous decision on every piece of software that will ever run for the next several years?” And if the answer scientifically proves out to a ‘yes’, then, “should I build a prevention technology to leverage the confidence coming out of that predictive AI so that I never have to worry about static signatures, heuristics, behavior, sandbox detonations, obfuscated callbacks, or cloud-dependencies again, in order to prevent malware from executing?”

And surely the answer is a yes! 

Mission Critical: Staying Left of Bang

Much of my 20+ year career has been supporting the Department of Defense (DoD), as well as other federal agencies, in the field of cybersecurity. The best analogy to come out of that entire career is captured in the commonly used phrase “Left of Bang”. For the FBI, this means preventing the bomb from going off before it does. Why? Because that is how lives are saved.

It isn’t because you care about who is behind the bomb attack. It isn’t because you love the trail of breadcrumbs that provides indicators of a pending attack. It isn’t because you enjoy staying up all day and night over long weekends when the threat level is raised. It isn’t because you get paid a lot of money either (you don’t). It’s because you want to save lives.

Now, sometimes you don’t get to be left of bang, and the bomb goes off, and it makes a mess. Then your role changes to cleaning up, doing forensics, piecing things together, and notifying family members of the victims. Removing the emotion from this scenario, if I may, the raw effort it takes to ‘clean up’ after a bomb goes off is almost immeasurable. You can rebuild, but you can never take back the lost time. The harm to the public has been done. The reputations, careers, dreams, and sense of security will never be the same again for anyone involved. This is why the FBI’s mission is to be “left of bang.”

It is also why Cylance brought a predictive AI to the cyber problem space: because cleaning up, restoring, notifying victims, staying up all weekend, and struggling to gain confidence that the threat is contained are all very painful things. We at Cylance know, because we do hundreds of incident response engagements a year, and thousands of compromise assessments, and we see and share firsthand the pain of what happens “Right of Bang” - after the bomb goes off. We were intimately involved, and played a central role in the response, when one of the biggest risks to the nation’s security occurred: the OPM breach. That bomb is still going off, and the blast radius continues to expand.

In our industry, the rub has always been that there are too many malware samples and classes to keep up with - that the entire human race has been unable to analyze, classify, create signatures for, and operationalize those signatures in a window of time small enough to prevent that same file from running somewhere else.

Worse, we’ve always required a “Patient Zero” in order to even know that a new malware campaign is unfolding - someone has to be the sacrificial lamb for us all. And it is not just ‘one’ Patient Zero in a campaign. During WannaCry there were tens of thousands of Patient Zeros, all getting hit and suffering billions of dollars of damage and downtime. It is these outlier events that create our new reality in 2018: we are measured against how well we hold up to the exception, not the ‘mean.’

Bridging the Cybersecurity Skills Gap

If you look at the fact that there are over one million cyber jobs that go unfilled, that the field has had a zero percent unemployment rate for the last several years, and that this gap will more than triple to 3.5 million unfilled positions by 2021, it is easy to see that this human latency, this human restriction, this unscalable engine of human analysis, simply cannot keep up with the pace of the malware economy and the attack automation that has shortened the cyber kill chain to minutes. To exclaim that we have reduced the mean dwell time from many hundreds of days down to just over a hundred days means nothing. Zero. Nada.

At Cylance, our consulting teams have seen entire targeted attacks take place over a single weekend, not over the course of a year. Dwell time is not a measure of risk; it is only a measure of ineffective prevention and of how little visibility we have into an actor’s persistence.

And so we go back and ask ourselves: How are we to deal with our two biggest challenges like malware/ransomware, and a lack of skills to keep up with it?

The answer to both challenges is singular: a predictive model that both a) makes up for the skills gap and super-humanizes the resources we still have, and b) predicts, by well over a year, any and every piece of malicious software that will ever try to execute on any client or server, anywhere in the world, prior to any Patient Zero - in some cases predicting so far in advance that the malware hasn’t even been conceived yet by the authors of the campaign.

Now, if that sounds crazy, it absolutely is. But ask yourself: how crazy is it that a SpaceX rocket can autonomously land on a floating, moving platform at sea after re-entering the atmosphere? How crazy is it that a car can apply its own brakes to avoid a collision better than we can, even if we are fully paying attention and studying the road ahead? How crazy is it that AI can beat the best of us at chess, Go, dog-fighting pilot simulations, surgery, predicting cancer, or predicting what we as individuals will want to watch on TV a year from now, better than we ourselves can?

Predicting the Future with AI

This is a crazy world, and it is out of necessity that we are evolving the AI around us to keep up. It just so happens that an AI exists that has predicted the entire universe of possible good and bad software for the next several years, and has done so at a provable, mathematical confidence level that only the very best narrow AIs on earth enjoy.

So, while it is easy for us to return to the awkward comfort of blaming our end users, embracing defeatist attitudes, and complaining that we will never get ahead of the noise coming from our endpoints, we need to keep thinking outside the box, embrace the revolution upon us, and leverage the technology that can truly make a difference. We need to embrace wisdom, which only has value to the extent we allow it to affect our decisions.

Before we close, let us examine one of the fears that sometimes holds us back when it comes to embracing AI: the fear that it might displace us as analysts or threaten our job security in our current role. Folks, I’ll just say it as bluntly as I possibly can: get over that fear as soon as you possibly can. It is holding you back from the world we are heading into at breakneck speed. There is no going back. AI is here to stay and is evolving at the speed of computing on the eve of quantum computing.

This is where we all truly do need to take a deep breath, step back, and think. We need to proactively counter this fear inside us. The greatest single threat to your job security is not that AI will take your job, but that you will be left behind when your peers learn to leverage the inevitable wave of AI that is crashing over us all while you remain stagnant in your skills and outlook.

Automating the Kill Chain

Right now, we have a narrow window of time as defenders where the good guys have better AI than the bad guys. While advances are being made every day to automate the kill-chain and leverage machine learning for bad, those advances pale in comparison to AI’s ability to help us get ahead for the first time in 30 years and restore time itself to our advantage by predicting malware weeks, months, years into the future.

We live in an era where, by way of algorithmic science, we can achieve more in 50 ms and with less than 100 MB of memory than we could in the last 20+ years of human malware analysis - and in the next two years ahead as well. And we can do it with more confidence than the entire human malware analysis engine can, too.

Crazy? You bet. Scary? Kinda. But here we are in 2018 and it is happening because machines solve Big Data better than we do. They have won, and we have won along with them.

Powered by tens of thousands of computing cores, these machines leverage the millions of features that can be extracted from the 4.8 * 10^344 (4.8E344) possible bit permutations in a single portable executable (PE), and then choose the thousands of them that yield sufficient confidence to determine whether the PE is malicious. Then they do that across billions of files to create a model that is shrunk down to a few algorithms that can run in about 60 MB of memory on your laptop. This allows us to autonomously prevent any malicious file from executing for the next several months - or years - to come.
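To make that idea a bit more concrete, here is a minimal, hypothetical sketch of the general workflow - extract a very large static feature space, keep only the most discriminative features, and train a compact binary classifier that can be scored offline. It uses synthetic data and scikit-learn purely for illustration; the feature counts, sample counts, and algorithms are stand-ins, not Cylance’s actual model or pipeline.

# A minimal, hypothetical sketch (not Cylance's pipeline): huge static feature space
# -> statistical feature selection -> compact binary classifier, trained on synthetic
# data that stands in for labeled good/bad files.
import pickle

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic "file corpus": rows are files, columns are static features.
X, y = make_classification(
    n_samples=5_000,       # stand-in for billions of labeled samples
    n_features=2_000,      # stand-in for millions of extractable features
    n_informative=50,      # only some features actually separate good from bad
    random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    ("select", SelectKBest(f_classif, k=200)),      # keep the most discriminative features
    ("clf", LogisticRegression(max_iter=1000)),     # small model that can be scored offline
])
model.fit(X_train, y_train)

print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")
print(f"serialized model: {len(pickle.dumps(model)) / 1024:.1f} KiB")  # tiny footprint

The real feature space and corpus are of course orders of magnitude larger, but the shape of the workflow - select, train, compress, ship a model that scores locally in milliseconds - is the point.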

That should blow your mind as a human. It also blows the minds of the entire criminal economy, making malware much more expensive and time-consuming - and far less profitable - for the average cybercriminal. You want this AI on your side.

Meanwhile, much of the legacy antivirus (AV) industry is trying hard to leverage machine learning (ML) automation to help them analyze and classify malware and create signatures for it, and then share those signatures from the cloud. If we’re lucky as humans, we might train that machine learning on 300 human-knowable features (vs. a thousand-fold more that Cylance’s AI looks at), and gain exponential efficiencies in doing so. But even the best ML based on this legacy approach is still entirely too slow to keep up with an ever-increasing flow of new malicious files - both in quantity and diversity.

There is no prediction happening here, and that is why there is no pre-execution prevention for files that have never been seen before. That is why the bomb keeps going off in our collective enterprise, even though we just bought the latest “we use ML to do the signature/classification process” thing, and even though we just resigned ourselves and bought post-execution EDR.

Learning from the Past

In 2014, an AI was created that finally allowed for just two classifications of software: Good or Bad. It is this binary classification that allows an autonomous decision to be made pre-execution, via an algorithm that has no connection or dependence on the cloud. The math doesn’t care about attribution, TTPs, motive, IOCs, sleeper timeouts or even signatures.

As I write this, our IR team is fielding calls about a new Rapid Ransomware campaign. I grabbed one of the hashes (125C2BCB0CD05512391A695F907669B2F55A8B69C9D4DF2CE1B6C9C5A1395B61) and ran the sample against a local model from March 29th, 2016. Sure enough, that model, and the endpoint protection product leveraging it, would have prevented this file from ever executing, a full 671 days before this campaign hit – likely many months before this campaign was even conceived by the bad guys, before any code was compiled, and almost two years before the first Patient Zero in the wild.

Encouraged, I grabbed the other 14 hashes from this campaign that we humans currently know to exist in the wild, and guess what? The nearly two-year-old model blocked them all, without any cloud, any heuristics, any detonation, any EDR, any anomaly detection - just a simple 50 ms static analysis.
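For readers who want to picture what “running a sample against a dated local model” might look like, here is a hypothetical sketch: hash the file, score it entirely offline with a previously trained local model, apply a confidence threshold, and do the simple date arithmetic behind the 671-day figure. The extract_static_features and load_local_model helpers and the threshold are illustrative placeholders of my own, not real Cylance APIs.

# Hypothetical sketch of offline, pre-execution scoring against a dated local model.
# extract_static_features() and the commented-out load_local_model() are placeholders,
# not real Cylance (or any vendor) APIs.
import hashlib
from datetime import date
from pathlib import Path

MODEL_DATE = date(2016, 3, 29)     # the dated local model from the anecdote
CAMPAIGN_DATE = date(2018, 1, 29)  # MODEL_DATE + 671 days, per the figure above
BLOCK_THRESHOLD = 0.80             # illustrative confidence cutoff

def sha256_of(path: Path) -> str:
    """Hash the sample so a verdict can be tied to a specific file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def extract_static_features(data: bytes) -> list:
    """Placeholder: a real engine would derive thousands of static features here."""
    raise NotImplementedError("illustrative placeholder only")

def decide(path: Path, model) -> str:
    """Score a file with no cloud lookup and no signature match, then allow or block."""
    confidence = model.predict_proba([extract_static_features(path.read_bytes())])[0][1]
    verdict = "BLOCK" if confidence >= BLOCK_THRESHOLD else "ALLOW"
    return f"{sha256_of(path)[:16]}...  {verdict}  (confidence {confidence:.2f})"

if __name__ == "__main__":
    print(f"model predates the campaign by {(CAMPAIGN_DATE - MODEL_DATE).days} days")  # 671
    # model = load_local_model("model_2016_03_29.bin")   # placeholder loader
    # print(decide(Path("sample.exe"), model))

The code itself is not the point; the point is that the entire decision path stays on the endpoint, made before execution, with no cloud round-trip.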

This story plays out every time a new ransomware campaign comes by, and it is the reason our customers can go home at a normal hour on Friday afternoons when these campaigns hit. That is ultimately the power of predictive AI: being where the bad guys are before they get there, so that the bomb never goes off.

How do you want the next 671 days to feel? If all you did was install Cylance’s predictive AI and walk away - never doing any updates, never connecting to the cloud, never even remembering you installed it - you would still be protected from new, as-yet-unknown, as-yet-unnamed ransomware campaigns almost two years from now.

Time to #ThinkBeyond.

About Scott Scheferman

Scott Scheferman wears many hats at Cylance, working between the white spaces on the org chart to ensure timely delivery of our Consulting Services, effective messaging around the value of predictive AI in the context of cybersecurity operations and risk, research around how the Temporal Predictive Advantage (TPA) of Cylance’s AI affects the broader malware economy, and public speaking at conferences and seminars around the country.

Scott Scheferman

Senior Director Professional Services Consultant