By 2030, our world is expected to have 29 billion connected Internet of Things (IoT) devices. Today, there are approximately 15.14 billion. This hyperconnectivity has significantly expanded organizations’ attack surface and increased the number of cyber-physical risks.
In a recent episode of the CEO.digital podcast, Dataminr Corporate Solutions Advisor Jack Carraway explains how to combat such risks, the role that artificial intelligence (AI) plays in doing so and practical steps security and risk leaders and teams can take to protect their business.
Listen to the full podcast below to hear what Jack has to say or read the transcript that follows, which has been edited for clarity and length.
Indeed there has been exponential growth in connectivity. We live and work in a world of networks, a densely connected network of networks, of people to people and devices to devices. I read recently that, within the next three years, there will be approximately 30 billion active Internet of Things (IoT) connections.
That's a tremendous surface area for a possible attack, a tremendous surface area to protect. And so this connectivity means that the essential information needed to respond to threats and to make critical decisions is increasing at an unprecedented scale. That means we'll see dynamic risk environments with expanded attack surfaces and a less predictable risk landscape, which creates new demands for data that is relevant and efficiently delivered—data that cuts through the noise.
Now, as for how all of this has changed the nature of risk, I would say that the risks themselves are the same. We've always had wars and rumors of wars and pestilence and famine and fire and flood. The difference now, I think, is that our highly interconnected networks generate network effects, and that means risks are more frequent, more impactful and cascade rapidly across different risk domains—with risk in one domain generating new risks for another.
Right now, I think we're facing significant near- and medium-term risks that could have an impact at a scale and ferocity that none of us have seen before, across the interrelated domains of economics, politics, public health, and the environment. If we think about politics, we've all observed over the last several years a comparative democratic decline and authoritarian resurgence that itself has given rise to new conflict.
And what does that conflict generate beyond the obvious threats to physical safety? It produces refugees, and refugee flows produce increased domestic instability in the receiving countries. There are also disruptions to supply chains, energy production, and energy and food delivery. The latter can lead to undernourishment and malnutrition for affected populations. Think about, for example, the disruptions in Ukrainian wheat exports due to Russia's invasion of Ukraine.
“We now live in a networked world where risks in one domain quickly cascade into others and at an unprecedented speed and scale. And these systems are so complex that we can't identify all the relevant data or naturally intuit these cascading consequences by ourselves."
I think also the environment is a good lens through which to see these cascading risk effects. Two years ago, warming in the Arctic weakened a swirling low-pressure area, which pushed cold air farther south than usual. That led to an abnormally cold spell in Texas that caused rolling power outages, production plant shutdowns, manufacturing delays and, ultimately, a global shortage of semiconductor chips.
I’ll share one last environmental anecdote. Three years ago, China experienced its highest rainfall in 60 years. That caused the Yangtze River to flood. The flooding put a dam at risk of collapse, which forced the authorities to destroy the dam, disrupting cargo traffic downriver.
At the port of Shanghai, this disrupted the global supply chain, including the export of personal protective equipment (PPE) for health workers battling COVID-19. So it became a case of excess rain in China leading to PPE shortages in the U.S.
I offer these examples to emphasize and give context to the fact that we now live in a networked world where risks in one domain quickly cascade into others and at an unprecedented speed and scale. And these systems are so complex that we can't identify all the relevant data or naturally intuit these cascading consequences by ourselves. We need a technical solution to make these challenges tractable.
We also need to understand and articulate in advance what sort of indicators would be relevant for the type of risk someone is managing. First, identifying what the relevant signal is, and then being able to detect that signal, that's only possible through a technical solution—especially given the interconnectivity and extraordinary amount of information available—and that technical solution is only possible through artificial intelligence (AI).
The term hybrid threat has both a specific and a broader connotation. The first is used in the international security space, and the other is used more broadly in the corporate sector. For example, NATO (North Atlantic Treaty Organization) understands hybrid threats as a combination of military and non-military, as well as covert and overt, means to achieve strategic objectives. That can include using disinformation, cyber attacks, economic pressure or deployment of irregular armed groups.
There may be no conventional uniformed forces or use of direct fires, but we may see coercive diplomacy, economic coercion, disinformation operations, cyber attacks against critical infrastructure, etc. In this sense, hybrid threats blur the lines between war and peace and aim to destabilize and undermine political structures and societies. You can also think of this as hybrid warfare or gray zone warfare. The focus is on adversaries seeking destabilization for strategic political purposes, and though we may have a new name for it, the phenomenon is really not new at all.
An example of a hybrid threat in this international security space would be the NotPetya cyber attack in 2017. This is the attack from the Sandworm APT (advanced persistent threat) group, which is a cyber warfare unit within the Russian GRU (Russian military intelligence). They targeted Ukrainian critical infrastructure focusing on energy companies, the power grid, airports and the financial sector, and they approached it through a back door in an update of a very popular Ukrainian tax preparation software.
“We often see cyber attacks affecting physical infrastructure or physical infrastructure attacks impacting industrial control systems.”
NotPetya claimed to be ransomware, but it wasn't truly ransomware because it offered no recovery capability. Most of the impact was felt in Ukraine, but the effects were also felt throughout the world. The estimate is, I think, something like $10 billion in total economic impact. Among the significant infrastructural impacts in Ukraine, the radiation monitoring at Chernobyl went offline. Worldwide, it affected the pharmaceutical industry and retail, food production and hospital management companies.
NotPetya even compromised the medical transcription capability at a United States hospital. And it had a big impact on transportation. For example, it shut down India's largest port for a short time, and it brought the shipping company Maersk effectively to a standstill for a while. Frustratingly, Microsoft had released patches for the underlying vulnerability several months earlier. Hence the importance of applying security updates promptly.
That was the first sense of hybrid threat, the one used in the international security space. Outside of national security circles, the term hybrid threat more loosely means a threat against one risk area that unintentionally impacts another, as when cyber attacks affect physical infrastructure or physical infrastructure attacks impact industrial control systems. It’s not necessarily because a threat actor attempted to influence an adversary's strategic political decision making. The line can be blurry, though; it may be a domestic terrorist or extremist group trying to achieve some political objective.
An example that comes to mind is the Colonial Pipeline attack. This was a ransomware attack two years ago by the DarkSide group on a U.S. oil pipeline that supplied nearly half of all fuel used on the East Coast of the United States. The attack impacted the company's industrial control systems, so they had to shut down the pipeline until the ransom was paid. This led to significant fuel shortages and societal disruption: flights were redirected, and filling stations ran out of gas, which drove price increases. I recall that on a single day, 90% of the gas stations in Washington, D.C., were out of gas.
This attack was likely enabled by a breached employee password found on the dark web, hence the importance of strong digital asset risk discovery. That is also an integral part of the Dataminr solution for cyber.
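One common control against exactly this kind of exposure is checking whether credentials have already surfaced in public breach corpora. As a minimal sketch of that idea—not a description of Dataminr's product—the snippet below follows the k-anonymity pattern used by the public Pwned Passwords range API: only the first five characters of a password's SHA-1 hash would be sent to the service, and matching against the returned candidate suffixes happens locally. The function names are illustrative, and the HTTP call itself is left out to keep the sketch self-contained.

```python
import hashlib


def sha1_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest of a password into the
    5-character prefix sent to the API and the 35-character suffix
    matched locally, so the full hash never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def is_breached(password: str, range_response: str) -> bool:
    """Check a password against the text body that a range query
    (GET https://api.pwnedpasswords.com/range/{prefix}) returns,
    where each line has the form 'SUFFIX:COUNT'."""
    _, suffix = sha1_prefix_suffix(password)
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

In practice the `range_response` body would come from an HTTPS request for the computed prefix; the point of the design is that the service only ever sees five hex characters, never the password or its full hash.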
Yes, absolutely. We’ve had a very heavy and longstanding footprint in the physical security space, including physical security, personnel security, employee safety, risk management, executive protection, and business resiliency.
But as Dataminr is an AI company, we've now turned our focus to cyber as well, so that we're able to identify all aspects of risk relevant to cyber: vulnerabilities, threats and the criticality of assets, for example, which we discover on the deep and dark web.
Strong collaboration between the cyber and physical security functions is key to effectively mitigating cyber-physical risks. These two functions connect, interrelate and rely upon each other for basic “business as usual” functions. This kind of cross-functionality with cyber has long been employed in the insider threat management space where you have regular and formalized cooperation between physical security, cybersecurity, HR, legal and compliance, and public authorities.
I think that approach can serve as a good example of best-practices integration because the bottom line is this: significant cyber risks to businesses’ physical infrastructure, and cyber's dependence upon that physical infrastructure, mean there is an enhanced need for converged cyber-physical solutions—and that firms require a timely and consistent enterprise-wide risk picture. Meeting this need is, again, one of the primary reasons that Dataminr is now applying its AI solution to cyber.
AI plays an essential role, but let's take a step back and put it all in context. Because of today’s impressive and very public achievements of large language models (LLMs), there's a lot of hype around AI. In the public discourse, there's an odd combination of the utopian and the apocalyptic.
I think we need to take a breath and recognize that AI is a powerful tool, but that it's neither going to let us beat swords into plowshares tomorrow, nor will it become sentient and decide to kill us all. Yes, it can facilitate bad actors, disinformation and deepfakes. It can make phishing emails more plausible, bad actors can poison the corpus of AI training data itself, and it may well put millions of jobs at risk. These are all significant concerns that we have to recognize and manage.
“It's imperative to find solutions that overcome our most critically limited resources: our time, our attention, and our cognitive capacity.”
And at the same time, it can do wonderful things, like DeepMind predicting the structure of almost every known protein in the human body, which is extraordinary. It can be and has been an essential tool for saving lives.
I would say that given the hybrid threats and the interconnected nature of risks that cascade at an unprecedented speed and intensity, it's imperative to find solutions that overcome our most critically limited resources: our time, our attention, and our cognitive capacity. We need solutions that will scour the world of available sources to discover, combine, compile, and correlate that information to deliver real-time risk information. Only AI can do that.
But there's a lot of hype around AI: today it seems like every other vendor claims to be an AI company, and it's simply not true.
Dataminr is a true AI company that's been delivering results—as in saving lives and protecting organizations’ assets—for over 10 years. But when I say we’re a true AI company, let me elaborate on that a little bit. Our AI platform pulls data from over 800,000 public information sources, including the deep and dark web; code repositories; IoT sensors; global, regional and alternative social media; industry blogs; and public advisories and disclosures.
It then runs that data through natural language processing for text in more than 150 languages, via computer vision for image, video and logo recognition and through audio processing for broadcasts, recordings and scanners. It employs generative AI to caption the real-time alerts that it delivers to customers. And, it performs trillions of computations daily to deliver timely and relevant risk information necessary to save lives, protect property, ensure business resiliency, protect brands, manage supply chain risk, and identify and remediate cyber risk.
I would like to understand, before I offer a confident opinion, what the arguments are behind stopping and slowing down. Is the rationale that we need to because our technology is getting ahead of our safety controls and our anticipated control practices? If that’s the case, who would initiate that slowdown?
Are we in fact moving ahead of our ability to manage these risks? Which particular risks? Are we concerned about risks from out-of-control automated systems? I'm not in the least bit concerned—and this is my personal opinion—that AI systems will ever become sentient, because they do not and will never have a first-person perspective, nor will my calculator or my phone.
Is there a real possibility that AI could automate procedures that exceed human control and produce unintended consequences? Yes, and that's also factored into design. So when we talk about a pause, let's be more specific: a pause in what area, for what reason? What is lacking? Do we need time to establish these controls? I don't have a direct answer because I think the question requires a bit more unpacking.
I think that there are certain essential characteristics that all security operations centers (SOCs), in an ideal world, would have. I think it's most useful to think in terms of function rather than in terms of a place or a specific organization.
A SOC can be a centralized location or not, but it would involve all risk management functions in the enterprise—and they would be connected to a single pane-of-glass solution that provides each of them, depending on responsibility, expertise and permissions, a single reference window of truth about both internal and external threats. That includes everything from open sources to internal telemetry. Their software needs and location requirements would be determined by the type of risk they are managing and the timeframe in which they have to manage it, from tactical to strategic.
All of the risk managers would organizationally report up, at least as a dotted line, to a chief risk officer (CRO) and that CRO would be a C-suite position. They would constantly communicate, collaborate and coordinate, developing an overall perspective on the organization's risk posture that allows them to collectively develop risk tolerance standards, identify vulnerabilities and agree upon priorities.
These risk managers would also cooperate and run through sets of threat scenarios to identify unexpected and cascading consequences of negative events. That collective and comprehensive perspective would be formally factored into all of the enterprise's strategic decision making.
And by the way, we could do almost all of this today. Again, I didn't speak as much to the tactical orientation of the SOC of the future as to the practices, policies and procedures of risk management as embodied in an instance of a SOC. But I think this collaboration and coordination across risk management areas is going to be essential for managing the risks that are here and that are coming.
If we're speaking specifically about Dataminr, it's ultimately up to the user to interpret the information. But what we provide goes a long way in helping customers do that, because the information we deliver has been collected, compiled, and correlated to fit the customer’s specific risk management needs using multimodal AI.
For example, let's say that Dataminr’s AI platform detects a picture from a public social media account of an accident at an intersection; it shows only the logo of a truck that was in the accident. We’re able to put this information together to deliver a real-time alert that says at this time, at this intersection, this company (which we know because of the logo on the truck) had an accident.
How important that accident is for the company, what it means for the driver and the driver’s standing in the company: that significance is something we can't determine; it's up to the customer. What we can do is ask: what do you care about? Given all of the information we collect and all of the alerts we can provide, we have very sophisticated topical filtering, keyword framing and geofencing that allow us to drill down into the specific information a customer needs and, accordingly, deliver the most relevant alerts.
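To make the geofencing and keyword filtering ideas concrete, here is a minimal sketch of how such a filter might work in principle. This is an illustration of the general technique, not Dataminr's implementation: the function names, the alert dictionary shape, and the circular-fence model are all assumptions. An alert passes only if its text mentions a watched term and its coordinates fall inside the geofence, measured by great-circle (haversine) distance.

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def alert_matches(alert, keywords, fence_center, fence_radius_km):
    """Keep an alert only if its text mentions a watched keyword AND its
    coordinates fall inside the circular geofence around fence_center."""
    text = alert["text"].lower()
    if not any(k.lower() in text for k in keywords):
        return False
    return haversine_km(alert["lat"], alert["lon"], *fence_center) <= fence_radius_km
```

For example, with a 5 km fence centered on a Manhattan intersection and the keyword "accident", an alert geotagged a few blocks away would pass, while an identical alert from Los Angeles, or a nearby alert about an unrelated topic, would be filtered out. A production system would of course layer far richer topical classification on top of simple substring matching.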
We're talking about a technology solution, but the interrelationship, cooperation and coordination between risk management functions, particularly cyber and physical, means organizations need to think about the convergence of risks and what to do about them. First and foremost, like most things in security management, convergence is about plans, policies, processes, and preparation.
There is a necessary role here for a technical solution, and Dataminr is particularly well positioned to fill that role, especially with its long history in the physical security space. But the first recommendation I would have has to do with internal organizational dynamics. Here I’m thinking about the chief information security officer (CISO) and the chief security officer (CSO). I would say to them, if you haven't started, then begin the conversation about cyber-physical security convergence. I realize everyone is busy, but a member of the cyber or physical team should initiate the conversation.
The cyber and physical teams should then do a few things:
I don't think being alarmist is helpful, but we have to recognize that this is the nature of security: it's primarily a cost function. Therefore there's the challenge of showing value, and often the additional challenge of having to show value counterfactually: had we not had these closed-circuit cameras in place, had we not had this data loss prevention (DLP) solution in place, then these things would have happened.
It's difficult. People have an intuitive sense and a best practice notion of what sorts of controls we should have in place. But there's always that challenge because security operations is a cost center.
That challenge can also be an opportunity to raise the profile of the security function. Make it a formal process, because if you can get executive buy-in for this comprehensive look, that will allow you to talk to many other areas within the company. It will improve your understanding. It will enhance the profile of security. Senior management and the board will, because of their interactions with security, be more likely to think about security. It puts security in a much better position to advocate for the resources it needs because it has taken a deliberate, broad, ecosystem approach to assessing its security posture.
I think security is at the top of people's minds most often when it has to be. But there are so many things to manage, so many concerns, that it's essential that senior management and the board have a mindset for security, because when executive leadership is security conscious, that can create a culture of security that permeates the organization. If an organization does not have a culture of security, then it will be subjected to, and will suffer from, a wide array of disruptions and, potentially, collapse.
Jack Carraway is a Corporate Solutions Advisor at Dataminr, responsible for helping customers mature their security programs. Throughout his career, Jack has helped governments and corporations identify, detect, interpret and mitigate risks related to geopolitics, insider threats, counterintelligence, corporate security and cybersecurity. He also has deep expertise in program design, development, and management and in advising senior executives on complex risk matters. He began his career in the U.S. Army, and has held various security and risk roles at companies such as Booz Allen Hamilton and JPMorgan Chase. Jack earned a BA from The College of William & Mary.