This is a summary of an article written for SC Magazine by Fortinet’s Global Security Strategist, Derek Manky. The entire article can be accessed here.

Cybercriminals and cybersecurity professionals are caught in a perpetual game of one-upmanship when it comes to developing and deploying tools to either defend digital resources or exploit them. The latest entry in this ongoing cyber arms race is the use of automation, machine learning, and ultimately, Artificial Intelligence. 

This isn’t just science fiction. A recent report by Nokia, for example, showed that AI-powered botnets are being used to find specific vulnerabilities in Android devices and then exploit those vulnerabilities by loading data-stealing malware that is usually only detected after the damage has been done.

The Digital Transformation Challenge For Business

Digital transformation (DX) has completely upended years of security strategy for cybersecurity professionals. Meanwhile, due to the widening cybersecurity skills gap, organizations simply cannot afford to scale their security infrastructure to address their growing attack surface.

Solving this challenge requires turning over lower order decisions and tedious processes to automated systems that require fewer eyes and hands. At the same time, traditionally isolated legacy security devices not only need to be exchanged for integrated systems that extend visibility and control across all network environments, but they need to include things like machine learning and AI to close gaps, correlate threat intelligence, and coordinate responses at digital speeds.

The Digital Transformation Opportunity For Cybercriminals

At the same time, DX has been one of the greatest boons to the cybercriminal community by multiplying the potential attack surface exponentially. AI and machine learning are just as helpful here as they are for enterprise networks.

As with their victims, maintaining ROI for a cybercriminal enterprise requires lowering overhead while increasing the efficiency and effectiveness of tools designed to penetrate defense systems.

For example, integrated malware that can run on a variety of devices and environments and deliver a variety of exploits and payloads can be very effective. By leveraging automation and machine learning, that malware can autonomously determine which payloads will be most successful without exposing itself through constant communication back to its C2 server. The result is more effective attacks without increased overhead.

AI Takes Everything To The Next Level

Attacks that leverage self-learning technologies can quickly assess vulnerabilities, select or adapt malware, and actively counter the security efforts deployed to stop them. Combining AI with emerging threats like swarmbots will enable an attack to be broken into its functional elements, those elements assigned to different members of a swarm, and interactive communication across the swarm used to accelerate the rate at which the attack occurs.

The only effective defense against such AI-enhanced attack strategies is a solution that uses those same strategies.

“AI will allow businesses to deploy a self-defending security solution that can detect threats, close gaps, reconfigure devices and respond to threats without human intervention.”

– Derek Manky, SC Magazine, May 30, 2019

Because so many vendors see the potential revenue associated with AI, many have been willing to claim AI functionality where it doesn't actually exist, leaving enterprises looking to "fight fire with fire" in a quandary as to which solutions they should select.

To cut through the confusion, IT teams need to ask security vendors a handful of questions to determine whether their AI solution is even worth considering:

  1. How many years have you spent developing this AI? AI requires years of careful training. Any vendor who has not used standards-based training over the course of years to train their AI system is offering a less than ideal solution.
  2. How many nodes are used to process data and make decisions? Generally speaking, true AI requires millions of nodes combined with massive amounts of data feeds to generate accurate defense solutions.
  3. How good is the data you are feeding your AI? Feeding an AI good data is harder than it sounds. Massive sets of reliable, constantly available data are absolutely necessary for effective AI.

Fighting Fire With Fire

Not all AI is the same. Solutions that claim to provide AI but that don’t meet the requirements listed above are likely to introduce more challenges into your network.

“Risk-based decision-making engines that are intelligent enough to take humans out of the loop not only need to be able to execute the “OODA loop” (Observe, Orient, Decide and Act) for the vast majority of situations it encounters, but also actually suggest courses of action when a problem is discovered rather than merely relying on pre-defined ones.”

– Derek Manky, SC Magazine, May 30, 2019
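The OODA loop Manky describes can be illustrated with a minimal, hypothetical sketch. The event fields, severity threshold, and playbook below are illustrative assumptions for this summary, not any vendor's actual product or API; the point is the flow from observation through an automated decision, with escalation to a human analyst only when no pre-defined course of action fits.

```python
# Hypothetical sketch of an OODA-style (Observe, Orient, Decide, Act)
# decision engine. All event fields, scores, and response names are
# illustrative assumptions, not a real security product's API.

PLAYBOOK = {
    "known_malware": "quarantine_host",
    "port_scan": "block_source_ip",
}

def observe(event):
    # Observe: normalize a raw telemetry event into a common shape.
    return {"type": event.get("type", "unknown"), "score": event.get("score", 0)}

def orient(obs, threat_intel):
    # Orient: correlate the observation with threat intelligence
    # to produce a combined severity score.
    obs["severity"] = obs["score"] + threat_intel.get(obs["type"], 0)
    return obs

def decide(obs, threshold=50):
    # Decide: pick a pre-defined response, or escalate to a human
    # analyst when no playbook entry fits the threat.
    if obs["severity"] < threshold:
        return "log_only"
    return PLAYBOOK.get(obs["type"], "escalate_to_analyst")

def act(decision):
    # Act: a real system would reconfigure devices here; this sketch
    # simply reports the decision it would execute.
    return f"executing: {decision}"

def ooda_loop(event, threat_intel):
    return act(decide(orient(observe(event), threat_intel)))

print(ooda_loop({"type": "known_malware", "score": 40}, {"known_malware": 30}))
# executing: quarantine_host
```

A self-defending system along these lines handles routine events end to end and surfaces only the unrecognized, high-severity cases — the "escalate_to_analyst" branch — to the human experts.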

Locating tools that can meet that standard requires time and careful analysis. Only then can you confidently turn over critical security processes so your valuable cybersecurity experts can concentrate on those difficult decisions where human cognition and intervention are most required. 

This is a summary of the article Winning the cybercrime arms race with AI, written for SC Magazine by Fortinet’s Global Security Strategist, Derek Manky, and published on Security Week on May 30, 2019.

Learn more about FortiGuard Labs and the FortiGuard Security Services portfolio.

Sign up for our weekly FortiGuard Threat Brief.

Read about the FortiGuard Security Rating Service, which provides security audits and best practices.

Sourced from Fortinet
