Machine learning in cybersecurity still requires human intervention, according to researchers at Black Hat. (Image: monsitj/Thinkstock)

At Black Hat, Machine Learning Helps Scale Security — And Threats

Machine learning in cybersecurity has a lot of promise, but there are pitfalls, according to researchers at Black Hat this week.

As researchers and vendors apply machine learning to spot security vulnerabilities, cybercriminals are using the same techniques to train bots to outsmart detection tools, according to presentations this week at Black Hat in Las Vegas.

Machine learning in cybersecurity is still in its early days, but researchers say it could significantly reduce the opportunity for cyberattacks and limit the damage they cause. It will, however, come with several challenges.

One of these challenges is the need for updated data. According to Sophos data scientist Hillary Sanders, who presented research at Black Hat on Wednesday, cybersecurity machine learning researchers face two main problems when acquiring data to train models.

“First, any available data is necessarily old and potentially outdated in comparison to the environment the model will face on deployment. Second, researchers may not even have access to relevant past data, often due to privacy concerns,” she writes in “Garbage In, Garbage Out: How Purportedly Great ML Models Can be Screwed Up by Bad Data.”

She argues that in a cybersecurity context, “training and test data used to create and evaluate systems matters greatly.” In her research, Sanders explores how analyzing a model’s sensitivity to differences between training and testing data can help researchers build training datasets that perform more reliably on deployment. You can see the full report here.
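
The sketch below illustrates the general idea of testing a model against time-shifted data; the synthetic features and drift schedule are assumptions made for illustration, not Sophos’s dataset or methodology.

```python
# Minimal sketch: train a detector on "old" samples and watch its accuracy
# decay as the data distribution drifts. The synthetic features and drift
# schedule below are illustrative assumptions, not Sophos's methodology.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_samples(n, drift):
    """Generate fake samples whose discriminative signal rotates away from
    feature 0 (what the model learned) toward feature 1 as drift grows."""
    X = rng.normal(size=(n, 20))
    signal = np.cos(drift) * X[:, 0] + np.sin(drift) * X[:, 1]
    y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Train on the data available "today".
X_train, y_train = make_samples(5000, drift=0.0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate on progressively newer (more drifted) test sets.
for drift in (0.0, 0.5, 1.0, 1.5):
    X_test, y_test = make_samples(2000, drift=drift)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"drift={drift:.1f}: AUC={auc:.3f}")
```

A steep drop across a sweep like this is a warning sign that the training set no longer resembles the environment the model will face in the field.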

With the human side of security garnering a lot of attention this week at Black Hat, McAfee introduced its Human-Machine Teaming concept: “the concerted combination of human efforts with smart technology.”

In other words, machine learning won’t do much to improve security operations without the help of humans. Working with 451 Research, McAfee argues that machine learning helps chief security officers (CSOs) get the most out of both their human and security product assets.

According to the report, 71 percent of advanced Security Operations Centers (SOCs) use human-machine teaming to close cybersecurity investigations in one week or less. McAfee said successful cybersecurity teams are three times as likely to automate threat investigation and devote 50 percent more time to actual threat hunting.

The report suggests that machine learning will be most helpful in automatically flagging suspicious behavior and then automatically making high-value investigation and response data available, which will in turn give IT security teams the ability to “rapidly dismiss alerts and accelerate solutions that thwart new threats.”
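
A generic sketch of that flag-then-surface pattern might score events with an unsupervised anomaly detector and attach context only to the handful that look suspicious. This is not McAfee’s implementation; the telemetry features, data, and alert count below are assumptions made for illustration.

```python
# Generic triage sketch: score events with an unsupervised anomaly detector
# and surface only the most suspicious ones, with their features attached,
# for a human analyst. Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Fake telemetry per event: [bytes_out, distinct_hosts, failed_logins, hour_of_day]
normal = np.column_stack([
    rng.normal(2e4, 5e3, 1000),   # typical outbound volume
    rng.poisson(3, 1000),         # a few destination hosts
    rng.poisson(0.2, 1000),       # rare login failures
    rng.integers(8, 18, 1000),    # business hours
])
suspicious = np.array([[5e5, 40, 12, 3]])  # exfiltration-like burst at 3 a.m.
events = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
scores = detector.decision_function(events)   # lower score = more anomalous

# Hand the analyst the top alerts along with the raw features that triggered them.
for idx in np.argsort(scores)[:3]:
    print(f"event {idx}: score={scores[idx]:.3f}, features={events[idx].round(1)}")
```

The point of the pattern is that the model does the bulk filtering while the analyst spends time only on the small set of alerts that carry investigation-ready context.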

McAfee is using machine learning in its own security solutions, including the McAfee ATD v4.0 software, which the company says uses machine learning to better identify malicious markers that might otherwise stay hidden.

While security vendors tout their own machine learning advancements, adversaries are also using machine learning models with success. New research from Symantec presented at Black Hat shows that scammers are using machine learning tools to mine social media data and ultimately target executives with fraudulent emails that appear to come from an internal source. By using machine learning, criminals can increase the success rate of these attacks, Symantec researchers say.
