
Microsoft immediately alerted its customer, and the attack was thwarted before the intruder could go too far.
Chalk it up to a new generation of artificially intelligent software that adapts to hackers’ ever-evolving tactics. Microsoft, Alphabet Inc’s Google, Amazon.com Inc and various start-ups are moving away from older “rules-based” technology designed to respond to specific types of intrusions and deploying machine-learning algorithms that crunch massive amounts of data on logins, behavior and previous attacks to ferret out and deter hackers.
“Machine learning is a very powerful technology for security – it is dynamic, whereas rules-based systems are very rigid,” says Dawn Song, a professor at the University of California, Berkeley’s Artificial Intelligence Research Lab. “It’s a very manual-intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily.”
Hackers themselves are famously adaptable, of course, so they too can use machine learning to create fresh mischief and overwhelm the new defenses. For example, they can figure out how companies train their systems and use the data to evade or corrupt the algorithms. The big cloud services companies are painfully aware that the enemy is a moving target, but they argue that the new technology will help tip the balance in favor of the good guys.
“We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the overall amount of damage and more quickly restore systems,” says Stephen Schmidt, Amazon’s chief information security officer. He acknowledges that it is impossible to stop all intrusions but says his industry will “get incrementally better at protecting systems and make it more difficult for attackers.”
Before machine learning, security teams used blunter tools. For example, if someone at headquarters tried to log in from an unfamiliar location, they were denied access. Or spam emails containing various misspellings of the word “Viagra” were blocked. Such systems often work.
But they also flag a lot of legitimate users – as anyone who has been prevented from using their credit card while on holiday knows. According to Azure CTO Mark Russinovich, the Microsoft system designed to protect customers from fake logins had a false positive rate of 2.8%. That may not sound like much, but it was deemed unacceptable because Microsoft’s larger customers can generate billions of logins.
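To make that rigidity concrete, here is a minimal sketch of the kind of hand-written rule such systems relied on; the field names and allow-list are invented for illustration, not taken from any vendor’s product:

```python
# A minimal sketch of a rigid, rules-based login check.
# The allow-list and field names are invented for illustration.

KNOWN_LOCATIONS = {"Redmond", "Seattle"}

def allow_login(attempt: dict) -> bool:
    # Rule: deny any login from a location not seen before.
    return attempt["location"] in KNOWN_LOCATIONS

# The rule stops an attacker logging in from an unexpected place...
print(allow_login({"user": "alice", "location": "Unfamiliar City"}))  # False
# ...but it also locks out the same legitimate user on holiday –
# exactly the false-positive problem described above.
print(allow_login({"user": "alice", "location": "Lisbon"}))           # False
```

The rule cannot distinguish a thief from a traveler, because it weighs a single brittle condition rather than many signals at once.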

To do a better job of figuring out who is legitimate and who isn’t, Microsoft’s technology learns from the data of each company that uses it, customizing security to that customer’s specific online behavior and history. Since launching the service, the company has managed to bring the false positive rate down to 0.001%. This is the system that caught the intruder in Romania.
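Microsoft has not published how its model works, but the general pattern – training an anomaly detector on one customer’s historical logins and scoring new attempts against it – can be sketched roughly as follows, with invented features:

```python
# A rough sketch of per-customer anomaly detection on login data.
# This illustrates the general approach only; it is not Microsoft's system.
# The features (hour of day, distance from usual location, failed attempts)
# are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Train on one customer's historical logins: mostly business hours,
# close to usual locations, few failed attempts before success.
normal_logins = np.column_stack([
    rng.normal(10, 2, 1000),   # hour of day
    rng.normal(5, 3, 1000),    # km from usual location
    rng.poisson(0.1, 1000),    # failed attempts before success
])

model = IsolationForest(contamination=0.001, random_state=0).fit(normal_logins)

# Score a new login: 3 a.m., 9,000 km away, after 6 failed attempts.
suspicious = np.array([[3, 9000, 6]])
print(model.predict(suspicious))  # -1 means "anomaly": flag for review
```

Because the model is fit to each customer’s own history, behavior that is routine for one company can still be flagged as suspicious for another.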
Training these security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager who goes by the title of Data Cowboy. Siva Kumar joined Microsoft from Carnegie Mellon six years ago because his sister was a fan of Grey’s Anatomy, the medical drama set in Seattle. He manages a team of about 18 engineers who develop the machine-learning algorithms and then make sure they’re smart and fast enough to thwart hackers and work seamlessly with the software systems of the companies paying big bucks for Microsoft cloud services. Siva Kumar is one of the people who gets a call when the algorithms detect an attack. He has been woken in the middle of the night, only to discover that Microsoft’s in-house “red team” of hackers were responsible.
The challenge is daunting. Millions of people log in to Google’s Gmail alone every day. “The amount of data we need to look at to make sure whether this is you or an impostor keeps growing at a rate that is too large for humans to write rules one by one,” says Mark Risher, a director of product management who helps prevent attacks on Google’s customers.
Google now checks for security breaches even after a user has logged in, which helps catch hackers who initially look like real users. Because machine learning can analyze many different pieces of data, catching unauthorized logins is no longer a simple yes-or-no matter. Rather, Google monitors various aspects of behavior throughout a user’s session. Someone who looks legitimate at first may later show signs that they are not who they say they are, letting Google’s software boot them out in time to prevent further damage.
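As a toy illustration of that shift – from a one-time yes/no decision at login to continuous, in-session scoring – consider the sketch below. The events, weights and cutoff are all hypothetical; Google does not disclose its actual scoring:

```python
# A toy sketch of continuous, in-session risk scoring rather than a
# one-time decision at login. Events, weights and cutoff are hypothetical.

RISK_WEIGHTS = {
    "new_device_fingerprint": 0.3,
    "bulk_mail_download": 0.4,
    "password_change_attempt": 0.2,
    "forwarding_rule_added": 0.5,
}
CUTOFF = 0.8

def monitor_session(events):
    """Accumulate risk as the session unfolds; terminate it early
    if the running score crosses the cutoff."""
    score = 0.0
    for event in events:
        score += RISK_WEIGHTS.get(event, 0.0)
        if score >= CUTOFF:
            return f"terminated at '{event}' (score {score:.1f})"
    return f"session completed (score {score:.1f})"

# A login that looked fine at first turns suspicious mid-session.
print(monitor_session(["new_device_fingerprint",
                       "forwarding_rule_added",
                       "bulk_mail_download"]))
```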
Amazon’s Macie service uses machine learning to find sensitive data among the corporate information of customers like Netflix and then watch who is accessing it and when, alerting the company to suspicious activity.
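Macie’s internal classifiers are not public (and it relies on machine learning, not just fixed patterns), but the basic “find sensitive data, then watch who touches it” workflow can be sketched with simple pattern matching:

```python
# A rough illustration of the "find sensitive data, then watch who
# accesses it" pattern. These regexes and the sample document are purely
# illustrative; Macie's real classifiers are ML-based and not public.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text: str) -> set:
    # Return the labels of every sensitive pattern found in the text.
    return {label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)}

doc = "Contact jane@example.com, card 4111 1111 1111 1111"
found = classify(doc)
if found:
    # A real service would tag the object and alert on unusual reads of it.
    print(f"sensitive data found: {sorted(found)}")
```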
In addition to using machine learning to secure their own networks and cloud services, Amazon and Microsoft are providing the technology to customers. Amazon’s GuardDuty monitors customers’ systems for malicious or unauthorized activity. At times the service finds employees doing things they shouldn’t – such as installing bitcoin mining software on their work PCs.
Dutch insurance company NN Group NV uses Microsoft’s Advanced Threat Protection to manage access for its 27,000 employees and close partners while keeping everyone else out. Earlier this year, Wilco Jansen, the company’s manager of workplace services, showed employees a new feature in Microsoft’s Office cloud software that blocks so-called CxO spamming, in which spammers pose as a senior executive and instruct the recipient to transfer funds or share personal information.
Ninety minutes after the demonstration, the security operations center called to report that someone had attempted precisely that attack on NN Group’s CEO. “We were like, ‘Oh, this feature could have already prevented this from happening,’” Jansen says.
Machine learning security systems do not work in all instances, especially when there is insufficient data to train them. And researchers and companies constantly worry that they can be exploited by hackers.
For example, they can mimic users’ activity to circumvent algorithms that screen for typical behavior. Or hackers can tamper with the data used to train the algorithms and distort it for their own ends – so-called poisoning. That is why it is so important for companies to keep their algorithmic criteria secret and change the formulas regularly, says Battista Biggio, a professor at the University of Cagliari’s Pattern Recognition and Applications Lab in Sardinia, Italy.
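A toy demonstration shows why poisoning worries researchers: injecting a relatively small number of mislabeled points into training data measurably shifts what a classifier is willing to flag. Everything below is illustrative, not a real attack:

```python
# A toy demonstration of training-data poisoning: mislabeled points
# injected into the training set weaken a classifier's judgment.
# Purely illustrative; real attacks and defenses are far more subtle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean training data: benign activity around 0, malicious around 4.
X_clean = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])[:, None]
y_clean = np.array([0] * 200 + [1] * 200)
clean_model = LogisticRegression().fit(X_clean, y_clean)

# Poisoning: the attacker slips malicious-looking samples labeled "benign"
# into the training pipeline.
X_poison = np.concatenate([X_clean.ravel(), rng.normal(4, 0.5, 120)])[:, None]
y_poison = np.concatenate([y_clean, np.zeros(120, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poison, y_poison)

probe = np.array([[3.5]])  # activity that ought to look malicious
print(f"clean:    P(malicious) = {clean_model.predict_proba(probe)[0, 1]:.2f}")
print(f"poisoned: P(malicious) = {poisoned_model.predict_proba(probe)[0, 1]:.2f}")
# The poisoned model is notably less confident the probe is malicious,
# which is why Biggio urges guarding the training pipeline itself.
```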
So far, these threats have featured more in research papers than in real life. But this is likely to change. As Biggio wrote in a paper last year: “Security is an arms race, and the security of machine learning and pattern recognition systems is no exception.”