AI isn't inherently moral -- it can be used for evil just as well as for good. And while it may appear that AI provides an advantage for the good guys in security now, the pendulum may swing when the bad guys really embrace it to do things like unleashing malware infections that can learn from their hosts. It's imperative that CISOs and all security team leaders stay aware of the lurking AI dangers.

AI defined

When people talk about AI in cybersecurity, the terms machine learning and deep learning tend to be used interchangeably with artificial intelligence. What's the difference?

"AI is a big subfield of computer science, and it examines the ability of a program or machine to accomplish tasks that normally require human intelligence, like perception, reasoning, abstraction and learning," explained Michelle Cantos, strategic intelligence analyst for security vendor FireEye.

According to Sridhar Muppidi, vice president and CTO for IBM Security, machine learning is a big part of AI and is primarily used to extract anomalies and outliers from giant data haystacks and to evaluate risk. "It involves training a model on a specific element of data using algorithms," he said. "Deep learning is a type of self-learning or on-the-fly learning."

Using AI for good

The main ways AI is being used for good today are for "predictive analytics, intelligence consolidation and to act as a trusted advisor that can respond automatically," Muppidi said.

Jon Oltsik, senior principal analyst at the consultancy Enterprise Strategy Group (ESG), sees it the same way. In an email interview, he said, "Most organizations I speak to or research are adding AI [and machine learning] as a layer of defense."

AI processes large amounts of data much faster than humans can. "The huge volume of data we observe on a daily basis would bury any normal researcher," Cantos said. "But AI applications, specifically machine learning, help us do the heavy lifting to dig analysts out from under the data."

"I would characterize it as a 'helper app,'" said Oltsik, "in that it is used to supplement human analysis."

Machine learning models can "learn from previous calculations and sort of adapt to new environments where they can perform trend analysis, make predictions and even examine past behaviors to see how threat actors might change in the future," Cantos noted. Machine learning can also illuminate relationships within a data set that a human might not see. For instance, a machine learning application can be used in the security operations center (SOC) to help triage alerts by identifying which ones need to be dealt with immediately, which are probably false positives and so on, Cantos added.
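To make that concrete, here is a minimal sketch of the kind of alert-triage model described above. It assumes a history of alerts already labeled by analysts; the feature names and file names are hypothetical, invented for illustration, and a real SOC would tune all of this against its own data:

```python
# Minimal sketch of ML-assisted SOC alert triage, as described above.
# The CSV layout and feature names are hypothetical, for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical columns: alert severity, count of similar alerts in the
# past week, whether the source IP appears on a threat-intel watchlist,
# outbound bytes, and the analyst's past verdict
# (1 = real incident, 0 = false positive).
alerts = pd.read_csv("triaged_alerts.csv")
features = ["severity", "similar_alerts_7d", "src_ip_on_watchlist", "bytes_out"]
X, y = alerts[features], alerts["was_real_incident"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Rank incoming alerts so analysts see the likeliest real incidents first
# and probable false positives sink to the bottom of the queue.
new_alerts = pd.read_csv("incoming_alerts.csv")
new_alerts["incident_probability"] = model.predict_proba(new_alerts[features])[:, 1]
print(new_alerts.sort_values("incident_probability", ascending=False).head(10))
```

The point of the sketch is the workflow, not the particular algorithm: the model learns from analysts' prior triage decisions and pre-sorts the queue, leaving the final judgment to a human.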
"But nation states tend to have more resources and funds to devote to this problem. It means they can develop more sophisticated tools to target new, higher-level, sophisticated environments like AI-enabled systems." As we rely on AI-driven decision making, it's important to realize it's a model that's "fundamentally uncanny, in that we can observe that it works, but we don't always fully understand why," said Michael Tiffany, co-founder and president of cybersecurity vendor White Ops, which specializes in botnet mitigation. "This brings up a whole new form of vulnerability. If it's suddenly mistrained, there's no way to test it." Tiffany also noted that some "practical attacks within this domain" have taken place, "increasing the noise in a system still establishing what a baseline is." In other words, by raising the "noise floor," hackers can obscure anomalies that might otherwise catch the eye of the security team and their tools. AI dangers AI is already widely used for fraud -- including for operating botnets out of infected computers that work solely as internet traffic launderers, Tiffany said. But myriad other ways exist for AI to be harnessed. Jon Oltsik Jon Oltsik Oltsik of ESG said that he's not yet seen AI for targeted attacks. "It's more embedded in things like disinformation through bots, sock puppets" and the like. Many parties are particularly well-suited to use machine learning for evil purposes. "These are the people who are already doing mass exploitation, which is the willy-nilly compromise of millions of computers," Tiffany said. "If you have the ability to build a botnet, it means you have an infection vector that potentially works for millions of computers. And if you can infect millions of computers, you can do lots of interesting things with them -- one of which is to learn from them." A big, but sometimes overlooked, truth when it comes to the use of AI is that, unlike corporate America, cybercriminals don't have to care about or comply with the General Data Protection Regulation, privacy regulations -- or laws and regulations of any kind, for that matter. This allows them to do vastly more invasive data collection, according to Tiffany. "This means they can develop an information advantage," he pointed out. "If you can infect a million computers and make your malware learn from them, you can make some extraordinarily lifelike malware because you have a training set that almost no legitimate AI researcher would ever have." Michael Tiffany Michael Tiffany Cybercriminals are already using AI; it isn't something looming on the horizon. "But it's not like all the bad guys get together to compare notes. There's a complex ecosystem of different criminal groups of different levels of sophistication," Tiffany said. "The hardest to detect operations are the ones that are doing the best job of learning." Years ago, one of the ways you could differentiate between a real human web visitor and a bot was that bots tended to look robotic. They repeated the same actions with similar time patterns. "There was an evolutionary period when people started tuning their bots to look more random -- rather than having them work 24/7," Tiffany explained. "But you could still identify their populations because, although they might be installed on a diverse number of computers, all of the bots fundamentally behaved like each other, and classical AI techniques could cluster them together." 
That uniformity is now disappearing. Every compromised computer is owned by a different person with his or her individual habits of moving a mouse, using other input devices, sleeping and other usage patterns. "If each bot is training off their pet human, if you will, then the bots will not only become more lifelike -- they'll become uniquely lifelike. That's where we are today on the bot evolutionary scale," Tiffany said.

Is the reality of malware bots learning from their hosts not enough doom for you? Consider two words: autonomous weapons. As AI systems continue to get smarter, the dangers multiply. "Criminals and rogue states are building autonomous weapons," said Staffan Truvé, CTO of Recorded Future, an internet technology company specializing in real-time threat intelligence. "And they won't be following any international conventions. Autonomous weapons are the area we should be most worried about with AI."

Another real concern is that researchers may unintentionally unleash AI demons. "If you look back at the original Robert Morris internet worm from 1988, he had no idea when he wrote it that it would get out of control," Truvé said. "Analogously, we could see a researcher or some group launch an uncontrollable botnet. It's not unlikely, and it could start spreading. So we need to do research on these kinds of systems; otherwise we won't be able to defend ourselves against them. Others will inevitably make similar mistakes."