Automation Critical to Securing Code in an Agile, DevOps World

The world’s biggest hack might have happened to anyone. The same software flaw hackers exploited to expose 145 million identities in the Equifax database – most likely yours included – was also embedded in thousands of other computer systems belonging to all manner of businesses and government agencies.

The software in question was a commonly used piece of open-source Java code known as Apache Struts, a web application framework. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) discovered a flaw in that code and issued a warning March 8 detailing the risk it posed. Like many others, Equifax reviewed the warning and searched its systems for the affected code. Unfortunately, the Atlanta-based credit bureau failed to find it among the millions of lines of code in its systems. Hackers exploited the flaw three days later.

Open source and third-party software components like Apache Struts now make up between 80 and 90 percent of software produced today, says Derek Weeks, vice president and DevOps advocate at Sonatype, a provider of security tools and manager of The Central Repository, the world’s largest collection of open source software components. Programmers completed nearly 60 billion downloads from the repository in 2017 alone.

“If you are a software developer in any federal agency today, you are very aware that you are using open-source and third-party [software] components in development today,” Weeks says. “The average organization is using 125,000 Java open source components – just Java alone. But organizations aren’t just developing in Java, they’re also developing in JavaScript, .Net, Python and other languages. So that number goes up by double or triple.”

Reusing software saves time and money. It’s also critical to supporting the rapid cycles favored by today’s Agile and DevOps methodologies. Yet while reuse promises time-tested code, it is not without risk: Weeks estimates one in 18 downloads from The Central Repository – 5.5 percent – contains a known vulnerability. Because the repository never deletes anything, it is a user-beware system: it’s up to software developers themselves, not the repository, to determine whether the components they download are safe.

Manual Review or Automation?

Performing a manual, detailed security analysis of each open-source component, to verify it is safe and free of known vulnerabilities, takes hours. That, in turn, eats into precious development time, undermining the efficiency that makes reusing code attractive in the first place.

Tools from Sonatype, Black Duck of Burlington, Mass., and others automate most of that work. Sonatype’s Nexus Firewall, for example, scans components as they enter the development environment and blocks those that contain known flaws, suggesting safe alternatives such as newer versions of the same components. Development teams can employ a host of automated tools to simplify or speed other parts of the build, test and security processes.
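The gating logic such a firewall applies can be sketched in a few lines. This is a hypothetical illustration, not Sonatype’s actual implementation; the component names, versions and vulnerability data below are invented.

```python
# Hypothetical sketch of a dependency "firewall" gate, loosely modeled on
# tools like Nexus Firewall. All data here is invented for illustration.

# Known-bad versions mapped to the earliest safe release (invented entries).
VULNERABLE = {
    ("struts2-core", "2.3.31"): "2.3.32",
    ("commons-collections", "3.2.1"): "3.2.2",
}

def admit(component: str, version: str) -> tuple[bool, str]:
    """Return (allowed, message) for a component entering the build."""
    fix = VULNERABLE.get((component, version))
    if fix is None:
        return True, f"{component} {version}: admitted"
    # Block the download and point the developer at a safe alternative.
    return False, f"{component} {version}: blocked, upgrade to {fix}"

ok, msg = admit("struts2-core", "2.3.31")
```

Because the check runs at download time rather than at review time, the developer gets an answer in milliseconds instead of weeks.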

Some of these are commercial products; others, like much of the software itself, are open source. Jenkins, for example, is a popular open-source automation server that helps developers build and test code continuously, so defects surface quickly. These tools focus on the reused code in a system; static analysis tools, like those from Veracode, focus on the critical custom code that glues that open-source software together into a working system.

“Automation is key to agile development,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s (GDIT) Health Solutions. “The tools now exist to automate everything: the builds, unit tests, functional testing, performance testing, penetration testing and more. Ensuring the code behind new functionality not only works, but is also secure, is critical. We need to know that the stuff we’re producing is of high quality and meets our standards, and we try to automate as much of these reviews as possible.”

But automated screening and testing is still far from universal. Some organizations use it; others don’t. Weeks describes one large financial services firm that prided itself on its software team’s rigorous governance process. Developers were required to ask permission from a security group before using open source components. The security team’s thorough reviews took about 12 weeks for new components and six to seven weeks for new versions of components already in use. Even so, officials estimated some 800 open source components had made it through those reviews and were in use in their 2,000-plus deployed applications.

Then, Sonatype was invited to scan the firm’s deployed software. “We found more than 13,000 open source components were running in those 2,000 applications,” Weeks recalls. “It’s not hard to see what happened. You’ve got developers working on two-week sprints, so what do you think they’re going to do? The natural behavior is, ‘I’ve got a deadline, I have to meet it, I have to be productive.’ They can’t wait 12 weeks for another group to respond.”

Automation, he said, is the answer.

Integration and the Supply Chain

Building software today is a lot like building a car: Rather than manufacture every component, from the screws to the tires to the seat covers, manufacturers focus their efforts on the pieces that differentiate products and outsource the commodity pieces to suppliers.

Chris Wysopal, chief technology officer at Veracode, said the average software application today uses 46 ready-made components. Like Sonatype, Veracode offers a testing tool that scans components for known vulnerabilities; its test suite also includes a static analysis tool to spot problems in custom code and a dynamic analysis tool that tests software in real time.

As development cycles get shorter, demand for automation increases, Wysopal says. The shift from waterfall to Agile over the past five years shortened typical development cycles from months to weeks. The advent of DevOps and continuous development accelerates that further, from weeks to days or even hours.

“We’re going through this transition ourselves. When we started Veracode 11 years ago, we were a waterfall company. We did four to 10 releases a year,” Wysopal says. “Then we went to Agile and did 12 releases a year and now we’re making the transition to DevOps, so we can deploy on a daily basis if we need or want to. What we see in most of our customers is fragmented methodologies: It might be 50 percent waterfall, 40 percent agile and 10 percent DevOps. So they want tools that can fit into that DevOps pipeline.”

A tool built for speed can support slower development cycles; the opposite, however, is not the case.

One way to enhance testing is to let developers know sooner that they may have a problem. Veracode is developing a product that will scan code as it’s written, running a scan every few seconds and alerting the developer as soon as a problem is spotted. This has two effects: first, problems get cleaned up more quickly; second, developers learn to avoid those problems in the first place. In that sense, it’s like the spell checker in a word processing program.
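The “spell check” analogy can be made concrete with a toy scanner that re-checks a buffer after every edit. The rules and patterns below are invented placeholders, not Veracode’s actual checks.

```python
# A toy "spell check for security": rescan the buffer after each edit and
# flag risky patterns immediately. Rules are illustrative, not real checks.
import re

RULES = [
    (re.compile(r"\beval\s*\("), "avoid eval() on untrusted input"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
]

def scan(buffer: str) -> list[str]:
    """Return one warning per (line, rule) match, like live lint output."""
    warnings = []
    for lineno, line in enumerate(buffer.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                warnings.append(f"line {lineno}: {message}")
    return warnings

# Simulate the editor calling scan() after a keystroke or save.
draft = 'password = "hunter2"\nresult = eval(user_input)'
```

A real product layers data-flow analysis on top of pattern matching, but the feedback loop – edit, scan, warn, fix – is the same.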

“It’s fundamentally changing security testing for a just-in-time programming environment,” Wysopal says.

Yet as powerful and valuable as automation is, these tools alone will not make you secure.

“Automation is extremely important,” he says. “Everyone who’s doing software should be doing automation. And then manual testing on top of that is needed for anyone who has higher security needs.” He puts the financial industry and government users into that category.

For government agencies that contract for most of their software, understanding what kinds of tools and processes their suppliers have in place to ensure software quality is critical. That could mean hiring a third party to do security testing on software when it’s delivered, or it could mean requiring systems integrators and development firms to demonstrate their security processes and procedures before software is accepted.

“In today’s Agile-driven environment, software vulnerability can be a major source of potential compromise to sprint cadences for some teams,” says GDIT’s Zach. “We can’t build a weeks-long manual test and evaluation cycle into Agile sprints. Automated testing is the only way we can validate the security of our code while still achieving consistent, frequent software delivery.”

According to Veracode’s State of Software Security 2017, 36 percent of survey respondents do not run (or were unaware of) automated static analysis on their internally developed software. Nearly half never conduct dynamic testing in a runtime environment. Worst of all, 83 percent acknowledge releasing software before testing for or resolving security issues.

“The bottom line is all software needs to be tested. The real question for teams is what ratio and types of testing will be automated and which will be manual,” Zach says. “By exploiting automation tools and practices in the right ways, we can deliver the best possible software, as rapidly and securely as possible, without compromising the overall mission of government agencies.”

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments where users’ data and applications can be side by side with that of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “Some runners jump the start, leaving just before the gun goes off. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”
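The speculate-then-discard pattern Gadwale describes can be sketched in ordinary code: start the slow work before the permission check finishes, and keep the result only if the check passes. Everything here – the toy “memory,” the permission set – is a stand-in for hardware behavior, not a real exploit.

```python
# Minimal sketch of the speculate-then-discard pattern: do the work and the
# permission check in parallel, discard the result if the check fails.
from concurrent.futures import ThreadPoolExecutor

def speculative_read(address: int, allowed: set[int], memory: dict[int, int]):
    with ThreadPoolExecutor() as pool:
        work = pool.submit(memory.get, address)          # start executing early
        check = pool.submit(lambda: address in allowed)  # permission check in parallel
        value, permitted = work.result(), check.result()
    # Architecturally, a failed check discards the value...
    return value if permitted else None
    # ...but in real hardware the speculative access still leaves cache
    # side effects behind, which is exactly what Meltdown and Spectre measure.
```

The win is latency when the check passes; the danger is everything the discarded path leaves behind.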

This is where Spectre and Meltdown come in. By executing code speculatively and then exploiting what happens by means of shared memory mapping, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of different research groups almost simultaneously, including Jann Horn, a researcher with Google’s Project Zero, and teams at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers were investigating the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.
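The final inference step of such a cache side-channel attack is purely statistical: probe every candidate value and conclude that the one whose memory access is fastest was the one touched speculatively. The sketch below uses synthetic latency numbers in place of real cycle counts.

```python
# The decision rule behind cache side-channel recovery: the speculatively
# touched cache line is the one that now loads fastest. Latencies here are
# synthetic stand-ins for measured cycle counts.

CACHE_HIT_NS, CACHE_MISS_NS = 40, 200  # illustrative thresholds, not measured

def recover_byte(probe_latencies: dict[int, float]) -> int:
    """Infer the secret byte as the candidate with the fastest probe."""
    return min(probe_latencies, key=probe_latencies.get)

# Synthetic probe results: byte 0x41 was loaded speculatively, so its cache
# line answers quickly while every other candidate misses.
latencies = {b: CACHE_MISS_NS for b in range(256)}
latencies[0x41] = CACHE_HIT_NS
```

Real attacks repeat this measurement thousands of times to average out noise, but the logic – fastest probe wins – is this simple.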

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t have to successfully execute malicious code, the hackers’ tracks are harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic stand-point,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”

How AI Is Transforming Defense and Intelligence Technologies

A Harvard Belfer Center study commissioned by the Intelligence Advanced Research Projects Agency (IARPA), Artificial Intelligence and National Security, predicted last May that AI will be as transformative to national defense as nuclear weapons, aircraft, computers and biotech.

Advances in AI will enable new capabilities and make others far more affordable – not only to the U.S., but to adversaries as well, raising the stakes as the United States seeks to preserve its hard-won strategic overmatch in the air, land, sea, space and cyberspace domains.

The Pentagon’s Third Offset Strategy seeks to leverage AI and related technologies in a variety of ways, according to Robert Work, former deputy secretary of defense and one of the strategy’s architects. In a foreword to a new report from the market analytics firm Govini, Work says the strategy “seeks to exploit advances in AI and autonomous systems to improve the performance of Joint Force guided munitions battle networks” through:

  • Deep learning machines, powered by artificial neural networks and trained with big data sets
  • Advanced human-machine collaboration in which AI-enabled learning machines help humans make more timely and relevant combat decisions
  • AI devices that allow operators of all types to “plug into and call upon the power of the entire Joint Force battle network to accomplish assigned missions and tasks”
  • Human-machine combat teaming of manned and unmanned systems
  • Cyber- and electronic warfare-hardened, network-enabled, autonomous and high-speed weapons capable of collaborative attacks

“By exploiting advances in AI and autonomous systems to improve the warfighting potential and performance of the U.S. military,” Work says, “the strategy aims to restore the Joint Force’s eroding conventional overmatch versus any potential adversary, thereby strengthening conventional deterrence.”

Spending is growing, Govini reports, with AI and related defense program spending increasing at a compound annual rate of 14.5 percent from 2012 to 2017, and poised to grow substantially faster in coming years as advanced computing technologies come on line, driving down computational costs.

But in practical terms, what does that mean? How will AI change the way defense technology is managed, the way we gather and analyze intelligence or protect our computer systems?

Charlie Greenbacker, vice president of analytics at In-Q-Tel in Arlington, Va., the intelligence community’s strategic investment arm, sees dramatic changes ahead.

“The incredible ability of technology to automate parts of the intelligence cycle is a huge opportunity,” he said at an AI summit produced by the Advanced Technology Academic Research Center and Intel in November. “I want humans to focus on more challenging, high-order problems and not the mundane problems of the world.”

The opportunities are possible because of the advent of new, more powerful processing techniques, whether by distributing those loads across a cloud infrastructure, or using specialty processors purpose-built to do this kind of math. “Under the hood, deep learning is really just algebra,” he says. “Specialized processing lets us do this a lot faster.”
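Greenbacker’s “just algebra” point is easy to make concrete: a neural network layer is a matrix multiply plus a simple nonlinearity, which is exactly the operation specialized processors accelerate. The sizes and random weights below are arbitrary illustrations.

```python
# "Deep learning is really just algebra": a layer is x @ W + b followed by a
# nonlinearity. Shapes and weights are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One fully connected layer: matrix algebra, then a ReLU."""
    return np.maximum(0.0, x @ weights + bias)

# A toy two-layer network mapping 8 input features to 3 output scores.
x = rng.normal(size=(1, 8))             # one input example
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)
scores = layer(layer(x, w1, b1), w2, b2)
```

Stacking many such layers, and training their weights against big data sets, is all the “deep” in deep learning means; the speedup comes from doing these multiplies on specialized hardware.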

Computer vision is one focus of interest – learning to identify faces in crowds or objects in satellite or other surveillance images – as is identifying anomalies in cyber security or text-heavy data searches. “A lot of folks spend massive amounts of time sifting through text looking for needles in the haystack,” Greenbacker continued.

The Air Force is looking at AI to help more quickly identify potential cyber attacks, said Frank Konieczny, chief technology officer in the office of the Air Force chief information officer, speaking at the CyberCon 2017 in November. “We’re looking at various ways of adjusting the network or adjusting the topology based upon threats, like software-defined network capabilities as well as AI-based analysis,” he said.

Marty Trevino Jr., a former technical director and strategist for the National Security Agency, is now chief data/analytics officer at Red Alpha, an intelligence-focused tech firm based in Annapolis Junction, Md. “We are all familiar with computers beating humans in complex games – chess, Go, and so on,” Trevino says. “But experiments are showing that when humans are mated with those same computers, they beat the computer every time. It’s this unique combination of man and machine – each doing what its brain does best – that will constitute the active cyber defense (ACD) systems of tomorrow.”

Machines best humans when the task is highly defined and performed at speed and scale. “With all the hype around artificial intelligence, it is important to understand that AI is only fantastic at performing the specific tasks for which it is intended,” Trevino says. “Otherwise AI can be very dumb.”

Humans, on the other hand, are better than machines when it comes to putting information in context. “While the human brain cannot match AI in specific realms,” he adds, “it is unmatched in its ability to process complex contextual information in dynamic environments. In cyber, context is everything. Context enables data-informed strategic decisions to be made.”

Artificial Intelligence and National Security
To prepare for a future in which artificial intelligence plays a heavy or dominant role in a warfare and military strategy-rich future, IARPA commissioned the Harvard Belfer Center to study the issue. The center’s August 2017 report, “Artificial Intelligence and National Security,” offers a series of recommendations, including:

  • Wargames and strategy – The Defense Department should conduct AI-focused wargames to identify potentially disruptive military innovations. It should also fund diverse, long-term strategic analyses to better understand the impact and implications of advanced AI technologies
  • Prioritize investment – Building on strategic analysis, defense and intelligence agencies should prioritize AI research and development investment on technologies and applications that will either provide sustainable strategic advantages or mitigate key risks
  • Counter threats – Because others will also have access to AI technology, investing in “counter-AI” capabilities for both offense and defense is critical to long-term security. This includes developing technological solutions for countering AI-enabled forgery, such as faked audio or video evidence
  • Basic research – The speed of AI development in commercial industry does not preclude specific security requirements in which strategic investment can yield substantial returns. Increased investment in AI-related basic research through DARPA, IARPA, the Office of Naval Research and the National Science Foundation, are critical to achieving long-term strategic advantage
  • Commercial development – Although DoD cannot expect to be a dominant investor in AI technology, increased investment through In-Q-Tel and other means can be critical in attaining startup firms’ interest in national security applications

Building Resiliency
Looking at cybersecurity another way, AI can also be used to rapidly identify and repair software vulnerabilities, said Brian Pierce, director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

“We are using automation to engage cyber attackers in machine time, rather than human time,” he said. Using automation developed under DARPA funding, he said machine-driven defenses have demonstrated AI-based discovery and patching of software vulnerabilities. “Software flaws can last for minutes, instead of as long as years,” he said. “I can’t emphasize enough how much this automation is a game changer in strengthening cyber resiliency.”

Such advanced, cognitive ACD systems employ the gamut of detection tools and techniques, from heuristics to characteristic and signature-based identification, says Red Alpha’s Trevino. “These systems will be self-learning and self-healing, and if compromised, will be able to terminate and reconstitute themselves in an alternative virtual environment, having already learned the lessons of the previous engagement, and incorporated the required capabilities to survive. All of this will be done in real time.”

Seen in that context, AI is just the latest in a series of technologies the U.S. has used as a strategic force multiplier. Just as precision weapons enabled the U.S. Air Force to inflict greater damage with fewer bombs – and with less risk – AI can be used to solve problems that might otherwise take hundreds or even thousands of people. The promise is that instead of eyeballing thousands of images a day or scanning millions of network actions, computers can do the first screening, freeing up analysts for the harder task of interpreting results, says Dennis Gibbs, technical strategist, Intelligence and Security programs at General Dynamics Information Technology. “But just because the technology can do that, doesn’t mean it’s easy. Integrating that technology into existing systems and networks and processes is as much art as science. Success depends on how well you understand your customer. You have to understand how these things fit together.”

In a separate project, DARPA collaborated with a Fortune 100 company that was moving more than a terabyte of data per day across its virtual private network, and generating 12 million network events per day – far beyond the human ability to track or analyze. Using automated tools, however, the project team was able to identify a single unauthorized IP address that successfully logged into 3,336 VPN accounts over seven days.
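The kind of aggregation that surfaces one abusive address among millions of events is straightforward to sketch: count distinct accounts per source IP and flag the outliers. The record layout, threshold and addresses below are assumptions for illustration, not the DARPA project’s actual tooling.

```python
# Sketch of credential-stuffing detection by aggregation: count distinct
# accounts per source IP and flag outliers. Layout/threshold are assumed.
from collections import defaultdict

def flag_credential_stuffing(events, threshold=100):
    """events: iterable of (source_ip, account) login records."""
    accounts_by_ip = defaultdict(set)
    for source_ip, account in events:
        accounts_by_ip[source_ip].add(account)
    return {ip: len(accts) for ip, accts in accounts_by_ip.items()
            if len(accts) >= threshold}

# Synthetic log: normal users touch one account each; one address (from the
# TEST-NET range, used here as a placeholder) touches thousands.
events = [(f"10.0.0.{i}", f"user{i}") for i in range(50)]
events += [("203.0.113.9", f"user{i}") for i in range(3336)]
```

At 12 million events a day the same logic runs as a streaming aggregation, but the principle – group, count, threshold – is unchanged.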

Mathematically speaking, Pierce said, “The activity associated with this address was close to 9 billion network events with about a 1 in 10 chance of discovery.” The tipoff was a flaw in the botnet that attacked the network: Attacks were staged at exactly 57-minute intervals. Not all botnets, of course, will make that mistake. But even pseudo-random timing can be detected. He added: “Using advanced signal processing methods applied to billions of network events over weeks and months-long timelines, we have been successful at finding pseudo random botnets.”
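The 57-minute tell can be caught with nothing fancier than inter-arrival statistics: near-zero variance in the gaps between events from a single source suggests a machine on a timer, not a person. The jitter threshold below is illustrative; truly pseudo-random schedules require the heavier signal-processing methods Pierce describes.

```python
# Fixed-interval beacon detection via inter-arrival statistics: gaps with
# (almost) zero variance indicate a timer-driven source. Threshold is
# illustrative only.
import statistics

def looks_periodic(timestamps, max_jitter_s=5.0):
    """True if successive gaps between events are (almost) constant."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return len(gaps) >= 3 and statistics.pstdev(gaps) <= max_jitter_s

beacon = [i * 57 * 60 for i in range(10)]  # events exactly 57 minutes apart
human = [0, 130, 900, 4000, 4100, 9000]    # irregular, human-like activity
```

Against jittered or pseudo-random beacons, analysts swap the variance test for spectral methods (e.g., looking for peaks in a Fourier transform of the event series), but the input is the same timestamp stream.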

On the flip side, however, is the recognition that AI superiority will not be a given in cyberspace. Unlike air, land, sea or space, cyber is a man-made warfare domain, so it is fitting that the fight there could end up being machine vs. machine.

The Harvard Artificial Intelligence and National Security study notes emphatically that while AI will make it easier to sort through ever greater volumes of intelligence, “it will also be much easier to lie persuasively.” The use of Photoshop and other image editors is well understood and has been for years. But recent advances in video editing have made it reasonably easy to forge audio and video files.

A trio of University of Washington researchers announced in a research paper published in July that they had used AI to synthesize a photorealistic, lip-synced video of former President Barack Obama. While the researchers used real audio, it’s easy to see the dangers posed if audio is also manipulated.

While the authors describe potential positive uses of the technology – such as “the ability to generate high-quality video from audio [which] could significantly reduce the amount of bandwidth needed in video coding/transmission” – potential nefarious uses are just as clear.

Washington, Not Silicon Valley, Leads The Way in Cybersecurity

GIS, Mobile-Alert Tech Shine During Eclipse

DHS Nurtures Wearable Tech for Responders

Ten startups will take part in EMERGE 2016, the Department of Homeland Security Science and Technology Directorate’s program supporting research and development of wearable technology for first responders. EMERGE 2016 expands on last year’s pilot, which accelerated delivery of the latest innovative wearable technologies for first responders by bringing startups, accelerators and strategic partners together in a common research and development effort.

“We need to find technologies for first responders that can be integrated directly into their existing gear,” DHS Under Secretary for Science and Technology Reginald Brothers said in a statement. “The entrepreneurial world is on the leading edge of those inventive solutions.”
