Defense

Wanted: Metrics for Measuring Cyber Performance and Effectiveness

Chief information security officers (CISOs) face a dizzying array of cybersecurity tools to choose from, each loaded with features and promised capabilities that are hard to measure or judge.

That leaves CISOs trying to balance unknown risks against growing costs, without a clear ability to justify the return on their cybersecurity investment. Not surprisingly, today’s high-threat environment makes it preferable to choose safe over sorry – regardless of cost. But is there a better way?

Some cyber insiders believe there is.

Acting U.S. Federal Chief Information Officer (CIO) Margie Graves acknowledges the problem.

“Defining the measure of success is hard sometimes, because it’s hard to measure things that don’t happen,” Graves said. President Trump’s Executive Order on Cybersecurity asks each agency to develop its own risk management plan, she noted. “It should be articulated on that plan how every dollar will be applied to buying down that risk.”

There is a difference though, between a plan and an actual measure. A plan can justify an investment intended to reduce risk. But judgment, rather than hard knowledge, will determine how much risk is mitigated by any given tool.

The Defense Information Systems Agency (DISA) and the National Security Agency (NSA) have been trying to develop a methodology for measuring the actual value of a given cyber tool’s performance. Their NIPRNet/SIPRNet Cyber Security Architecture Review (NSCSAR – pronounced “NASCAR”) is a classified effort to define a framework for measuring cybersecurity performance, said DISA CIO and Risk Management Executive John Hickey.

“We just went through a drill of ‘what are those metrics that are actually going to show us the effectiveness of those tools,’ because a lot of times we make an investment, people want a return on that investment,” he told GovTechWorks in June. “Security is a poor example of what you are going after. It is really the effectiveness of the security tools or compliance capabilities.”

The NSCSAR review, conducted in partnership with NSA and the Defense Department, may point to a future means of measuring cyber defense capability. “It is a framework that actually looks at the kill chain, how the enemy will move through that kill chain and what defenses we have in place,” Hickey said, adding that NSA is working with DISA on an unclassified version of the framework that could be shared with other agencies or the private sector to measure cyber performance.

“It is a methodology,” Hickey explained. “We look at the sensors we have today and measure what functionality they perform against the threat.… We are tracking the effectiveness of the tools and capabilities to get after that threat, and then making our decisions on what priorities to fund.”
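Hickey’s description suggests one simple way to picture such a framework: map each defensive sensor or tool to the kill-chain phases it covers, then score coverage phase by phase to expose gaps and guide funding priorities. The sketch below is only an illustrative model of that idea, not the NSCSAR methodology itself; the phase names, tools and effectiveness numbers are hypothetical placeholders.

```python
# Illustrative sketch only -- not the NSCSAR framework. Phases, tools and
# effectiveness scores below are hypothetical placeholders.

KILL_CHAIN = ["recon", "delivery", "exploitation", "lateral_movement", "exfiltration"]

# Hypothetical inventory: tool -> {phase: estimated effectiveness, 0..1}
SENSORS = {
    "perimeter_firewall":   {"delivery": 0.6, "exfiltration": 0.4},
    "intrusion_prevention": {"delivery": 0.5, "exploitation": 0.7},
    "endpoint_protection":  {"exploitation": 0.6, "lateral_movement": 0.5},
    "netflow_analytics":    {"recon": 0.3, "lateral_movement": 0.4, "exfiltration": 0.5},
}

def phase_coverage(sensors):
    """Combine per-tool effectiveness in each phase, assuming independent detection."""
    coverage = {}
    for phase in KILL_CHAIN:
        miss = 1.0
        for scores in sensors.values():
            miss *= 1.0 - scores.get(phase, 0.0)  # chance every tool misses
        coverage[phase] = 1.0 - miss
    return coverage

if __name__ == "__main__":
    for phase, score in phase_coverage(SENSORS).items():
        flag = "GAP" if score < 0.5 else "ok"
        print(f"{phase:18s} {score:5.1%}  {flag}")
```

Even a toy model like this makes the “gaps and seams” question concrete: phases where combined coverage stays low are where the next dollar arguably belongs.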

Measuring Security
NSS Labs Inc. independently tests the cybersecurity performance of firewalls and other cyber defenses, scoring products annually. The Austin, Texas, company evaluated 11 next-generation firewall (NGFW) products from 10 vendors in June 2017, comparing their security effectiveness as well as their stability, reliability and total cost of ownership.

In the test, products were presumed to be able to provide basic packet filtering, stateful multi-layer inspection, network address translation, virtual private network capability, application awareness controls, user/group controls, integrated intrusion prevention, reputation services, anti-malware capabilities and SSL inspection. Among the findings:

  • Eight of 11 products tested scored “above average” in terms of both performance and cost-effectiveness; three scored below average
  • Overall security effectiveness ranged from a low of 25.8 percent to a high of 99.9 percent; average security effectiveness was 67.3 percent
  • Four products scored below 78.5 percent
  • Total cost of ownership ranged from $5 per protected megabit per second to $105, with an average of $22 (see the cost-comparison sketch below)
  • Nine products failed to detect at least one evasion, while only two detected all evasion attempts
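One way a buyer can make figures like these comparable across products is to normalize cost against measured effectiveness, for example dollars of total cost of ownership per protected megabit per second, scaled up for the share of traffic a product misses. The short sketch below runs that arithmetic on hypothetical products whose numbers fall in the ranges NSS reported; it is an illustration of the math, not NSS Labs’ scoring method.

```python
# Hypothetical products, with numbers in the ranges NSS Labs reported
# (security effectiveness in percent, TCO in dollars per protected Mbps).
# Illustrative arithmetic only -- not the NSS Labs methodology.

products = [
    {"name": "NGFW-A", "effectiveness": 99.9, "tco_per_mbps": 22.0},
    {"name": "NGFW-B", "effectiveness": 78.5, "tco_per_mbps": 5.0},
    {"name": "NGFW-C", "effectiveness": 25.8, "tco_per_mbps": 105.0},
]

def cost_per_effective_mbps(product):
    """Dollars per protected Mbps, penalized for the traffic the product misses."""
    return product["tco_per_mbps"] / (product["effectiveness"] / 100.0)

for p in sorted(products, key=cost_per_effective_mbps):
    print(f'{p["name"]}: ${cost_per_effective_mbps(p):.2f} per effectively protected Mbps')
```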

NSS conducted similar tests of advanced endpoint protection tools, data center firewalls, and web application firewalls earlier this year.

But point-in-time performance tests don’t provide a reliable measure of ongoing performance. And measuring the effectiveness of a single tool does not necessarily indicate how well it performs as part of a suite of tools, notes Robert J. Carey, vice president within the Global Solutions division at General Dynamics Information Technology (GDIT). The former U.S. Navy CIO and Defense Department principal deputy CIO says that though these tests are valuable, it remains hard to quantify and compare the performance of different products in an organization’s security stack.

The evolution and blurring of the lines between different cybersecurity tools – firewalls, intrusion detection and prevention, gateways, traffic analysis tools, threat intelligence, anomaly detection and so on – mean it’s easy to add another tool to one’s stack. But as with any multivariate function, it is hard to isolate each tool’s individual contribution to threat protection, or to know which tools you could do without.

“We don’t know what an adequate cyber security stack looks like. What part of the threat does the firewall protect against, the intrusion detection tool, and so on?” Carey says. “We perceive that the tools are part of the solution. But it’s difficult to quantify the benefit. There’s too much marketing fluff about features and not enough facts.”

Mike Spanbauer, vice president of research strategy at NSS, says this is a common concern, especially in large, managed environments — as is the case in many government instances. One way to address it is to replicate the security stack in a test environment and experiment to see how tools perform against a range of known, current threats while under different configurations and settings.

Another solution is to add one more tool to monitor and measure performance. NSS’ Cyber Advanced Warning System (CAWS) provides continuous security validation monitoring by capturing live threats and then injecting them into a test environment mirroring customers’ actual security stacks. New threats are identified and tested non-stop. If they succeed in penetrating the stack, system owners are notified so they can update their policies to stop that threat in the future.

“We harvest the live threats and capture those in a very careful manner and preserve the complete properties,” Spanbauer said. “Then we bring those back into our virtual environment and run them across the [cyber stack] and determine whether it is detected.”
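Conceptually, a continuous-validation service like the one Spanbauer describes is a loop: capture a new threat sample, replay it against a mirror of the customer’s security stack, and alert the owner whenever the stack fails to block it. The sketch below is a simplified illustration of that loop under those assumptions; the class and function names are hypothetical and do not represent the CAWS product or its interfaces.

```python
# Simplified illustration of a continuous security-validation loop.
# Class and function names are hypothetical; this is not the CAWS product.
import time
from dataclasses import dataclass

@dataclass
class ThreatSample:
    name: str
    payload: bytes  # captured traffic, preserved with its full properties

class MirroredStack:
    """Stand-in for a test environment mirroring a customer's security stack."""
    def __init__(self, blocked_signatures):
        self.blocked_signatures = blocked_signatures

    def blocks(self, sample: ThreatSample) -> bool:
        return any(sig in sample.payload for sig in self.blocked_signatures)

def harvest_new_threats():
    """Placeholder for live threat capture; returns freshly observed samples."""
    return [ThreatSample("demo-exploit", b"GET /?q=<script>evil()</script>")]

def notify_owner(sample: ThreatSample):
    print(f"ALERT: '{sample.name}' penetrated the mirrored stack -- update policy")

def validation_loop(stack: MirroredStack, cycles=2, interval_s=1):
    for _ in range(cycles):                  # in practice this runs non-stop
        for sample in harvest_new_threats():
            if not stack.blocks(sample):
                notify_owner(sample)
        time.sleep(interval_s)

if __name__ == "__main__":
    validation_loop(MirroredStack(blocked_signatures=[b"SELECT * FROM"]))
```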

Adding more tools and solutions isn’t necessarily what Carey had in mind. While that monitoring may reduce risk, it also adds another expense.

And measuring value in terms of return on investment is a challenge when every new tool adds real cost and results are so difficult to define. In cybersecurity, managing risk has become the name of the game, but actually calculating risk is hard.

The National Institute of Standards and Technology (NIST) created the 800-53 security controls and the cybersecurity risk management framework that encompass today’s best practices. Carey worries that risk management delivers an illusion of security by accepting some level of vulnerability depending on the level of investment. The trouble with that, he says, is that it drives a compliance culture in which security departments focus more on following the framework than on defending the network and securing its applications and data.

“I’m in favor of moving away from risk management,” GDIT’s Carey says. “It’s what we’ve been doing for the past 25 years. It’s produced a lot of spend, but no measurable results. We should move to effects-based cyber. Instead of 60 shades of gray, maybe we should have just five well defined capability bands.”

The ultimate goal: Bring compliance into line with security so that doing the former delivers the latter. But the evolving nature of cyber threats suggests that may never be possible.

Automated tools will only be as good as the data and intelligence built into them. True, automation improves speed and efficiency, Carey says. “But it doesn’t necessarily make me better.”

System owners should be able to look at their cyber stack and determine exactly how much better security performance would be if they added another tool or upgraded an existing one. If that were the case, they could spend most of their time focused on stopping the most dangerous threats – zero-day vulnerabilities that no tool can identify because they’ve never been seen before – rather than ensuring all processes and controls are in place to minimize risk in the event of a breach.

Point-in-time measures based on known vulnerabilities and available threats help, but may be blind to new or emerging threats of the sort that the NSA identifies and often keeps secret.

The NSCSAR tests DISA and NSA perform include that kind of advanced threat. Rather than trying to measure overall security, they’ve determined that breaking it down into the different levels of security makes sense. Says DISA’s Hickey: “You’ve got to tackle ‘what are we doing at the perimeter, what are we doing at the region and what are we doing at the endpoint.’” A single overall picture isn’t really possible, he says. Rather, one has to ask: “What is that situational awareness? What are those gaps and seams? What do we stop [doing now] in order to do something else? Those are the types of measurements we are looking at.”

How Employers Try to Retain Tech Talent

As soon as Scott Algeier hires a freshly minted IT specialist out of college, a little clock starts ticking inside his head.

It’s not that he doesn’t have plenty to offer new hires in his role as director of the non-profit Information Technology-Information Sharing and Analysis Center (IT-ISAC) in Manassas, Va., nor that IT-ISAC cannot pay a fair wage. The issue is that Algeier is in an all-out war for talent – and experience counts. Contractors, government agencies – indeed, virtually every other employer across the nation – value experience almost as much as education and certifications.

As employees gain that experience, they see their value grow. “If I can get them to stay for at least three years, I consider that a win,” says Algeier. “We have one job where it never lasts more than two years. The best I can do is hire quality people right out of college, train them and hope they stick around for three years.”

The Military Context
An October 2016 white paper from Air University’s research institute says churn is even more dire among those in the military, particularly in the Air Force, which is undergoing a massive expansion of its cyber operations units.

The present demand for cybersecurity specialists in both the public and private sectors could undoubtedly lead the Air Force to be significantly challenged in retaining its most developed and experienced cyber Airmen in the years ahead, writes Air Force Major William Parker IV, author of the study.

“In the current environment, shortages in all flavors of cyber experts will increase, at least in the foreseeable future. Demand for all varieties of cybersecurity-skilled experts in both the private and public sectors is only rising.”

Meanwhile, Parker writes, there are an estimated 30,000 unfilled cybersecurity jobs across the federal government. According to the International Information System Security Certification Consortium (ISC2), demand for cyber-certified professionals will continue to increase at 11 percent per year for the foreseeable future. Some estimates place the global cyber workforce shortage at close to 1 million.

The military – both a primary trainer and employer in cyber – offers some interesting insight. A recent survey of Air Force cyber specialists choosing between re-enlistment or pursuit of opportunities in the civilian world indicates those who chose to reenlist were primarily influenced by job security and benefits, including health, retirement and education and training.

“For those Airmen who intended to separate, civilian job opportunities, pay and allowances, bonuses and special pays, promotion opportunities and the evaluation system contributed most heavily to their decisions [to leave the military],” Parker’s paper concluded.

Indeed, several airmen who expressed deep pride and love of serving in the Air Force stated they chose to separate because they felt their skills were not being fully utilized.

“Also, they were aware they had the ability to earn more income for their families in the private sector,” adds Parker. The re-enlistment bonuses the Air Force offered were not enough to make up the pay differences these airmen saw.

“It is also interesting that many of those who say that they will reenlist, included optimistic comments that they hope ‘someday’ they may be able to apply the cyber skills they have attained in the service of the nation.”

Tech companies present a different set of competitive stresses, competing with high pay, industry glamor and attractive perks. Apple’s new Cupertino, Calif., headquarters epitomizes the age: an airy glass donut that looks like it just touched down from a galaxy far, far away, filled with cafés, restaurants, a wellness center, a child care facility and even an Eden-like garden inside the donut hole. Amazon’s $4 billion urban campus is anchored by the improbable “spheres” – three interlocking, multistory glass structures housing treehouse meeting rooms, offices and collaborative spaces filled with trees, rare plants, waterfalls and a river that runs through it all.

While Washington, D.C., contractors and non-profits do not have campus rivers or stock option packages, they do have other ways to compete. At the forefront are the high-end missions they and their customers perform. They also offer professional development, certifications, job flexibility and, sometimes, the ability to work from home.

“We work with the intelligence community and the DoD,” says Chris Hiltbrand, vice president of Human Resources for General Dynamics Information Technology’s Intelligence Solutions Division. “Our employees have the opportunity to apply cutting-edge technologies to interesting and important missions that truly make a difference to our nation. It’s rewarding work.”

While sometimes people leave for pay packages from Silicon Valley, he admits, pay is rarely the only issue employees consider. Work location, comfort and familiarity, quality of work, colleagues, career opportunities and the impact of working on a worthwhile mission, all play a role.

“It’s not all about maximizing earning potential,” Hiltbrand says. “In terms of money, people want to be compensated fairly – relative to the market – for the work they do. We also look at other aspects of what we can offer, and that is largely around the customer missions we support and our reputation with customers and the industry.”

Especially for veterans, mission, purpose and service to the nation are real motivators. GDIT then goes a step further, supporting staff who are members of the National Guard or military reservists with extra benefits, such as paying the difference in salary when staff go on active duty.

Mission also factors into the equation at IT-ISAC, Algeier says. “Our employees get to work with some of the big hitters in the industry and that experience definitely keeps them here longer than they might otherwise. But over time, that also has an inevitable effect.

“I get them here by saying: ‘Hey, look who you get to work with,’” he says. “And then within a few years, it’s ‘Hey, look who they’re going to go work with.’”

Perks and Benefits
Though automation may seem like a way to replace people rather than entice them to stay, it can be a valuable, if unlikely, retention tool.

Automated tools spare staff from the tedious work some find demoralizing (or boring), freeing hours or even days for higher-level work, Algeier says. “That means they can now go do far more interesting work instead.” More time doing interesting work leads to happier employees, which in turn makes staff more likely to stay put.

Fitness and wellness programs are two other creative ways employers invest in keeping the talent they have. Gyms, wellness centers, an in-house yoga studio, exercise classes and even CrossFit boxes are some components. Since exercise helps relieve stress, and stress can trigger employees to start looking elsewhere for work, it stands to reason that reducing stress can ease the strains of work and boost productivity. Keeping people motivated helps keep them from the negative feelings that might lead them to seek satisfaction elsewhere.

Providing certified life coaches is another popular way employers can help staff, focusing on both personal and professional development. Indeed, Microsoft deployed life coaches at its Redmond headquarters more than a decade ago. They specialize in working with adults with Attention Deficit Hyperactivity Disorder (ADHD), and can help professionals overcome weaknesses and increase performance.

Such benefits used to be the domain of Silicon Valley alone, but not anymore. Fairfax, Va.-based boutique security company MKACyber was launched by Mischel Kwon after posts as director of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) and as vice president of public sector security solutions for Bedford, Mass.-based RSA. Kwon built her company with what she calls “a West Coast environment.”

The company provides breakfast, lunch and snack foods, private “chill” rooms, and operates a family-first environment, according to a job posting. It also highlights the company’s strong commitment to diversity and helps employees remain “life-long learners.”

Kwon says diversity is about more than just hiring the right mix of people. How you treat them is the key to how long they stay.

“There are a lot of things that go on after the hire that we have to concern ourselves with,” she said at a recent RSA conference.

Retention is a challenging problem for everyone in IT, Kwon says, but managers can do more to think differently about how to hire and keep new talent, beginning by focusing not just on raw technical knowledge, but also on soft skills that make a real difference when working on projects and with teams.

“We’re very ready to have people take tests, have certifications, and look at the onesy-twosy things that they know,” says Kwon. “What we’re finding though, is just as important as the actual content that they know, is their actual work ethic, their personalities. Do they fit in with other people? Do they work well in groups? Are they life-long learners? These types of personal skills are as important as technical skills,” Kwon says. “We can teach the technical skills. It’s hard to teach the work ethic.”

Flexible Work Schedules
One stereotype of the modern tech age is the all-night coder working in a perk-laden office, fueled by free food, lattes and energy drinks. The other is the virtual meeting populated by individuals spread out across the nation or the globe, sitting in home offices or bedrooms, working on their laptops. For many, working from home is no longer a privilege. It’s either a right or, at least, an opportunity to make work and life balance out. Have to wait for a plumber to fix the leaky sink? No problem: dial in remotely. In the District of Columbia, the government and many employers encourage regular telework as a means to reduce traffic and congestion — as well as for convenience.

Still, working from home inevitably draws questions. IBM, for years one of the staunchest supporters of telework, has backtracked on the culture it built, telling workers they need to be in the office regularly if they want to stay employed. The policy shift follows similar moves by Yahoo!, among others.

GDIT’s Hiltbrand says because its staff works at company locations as well as on government sites, remote work is common.

“We have a large population of people who have full or part-time teleworking,” he says. “We are not backing away from that model. If anything, we’re trying to expand on that culture of being able to work from anywhere, anytime and on any device.”

Of course, that’s not possible for everyone. Staff working at military and intelligence agencies don’t typically have that flexibility. “But aside from that,” adds Hiltbrand, “we’re putting a priority on the most flexible work arrangements possible to satisfy employee needs.”

Automation Critical to Securing Code in an Agile, DevOps World

The world’s biggest hack might have happened to anyone. The same software flaw hackers exploited to expose 145 million identities in the Equifax database – most likely yours included – was also embedded in thousands of other computer systems belonging to all manner of businesses and government agencies.

The software in question was a commonly used open-source piece of Java code known as Apache Struts. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) discovered a flaw in that code and issued a warning March 8, detailing the risk posed by the flaw. Like many others, Equifax reviewed the warning and searched its systems for the affected code. Unfortunately, the Atlanta-based credit bureau failed to find it among the millions of lines of code in its systems. Hackers exploited the flaw three days later.

Open source and third-party software components like Apache Struts now make up between 80 and 90 percent of software produced today, says Derek Weeks, vice president and DevOps advocate at Sonatype. The company is a provider of security tools and manager of the world’s largest collection of open source components, The Central Repository. Programmers completed nearly 60 billion software downloads from the repository in 2017 alone.

“If you are a software developer in any federal agency today, you are very aware that you are using open-source and third-party [software] components in development today,” Weeks says. “The average organization is using 125,000 Java open source components – just Java alone. But organizations aren’t just developing in Java, they’re also developing in JavaScript, .Net, Python and other languages. So that number goes up by double or triple.”

Reusing software saves time and money. It’s also critical to supporting the rapid cycles favored by today’s Agile and DevOps methodologies. Yet while reuse promises time-tested code, it is not without risk: Weeks estimates one in 18 downloads from The Central Repository – 5.5 percent – contains a known vulnerability. Because it never deletes anything, the repository is a user-beware system. It’s up to software developers themselves – not the repository – to determine whether or not the software components they download are safe.

Manual Review or Automation?

Manually performing a detailed security analysis of each open-source software component, to ensure it is safe and free of vulnerabilities, takes hours. That, in turn, eats into precious development time, undermining the intended efficiency of reusing code in the first place.

Tools from Sonatype, Black Duck of Burlington, Mass., and others automate most of that work. Sonatype’s Nexus Firewall, for example, scans components as they come into the development environment and stops them if they contain known flaws. It also suggests alternatives, such as newer versions of the same components, that are safe. Development teams can employ a host of automated tools to simplify or speed other parts of the build, test and secure processes.
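The gist of such a gate can be shown in a few lines: compare the components a build declares against a list of known-vulnerable versions, suggest a patched release where one exists, and fail the build on any match. The sketch below is a generic illustration with a hard-coded advisory list; it is not Nexus Firewall’s or Black Duck’s actual interface, and real tools pull advisories from curated vulnerability feeds rather than a dictionary.

```python
# Generic illustration of a dependency gate in a build pipeline.
# The advisory list and build manifest are made up for this sketch;
# this is not the interface of any commercial scanning product.
import sys

KNOWN_VULNERABLE = {
    # (group:artifact, version): suggested safe version
    ("org.apache.struts:struts2-core", "2.3.31"): "2.3.32",
    ("commons-collections:commons-collections", "3.2.1"): "3.2.2",
}

def check_components(components):
    """Return (component, version, suggested_fix) for every flagged entry."""
    findings = []
    for name, version in components:
        fix = KNOWN_VULNERABLE.get((name, version))
        if fix:
            findings.append((name, version, fix))
    return findings

if __name__ == "__main__":
    build_manifest = [
        ("org.apache.struts:struts2-core", "2.3.31"),
        ("com.fasterxml.jackson.core:jackson-databind", "2.9.4"),
    ]
    problems = check_components(build_manifest)
    for name, version, fix in problems:
        print(f"BLOCKED: {name} {version} has a known flaw; consider {fix}")
    sys.exit(1 if problems else 0)   # a non-zero exit fails the CI stage
```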

Some of these are commercial products; others, like the software itself, are open-source tools. For example, Jenkins is a popular open-source DevOps tool that helps developers quickly find and solve defects in their codebase. These tools focus on the reused code in a system; static analysis tools, like those from Veracode, focus on the critical custom code that glues that open-source software together into a working system.

“Automation is key to agile development,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s (GDIT) Health Solutions. “The tools now exist to automate everything: the builds, unit tests, functional testing, performance testing, penetration testing and more. Ensuring the code behind new functionality not only works, but is also secure, is critical. We need to know that the stuff we’re producing is of high quality and meets our standards, and we try to automate as much of these reviews as possible.”

But automated screening and testing is still far from universal. Some use it, others don’t. Weeks describes one large financial services firm that prided itself on its software team’s rigorous governance process. Developers were required to ask permission from a security group before using open source components. The security team’s thorough reviews took about 12 weeks for new components and six to seven weeks for new versions of components already in use. Even so, officials estimated some 800 open source components had made it through those reviews and were in use in their 2,000-plus deployed applications.

Then, Sonatype was invited to scan the firm’s deployed software. “We found more than 13,000 open source components were running in those 2,000 applications,” Weeks recalls. “It’s not hard to see what happened. You’ve got developers working on two-week sprints, so what do you think they’re going to do? The natural behavior is, ‘I’ve got a deadline, I have to meet it, I have to be productive.’ They can’t wait 12 weeks for another group to respond.”

Automation, he said, is the answer.

Integration and the Supply Chain

Building software today is a lot like building a car: Rather than manufacture every component, from the screws to the tires to the seat covers, manufacturers focus their efforts on the pieces that differentiate products and outsource the commodity pieces to suppliers.

Chris Wysopal, chief technology officer at Veracode, said the average software application today uses 46 ready-made components. Like Sonatype, Veracode offers a testing tool that scans components for known vulnerabilities; its test suite also includes a static analysis tool to spot problems in custom code and a dynamic analysis tool that tests software in real time.

As development cycles get shorter, demand for automation increases, Wysopal says. The shift from waterfall to Agile over the past five years shortened typical development cycles from months to weeks. The advent of DevOps and continuous development accelerates that further, from weeks to days or even hours.

“We’re going through this transition ourselves. When we started Veracode 11 years ago, we were a waterfall company. We did four to 10 releases a year,” Wysopal says. “Then we went to Agile and did 12 releases a year and now we’re making the transition to DevOps, so we can deploy on a daily basis if we need or want to. What we see in most of our customers is fragmented methodologies: It might be 50 percent waterfall, 40 percent agile and 10 percent DevOps. So they want tools that can fit into that DevOps pipeline.”

A tool built for speed can support slower development cycles; the opposite, however, is not the case.

One way to enhance testing is to let developers know sooner that they may have a problem. Veracode is developing a product that will scan code as it’s written, running a scan every few seconds and alerting the developer as soon as a problem is spotted. This has two effects: First, problems get cleaned up more quickly; second, developers learn to avoid those problems in the first place. In that sense, it’s like spell check in a word processing program.

“It’s fundamentally changing security testing for a just-in-time programming environment,” Wysopal says.

Yet as powerful and valuable as automation is, these tools alone will not make you secure.

“Automation is extremely important,” he says. “Everyone who’s doing software should be doing automation. And then manual testing on top of that is needed for anyone who has higher security needs.” He puts the financial industry and government users into that category.

For government agencies that contract for most of their software, understanding what kinds of tools and processes their suppliers have in place to ensure software quality is critical. That could mean hiring a third party to do security testing on software when it’s delivered, or it could mean requiring systems integrators and development firms to demonstrate their security processes and procedures before software is accepted.

“In today’s Agile-driven environment, software vulnerability can be a major source of potential compromise to sprint cadences for some teams,” says GDIT’s Zach. “We can’t build a weeks-long manual test and evaluation cycle into Agile sprints. Automated testing is the only way we can validate the security of our code while still achieving consistent, frequent software delivery.”

According to Veracode’s State of Software Security 2017, 36 percent of the survey’s respondents do not run (or are unaware of) automated static analysis on their internally developed software. Nearly half never conduct dynamic testing in a runtime environment. Worst of all, 83 percent acknowledge releasing software before resolving security issues.

“The bottom line is all software needs to be tested. The real question for teams is what ratio and types of testing will be automated and which will be manual,” Zach says. “By exploiting automation tools and practices in the right ways, we can deliver the best possible software, as rapidly and securely as possible, without compromising the overall mission of government agencies.”

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance, who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments where users’ data and applications can be side by side with that of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “Some runners start too quickly, just before the gun goes off. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By executing code speculatively and then exploiting what happens by means of shared memory mapping, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of different research groups at once, including Jann Horn, a researcher with Google’s Project Zero, and researchers at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers were researching the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”
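The side channel CERT describes comes down to timing: an access that hits the cache returns noticeably faster than one that must go to main memory, so an attacker who times accesses to a probe array can infer which entry a speculatively executed instruction touched. The toy simulation below models that inference step with invented “fast” and “slow” latencies; it is a conceptual illustration only, not working exploit code, and the constants are made up.

```python
# Toy model of a cache timing side channel. Latencies are invented constants;
# this simulates the inference step conceptually and is not exploit code.
import random

CACHE_LINES = 256          # one probe line per possible byte value
HIT_NS, MISS_NS = 40, 300  # pretend access latencies (cache hit vs. DRAM miss)

def speculative_touch(secret_byte, cache):
    """Model a speculatively executed load that leaves a footprint in the cache."""
    cache.add(secret_byte)   # the architectural result is discarded; this is not

def timed_probe(line, cache):
    """Model measuring access latency to one probe line, with a little jitter."""
    base = HIT_NS if line in cache else MISS_NS
    return base + random.uniform(-5, 5)

def recover_byte(cache):
    timings = [timed_probe(line, cache) for line in range(CACHE_LINES)]
    return min(range(CACHE_LINES), key=lambda line: timings[line])

if __name__ == "__main__":
    cache = set()
    secret = 0x2A
    speculative_touch(secret, cache)            # transient execution leaves a trace
    print(f"recovered byte: {recover_byte(cache):#04x}")  # fastest line reveals the value
```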

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.
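For administrators who want to verify whether a given Linux host reports its mitigations, recent kernels expose per-vulnerability status files under /sys/devices/system/cpu/vulnerabilities. The short sketch below simply reads and prints them; the files will not exist on older kernels or on other operating systems, so treat a missing directory as “status unknown” rather than “unaffected.”

```python
# Print Spectre/Meltdown mitigation status on a recent Linux kernel.
# The sysfs files below exist only on kernels with mitigation reporting;
# older kernels and non-Linux systems will not have them.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report():
    if not VULN_DIR.is_dir():
        print("No mitigation status files found (old kernel or non-Linux host).")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        status = entry.read_text().strip()
        print(f"{entry.name:20s} {status}")

if __name__ == "__main__":
    report()
```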

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t have to successfully execute code, the hackers’ tracks are harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic standpoint,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”

How the Air Force Changed Tune on Cybersecurity

Peter Kim, chief information security officer (CISO) for the U.S. Air Force, calls himself Dr. Doom. Lauren Knausenberger, director of cyberspace innovation for the Air Force, is his opposite. Where he sees trouble, she sees opportunity. Where he sees reasons to say no, she seeks ways to change the question.

For Kim, the dialogue they’ve shared since Knausenberger left her job atop a private sector tech consultancy to join the Air Force has been transformational.

“I have gone into a kind of rehab for cybersecurity pros,” he says. “I’ve had to admit I have a problem: I can’t lock everything down.” He knows. He’s tried.

The two engage constantly, debating and questioning whether decisions and steps designed to protect Air Force systems and data are having their intended effect, they said, sharing a dais during a recent AFCEA cybersecurity event in Crystal City. “Are the things we’re doing actually making us more secure or just generating a lot of paperwork?” asks Knausenberger. “We are trying to turn everything on its head.”

As for Kim, she added, “Pete’s doing really well on his rehab program.”

One way Knausenberger has turned Kim’s head has been her approach to security certification packages for new software. Instead of developing massive cert packages for every program – documentation that’s hundreds of pages thick and unlikely to ever be read – she wants the Air Force to certify the processes used to develop software, rather than the programs themselves.

“Why don’t we think about software like meat at the grocery?” she asked. “USDA doesn’t look at every individual piece of meat… Our goal is to certify the factory, not the program.”

Similarly, Knausenberger says the Air Force is trying now to apply similar requirements to acquisition contracts, accepting the idea that since finding software vulnerabilities is inevitable, it’s best to have a plan for fixing them rather than hoping to regulate them out of existence. “So you might start seeing language that says, ‘You need to fix vulnerabilities within 10 days.’ Or perhaps we may have to pay bug bounties,” she says. “We know nothing is going to be perfect and we need to accept that. But we also need to start putting a level of commercial expectation into our programs.”

Combining development, security and operations into an integrated process – DevSecOps, in industry parlance – is the new name of the game, they argue together. The aim: Build security in during development, rather than bolting it on at the end.

The takeaway from the “Hack the Air Force” bug bounty programs run so far is that every such effort yields new vulnerabilities – and that thousands of pages of certification didn’t prevent them. As computer power becomes less costly and automation gets easier, hackers can be expected to use artificial intelligence to break through security barriers.

Continuous automated testing is the only way to combat their persistent threat, Kim said.

Michael Baker, CISO at systems integrator General Dynamics Information Technology, agrees. “The best way to find the vulnerabilities is to continuously monitor your environment and challenge your assumptions,” he says. “Hackers already use automated tools and the latest vulnerabilities to exploit systems. We have to beat them to it – finding and patching those vulnerabilities before they can exploit them. Robust and assured endpoint protection, combined with continuous, automated testing to find vulnerabilities and exploits, is the only way to do that.”

“I think we ought to get moving on automated security testing and penetration,” Kim added. “The days of RMF [risk management framework] packages are past. They’re dinosaurs. We’ve got to get to a different way of addressing security controls and the RMF process.”

JOMIS Will Take E-Health Records to the Frontlines

The Defense Department’s MHS Genesis electronic health records (EHR) system went live last October at Madigan Army Medical Center in Washington state, the biggest step so far in modernizing DOD’s vast Military Health System (MHS) with a proven commercial solution. Now comes the hard part: Tying that system in with operational medicine for deployed troops around the globe.

War zones, ships at sea and aeromedical evacuations each present a new set of challenges for digital health records. Front-line units lack the bandwidth and digital infrastructure to enable cloud-based health systems like MHS Genesis. Indeed, when bandwidth is constrained, health data ranks last on the priority list, falling below command and control, intelligence and other mission data.

The Joint Operational Medicine Information Systems (JOMIS) program office oversees DOD’s operational medicine initiatives, including the legacy Theater Medical Information Program – Joint system used in today’s operational theaters of Iraq and Afghanistan, as well as aboard ships and in other remote locales.

“One of the biggest pain points we have right now is the issue of moving data from the various roles of care, from the first responder [in the war zone] to the First Aid station to something like Landstuhl (Germany) Regional Medical Center, to something in the U.S.,” Navy Capt. Dr. James Andrew Ellzy told GovTechWorks. He is deputy program executive officer (functional) for JOMIS, under the Program Executive Office, Defense Healthcare Management Systems (PEO DHMS).

PEO DHMS defines four stages, or “roles,” of care once a patient begins to receive it: Role One, first responders; Role Two, forward resuscitative care; Role Three, theater hospitals; and Role Four, service-based medical facilities.

“Most of those early roles right now, are still using paper records,” Ellzy said. Electronic documentation begins once medical operators are in an established location. “Good records usually start the first place that has a concrete slab.”

Among the changes MHS Genesis will bring is consolidation. The legacy AHLTA (Armed Forces Health Longitudinal Technology Application) solution and its heavily modified theater-level variant, AHLTA-T, incorporate separate systems for inpatient and outpatient support.

MHS Genesis, however, will provide a single record regardless of patient status.

For deployed medical units, that’s important. Setup and maintenance of AHLTA’s outpatient records and the Joint Composite Health Care System have always been challenging.

“In order to set up the system, you have to have the technical skillset to initialize and sustain these systems,” said Ryan Loving, director of Health IT Solutions for military health services and the VA at General Dynamics Information Technology’s (GDIT) Health and Civilian Solutions Division. “This is a bigger problem for the Army than the other services, because the system is neither operated nor maintained until they go downrange. As a result, they lack the experience to be experts in setup and sustainment.”

JOMIS’ ultimate goal, according to Stacy A. Cummings, who heads PEO DHMS, is to provide a virtually seamless representation of MHS Genesis at deployed locations.

“For the first time, we’re bringing together inpatient and outpatient, medical and dental records, so we’re going to have a single integrated record for the military health system,” Cummings said at the HIMSS 2018 health IT conference in March. Last year, she told Government CIO magazine, “We are configuring the same exact tool for low- and no-communications environments.”

Therein lies the challenge, said GDIT’s Loving. “Genesis wasn’t designed for this kind of austere environment. Adapting to the unique demands of operational medicine will require a lot of collaboration with military health, with service-specific tactical networks, and an intimate understanding of those network environments today and where they’re headed in the future.”

Operating on the tactical edge – whether doing command and control or sharing medical data – is probably the hardest problem to solve, said Tom Sasala, director of the Army Architecture Integration Center and the service’s Chief Data Officer. “The difference between the enterprise environment and the tactical environment, when it comes to some of the more modern technologies like cloud, is that most modern technologies rely on an always-on, low-latency network connection. That simply doesn’t exist in a large portion of the world – and it certainly doesn’t exist in a large portion of the Army’s enterprise.”

Military units deploy into war zones and disaster zones where commercial connectivity is either highly compromised or non-existent. Satellite connectivity is limited at best. “Our challenge is how do we find commercial solutions that we cannot just adopt, but [can] adapt for our special purposes,” Sasala said.

MHS Genesis is like any modern cloud solution in that regard. In fact, it’s based on Cerner Millennium, a popular commercial EHR platform. So while it may be perfect for garrison hospitals and clinics – and ideal for sharing medical records with other agencies, civilian hospitals and health providers – the military’s operational requirements present unique circumstances unimagined by the original system’s architects.

Ellzy acknowledges the concern. “There’s only so much bandwidth,” he said. “So if medical is taking some of it, that means the operators don’t have as much. So how do we work with the operators to get that bandwidth to move the data back and forth?”

Indeed, the bandwidth and latency available via satellite links weren’t designed for such systems, nor are they sufficient to accommodate their requirements. More important, when bandwidth is constrained, military systems must line up for access, and health data is literally last on the priority list. Even ideas like using telemedicine in forward locations aren’t viable. “That works well in a hospital where you have all the connectivity you need,” Sasala said. “But it won’t work so well in an austere environment with limited connectivity.”

The legacy AHLTA-T system has a store-and-forward capability that allows local storage while connectivity is constrained or unavailable, with data forwarded to a central database once it’s back online. Delays mean documentation may not be available at subsequent locations when patients are moved from one level of care to the next.

The challenge for JOMIS will be to find a way to work in theater and then connect and share saved data while overcoming the basic functional challenges that threaten to undermine the system in forward locations.

“I’ll want the ability to go off the network for a period of time,” Ellzy said, “for whatever reason, whether I’m in a place where there isn’t a network, or my network goes down or I’m on a submarine and can’t actually send information out.”

AHLTA-T manages the constrained or disconnected network situation by allowing the system to operate on a stand-alone computer (or network configuration) at field locations, relying on built-in store-and-forward functionality to save medical data locally until it can be forwarded to the Theater Medical Data Store and Clinical Data Repository. There, it can be accessed by authorized medical personnel worldwide.
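The store-and-forward pattern itself is straightforward to sketch: write each encounter record to durable local storage immediately, and drain the queue to the central repository whenever a connection is available. The illustration below is generic; the function names and the upload stub are hypothetical and do not represent AHLTA-T, JOMIS or MHS Genesis code.

```python
# Generic store-and-forward sketch for disconnected operation.
# Function names and the upload stub are hypothetical; this does not
# represent AHLTA-T, JOMIS or MHS Genesis code.
import json
from pathlib import Path

LOCAL_QUEUE = Path("pending_records.jsonl")  # durable local store

def record_encounter(record: dict):
    """Always write locally first, so no documentation is lost while offline."""
    with LOCAL_QUEUE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def connectivity_available() -> bool:
    """Stub: in the field this would test the tactical network link."""
    return True

def upload(record: dict) -> bool:
    """Stub for pushing one record to the central clinical data repository."""
    print("forwarded:", record["patient_id"], record["note"])
    return True

def forward_pending():
    """Drain the local queue whenever a link is up; keep anything that fails."""
    if not LOCAL_QUEUE.exists() or not connectivity_available():
        return
    remaining = []
    for line in LOCAL_QUEUE.read_text(encoding="utf-8").splitlines():
        record = json.loads(line)
        if not upload(record):
            remaining.append(line)
    LOCAL_QUEUE.write_text("\n".join(remaining) + ("\n" if remaining else ""),
                           encoding="utf-8")

if __name__ == "__main__":
    record_encounter({"patient_id": "R1-0042", "note": "Role One field treatment"})
    forward_pending()
```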

Engineering a comparable JOMIS solution will be complex and involve working around and within the MHS Genesis architecture, leveraging innovative warfighter IT infrastructure wherever possible. “We have to adapt Genesis to the store-and-forward architecture without compromising the basic functionality it provides,” said GDIT’s Loving.

Ellzy acknowledges that the compromises necessary to make AHLTA-T work led to unintended consequences.

“When you look at the legacy AHLTA versus the AHLTA-T, there are some significant differences,” he said. Extra training is necessary to use the combat theater version. That shouldn’t be the case with JOMIS. “The desire with Genesis,” Ellzy said, “is that medical personnel will need significantly less training – if any – as they move from the garrison to the deployed setting.”

Reporter Jon Anderson contributed to this report.
