Employees Wanting Mobile Access May Get It as 5G Services Come Into Play

Just about every federal employee has a mobile device: Many carry two – one for work and one for personal use. Yet by official policy, most federal workers cannot access work email or files from a personal phone or tablet. Those with government-owned devices usually are limited to using them for email, calendar or Internet searches.

Meanwhile, many professionals use a work or personal phone for a myriad of tasks. In a world where more than 70 percent of Internet traffic involves a mobile device, government workers are frequently taking matters into their own hands.

According to a recent FedScoop study of 168 federal employees and others in the federal sector, only 35 percent said their managers supported the use of personal mobile devices for official business. Yet 74 percent said they regularly use personally owned tablets to get their work done. Another 49 percent said they regularly use personal smartphones.

In other words, employees routinely flout the rules – either knowingly or otherwise – to make themselves more productive.

“They’re used to having all this power in their hand, being able to upgrade and download apps, do all kinds of things instantaneously, no matter where they are,” says Michael Wilkerson, senior director for end-user computing and mobility at VMware Federal, the underwriter for the research study conducted by FedScoop. “The workforce is getting younger and employees are coming in with certain expectations.”

Those expectations include mobile. At the General Services Administration (GSA), where more than 90 percent of jobs are approved for telework and where most staff do not have permanent desks or offices, each employee is issued a mobile device and a laptop. “There’s a philosophy of anytime, anywhere, any device,” says Rick Jones, Federal Mobility 2.0 Manager at GSA. Employees can log into virtual desktop infrastructure to access any of their work files from any device. “Telework is actually a requirement at GSA. You are expected to work remotely one or two days a week,” he says, so the agency is really serious about making employees entirely independent of conventional infrastructure. “We don’t even have desks,” he says. “You need to register for a cube in advance.”

That kind of mobility is likely to increase in the future, especially as fifth-generation (5G) mobile services come into play. With more wireless connections installed more densely, 5G promises data speeds that could replace conventional wired infrastructure, save wiring costs and increase network flexibility – all while significantly increasing the number of mobile-enabled workers.

Shadow IT
When Information Technology (IT) departments don’t give employees the tools and applications they need or want to get work done, they’re likely to go out and find their own, using cloud-based apps they can download to their phones, tablets and laptops.

Rajiv Gupta, president of Skyhigh Networks of Campbell, Calif., which provides a cloud-access security service, says his company found that users in any typical organization – federal, military or commercial – access more than 1,400 cloud-based services, often invisibly to IT managers. Such uses may be business or personal, but either can have an impact on security if devices are being used for both. Staff may be posting on Facebook, Twitter and LinkedIn, any of which could be personal but could also be official or in support of professional aims. Collaboration tools like Basecamp, Box, Dropbox or Slack are often easy means of setting up unofficial work groups to share files when solutions like SharePoint come up short. Because such uses are typically invisible to the organization, he says, they create a “more insidious situation” – the potential for accidental information leaks or purposeful data exfiltrations by bad actors inside the organization.

“If you’re using a collaboration service like Microsoft 365 or Box, and I share a file with you, what I’m doing is sharing a link – there’s nothing on the network that I can observe to see the files moving,” he says. “More than 50 percent of all data security leaks in a service like 365 is through these side doors.”

The organization may offer users the ability to use OneDrive or Slack, but if users perceive those as difficult or the access controls as unwieldy (user authentication is among mobile government users’ biggest frustrations, according to the VMware/FedScoop study), they will opt for their own solutions, using email to move data out of the network and then collaborating beyond the reach of the IT and security staff.

While some such instances may be nefarious – as in the case of a disgruntled employee, for example – most are simply manifestations of well-meaning employees trying to get their work done as efficiently as possible.

“So employees are using services that you and I have never even heard of,” Gupta says, services like Zippyshare, Footlocker and Findspace. Since most of these are simply classified as “Internet services,” standard controls may not be effective in blocking them, because shutting down the whole category is not an option, Gupta says. “If you did you would have mutiny on your hands.” So network access controls need to be narrowly defined and operationalized through whitelisting or blacklisting of sites and apps.
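
To make that concrete, here is a minimal sketch of how a narrowly scoped access control might be operationalized as a proxy-style allow/deny/review decision. The domain names and category defaults below are illustrative placeholders, not drawn from any agency’s actual policy.

```python
# Proxy-style allow/deny/review decision for outbound cloud-service requests.
# Domain lists and category defaults are illustrative, not real policy.

BLOCKED_DOMAINS = {"badshare.example.com", "unknown-upload.example.net"}
APPROVED_DOMAINS = {"onedrive.example.com", "slack.example.com"}

# Blocking the whole "Internet services" category would cause the mutiny
# Gupta warns about, so unknown sites in broad categories are flagged for
# review rather than blocked outright.
CATEGORY_DEFAULTS = {
    "file_sharing": "deny",
    "collaboration": "review",
    "internet_services": "review",
}

def evaluate_request(domain: str, category: str) -> str:
    """Return 'allow', 'deny' or 'review' for an outbound web request."""
    if domain in BLOCKED_DOMAINS:
        return "deny"
    if domain in APPROVED_DOMAINS:
        return "allow"
    return CATEGORY_DEFAULTS.get(category, "review")

print(evaluate_request("onedrive.example.com", "collaboration"))        # allow
print(evaluate_request("badshare.example.com", "file_sharing"))         # deny
print(evaluate_request("new-service.example.org", "internet_services")) # review
```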

Free services are a particular problem because employees don’t see the risk, says Sean Kelley, chief information security officer at the Environmental Protection Agency (EPA). At an Institute for Critical Infrastructure conference in May, he said the problem traces back to the notion that free or subscription services aren’t the same as information technology. “A lot of folks said, well, it’s cloud, so it’s not IT,” he said. “But as we move from network-based security to data security, we need to know where our data is going.”

The Federal Information Technology Acquisition Reform Act was supposed to empower chief information officers (CIOs) by giving them more control over such purchases. But regulating free services and understanding the extent to which users may be using them is extremely difficult, whether in government or the private sector. David Summitt, chief information security officer (CISO) at the Moffitt Cancer Center in Tampa, Fla., described an email he received from a salesman representing a cloud service provider. The email contained a list of more than 100 Moffitt researchers who were using his company’s technology – all unbeknownst to the CISO. His immediate reply: “I said thank you very much – they won’t be using your service tomorrow.” Then he shut down access to that domain.

Controlling Mobile Use
Jon Johnson, program manager for enterprise mobility at GSA, acknowledges that even providing access to email opens the door to much wider use of mobile technology. “I too download and open documents to read on the Metro,” he said. “The mobile devices themselves do make it more efficient to run a business. The question is, how can a CIO create tools and structures so their employees are more empowered to execute their mission effectively, and in a way that takes advantage not only of the mobile devices themselves, but also helps achieve a more efficient way of operating the business?”

Whether agencies choose to whitelist approved apps or blacklist high-risk ones, Johnson said, every agency needs to nail down the solution that best applies to its needs. “Whether they have the tools that can continually monitor those applications on the end point, whether they use vetting tools,” he said, each agency must make its own case. “Many agencies, depending on their security posture, are going to have those applications vetted before they even deploy their Enterprise Mobility Management (EMM) onto that device. There is no standard for this because the security posture for the Defense Information Systems Agency (DISA) and the FBI are different from GSA and the Department of Education.

“There’s always going to be a tradeoff between the risk of allowing your users to use something in a way that you may not necessarily predict versus locking everything down,” says Johnson.

Johnson and GSA have worked with a cross-agency mobile technology tiger team for years to try to nail down standards and policies that can make rolling out a broader mobile strategy easier on agency leaders. “Mobility is more than carrier services and devices,” he says. “We’ve looked at application vetting, endpoint protection, telecommunication expense management and emerging tools like virtual mobile interfaces.” He adds they’ve also examined the evolution of mobile device management solutions to more modern enterprise mobility management systems that take a wider view of the mobile world.

Today, agencies are trying to catch up to the private sector and overcome the government’s traditionally limited approach to mobility. At the United States Agency for International Development (USAID), chief technologist and special advisor to the CIO Lon Gowan says half the agency’s staff work in far-flung remote locations, many of them austere. “We generally treat everyone as a mobile worker,” Gowan says.

Federal agencies remain leery of adopting bring-your-own-device policies, just as many federal employees are leery of giving their agencies access to their personal information. While older mobile device management software gave organizations the ability to monitor activity and wipe entire devices, today’s enterprise management solutions enable devices to effectively be split, containing both personal and business data. And never the twain shall meet.

“We can either allow a fully managed device or one that’s self-owned, where IT manages a certain portion of it,” says VMware’s Wilkerson. “You can have a folder that has a secure browser, secure mail, secure apps and all of that only works within that container. You can set up secure tunneling so each app can do its own VPN tunnel back to the corporate enterprise. Then, if the tunnel gets shut down or compromised, it shuts off whatever application or browser is leveraging that tunnel.”
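
The per-app tunneling behavior Wilkerson describes can be sketched conceptually: each managed app depends on its own tunnel back to the enterprise, and when a tunnel is reported down or compromised, the app that relies on it is suspended. The class and method names below are hypothetical, not any EMM vendor’s API.

```python
# Hypothetical per-app VPN tunnel bookkeeping: if an app's tunnel is no
# longer healthy, the app gets suspended.

from dataclasses import dataclass

@dataclass
class AppTunnel:
    app_name: str
    healthy: bool = True

class ContainerPolicy:
    def __init__(self) -> None:
        self.tunnels: dict[str, AppTunnel] = {}

    def register(self, app_name: str) -> None:
        # Each managed app gets its own tunnel back to the enterprise.
        self.tunnels[app_name] = AppTunnel(app_name)

    def report_compromised(self, app_name: str) -> None:
        self.tunnels[app_name].healthy = False

    def apps_to_suspend(self) -> list[str]:
        # Apps whose tunnels are down or compromised get shut off.
        return [t.app_name for t in self.tunnels.values() if not t.healthy]

policy = ContainerPolicy()
for app in ("secure_mail", "secure_browser"):
    policy.register(app)

policy.report_compromised("secure_browser")
print(policy.apps_to_suspend())  # ['secure_browser']
```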

Another option is to use mobile-enabled virtual desktops where applications and data reside in a protected cloud environment, according to Chris Barnett, chief technology officer for GDIT’s Intelligence Solutions Division. “With virtual desktops, only a screen image needs to be encrypted and communicated to the mobile device. All the sensitive information remains back in the highly secure portion of the enterprise. That maintains the necessary levels of protection while at the same time enabling user access anywhere, anytime.”

When it comes to classified systems, of course, the bar moves higher as risks associated with a compromise increase. Neil Mazuranic, DISA’s Mobility Capabilities branch chief in the DoD Mobility Portfolio Management Office, says his team can hardly keep up with demand. “Our biggest problem at DISA, at the secret and top secret levels, is that we don’t have enough devices to go around,” he says. “Demand is much greater than the supply. We’re taking actions to push more phones and tablets out there.” But capacity will likely be a problem for a while.

The value is huge, however, because the devices allow senior leaders “to make critical, real-world, real-time decisions without having to be tied to a specific place,” he says. “We want to stop tying people to their desks and allow them to work wherever they need to work, whether it’s classified work or unclassified.”

DISA is working to increase the number of classified phones, using Windows devices that can be locked down more tightly than iOS or Android devices. Because those products are outside the mainstream, the software can be better controlled. In the unclassified realm, DISA secures both iOS and Android devices using managed solutions allowing dual office and personal use. For iOS, a managed device solution establishes a virtual wall in which some apps and data are managed and controlled by DISA, while others are not.

“All applications that go on the managed side of the devices, we evaluate and make sure they’re approved to use,” DISA’s Mazuranic told GovTechWorks. “There’s a certain segment that passes with flying colors and that we approve, and then there are some questionable ones that we send to the authorizing official to accept the risk. And there are others that we just reject outright. They’re just crazy ones.”

Segmenting the devices, however, gives users freedom to download apps for their personal use with a high level of assurance that those apps cannot access the controlled side of the device. “On the iOS device, all of the ‘for official use only’ (FOUO) data is on the managed side of the device,” he said. “All your contacts, your email, your downloaded documents, they’re all on the managed side. So when you go to the Apple App Store and download an app, that’s on the unmanaged side. There’s a wall between the two. So if something is trying to get at your contacts or your data, it can’t, because of that wall. On the Android device, it’s similar: There’s a container on the device, and all the FOUO data on the device is in that container.”
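
A toy model helps show the idea behind that “wall”: every app and data item is tagged as managed or unmanaged, and any attempt by an unmanaged app to read managed data is refused. This is an illustration of the concept only, not DISA’s implementation.

```python
# Toy model of the managed/unmanaged "wall": personal (unmanaged) apps can
# never read data held on the managed side. Conceptual only.

MANAGED_DATA = {"contacts", "email", "fouo_documents"}

def can_access(app_side: str, data_item: str) -> bool:
    """Only apps on the managed side may touch managed (FOUO) data."""
    if data_item in MANAGED_DATA and app_side != "managed":
        return False
    return True

assert can_access("managed", "email")            # managed mail app: allowed
assert not can_access("unmanaged", "contacts")   # app-store download: blocked
```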

Close the Data Center, Skip the TIC – How One Agency Bought Big Into Cloud

It’s no longer a question of whether the federal government is going to fully embrace cloud computing. It’s how fast.

With the White House pushing for cloud services as part of its broader cybersecurity strategy and budgets getting squeezed by the administration and Congress, chief information officers (CIOs) are coming around to the idea that the faster they can modernize their systems, the faster they’ll be able to meet security requirements. And that once in the cloud, market dynamics will help them drive down costs.

“The reality is, our data-center-centric model of computing in the federal government no longer works,” says Chad Sheridan, CIO at the Department of Agriculture’s Risk Management Agency. “The evidence is that there is no way we can run a federal data center at the moderate level or below better than what industry can do for us. We don’t have the resources, we don’t have the energy and we are going to be mired with this millstone around our neck of modernization for ever and ever.”

Budget pressure, demand for modernization and concern about security all combine as a forcing function that should be pushing most agencies rapidly toward broad cloud adoption.

Joe Paiva, CIO at the International Trade Administration (ITA), agrees. He used an expiring lease as leverage to force his agency into the cloud soon after he joined ITA three years ago. Time and again the lease was presented to him for a signature and time and again, he says, he tore it up and threw it away.

Finally, with the clock ticking on his data center, Paiva’s staff had to perform a massive “lift and shift” operation to keep services running. Systems were moved to the Amazon cloud.  Not a pretty transition, he admits, but good enough to make the move without incident.

“Sometimes lift and shift actually makes sense,” Paiva told federal IT specialists at the Advanced Technology Academic Research Center’s (ATARC) Cloud and Data Center Summit. “Lift and shift actually gets you there, and for me that was the key – we had to get there.”

At first, he said, “we were no worse off or no better off.” With systems and processes that hadn’t been designed for cloud, however, costs were high. “But then we started doing the rationalization and we dropped our bill 40 percent. We were able to rationalize the way we used the service, we were able to start using more reserve things instead of ad hoc.”

That rationalization included cutting out software and services licenses that duplicated other enterprise solutions. Microsoft Office 365, for example, provided every user with a OneDrive account in the cloud. Getting users to save their work there meant his team no longer had to support local storage and backup, and the move to shared virtual drives instead of local ones improved worker productivity.

With 226 offices around the world, offloading all that backup was significant. To date, all but a few remote locations have made the switch. Among the surprise benefits: happier users. Once they saw how much easier things were with shared drives that were accessible from anywhere, he says, “they didn’t even care how much money we were saving or how much more secure they were – they cared about how much more functional they suddenly became.”

Likewise, Office 365 provided Skype for Business – meaning the agency could eliminate expensive stand-alone conferencing services, another source of savings.

Cost savings matter. Operating in the cloud, ITA’s annual IT costs per user are about $15,000 – less than half the average for the Commerce Department as a whole ($38,000/user/year), or the federal government writ large ($39,000/user/year), he said.

“Those are crazy high numbers,” Paiva says. “That is why I believe we all have to go to the cloud.”

In addition to Office 365, ITA uses Amazon Web Services (AWS) for infrastructure and Salesforce to manage the businesses it supports, along with several other cloud services.

“Government IT spending is out of freaking control,” Paiva says, noting that budget cuts provide incentive for driving change that might not come otherwise. “No one will make the big decisions if they’re not forced to make them.”

Architecture and Planning
If getting to the cloud is now a common objective, figuring out how best to make the move is unique to every user.

“When most organizations consider a move to the cloud, they focus on the ‘front-end’ of the cloud experience – whether or not they should move to the cloud, and if so, how will they get there,” says Srini Singaraju, chief cloud architect at General Dynamics Information Technology, a systems integrator. “However, organizations commonly don’t give as much thought to the ‘back-end’ of their cloud journey: the new operational dynamics that need to be considered in a cloud environment or how operations can be optimized for the cloud, or what cloud capabilities they can leverage once they are there.”

Rather than lift and shift and then start looking for savings, Singaraju advocates planning carefully what to move and what to leave behind. Designing systems and processes to take advantage of the cloud’s speed and avoid its potential pitfalls not only makes the transition go more smoothly, it saves money over time.

“Sometimes it just makes more sense to retire and replace an application instead of trying to lift and shift,” Singaraju says. “How long can government maintain and support legacy applications that can pose security and functionality related challenges?”

The challenge is getting there. The number of cloud providers that have won provisional authority to operate under the 5-year-old Federal Risk and Authorization Management Program (FedRAMP) is still relatively small: just 86 with another 75 still in the pipeline. FedRAMP’s efforts to speed up the process are supposed to cut the time it takes to earn a provisional authority to operate (P-ATO) from as much as two years to as little as four months. But so far only three cloud providers have managed to get a product through FedRAMP Accelerated – the new, faster process, according to FedRAMP Director Matt Goodrich. Three more are in the pipeline with a few others lined up behind those, he said.

Once an agency or the FedRAMP Joint Authorization Board has authorized a cloud solution, other agencies can leverage their work with relatively little effort. But even then, moving an application from its current environment is an engineering challenge. Determining how to manage workflow and the infrastructure needed to make a massive move to the cloud work is complicated.

At ITA, for example, Paiva determined that cloud providers like AWS, Microsoft Office 365 and Salesforce had sufficient security controls in place that they could be treated as a part of his internal network. That meant user traffic could be routed directly to them, rather than through his agency’s Trusted Internet Connection (TIC). That provided a huge infrastructure savings because he didn’t have to widen that TIC gateway to accommodate all that routine work traffic, all of which in the past would have stayed inside his agency’s network.

Rather than a conventional “castle-and-moat” architecture, Paiva said he had to interpret the mandate to use the TIC “in a way that made sense for a borderless network.”

“I am not violating the mandate,” he said. “All my traffic that goes to the wild goes through the TIC. I want to be very clear about that. If you want to go to www-dot-name-my-whatever-dot-com, you’re going through the TIC. Office 365? Salesforce? Service Now? Those FedRAMP-approved, fully ATO’d applications that I run in my environment? They’re not external. My Amazon cloud is not external. It is my data center. It is my network. I am fulfilling the intent and letter of the mandate – it’s just that the definition of what is my network has changed.”
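
The routing logic Paiva describes can be approximated in a few lines: destinations that are FedRAMP-authorized and treated as part of the agency network go direct, while everything “going to the wild” is steered through the TIC gateway. The hostname suffixes below are placeholders, not ITA’s actual configuration.

```python
# Sketch of split routing: FedRAMP-authorized services treated as part of
# the agency network go direct; everything else goes through the TIC.

TRUSTED_CLOUD_SUFFIXES = (
    ".office365.example.com",
    ".salesforce.example.com",
    ".agency-vpc.amazonaws.example.com",
)

def next_hop(destination_host: str) -> str:
    """Return 'direct' for in-boundary cloud services, else 'tic_gateway'."""
    if destination_host.endswith(TRUSTED_CLOUD_SUFFIXES):
        return "direct"
    return "tic_gateway"

print(next_hop("mail.office365.example.com"))        # direct
print(next_hop("www.name-my-whatever.example.com"))  # tic_gateway
```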

Todd Gagorik, senior manager for federal services at AWS, said this approach is starting to take root across the federal government. “People are beginning to understand this clear reality: If FedRAMP has any teeth, if any of this has any meaning, then let’s embrace it and actually use it as it’s intended to be used most efficiently and most securely. If you extend your data center into AWS or Azure, those cloud environments already have these certifications. They’re no different than your data center in terms of the certifications that they run under. What’s important is to separate that traffic from the wild.”

ATARC has organized a working group of government technology leaders to study the network boundary issue and recommend possible changes to the policy, said Tom Suder, ATARC president. “When we started the TIC, that was really kind of pre-cloud, or at least the early stages of cloud,” he said. “It was before FedRAMP. So like any policy, we need to look at that again.” Acting Federal CIO Margie Graves is a reasonable player, he said, and will be open to changes that make sense, given how much has changed since then.

Indeed, the whole concept of a network’s perimeter has been changed by the introduction of cloud services, Office of Management and Budget’s Grant Schneider, the acting federal chief information security officer (CISO), told GovTechWorks earlier this year.

Limiting what needs to go through the TIC and what does not could have significant implications for cost savings, Paiva said. “It’s not chump change,” he said. “That little architectural detail right there could be billions across the government that could be avoided.”

But changing the network perimeter isn’t trivial. “Agency CIOs and CISOs must take into account the risks and sensitivities of their particular environment and then ensure their security architecture addresses all of those risks,” says GDIT’s Singaraju. “A FedRAMP-certified cloud is a part of the solution, but it’s only that – a part of the solution. You still need to have a complete security architecture built around it. You can’t just go to a cloud service provider without thinking all that through first.”

Sheridan and others involved in the nascent Cloud Center of Excellence see the continued drive to the cloud as inevitable. “The world has changed,” he says. “It’s been 11 years since these things first appeared on the landscape. We are in exponential growth of technology, and if we hang on to our old ideas we will not continue. We will fail.”

His ad hoc, unfunded group includes some 130 federal employees from 48 agencies and sub-agencies and operates independently of vendors, think tanks, lobbyists or others with a political or financial interest in the group’s output. “We are a group of people who are struggling to drive our mission forward and coming together to share ideas and experience to solve our common problems and help others to adopt the cloud,” Sheridan says. “It’s about changing the culture.”

Wanted: Metrics for Measuring Cyber Performance and Effectiveness

Chief information security officers (CISOs) face a dizzying array of cybersecurity tools to choose from, each loaded with features and promised capabilities that are hard to measure or judge.

That leaves CISOs trying to balance unknown risks against growing costs, without a clear ability to justify the return on their cybersecurity investment. Not surprisingly, today’s high-threat environment makes it preferable to choose safe over sorry – regardless of cost. But is there a better way?

Some cyber insiders believe there is.

Acting U.S. Federal Chief Information Officer (CIO) Margie Graves acknowledges the problem.

“Defining the measure of success is hard sometimes, because it’s hard to measure things that don’t happen,” Graves said. President Trump’s Executive Order on Cybersecurity asks each agency to develop its own risk management plan, she noted. “It should be articulated on that plan how every dollar will be applied to buying down that risk.”

There is a difference though, between a plan and an actual measure. A plan can justify an investment intended to reduce risk. But judgment, rather than hard knowledge, will determine how much risk is mitigated by any given tool.

The Defense Information Systems Agency (DISA) and the National Security Agency (NSA) have been trying to develop a methodology for measuring the actual value of a given cyber tool’s performance. Their NIPRNet/SIPRNet Cyber Security Architecture Review (NSCSAR – pronounced “NASCAR”) is a classified effort to define a framework for measuring cybersecurity performance, said DISA CIO and Risk Management Executive John Hickey.

“We just went through a drill of ‘what are those metrics that are actually going to show us the effectiveness of those tools,’ because a lot of times we make an investment, people want a return on that investment,” he told GovTechWorks in June. “Security is a poor example of what you are going after. It is really the effectiveness of the security tools or compliance capabilities.”

The NSCSAR review, conducted in partnership with NSA and the Defense Department, may point to a future means of measuring cyber defense capability. “It is a framework that actually looks at the kill chain, how the enemy will move through that kill chain and what defenses we have in place,” Hickey said, adding that NSA is working with DISA on an unclassified version of the framework that could be shared with other agencies or the private sector to measure cyber performance.

“It is a methodology,” Hickey explained. “We look at the sensors we have today and measure what functionality they perform against the threat.… We are tracking the effectiveness of the tools and capabilities to get after that threat, and then making our decisions on what priorities to fund.”
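
NSCSAR itself is classified, but the general approach Hickey outlines – mapping deployed defenses to kill-chain stages and scoring coverage to find gaps – can be sketched as a simple matrix. The stage names, tools and effectiveness numbers below are invented for illustration only.

```python
# Illustrative kill-chain coverage matrix: map each deployed tool to the
# stages it defends and combine the scores to find gaps.

KILL_CHAIN = ["recon", "delivery", "exploitation", "lateral_movement", "exfiltration"]

TOOL_COVERAGE = {
    "email_gateway":     {"delivery": 0.7},
    "ngfw":              {"delivery": 0.5, "exploitation": 0.4},
    "endpoint_agent":    {"exploitation": 0.6, "lateral_movement": 0.5},
    "netflow_analytics": {"lateral_movement": 0.3, "exfiltration": 0.6},
}

def stage_coverage(stage: str) -> float:
    """Combine per-tool scores, treating detections as independent events."""
    miss = 1.0
    for scores in TOOL_COVERAGE.values():
        miss *= 1.0 - scores.get(stage, 0.0)
    return 1.0 - miss

for stage in KILL_CHAIN:
    print(f"{stage:16s} coverage: {stage_coverage(stage):.2f}")
# A stage with near-zero coverage (here, recon) is the kind of gap that
# would drive funding priorities in a framework like this.
```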

Measuring Security
NSS Labs Inc. independently tests the cybersecurity performance of firewalls and other cyber defenses, annually scoring products’ performances. The Austin, Texas, company evaluated 11 next-generation firewall (NGFW) products from 10 vendors in June 2017, comparing the effectiveness of their security performance, as well as the firewalls’ stability, reliability and total cost of ownership.

In the test, products were presumed to be able to provide basic packet filtering, stateful multi-layer inspection, network address translation, virtual private network capability, application awareness controls, user/group controls, integrated intrusion prevention, reputation services, anti-malware capabilities and SSL inspection. Among the findings (a short sketch of how such figures roll up follows the list):

  • Eight of 11 products tested scored “above average” in terms of both performance and cost-effectiveness; three scored below average
  • Overall security effectiveness ranged from as low as 25.8 percent to as high as 99.9 percent; average security effectiveness was 67.3 percent
  • Four products scored below 78.5 percent
  • Total cost of ownership ranged from $5 per protected megabit/second to $105, with an average of $22
  • Nine products failed to detect at least one evasion, while only two detected all evasion attempts
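
As a rough illustration of how per-product results roll up into comparative figures like those above, the sketch below computes range, average effectiveness and average cost per protected Mbps from a handful of made-up records; the real data comes from NSS’s published reports.

```python
# Made-up per-product results, rolled up the way a comparative report would.
products = {
    "firewall_a": {"effectiveness": 99.9, "cost_per_mbps": 5},
    "firewall_b": {"effectiveness": 91.2, "cost_per_mbps": 14},
    "firewall_c": {"effectiveness": 78.3, "cost_per_mbps": 28},
    "firewall_d": {"effectiveness": 25.8, "cost_per_mbps": 105},
}

eff = [p["effectiveness"] for p in products.values()]
cost = [p["cost_per_mbps"] for p in products.values()]

print(f"effectiveness range: {min(eff)}% to {max(eff)}%")
print(f"average effectiveness: {sum(eff) / len(eff):.1f}%")
print(f"average cost per protected Mbps: ${sum(cost) / len(cost):.0f}")
```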

NSS conducted similar tests of advanced endpoint protection tools, data center firewalls, and web application firewalls earlier this year.

But point-in-time performance tests don’t provide a reliable measure of ongoing performance. And measuring the effectiveness of a single tool does not necessarily indicate how well it performs its particular duties as part of a suite of tools, notes Robert J. Carey, vice president within the Global Solutions division at General Dynamics Information Technology (GDIT). The former U.S. Navy CIO and Defense Department principal deputy CIO says that though these tests are valuable, it remains hard to quantify and compare the performance of different products in an organization’s security stack.

The evolution and blurring of the lines between different cybersecurity tools – from firewalls to intrusion detection and prevention, gateways, traffic analysis tools, threat intelligence, anomaly detection and so on – mean it’s easy to add another tool to one’s stack, but, as with any multivariate function, hard to be sure of each tool’s individual contribution to threat protection, or which tools you could do without.

“We don’t know what an adequate cyber security stack looks like. What part of the threat does the firewall protect against, the intrusion detection tool, and so on?” Carey says. “We perceive that the tools are part of the solution. But it’s difficult to quantify the benefit. There’s too much marketing fluff about features and not enough facts.”

Mike Spanbauer, vice president of research strategy at NSS, says this is a common concern, especially in large, managed environments — as is the case in many government instances. One way to address it is to replicate the security stack in a test environment and experiment to see how tools perform against a range of known, current threats while under different configurations and settings.

Another solution is to add one more tool to monitor and measure performance. NSS’ Cyber Advanced Warning System (CAWS) provides continuous security validation monitoring by capturing live threats and then injecting them into a test environment mirroring customers’ actual security stacks. New threats are identified and tested non-stop. If they succeed in penetrating the stack, system owners are notified so they can update their policies to stop that threat in the future.

“We harvest the live threats and capture those in a very careful manner and preserve the complete properties,” Spanbauer said. “Then we bring those back into our virtual environment and run them across the [cyber stack] and determine whether it is detected.”

Adding more tools and solutions isn’t necessarily what Carey had in mind. While that monitoring may reduce risk, it also adds another expense.

And measuring value in terms of return on investment is a challenge when every new tool adds real cost and results are so difficult to define. In cybersecurity, though managing risk has become the name of the game, actually calculating risk is hard.

The National Institute of Standards and Technology (NIST) created the 800-53 security controls and the cybersecurity risk management framework that encompass today’s best practices. Carey worries that risk management delivers an illusion of security by accepting some level of vulnerability depending on the level of investment. The trouble with that is that it drives a compliance culture in which security departments focus on following the framework more than defending the network and securing its applications and data.

“I’m in favor of moving away from risk management,” GDIT’s Carey says. “It’s what we’ve been doing for the past 25 years. It’s produced a lot of spend, but no measurable results. We should move to effects-based cyber. Instead of 60 shades of gray, maybe we should have just five well defined capability bands.”

The ultimate goal: Bring compliance into line with security so that doing the former delivers the latter. But the evolving nature of cyber threats suggests that may never be possible.

Automated tools will only be as good as the data and intelligence built into them. True, automation improves speed and efficiency, Carey says. “But it doesn’t necessarily make me better.”

System owners should be able to look at their cyber stack and determine exactly how much better security performance would be if they added another tool or upgraded an existing one. If that were the case, they could spend most of their time focused on stopping the most dangerous threats – zero-day vulnerabilities that no tool can identify because they’ve never been seen before – rather than ensuring all processes and controls are in place to minimize risk in the event of a breach.

Point-in-time measures based on known vulnerabilities and available threats help, but may be blind to new or emerging threats of the sort that the NSA identifies and often keeps secret.

The NSCSAR tests DISA and NSA perform include that kind of advanced threat. Rather than trying to measure overall security, they’ve determined that breaking it down into the different levels of security makes sense. Says DISA’s Hickey: “You’ve got to tackle ‘what are we doing at the perimeter, what are we doing at the region and what are we doing at the endpoint.’” A single overall picture isn’t really possible, he says. Rather, one has to ask: “What is that situational awareness? What are those gaps and seams? What do we stop [doing now] in order to do something else? Those are the types of measurements we are looking at.”

New Cyber Standards for IoT Ease – But Won’t Solve – Security Challenge

The first independent standard for cybersecurity for the Internet of Things (IoT) was approved earlier this month, following two years of debate and discussion over how to measure and secure such devices.

The American National Standards Institute (ANSI) approved UL 2900-1, General Requirements for Software Cybersecurity for Network-Connectable Products, as a standard on July 5. The effort was spearheaded by Underwriters Laboratories (UL), which is preparing two more standards to follow: UL 2900-2-1, which defines requirements for network-connectable components of healthcare systems, and UL 2900-2-2, which does the same for industrial control systems.

The three establish the first standard security protocols for software-controlled IoT devices, such as access controls, industrial controls for lighting and mechanical systems, internet-connected medical devices and more. They also offer potential answers to major worries about the lack of security built into such devices thus far.

The Internet of Things promises unparalleled opportunities to track, control and manage everything from lights and security cameras to pacemakers and medication delivery systems. But concerns about security, driven by real-world events in which unsecured IoT devices were co-opted in coordinated botnet attacks, have raised anxiety levels about the risks posed by connecting so many devices to the internet.

Those concerns have prompted government leaders from the Pentagon to Congress to call on industry to embrace security standards as a mark of quality and establish voluntary independent testing programs to assure customers that products are safe. The underlying warning: Either industry figures out how to police itself or government regulators will step in to fill the void.

Whether that is enough to inspire more companies to step up to the standards challenge remains unclear. “The market – that is, individual, corporate and government customers – has yet to put a price on IoT security in the same way that other markets have to determine the relative value of energy-efficient appliances or crash-worthy automobiles,” said Chris Turner, solutions architect with systems integrator General Dynamics Information Technology. “The market would benefit from standards. They’d help vendors back up product claims and integrators speed up adoption and implementation, which in turn would increase security and probably drive down prices, as well.”

A standards regimen could change that equation, suggests Steven Walker, acting director of the Defense Advanced Research Projects Agency (DARPA).

“What if customers were made aware of unsecure products and the companies that made them?” he asked at the AFCEA Defensive Cyber Symposium in June. “I’m pretty sure customers would buy the more secure products.”

As recently as Oct. 21, 2016, the Mirai botnet attack crippled Internet services provider Dyn via an international network of security cameras that launched an onslaught of bogus data requests on Dyn servers, peaking at about 1.2 terabits per second. The attack brought down many of the most popular sites on the Internet.

Kevin Fu, director of the Archimedes Center for Medical Device Security and the Security and Privacy Research Group at the University of Michigan and the co-founder and chief scientist at Virta Labs, a startup medical device security firm, told the House Energy and Commerce Committee that the underlying problem is one of market failure.

“We are in this sorry and deteriorating state because there is almost no cost to a manufacturer for deploying [IoT] products with poor security to consumers,” he said at a November hearing. “Has a consensus body or federal agency issued a meaningful IoT security standard? Not yet. Is there a national testing lab to verify and assess the pre-market security of IoT devices? No. Is there a tangible cost to any company that puts an insecure IoT device into the market? I don’t think so.”

Could UL 2900 answer that need? Though Fu isn’t quite ready to endorse it, he did suggest the concept is sound.

“We know from the mathematician Gödel that it’s impossible to have both a sound and complete set of standards for any non-trivial problem,” Fu told GovTechWorks. “However, standards are important to improve security and simplify the problem to make it more tractable. No approach will completely solve security, but standards, sound engineering principles and experience gained through failure are necessary ingredients for reasonable defense.”

Developing the Standard
UL 2900 provides guidelines for how to evaluate and test connected products, including a standard approach to software analysis, efforts to root out embedded malware and process and control requirements for establishing IoT security risk controls in the architecture, design and long-term risk management of the product.

Rather than focus on hardware devices first, UL focused on software after initial conversations with the Department of Homeland Security (DHS), said Ken Modeste, leader of cybersecurity services at UL. “One of DHS’s biggest challenges was their software supply chain,” he said. DHS was concerned about commercial software products running on computer systems, as well as industrial control software running the agency’s operations technology, such as air conditioning, lighting and building or campus security systems.

Examining the problem, UL officials found clear similarities between the systems and sensors used in factory automation, enterprise building automation and security technology. “The majority of these cyber concerns – 90 percent – were in software,” Modeste told GovTechWorks. “So we realized, if we can create a standard for software, we can apply that to many, many products.”

UL invited representatives from industry, government and academia to participate in developing the standard. “We started looking at industry standards that make software better,” Modeste said. “A firmware file has a multitude of components. How can those be broken down and understood? How can they be protected?”

Participants studied every imaginable attack vector that threat actors could use to compromise a product, and then incorporated each into the testing process. Recognizing that new threats and vulnerabilities arise all the time, the testing process was designed to be fluid and to incorporate follow-up testing after initial approval.

At first, industry was slow to respond. “I thought we’d have more support early on,” Modeste said. “But there was an initial reluctance. It took a while for us to engage and get them to see the advantages.”

Now it seems interest is on the rise. Among the first movers with the standard: Electric Imp, an IoT software firm based in Los Altos, Calif., and Cambridge, U.K., which provides a cloud-based industrial IoT platform for fully integrating hardware, operating system, APIs, cloud services and security in a single flexible, scalable package. The Electric Imp platform is the first IoT platform to be independently certified to UL 2900-2-2.

Hugo Fiennes, co-founder and CEO at Electric Imp, formerly led Apple’s iPhone hardware development efforts through the device’s first four generations.

“For security, UL has come at it at the right angle, because they’re not prescriptive,” Fiennes told GovTechWorks. “There are many ways to get security, depending on the application’s demands, latency requirements, data throughput requirements and everything like that. [But] the big problem has been that there has been no stake in the ground so far, nothing that says, ‘this is a reasonable level of security that shows a reasonable level of due diligence has been performed by the vendor.’”

What UL did was to study the problems of industrial control systems, look at the art of the possible, and then codify that in a standard established by a recognizable, independent third-party organization.

“It can’t be overstated how important that is,” Fiennes said. UL derives its trust from the fact that it is independent of other market players and forces.

Although UL 2900 “is not the be all and end all last word on cybersecurity for IoT,” Fiennes said, “it provides a good initial step for vendors.”

“They haven’t said this is one standard forever, because that’s not how security works,” he said. “They’ve said IoT security is a moving target, here is the current standard. We will test to it, we’ll give you a certificate and then you will retest and maintain compliance after.” The certification lasts a year, after which new and emerging threats must be considered in addition to those tested previously.

“This doesn’t absolve the people selling security products, platforms and security stacks from due diligence,” Fiennes warned. Firms must be vigilant and remain ready and able to react quickly to threats. “But it’s better than nothing. And we were in a state before where there was nothing.”  He noted that his product’s UL certification expires after a year, at which point some requirements are likely to change and the certification will have to be renewed.

Still, for customers seeking proof that a product has met a minimum baseline, this is the only option short of devoting extensive in-house resources to thoroughly test products on their own. Few have such resources.

“Auto makers and other large-scale manufacturers can afford that kind of testing because they can spread the cost out across unit sales in the hundreds of thousands,” says GDIT’s Turner. “But for government integration projects, individually testing every possible IoT product is cost-prohibitive. It’s just not practical. So reputable third-party testing could really help speed up adoption of these new technologies and the benefits they bring.”

Standards have value because they provide a baseline measure of confidence.

For Electric Imp, being able to tell customers that UL examined its source code, ran static analysis, performed fuzz testing and penetration testing and examined all of its quality and design controls has made a difference.

For UL and Modeste, the realization that no single standard can solve the IoT security problem proved something of an “aha moment.”

“Within cybersecurity, you have to recognize you can’t do everything at once,” he said. “You need a foundation, and then you can go in and take it step-by-step. Nothing anyone comes up with in one step will make you 100 percent cyber secure. It might take 10 years to come up with something perfect and then soon after, it will be obsolete. So it’s better to go in steps,” Modeste added. “That will make us increasingly secure over time.”

Secure, High-Quality Software Doesn’t Happen By Accident

Time, cost and security are all critical factors in developing new software. Late delivery can undermine the mission; rising costs can jeopardize programs; security breaches and system failures can disrupt entire institutions. Yet systematically reviewing system software for quality and security is far from routine.

“People get in a rush to get things built,” says Bill Curtis, founding executive director of the Consortium for IT Software Quality (CISQ), where he leads the development of automatable standards that measure software size and quality. “They’re either given schedules they can’t meet or the business is running around saying … ‘The cost of the damage on an outage or a breach is less than what we’ll lose if we don’t get this thing out to market.’”

In the government context, pressures can arise from politics and public attention, as well as contract and schedule.

It shouldn’t take “a nine-digit defect – a defect that goes over 100 million bucks – to change attitudes,” Curtis says. But sometimes that’s what it takes.

Software defects and vulnerabilities come in many forms. The Common Weakness Enumeration (CWE) lists more than 705 types of security weaknesses organized into categories such as “Insecure Interaction Between Components” or “Risky Resource Management.” CWE’s list draws on contributions from participants ranging from Apple and IBM to the National Security Agency and the National Institute of Standards and Technology.

By defining these weaknesses, CWE – and its sponsor, the Department of Homeland Security’s Office of Cybersecurity and Communications – seek to raise awareness about bad software practices by:

  • Defining common language for describing software security weaknesses in architecture, design, or code
  • Developing a standard measuring stick for software security tools targeting such weaknesses
  • Providing a common baseline for identifying, mitigating and preventing weaknesses

Software weaknesses can include inappropriate linkages, defunct code that remains in place (at the risk of being accidentally reactivated later) and avoidable flaws – known vulnerabilities that nonetheless find their way into source code.

“We’ve known about SQL injections [as a security flaw] since the 1990s,” Curtis says. “So why are we still seeing them? It’s because people are in a rush. They don’t know. They weren’t trained.”
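
The flaw Curtis is referring to, and its long-known fix, can be shown in a few lines using Python’s built-in sqlite3 module; any parameterized database API works the same way.

```python
# The classic SQL injection mistake and its standard fix.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
injectable = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(injectable).fetchall())   # returns rows it never should

# Safe: a parameterized query treats the input strictly as data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```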

Educated Approach
Whether students today get enough rigor and process drilled into them while they’re learning computer languages and logic is open to debate. Curtis, for one, favors a more rigorous engineering approach, worrying that too many self-taught programmers lack critical underlying skills. Indeed, a 2016 survey of 56,033 developers conducted by Stack Overflow, a global online programmer community, found 13 percent claimed they were entirely self-taught. Even among the 62.5 percent who had studied computer science and earned a bachelor’s or master’s degree, the majority also said some portion of their training was self-taught. The result is that some underlying elements of structure, discipline or understanding can be lost, increasing the risk of problems.

Having consistent, reliable processes and tools for examining and ensuring software quality could make a big difference.

Automated software developed to identify weak or risky architecture and code can help overcome that, says Curtis, a 38-year veteran of software engineering and development in industry and academia. Through a combination of static and dynamic reviews, developers can obtain a sense of the overall quality of their code and alerts about potential system weaknesses and vulnerabilities. The lower the score, the riskier the software.

CISQ is not a panacea, but it can screen for 22 of the 25 Most Dangerous Software Errors as defined by CWE and the SANS Institute, identifying both code-level and architectural-level errors.

By examining system architecture, Curtis says, CISQ delivers a comprehensive review. “We’ve got to be able to do system-level analysis,” Curtis says. “It’s not enough just to find code-level bugs or code-unit-level bugs. We’ve got to find the architectural issues, where somebody comes in through the user interface and slips all the way around the data access or authentication routines. And to do that you have to be able to analyze the overall stack.”

Building on ISO/IEC 25010, an international standard for stating and evaluating software quality requirements, CISQ establishes a process for measuring software quality against four sets of characteristics: security, reliability, performance efficiency and maintainability. These are “nonfunctional requirements,” in that they are peripheral to the actual mission of any given system, yet they are also the source of many of the most damaging security breaches and system failures.

Consider, for example, a failed 2012 software update to servers belonging to Knight Capital Group, a Jersey City, N.J., financial services firm. The update was supposed to replace old code that had remained in the system – unused – for eight years. The new code, which updated and repurposed a “flag” from the old code, was tested and proven to work correctly and reliably. Then the trouble started.

According to a Securities and Exchange Commission filing, a Knight technician copied the new code to only seven of the eight required servers. No one realized the old code had not been removed from the eighth server nor that the new code had not been added. While the seven updated servers operated correctly, the repurposed flag caused the eighth server to trigger outdated and defective software. The defective code instantly triggered millions of “buy” orders totaling 397 million shares in just 45 minutes. Total loss as a result: $460 million.

“A disciplined software configuration management approach would have stopped that failed deployment on two fronts,” said Andy Ma, senior software architect with General Dynamics Information Technology. “Disciplined configuration management means making sure dead code isn’t waiting in hiding to be turned on by surprise, and that strong control mechanisms are in place to ensure that updates are applied to all servers, not just some. That kind of discipline has to be instilled throughout the IT organization. It’s got to be part of the culture.”

Indeed, had the dead code been deleted, the entire episode would never have happened, Curtis says. Yet it is still common to find dead code hidden in system software. And as systems grow in complexity, such events could become more frequent. Large systems today use three to six computer languages and have constant interaction between different system components.

“We’re past the point where a single person can understand these large complex systems,” he says. “Even a team cannot understand the whole thing.”

As with other challenges where large data sets are beyond human comprehension, automation promises better performance than humans can muster. “Automating the deployment process would have avoided the problem Knight had – if they had configured their tools to update all eight servers,” said GDIT’s Ma. “Automated tools also can perform increasingly sophisticated code analysis to detect flaws. But they’re only as good as the people who use them. You have to spend the time and effort to set them up correctly.”
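
A minimal sketch of the kind of automated pre-deployment check Ma and Curtis describe: refuse to activate a repurposed feature flag until every server in the pool reports the same build. The server names and version strings below are hypothetical.

```python
# Refuse to flip a feature flag while any server in the pool runs a stale build.

EXPECTED_VERSION = "2.4.0"

def safe_to_enable_flag(reported_versions: dict[str, str]) -> bool:
    """Only enable the flag if every server runs the expected build."""
    stale = {host: v for host, v in reported_versions.items() if v != EXPECTED_VERSION}
    if stale:
        print(f"Aborting rollout; stale servers: {stale}")
        return False
    return True

fleet = {f"server{i}": "2.4.0" for i in range(1, 8)}
fleet["server8"] = "1.9.3"          # the forgotten eighth server

assert safe_to_enable_flag(fleet) is False
```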

Contracts and Requirements
For acquisition professionals, such tools could be valuable in measuring quality performance. Contracts can be written to incorporate such measures, with contractors reporting on quality reviews on an ongoing basis. Indeed, the process lends itself to agile development, says Curtis, who recommends using the tools at least once every sprint. That way, risks are flagged and can be fixed immediately. “Some folks do it every week,” he says.

J. Brian Hall, principal director for Developmental Test and Evaluation in the Office of the Secretary of Defense, said at a conference in March that the concept of adding a security quality review early in the development process is still a relatively new idea. But Pentagon operational test and evaluation officials have determined systems to be unsurvivable in the past – specifically because of cyber vulnerabilities discovered during operational testing. So establishing routine testing earlier in the process is essential.

The Joint Staff updated its system survivability performance parameters earlier this year to include a cybersecurity component, Hall said in March. “This constitutes the first real cybersecurity requirements for major defense programs,” he explained. “Those requirements ultimately need to translate into contract specifications so cybersecurity can be engineered in from program inception.”

Building cyber into the requirements process is important because requirements drive funding, Hall said. If testing for cybersecurity is to be funded, it must be reflected in requirements.

The Defense Department will update its current guidance on cyber testing in the development, test and evaluation environment by year’s end, he said.

All this follows the November 2016 publication of Special Publication 800-160, a NIST/ISO standard that is “the playbook for how to integrate security into the systems engineering process,” according to one of its principal authors, Ron Ross, a senior fellow at NIST. That standard covers all aspects of systems development, requirements and life-cycle management.
