Close the Data Center, Skip the TIC – How One Agency Bought Big Into Cloud

It’s no longer a question of whether the federal government is going to fully embrace cloud computing. It’s how fast.

With the White House pushing for cloud services as part of its broader cybersecurity strategy and budgets getting squeezed by the administration and Congress, chief information officers (CIOs) are coming around to the idea that the faster they can modernize their systems, the faster they’ll be able to meet security requirements. And that once in the cloud, market dynamics will help them drive down costs.

“The reality is, our data-center-centric model of computing in the federal government no longer works,” says Chad Sheridan, CIO at the Department of Agriculture’s Risk Management Agency. “The evidence is that there is no way we can run a federal data center at the moderate level or below better than what industry can do for us. We don’t have the resources, we don’t have the energy and we are going to be mired with this millstone around our neck of modernization for ever and ever.”

Budget pressure, demand for modernization and concern about security all combine as a forcing function that should be pushing most agencies rapidly toward broad cloud adoption.

Joe Paiva, CIO at the International Trade Administration (ITA), agrees. He used an expiring lease as leverage to force his agency into the cloud soon after he joined ITA three years ago. Time and again the lease was presented to him for a signature and time and again, he says, he tore it up and threw it away.

Finally, with the clock ticking on his data center, Paiva’s staff had to perform a massive “lift and shift” operation to keep services running. Systems were moved to the Amazon cloud. It wasn’t a pretty transition, he admits, but it was good enough to make the move without incident.

“Sometimes lift and shift actually makes sense,” Paiva told federal IT specialists at the Advanced Technology Academic Research Center’s (ATARC) Cloud and Data Center Summit. “Lift and shift actually gets you there, and for me that was the key – we had to get there.”

At first, he said, “we were no worse off or no better off.” With systems and processes that hadn’t been designed for cloud, however, costs were high. “But then we started doing the rationalization and we dropped our bill 40 percent. We were able to rationalize the way we used the service, we were able to start using more reserve things instead of ad hoc.”

That rationalization included cutting out software and services licenses that duplicated other enterprise solutions. Microsoft Office 365, for example, provided every user with a OneDrive account in the cloud. Getting users to save their work there meant his team no longer had to support local storage and backup, and the move to shared virtual drives instead of local ones improved worker productivity.

With 226 offices around the world, offloading all that backup was significant. To date, all but a few remote locations have made the switch. Among the surprise benefits: happier users. Once they saw how much easier things were with shared drives that were accessible from anywhere, he says, “they didn’t even care how much money we were saving or how much more secure they were – they cared about how much more functional they suddenly became.”

Likewise, Office 365 provided Skype for Business, which meant the agency could eliminate expensive stand-alone conferencing services – another source of savings.

Cost savings matter. Operating in the cloud, ITA’s annual IT costs per user are about $15,000 – less than half the average for the Commerce Department as a whole ($38,000/user/year), or the federal government writ large ($39,000/user/year), he said.

“Those are crazy high numbers,” Paiva says. “That is why I believe we all have to go to the cloud.”

In addition to Office 365, ITA uses Amazon Web Services (AWS) for infrastructure and Salesforce to manage the businesses it supports, along with several other cloud services.

“Government IT spending is out of freaking control,” Paiva says, noting that budget cuts provide incentive for driving change that might not come otherwise. “No one will make the big decisions if they’re not forced to make them.”

Architecture and Planning
If getting to the cloud is now a common objective, figuring out how best to make the move is unique to every user.

“When most organizations consider a move to the cloud, they focus on the ‘front-end’ of the cloud experience – whether or not they should move to the cloud, and if so, how will they get there,” says Srini Singaraju, chief cloud architect at General Dynamics Information Technology, a systems integrator. “However, organizations commonly don’t give as much thought to the ‘back-end’ of their cloud journey: the new operational dynamics that need to be considered in a cloud environment or how operations can be optimized for the cloud, or what cloud capabilities they can leverage once they are there.”

Rather than lift and shift and then start looking for savings, Singaraju advocates planning carefully what to move and what to leave behind. Designing systems and processes to take advantage of the cloud’s speed, and avoiding some of its potential pitfalls, not only makes the transition go more smoothly, it saves money over time.

“Sometimes it just makes more sense to retire and replace an application instead of trying to lift and shift,” Singaraju says. “How long can government maintain and support legacy applications that can pose security and functionality related challenges?”

The challenge is getting there. The number of cloud providers that have won provisional authority to operate under the 5-year-old Federal Risk and Authorization Management Program (FedRAMP) is still relatively small: just 86 with another 75 still in the pipeline. FedRAMP’s efforts to speed up the process are supposed to cut the time it takes to earn a provisional authority to operate (P-ATO) from as much as two years to as little as four months. But so far only three cloud providers have managed to get a product through FedRAMP Accelerated – the new, faster process, according to FedRAMP Director Matt Goodrich. Three more are in the pipeline with a few others lined up behind those, he said.

Once an agency or the FedRAMP Joint Authorization Board has authorized a cloud solution, other agencies can leverage their work with relatively little effort. But even then, moving an application from its current environment is an engineering challenge. Determining how to manage workflow and the infrastructure needed to make a massive move to the cloud work is complicated.

At ITA, for example, Paiva determined that cloud providers like AWS, Microsoft Office 365 and Salesforce had sufficient security controls in place that they could be treated as a part of his internal network. That meant user traffic could be routed directly to them, rather than through his agency’s Trusted Internet Connection (TIC). That provided a huge infrastructure savings because he didn’t have to widen that TIC gateway to accommodate all that routine work traffic, all of which in the past would have stayed inside his agency’s network.

Rather than a conventional “castle-and-moat” architecture, Paiva said he had to interpret the mandate to use the TIC “in a way that made sense for a borderless network.”

“I am not violating the mandate,” he said. “All my traffic that goes to the wild goes through the TIC. I want to be very clear about that. If you want to go to www-dot-name-my-whatever-dot-com, you’re going through the TIC. Office 365? Salesforce? Service Now? Those FedRAMP-approved, fully ATO’d applications that I run in my environment? They’re not external. My Amazon cloud is not external. It is my data center. It is my network. I am fulfilling the intent and letter of the mandate – it’s just that the definition of what is my network has changed.”
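In policy terms, the split Paiva describes comes down to a simple routing decision: FedRAMP-authorized, fully ATO’d services are treated as an extension of the agency network and reached directly, while everything bound for “the wild” traverses the TIC. The sketch below is a conceptual illustration only – the domain list is hypothetical, and real TIC routing is enforced in network infrastructure, not application code.

```python
# Conceptual sketch only: real TIC routing is enforced in network gear,
# not application code. The domain list below is hypothetical.

FEDRAMP_INTERNAL = {
    "agency.sharepoint.com",       # hypothetical Office 365 tenant endpoint
    "agency.my.salesforce.com",    # hypothetical Salesforce org
    "vpc.agency-aws.example",      # hypothetical AWS VPC endpoint
}

def route_for(destination: str) -> str:
    """Return 'direct' for FedRAMP-authorized services treated as part of the
    agency network, or 'tic' for traffic headed to the open internet."""
    if destination in FEDRAMP_INTERNAL:
        return "direct"   # extended data center: no TIC traversal
    return "tic"          # everything else goes through the TIC gateway

if __name__ == "__main__":
    for dest in ("agency.my.salesforce.com", "www.example.com"):
        print(dest, "->", route_for(dest))
```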

Todd Gagorik, senior manager for federal services at AWS, said this approach is starting to take root across the federal government. “People are beginning to understand this clear reality: If FedRAMP has any teeth, if any of this has any meaning, then let’s embrace it and actually use it as it’s intended to be used most efficiently and most securely. If you extend your data center into AWS or Azure, those cloud environments already have these certifications. They’re no different than your data center in terms of the certifications that they run under. What’s important is to separate that traffic from the wild.”

ATARC has organized a working group of government technology leaders to study the network boundary issue and recommend possible changes to the policy, said Tom Suder, ATARC president. “When we started the TIC, that was really kind of pre-cloud, or at least the early stages of cloud,” he said. “It was before FedRAMP. So like any policy, we need to look at that again.” Acting Federal CIO Margie Graves is a reasonable player, he said, and will be open to changes that make sense, given how much has changed since then.

Indeed, the whole concept of a network’s perimeter has been changed by the introduction of cloud services, Office of Management and Budget’s Grant Schneider, the acting federal chief information security officer (CISO), told GovTechWorks earlier this year.

Limiting what needs to go through the TIC and what does not could have significant implications for cost savings, Paiva said. “It’s not chump change,” he said. “That little architectural detail right there could be billions across the government that could be avoided.”

But changing the network perimeter isn’t trivial. “Agency CIOs and CISOs must take into account the risks and sensitivities of their particular environment and then ensure their security architecture addresses all of those risks,” says GDIT’s Singaraju. “A FedRAMP-certified cloud is a part of the solution, but it’s only that – a part of the solution. You still need to have a complete security architecture built around it. You can’t just go to a cloud service provider without thinking all that through first.”

Sheridan and others involved in the nascent Cloud Center of Excellence see the continued drive to the cloud as inevitable. “The world has changed,” he says. “It’s been 11 years since these things first appeared on the landscape. We are in exponential growth of technology, and if we hang on to our old ideas we will not continue. We will fail.”

His ad-hoc, unfunded group includes some 130 federal employees from 48 agencies and sub-agencies who operate independently of vendors, think tanks, lobbyists or others with a political or financial interest in the group’s output. “We are a group of people who are struggling to drive our mission forward and coming together to share ideas and experience to solve our common problems and help others to adopt the cloud,” Sheridan says. “It’s about changing the culture.”

Employees Wanting Mobile Access May Get it — As 5G Services Come Into Play

Just about every federal employee has a mobile device: Many carry two – one for work and one for personal use. Yet by official policy, most federal workers cannot access work email or files from a personal phone or tablet. Those with government-owned devices usually are limited to using them for email, calendar or Internet searches.

Meanwhile, many professionals use a work or personal phone to do a myriad of tasks. In a world where more than 70 percent of Internet traffic includes a mobile device, government workers are frequently taking matters into their own hands.

According to a recent FedScoop study of 168 federal employees and others in the federal sector, only 35 percent said their managers supported the use of personal mobile devices for official business. Yet 74 percent said they regularly use personally-owned tablets to get their work done. Another 49 percent said they regularly used personal smartphones.

In other words, employees routinely flout the rules – either knowingly or otherwise – to make themselves more productive.

“They’re used to having all this power in their hand, being able to upgrade and download apps, do all kinds of things instantaneously, no matter where they are,” says Michael Wilkerson, senior director for end-user computing and mobility at VMWare Federal, the underwriter for the research study conducted by FedScoop. “The workforce is getting younger and employees are coming in with certain expectations.”

Those expectations include mobile. At the General Services Administration (GSA), where more than 90 percent of jobs are approved for telework and where most staff do not have permanent desks or offices, each employee is issued a mobile device and a laptop. “There’s a philosophy of anytime, anywhere, any device,” says Rick Jones, Federal Mobility 2.0 Manager at GSA. Employees can log into virtual desktop infrastructure to access any of their work files from any device. “Telework is actually a requirement at GSA. You are expected to work remotely one or two days a week,” he says, so the agency is really serious about making employees entirely independent of conventional infrastructure. “We don’t even have desks,” he says. “You need to register for a cube in advance.”

That kind of mobility is likely to increase in the future, especially as fifth-generation (5G) mobile services come into play. With more wireless connections installed more densely, 5G promises data speeds that could replace conventional wired infrastructure, save wiring costs and increase network flexibility – all while significantly increasing the number of mobile-enabled workers.

Shadow IT
When Information Technology (IT) departments don’t give employees the tools and applications they need or want to get work done, they’re likely to go out and find them on their own, using cloud-based apps they can download to their phones, tablets and laptops.

Rajiv Gupta, president of Skyhigh Networks of Campbell, Calif., which provides a cloud-access security service, says his company found that users in any typical organization – federal, military or commercial – access more than 1,400 cloud-based services, often invisibly to IT managers. Such uses may be business or personal, but either can have an impact on security if devices are being used for both. Staff may be posting on Facebook, Twitter and LinkedIn, any of which could be personal but could also be official or in support of professional aims. Collaboration tools like Basecamp, Box, DropBox or Slack are often easy means of setting up unofficial work groups to share files when solutions like SharePoint come up short. Because such uses are typically invisible to the organization, he says, they create a “more insidious situation” – the potential for accidental information leaks or purposeful data exfiltrations by bad actors inside the organization.

“If you’re using a collaboration service like Microsoft 365 or Box, and I share a file with you, what I’m doing is sharing a link – there’s nothing on the network that I can observe to see the files moving,” he says. “More than 50 percent of all data security leaks in a service like 365 is through these side doors.”

The organization may offer users the ability to use OneDrive or Slack, but if users perceive those as difficult or the access controls as unwieldy (user authentication is among mobile government users’ biggest frustrations, according to the VMWare/FedScoop study), they will opt for their own solutions, using email to move data out of the network and then collaborating beyond the reach of the IT and security staff.

While some such instances may be nefarious – as in the case of a disgruntled employee for example – most are simply manifestations of well-meaning employees trying to get their work done as efficiently as possible.

“So employees are using services that you and I have never even heard of,” Gupta says, services like Zippyshare, Footlocker and Findspace. Since most of these are simply classified as “Internet services,” standard controls may not be effective in blocking them, because shutting down the whole category is not an option, Gupta says. “If you did you would have mutiny on your hands.” So network access controls need to be narrowly defined and operationalized through whitelisting or blacklisting of sites and apps.
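One way to operationalize those narrower controls is to audit outbound proxy logs against the agency’s approved-services list and surface whatever falls outside it for an allow-or-deny decision. The sketch below is a minimal illustration with a hypothetical log format and hypothetical domain lists, not a reference to any particular product.

```python
# Minimal sketch: surface "shadow IT" by auditing web proxy logs against an
# approved-services list. The log format and domain lists are hypothetical.

from collections import Counter

APPROVED = {"agency.sharepoint.com", "onedrive.com", "slack.com"}  # sanctioned
BLOCKED = {"zippyshare.com"}                                       # already denied

def audit(log_lines):
    """Count requests per (user, domain) for destinations that are neither
    approved nor already blocked -- the invisible middle ground."""
    shadow = Counter()
    for line in log_lines:
        user, domain = line.split()[:2]      # assumed format: "user domain ..."
        if domain not in APPROVED and domain not in BLOCKED:
            shadow[(user, domain)] += 1
    return shadow

if __name__ == "__main__":
    sample = [
        "alice onedrive.com GET /report.docx",
        "bob filedrop.example POST /upload",
        "bob filedrop.example POST /upload",
    ]
    for (user, domain), hits in audit(sample).most_common():
        print(f"{user} -> {domain}: {hits} requests (candidate for review)")
```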

Free services are a particular problem because employees don’t see the risk, says Sean Kelley, chief information security officer at the Environmental Protection Agency (EPA). At an Institute for Critical Infrastructure conference in May, he said the problem traces back to the notion that free or subscription services aren’t the same as information technology. “A lot of folks said, well, it’s cloud, so it’s not IT,” he said. “But as we move from network-based security to data security, we need to know where our data is going.”

The Federal Information Technology Acquisition Reform Act was supposed to empower chief information officers (CIOs) by giving them more control over such purchases. But regulating free services and understanding the extent to which users may be using them is extremely difficult, whether in government or the private sector. David Summitt, chief information security officer (CISO) at the Moffitt Cancer Center in Tampa, Fla., described an email he received from a salesman representing a cloud service provider. The email contained a list of more than 100 Moffitt researchers who were using his company’s technology – all unbeknownst to the CISO. His immediate reply: “I said thank you very much – they won’t be using your service tomorrow.” Then he shut down access to that domain.

Controlling Mobile Use
Jon Johnson, program manager for enterprise mobility at GSA, acknowledges that even providing access to email opens the door to much wider use of mobile technology. “I too download and open documents to read on the Metro,” he said. “The mobile devices themselves do make it more efficient to run a business. The question is, how can a CIO create tools and structures so their employees are more empowered to execute their mission effectively, and in a way that takes advantage not only of the mobile devices themselves, but also helps achieve a more efficient way of operating the business?”

Whether agencies choose to whitelist approved apps or blacklist high-risk ones, Johnson said, every agency needs to nail down the solution that best applies to its needs. “Whether they have the tools that can continually monitor those applications on the end point, whether they use vetting tools,” he said, each agency must make its own case. “Many agencies, depending on their security posture, are going to have those applications vetted before they even deploy their Enterprise Mobility Management (EMM) onto that device. There is no standard for this because the security posture for the Defense Information Systems Agency (DISA) and the FBI are different from GSA and the Department of Education.

“There’s always going to be a tradeoff between the risk of allowing your users to use something in a way that you may not necessarily predict versus locking everything down,” says Johnson.

Johnson and GSA have worked with a cross-agency mobile technology tiger team for years to try to nail down standards and policies that can make rolling out a broader mobile strategy easier on agency leaders. “Mobility is more than carrier services and devices,” he says. “We’ve looked at application vetting, endpoint protection, telecommunication expense management and emerging tools like virtual mobile interfaces.” He adds they’ve also examined the evolution of mobile device management solutions to more modern enterprise mobility management systems that take a wider view of the mobile world.

Today, agencies are trying to catch up to the private sector and overcome the government’s traditionally limited approach to mobility. At the United States Agency for International Development (USAID), half the agency’s staff work in far-flung remote locations, many of them austere. Even so, says Lon Gowan, chief technologist and special advisor to the CIO, “We generally treat everyone as a mobile worker.”

Federal agencies remain leery of adopting bring-your-own-device policies, just as many federal employees are leery of giving their agencies access to their personal information. While older mobile device management software gave organizations the ability to monitor activity and wipe entire devices, today’s enterprise management solutions enable devices to effectively be split, containing both personal and business data. And never the twain shall meet.

“We can either allow a fully managed device or one that’s self-owned, where IT manages a certain portion of it,” says VMWare’s Wilkerson. “You can have a folder that has a secure browser, secure mail, secure apps and all of that only works within that container. You can set up secure tunneling so each app can do its own VPN tunnel back to the corporate enterprise. Then, if the tunnel gets shut down or compromised, it shuts off the application, browser — or whatever — is leveraging that tunnel.”

Another option is to use mobile-enabled virtual desktops where applications and data reside in a protected cloud environment, according to Chris Barnett, chief technology officer for GDIT’s Intelligence Solutions Division. “With virtual desktops, only a screen image needs to be encrypted and communicated to the mobile device. All the sensitive information remains back in the highly-secure portion of the Enterprise. That maintains the necessary levels of protection while at the same time enabling user access anywhere, anytime.”

When it comes to classified systems, of course, the bar moves higher as risks associated with a compromise increase. Neil Mazuranic, DISA’s Mobility Capabilities branch chief in the DoD Mobility Portfolio Management Office, says his team can hardly keep up with demand. “Our biggest problem at DISA, at the secret level and top secret level, is that we don’t have enough devices to go around,” he says. “Demand is much greater than the supply. We’re taking actions to push more phones and tablets out there.” But capacity will likely be a problem for a while.

The value is huge, however, because the devices allow senior leaders “to make critical, real-world, real-time decisions without having to be tied to a specific place,” he says. “We want to stop tying people to their desks and allow them to work wherever they need to work, whether it’s classified work or unclassified.”

DISA is working on increasing the number of classified phones by using Windows devices, which provide a greater ability to lock down security than is possible with iOS or Android devices. By using products not in the mainstream, the software can be better controlled. In the unclassified realm, DISA secures both iOS and Android devices using managed solutions allowing dual office and personal use. For iOS, a managed device solution establishes a virtual wall in which some apps and data are managed and controlled by DISA, while others are not.

“All applications that go on the managed side of the devices, we evaluate and make sure they’re approved to use,” DISA’s Mazuranic told GovTechWorks. “There’s a certain segment that passes with flying colors and that we approve, and then there are some questionable ones that we send to the authorizing official to accept the risk. And there are others that we just reject outright. They’re just crazy ones.”

Segmenting the devices, however, gives users freedom to download apps for their personal use with a high level of assurance that those apps cannot access the controlled side of the device. “On the iOS device, all of the ‘for official use only’ (FOUO) data is on the managed side of the device,” he said. “All your contacts, your email, your downloaded documents, they’re all on the managed side. So when you go to the Apple App Store and download an app, that’s on the unmanaged side. There’s a wall between the two. So if something is trying to get at your contacts or your data, it can’t, because of that wall. On the Android device, it’s similar: There’s a container on the device, and all the FOUO data on the device is in that container.”

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments where users’ data and applications can be side by side with that of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “The gun goes off, and some runners start too quickly, just before the gun goes off. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By executing code speculatively and then exploiting what happens by means of shared memory mapping, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of different research groups all at once, including Jann Horn, a researcher with Google’s Project Zero, and researchers at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers were researching the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.
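A working exploit requires native code, cache-flush instructions and cycle-accurate timers, but the logic CERT describes can be modeled in a few lines. The Python sketch below is a deliberately simplified toy illustration – the “cache” is just a set and the timing measurement is replaced by a membership test – not a usable attack.

```python
# Toy model of the cache side channel described above -- not a working exploit.
# Real attacks need native code, cache-flush instructions and precise timers;
# here the "cache" is a plain set and timing is replaced by a membership test.

SECRET = 42          # value the victim never returns through normal channels
cache = set()        # which probe lines are currently "hot"

def victim_speculative_access():
    """Models a speculatively executed, later-discarded load: the architectural
    result is rolled back, but the secret-dependent cache line stays hot."""
    cache.add(SECRET)             # indirect side effect survives the rollback

def attacker_probe():
    """Recover the secret by checking which of 256 probe lines is 'fast'."""
    for guess in range(256):
        if guess in cache:        # stands in for: access latency below threshold
            return guess
    return None

victim_speculative_access()
print("leaked value:", attacker_probe())   # prints 42
```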

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t have to successfully execute code, the hackers’ tracks are harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic standpoint,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”

The ABCs of 2018 Federal IT Modernization: I to Z

In part two of GovTechWorks’ analysis of the Trump Administration’s federal IT modernization plan, we examine the likely guiding impact of the Office of Management and Budget, the manner in which agencies’ infrastructures might change, and the fate of expensive legacy systems.

The White House IT modernization plan released in December seeks a rapid overhaul of IT infrastructure across federal civilian agencies, with an emphasis on redefining the government’s approach to managing its networks and securing its data. Here, in this second part of our two-part analysis, is what you need to know from I to Z (for A-H, click here):

I is for Infrastructure
Modernization boils down to three things: Infrastructure, applications and security. Imagine if every government agency managed its own telephone network or international logistics office, rather than outsourcing such services. IT services are essentially the same. Agencies still need expertise to connect to those services – they still have telecom experts and mail room staff – but they don’t have to manage the entire process.

Special exceptions will always exist for certain military, intelligence (or other specialized) requirements. Increasingly, IT services are becoming commodity services purchased on the open market. Rather than having to own, manage and maintain all that infrastructure, agencies will increasingly buy infrastructure as a service (IaaS) in the cloud — netting faster, perpetually maintained and updated equipment at a lower cost. To bring maximum value – and savings – out of those services, they’ll have to invest in integration and support services to ensure their systems are not only cost effective, but also secure.

J is for JAB, the Joint Authorization Board
The JAB combines expertise at the General Services Administration (GSA), the Department of Homeland Security (DHS) and the Department of Defense (DOD). It issues provisional authorities to operate (P-ATOs) for widely used cloud services. The JAB will have a definitive role in prioritizing and approving commercial cloud offerings for the highest-risk federal systems.

K is for Keys
The ultimate solution for scanning encrypted data for potential malicious activity is to decrypt that data for a thorough examination. This involves first having access to encryption keys for federal data and then securing those keys to ensure they don’t fall into the wrong hands. In short, these keys are key to the federal strategy of securing both government data and government networks.

L is for Legacy
The government still spends 70 percent of its IT budget managing legacy systems. That’s down from as much as 85 percent a few years ago, but still too much. In a world where volumes of data continue to expand exponentially and the cost of computer processing power continues to plunge, how long can we afford to overspend on last year’s (or last decade’s) aging (and less secure) technology?

M is for Monopsony
A monopoly occurs when one source controls the supply of a given product, service or commodity. A monopsony occurs when a single customer controls the consumption of products, services or commodities. In a classical monopsony, the sole customer dictates terms to all sellers.

Despite its size, the federal government cannot dictate terms to information technology vendors. It can consolidate its purchasing power to increase leverage, and that’s exactly what the government will do in coming years. The process begins with networking services as agencies transition from the old Networx contract to the new Enterprise Information Services vehicle.

Look for it to continue as agencies consolidate purchasing power for commodity software services, such as email, continuous monitoring and collaboration software.

The government may not ultimately wield the full market power of a monopsony, but it can leverage greater negotiating power by centralizing decision making and consolidating purchase and licensing agreements. Look for that to increase significantly in the years ahead.

N is for Networks
Networks used to be the crown jewels of the government’s information enterprise, providing the glue that held systems together and enabling the government to operate. But if the past few years proved anything, it’s that you can’t keep the bad guys out. They’re already in, looking around, waiting for an opportunity.

Networks are essential infrastructure, but they will increasingly be virtualized, existing in software and protecting encrypted data that travels on commercial fiber and is stored, much of the time, in commercial data centers (generically referred to as the cloud). You may not keep the bad guys out, but you can control what they get access to.

O is for OMB
The Office of Management and Budget has oversight over much of the modernization plan. The agency is mentioned 127 times in the White House plan, including 47 times in its 50 recommendations. OMB will either be the responsible party or the receiving party, for work done by others on 34 of those 50 recommendations.

P is for Prioritization
Given the vast number of technical, manpower and security challenges that weigh down modernization efforts, prioritizing programs that can deliver the greatest payoff is essential. In addition, agencies are expected to prioritize and focus their modernization efforts on high-value assets that pose the greatest vulnerabilities and risks. From those lists, DHS must identify six by June 30 to receive centralized interventions that include staffing and technical support.

The aim is to prioritize where new investment, talent infusions and security policies will make the greatest difference. To maximize that effort, DHS may choose projects that can expand to include other systems and agencies.

OMB must also review and prioritize any impediments to modernization and cloud adoption.

Q is for Quick Start
Technology is often not the most complicated part of modernization efforts. Finding a viable acquisition strategy that won’t put yesterday’s technology in the government’s hands tomorrow is often harder. That’s why the report directs OMB to assemble an Acquisition Tiger Team to develop a “quick start” acquisition package to help agencies more quickly license technology and migrate to the cloud.

The aim: combine market research, acquisition plans, readily identified sources and templates for both requests for quotes (RFQs) and Independent Government Cost Estimate (IGCE) calculations — which would be based on completed acquisitions. The tiger team will also help identify qualified small and disadvantaged businesses to help agencies meet set-aside requirements.

R is for Recommendations
There are 50 recommendations in the White House IT modernization report with deadlines ranging from February to August, making the year ahead a busy one for OMB, DHS and GSA, the three agencies responsible for most of the work. A complete list of the recommendations is available here.

T is for the TIC
The federal government developed the Trusted Internet Connection as a means of controlling the number of on and off ramps between government networks and the largely unregulated internet. But in a world now dominated by cloud-based software applications, remote cloud data centers, mobile computing platforms and web-based interfaces that may access multiple different systems to deliver information in context, the TIC needs to be rethought.

“The piece that we struggled with is the Trusted Internet Connections (TIC) initiative – that is a model that has to mature and get solved,” former Federal CIO Tony Scott told Federal News Radio. “It’s an old construct that is applied to modern-day cloud that doesn’t work. It causes performance, cost and latency issues. So the call to double down and sort that out is important. There has been a lot of good work that has happened, but the definitive solution has not been figured out yet.”

The TIC policy is the heart and soul of the government’s perimeter-based security model. Already, some agencies have chosen to bypass the TIC for certain cloud-based services, such as Office 365, trusting Microsoft’s security and recognizing that if all that data had to go through an agency’s TIC, performance would suffer.

To modernize TIC capabilities, policies, reference architectures and associated cloud security authorization baselines, OMB must update TIC policies so agencies have a clear path forward to build out data-level protections and more quickly migrate to commercial cloud solutions. A 90-day sprint is to begin in mid-February, during which projects approved by OMB will pilot proposed changes in TIC requirements.

OMB must determine whether all data traveling to and from agency information systems hosted by commercial cloud providers warrants scanning by DHS, or whether only some information needs to be scanned. Other considerations under review: Expanding the number of TIC access points in each agency and a model for determining how best to implement intrusion detection and prevention capabilities into cloud services.

U is for Updating the Federal Cloud Computing Strategy
The government’s “Cloud First” policy is now seven years old. Updates are in order. By April 15, OMB must provide additional guidance on both appropriate use cases and operational security for cloud environments. All relevant policies on cloud migration, infrastructure consolidation and shared services will be reviewed.

In addition, OMB has until June to develop standardized contract language for cloud acquisition, including clauses that define consistent requirements for security, privacy and access to data. Establishing uniform contract language will make it easier to compare and broker cloud offerings and ensure government requirements are met.

V is for Verification
Verification or authentication of users’ identities is at the heart of protecting government information. Are you who you say you are? Key to securing information systems is ensuring that access is granted to only users who can be identified and verified as deserving access.

OMB has until March 1 to issue for public comment new identity policy guidance and to recommend identity service areas suitable for shared services. GSA must provide a business case for consolidating existing identity services to improve usability, drive secure access and enable cloud-based collaboration services that make it easier to share and collaborate across agencies – something that can be cumbersome today.

W, X, Y, Z is for Wrapping it All Up
The Federal Government is shifting to a consolidated IT model that will change the nature of IT departments and the services they buy. Centralized offerings for commodity IT – whether email, office tools and other common software-as-a-service offerings or virtual desktops and web hosting – will be the norm. As much as possible, the objective is to get agencies on the same page, using the same security services, the same collaboration services, the same data services and make those common (or in some cases shared) across multiple agencies.

Doing so promises to reduce needed manpower and licensing costs by eliminating duplication of effort and increased market leverage to drive down prices. But getting there will not be easy. Integration and security pose unique challenges in a government context, requiring skill, experience and specific expertise. On the government side, policy updates will only solve some of the challenges. Acquisition regulations must also be updated to support wider adoption of commercial cloud products.

Some agencies will need more help than others. Cultural barriers will continue to be major hurdles. Inevitably, staff will have to develop new skills as old ones disappear. Yet even in the midst of all that upheaval, some things don’t change. “In the end, IT modernization is really all about supporting the mission,” says Stan Tyliszczak, chief engineer at systems integrator General Dynamics Information Technology. “It’s about helping government employees complete their work, protecting the privacy of our citizens and ensuring both have timely access to the information and services they need. IT has always made those things better and easier, and modernization is only necessary to continue that process. That much never changes.”

 

The ABCs of 2018 Federal IT Modernization: A to H

The White House issued its IT modernization plan last December and followed it with an ambitious program that could become a proving ground for rapidly overhauling IT infrastructure, data access and customer service. After years of talking about IT modernization, cybersecurity and migration to the cloud, federal agencies are now poised to ramp up the action.

Here, in A-B-C form, is what you need to know from A to H:

A is for Agriculture
The U.S. Department of Agriculture (USDA) will be a sort of proving ground for implementing the Trump administration’s vision for the future high-tech, high-performance, customer-satisfying government. USDA announced in December 2017 it will collapse 39 data centers into one (plus a backup), and consolidate 22 independent chief information officers under a single CIO with seven deputies. The aim: reinvent the agency as a modern, customer-centered organization and provide its leaders with instant access to a wealth of agency data.

B is for Better Citizen Services
“It is imperative for the federal government to leverage … innovations to provide better service for its citizens in the most cost-effective and secure manner,” the report states – in just its third sentence. Yes, modernization should ultimately save money by reducing the billions spent to keep aging systems operational. And yes, it should help overcome the patchwork of cybersecurity point solutions now used to protect federal networks, systems and data.

USDA Secretary Sonny Perdue’s experience modernizing government IT during two terms as governor of Georgia from 2003-2010 convinced him he could achieve similar results on the federal level. “He really saw, in reinventing Georgia government, how IT modernization and delivering better customer service benefitted not only employees, but the people of the state,” Deputy Secretary of Agriculture Steve Censky said in a TV interview.

Among the agency’s goals: Increase access to information throughout the agency by means of online service portals and advanced application program interfaces.

C is for Centers of Excellence
USDA won’t be going it alone. Under the direction of the Office of Science and Technology Policy, the agency will be the first to engage with a new set of experts at the General Services Administration (GSA). GSA is on an accelerated course to create five Centers of Excellence, leveraging both public and private sector expertise to develop best practices and standards that agencies can use for:

  • Cloud adoption
  • IT infrastructure optimization
  • Customer experience
  • Service delivery analytics
  • Contact centers

Jack Wilmer, White House senior advisor for Cybersecurity and IT Modernization, says the idea is to provide each agency’s modernization effort with the same core concepts and approach – and the best available experts. “We’re trying to leverage private sector expertise, bringing them in a centralized fashion, making them available to government agencies as they modernize,” he told Government Matters.

While GSA planned to award contracts to industry partners by the end of January – just 45 days after its initial solicitation – by March 5, no contracts had been awarded. Phase 1 contracts for assessment, planning and some initial activities should be finalized soon. Phase 2 awards for cloud migration, infrastructure optimization and customer experience are expected by the end of the year, Joanne Collins Smee, acting director of GSA’s Technology Transformation Service and deputy commissioner of the Federal Acquisition Service, said at a March 1 AFCEA event in Washington, D.C.

D is for Data Centers
While all data centers won’t close down, many more will soon disappear. Modernization is about getting the government out of the business of managing big infrastructure investments and instead leveraging commercial cloud infrastructure and technology wherever possible. But don’t think your agency’s data won’t be in a data center somewhere.

“What is the cloud, anyway? Isn’t it really someone else’s data center, available on demand?” says Stan Tyliszczak, chief engineer at systems integrator General Dynamics Information Technology (GDIT). “Moving to the cloud means getting out of the business of running that data center yourself.”

The White House splits its cloud strategy into two buckets:

  • “Bring the government to the cloud.” Put government data and applications in privately-owned and operated infrastructure, where it is protected through encryption and other security technologies. This is public cloud, where government data sits side by side with private data in third-party data centers.
  • “Bring the cloud to the government.” Put government data and applications on vendor-owned infrastructure, but located in government-owned facilities, as the Intelligence Community Information Technology Enterprise (IC ITE) does with the IC’s Commercial Cloud Services (C2S) contract with Amazon Web Services.

Figuring out what makes sense when depends on the use case and, for most agencies, will mean a combination of on-premises solutions, shared government services and commercial services in public clouds. “That’s the Hybrid cloud model everyone’s talking about. But it’s not a trivial exercise. Melding those together is the challenge,” Tyliszczak says. “That’s what integrators are for.”

E is for Encryption
Government cybersecurity efforts have historically focused on defending the network and its perimeter, rather than the data that travels on that network. As cloud services are integrated into conventional on premise IT solutions, securing the data has become essential. At least 47 percent of federal network traffic is encrypted today – frustrating agency efforts to monitor what’s crossing network perimeters.

“Rather than treating Federal networks as trusted entities to be defended at the perimeter,” the modernization report advised, “agencies should shift their focus to placing security protections closer to data.”

To do that, the government must improve the way it authenticates devices and users on its networks, securing who has access and how, and encrypting data both at rest and in transit.

“Now you’re starting to obfuscate whether your sensors can actually inspect the content of that data,” notes Eric White, Cybersecurity program director at GDIT’s Health and Civilian Solutions Division. “Because it’s now encrypted, you add another layer of complexity to know for sure whether it’s the good guys or the bad guys moving data in and out of your network.”

White notes that the Department of Homeland Security (DHS) is charged with solving this encryption dilemma, balancing the millions of dollars in investment in high-end network-monitoring sensors, such as those associated with the Einstein program, against protecting individual privacy. Enabling those sensors to see through or decipher encrypted data without undermining the security of the data – or the privacy of individuals – is a critical priority. DHS has commissioned research to develop potential solutions, including virtualizing sensors for cloud environments; relocating sensors to the endpoints of encrypted tunnels; creating man-in-the-middle solutions that intercept data in motion; or providing the sensors with decryption keys.

F is for FedRAMP

The Federal Risk and Authorization Management Program (FedRAMP) remains the critical process for ensuring private-sector cloud offerings meet government security requirements. Look for updates to FedRAMP baselines that could allow tailoring of security controls for low-risk systems, address new approaches to integrated cloud services with federal Trusted Internet Connection (TIC) services and consider common features or capabilities that could be incorporated into higher-risk systems with FedRAMP “high” baselines.

Importantly, the report directs the General Services Administration (GSA), which manages FedRAMP, to come up with new solutions that make it easier for software-as-a-service (SaaS) products already authorized for use in one agency to be accepted for use in another. Making the process for issuing an authority to operate (ATO) faster and easier to reuse has long been a goal of both cloud providers and government customers. This is particularly critical for shared services, in which one agency provides its approved commercial solution to another agency.

G is for GSA
Already powerfully influential as a buyer and developer for other agencies, GSA stands to become even more influential as the government moves to consolidate networks and other IT services into fewer contracts and licensing agreements, and to increase the commonality of solutions across the government.

This is especially true among smaller agencies that lack the resources, scale and expertise to effectively procure and manage their own IT services.

H is for Homeland Security
DHS is responsible for the overall cybersecurity of all federal government systems. The only federal entity mentioned more frequently in the White House modernization report is the Office of Management and Budget, which is the White House agency responsible for implementing the report’s guidance.

DHS was mandated to issue a report by Feb. 15, identifying the common weaknesses of the government’s highest-value IT assets and recommending solutions for reducing risk and vulnerability government-wide. By May 15, the agency must produce a prioritized list of systems “for government-wide intervention” and will provide a host of advisory and support services to help secure government systems. DHS also owns and manages the National Cybersecurity Protection System (NCPS) and the EINSTEIN sensor suites that capture and analyze network flow, detect intruders and scan the data coming in and out of government systems to identify potentially malicious activity and, in the case of email, block and filter threatening content.

Look for next week’s edition of GovTechWorks for Part 2: Modernization from I to Z. In Part 2, we outline how infrastructure among government agencies will be impacted and streamlined by modernization, as well as discuss the fate of legacy systems and their maintenance budgets, and the major role the Office of Management and Budget will play in overall implementation.

How the Air Force Changed Tune on Cybersecurity

Peter Kim, chief information security officer (CISO) for the U.S. Air Force, calls himself Dr. Doom. Lauren Knausenberger, director of cyberspace innovation for the Air Force, is his opposite. Where he sees trouble, she sees opportunity. Where he sees reasons to say no, she seeks ways to change the question.

For Kim, the dialogue they’ve shared since Knausenberger left her job atop a private sector tech consultancy to join the Air Force has been transformational.

“I have gone into a kind of rehab for cybersecurity pros,” he says. “I’ve had to admit I have a problem: I can’t lock everything down.” He knows. He’s tried.

The two engage constantly, debating and questioning whether decisions and steps designed to protect Air Force systems and data are having their intended effect, they said, sharing a dais during a recent AFCEA cybersecurity event in Crystal City. “Are the things we’re doing actually making us more secure or just generating a lot of paperwork?” asks Knausenberger. “We are trying to turn everything on its head.”

As for Kim, she added, “Pete’s doing really well on his rehab program.”

One way Knausenberger has turned Kim’s head has been her approach to security certification packages for new software. Instead of developing massive cert packages for every program – documentation that’s hundreds of pages thick and unlikely to ever be read – she wants the Air Force to certify the processes used to develop software, rather than the programs themselves.

“Why don’t we think about software like meat at the grocery?” she asked. “USDA doesn’t look at every individual piece of meat… Our goal is to certify the factory, not the program.”

Similarly, Knausenberger says the Air Force is trying now to apply similar requirements to acquisition contracts, accepting the idea that since finding software vulnerabilities is inevitable, it’s best to have a plan for fixing them rather than hoping to regulate them out of existence. “So you might start seeing language that says, ‘You need to fix vulnerabilities within 10 days.’ Or perhaps we may have to pay bug bounties,” she says. “We know nothing is going to be perfect and we need to accept that. But we also need to start putting a level of commercial expectation into our programs.”
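Contract language like that lends itself to automated enforcement: a pipeline can flag any finding still open past the remediation window and fail the build until it is fixed. The sketch below is a hedged illustration under assumed inputs – the findings list and its format are hypothetical, not drawn from any actual Air Force or vendor tooling.

```python
# Hypothetical sketch: enforce a "fix vulnerabilities within 10 days" clause as
# an automated gate. The findings data and its shape are assumed for illustration.

from datetime import date, timedelta

REMEDIATION_WINDOW = timedelta(days=10)

# assumed shape: (finding id, severity, date the finding was opened)
findings = [
    ("CVE-2018-0001", "high", date(2018, 2, 1)),
    ("CVE-2018-0002", "low",  date(2018, 2, 20)),
]

def overdue(findings, today):
    """Return findings that have been open longer than the allowed window."""
    return [(fid, sev, opened) for fid, sev, opened in findings
            if today - opened > REMEDIATION_WINDOW]

if __name__ == "__main__":
    late = overdue(findings, today=date(2018, 2, 25))
    for fid, sev, opened in late:
        print(f"OVERDUE: {fid} ({sev}) open since {opened}")
    if late:
        raise SystemExit(1)   # fail the pipeline until the backlog is cleared
```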

Combining development, security and operations into an integrated process – DevSecOps, in industry parlance – is the new name of the game, they argue together. The aim: Build security in during development, rather than bolting it on at the end.

The takeaway from the “Hack-the-Air-Force” bug bounty programs run so far is that every such effort yields new vulnerabilities – and that thousands of pages of certification didn’t prevent them. As computer power becomes less costly and automation gets easier, hackers can be expected to use artificial intelligence to break through security barriers.

Continuous automated testing is the only way to combat their persistent threat, Kim said.

Michael Baker, CISO at systems integrator General Dynamics Information Technology, agrees. “The best way to find the vulnerabilities is to continuously monitor your environment and challenge your assumptions,” he says. “Hackers already use automated tools and the latest vulnerabilities to exploit systems. We have to beat them to it – finding and patching those vulnerabilities before they can exploit them. Robust and assured endpoint protection, combined with continuous, automated testing to find vulnerabilities and exploits, is the only way to do that.”

“I think we ought to get moving on automated security testing and penetration,” Kim added. “The days of RMF [risk management framework] packages are past. They’re dinosaurs. We’ve got to get to a different way of addressing security controls and the RMF process.”
