Automation Critical to Securing Code in an Agile, DevOps World

The world’s biggest hack might have happened to anyone. The same software flaw hackers exploited to expose 145 million identities in the Equifax database – most likely yours included – was also embedded in thousands of other computer systems belonging to all manner of businesses and government agencies.

The software in question was a commonly used open-source piece of Java code known as Apache Struts. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) discovered a flaw in that code and issued a warning March 8, detailing the risk posed by the flaw. Like many others, Equifax reviewed the warning and searched its systems for the affected code. Unfortunately, the Atlanta-based credit bureau failed to find it among the millions of lines of code in its systems. Hackers exploited the flaw three days later.

Open source and third-party software components like Apache Struts now make up between 80 and 90 percent of software produced today, says Derek Weeks, vice president and DevOps advocate at Sonatype. The company provides security tools and manages The Central Repository, the world’s largest collection of open source software components. Programmers completed nearly 60 billion software downloads from the repository in 2017 alone.

“If you are a software developer in any federal agency today, you are very aware that you are using open-source and third-party [software] components in development today,” Weeks says. “The average organization is using 125,000 Java open source components – just Java alone. But organizations aren’t just developing in Java, they’re also developing in JavaScript, .Net, Python and other languages. So that number goes up by double or triple.”

Reusing software saves time and money. It’s also critical to supporting the rapid cycles favored by today’s Agile and DevOps methodologies. Yet while reuse promises time-tested code, it is not without risk: Weeks estimates one in 18 downloads from The Central Repository – 5.5 percent – contains a known vulnerability. Because it never deletes anything, the repository is a user-beware system. It’s up to software developers themselves – not the repository – to determine whether or not the software components they download are safe.

Manual Review or Automation?

Manually performing a detailed security analysis of each open-source software component to confirm it is safe and free of known vulnerabilities takes hours. That, in turn, eats into precious development time, undermining the efficiency that reusing code was meant to deliver in the first place.

Tools from Sonatype, Black Duck of Burlington, Mass., and others automate most of that work. Sonatype’s Nexus Firewall, for example, scans components as they enter the development environment and blocks those that contain known flaws. It also suggests safe alternatives, such as newer versions of the same components. Development teams can employ a host of automated tools to simplify or speed other parts of the build, test and secure processes.

Some of these screening tools are commercial products; others, like much of the software they scan, are open source. Jenkins, for example, is a popular open-source automation server that builds and tests code continuously, helping developers quickly find and fix defects in their codebase. These tools focus on the reused code in a system; static analysis tools, like those from Veracode, focus on the critical custom code that glues the open-source components together into a working system.
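To make the idea concrete, here is a minimal sketch, in Python, of the kind of check such screening tools automate. It is not Sonatype’s or Black Duck’s actual implementation; the vulnerability feed format and file names are hypothetical placeholders.

```python
# Minimal sketch of automated dependency screening -- not any vendor's actual
# implementation. Assumes a hypothetical JSON feed mapping component
# coordinates ("group:artifact:version") to known CVE identifiers.
import json
import sys


def load_vulnerability_feed(path):
    """Load a {"group:artifact:version": ["CVE-..."]} mapping (hypothetical format)."""
    with open(path) as f:
        return json.load(f)


def screen_components(components, feed):
    """Return (component, CVEs) pairs that should be blocked from the build."""
    flagged = []
    for coordinate in components:
        cves = feed.get(coordinate, [])
        if cves:
            flagged.append((coordinate, cves))
    return flagged


if __name__ == "__main__":
    feed = load_vulnerability_feed("known_vulns.json")   # hypothetical feed file
    with open("deps.txt") as f:                          # one coordinate per line
        build_deps = [line.strip() for line in f if line.strip()]
    blocked = screen_components(build_deps, feed)
    for coordinate, cves in blocked:
        print(f"BLOCK {coordinate}: {', '.join(cves)}")
    sys.exit(1 if blocked else 0)  # non-zero exit fails the build, like a policy gate
```

Commercial tools draw on continuously updated vulnerability databases and weigh license and policy data as well, but the gate-the-build principle is the same.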

“Automation is key to agile development,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s (GDIT) Health Solutions. “The tools now exist to automate everything: the builds, unit tests, functional testing, performance testing, penetration testing and more. Ensuring the code behind new functionality not only works, but is also secure, is critical. We need to know that the stuff we’re producing is of high quality and meets our standards, and we try to automate as much of these reviews as possible.”
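A rough sketch of what such an automated gate might look like appears below. The stage commands are placeholders, not any particular toolchain; the point is the fail-fast chaining of build, test and security checks.

```python
# Minimal sketch of an automated build-and-test gate of the kind Zach
# describes. The specific commands are placeholders; a real pipeline would
# invoke the project's actual build, test and scanning tools.
import subprocess
import sys

STAGES = [
    ("build",           ["make", "build"]),
    ("unit tests",      ["make", "test"]),
    ("static analysis", ["make", "static-scan"]),
    ("dependency scan", ["make", "dependency-scan"]),
]


def run_pipeline():
    for name, command in STAGES:
        print(f"--- running {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:          # any failed stage stops the release
            print(f"{name} failed; stopping the pipeline.")
            return False
    return True


if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```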

But automated screening and testing is still far from universal: Some organizations use it, others don’t. Weeks describes one large financial services firm that prided itself on its software team’s rigorous governance process. Developers were required to ask permission from a security group before using open-source components. The security team’s thorough reviews took about 12 weeks for new components and six to seven weeks for new versions of components already in use. Even so, officials estimated some 800 open-source components had made it through those reviews and were in use in their 2,000-plus deployed applications.

Then, Sonatype was invited to scan the firm’s deployed software. “We found more than 13,000 open source components were running in those 2,000 applications,” Weeks recalls. “It’s not hard to see what happened. You’ve got developers working on two-week sprints, so what do you think they’re going to do? The natural behavior is, ‘I’ve got a deadline, I have to meet it, I have to be productive.’ They can’t wait 12 weeks for another group to respond.”

Automation, he said, is the answer.

Integration and the Supply Chain

Building software today is a lot like building a car: Rather than manufacture every component, from the screws to the tires to the seat covers, manufacturers focus their efforts on the pieces that differentiate products and outsource the commodity pieces to suppliers.

Chris Wysopal, chief technology officer at Veracode, said the average software application today uses 46 ready-made components. Like Sonatype, Veracode offers a testing tool that scans components for known vulnerabilities; its test suite also includes a static analysis tool to spot problems in custom code and a dynamic analysis tool that tests software in real time.

As development cycles get shorter, demand for automated testing increases, Wysopal says. The shift from waterfall to Agile over the past five years shortened typical development cycles from months to weeks. The advent of DevOps and continuous development accelerates that further, from weeks to days or even hours.

“We’re going through this transition ourselves. When we started Veracode 11 years ago, we were a waterfall company. We did four to 10 releases a year,” Wysopal says. “Then we went to Agile and did 12 releases a year and now we’re making the transition to DevOps, so we can deploy on a daily basis if we need or want to. What we see in most of our customers is fragmented methodologies: It might be 50 percent waterfall, 40 percent agile and 10 percent DevOps. So they want tools that can fit into that DevOps pipeline.”

A tool built for speed can support slower development cycles; the opposite, however, is not the case.

One way to enhance testing is to let developers know sooner that they may have a problem. Veracode is developing a product that scans code as it’s written, running a check every few seconds and alerting the developer as soon as a problem is spotted. This has two effects: it helps developers clean up problems more quickly, and it trains them to avoid those problems in the first place. In that sense, it’s like the spell checker in a word processing program.

“It’s fundamentally changing security testing for a just-in-time programming environment,” Wysopal says.
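Conceptually, that kind of just-in-time feedback can be approximated with a simple file watcher that re-scans whatever changed. The sketch below is illustrative only, with a placeholder where a real analysis engine would run; it is not how Veracode’s product works.

```python
# Minimal sketch of "scan as you type" feedback, in the spirit of the
# spell-check analogy. It polls a source tree for changed files and re-runs a
# scanner on each one; scan_file() is a placeholder, not a real analyzer.
import os
import time

WATCHED_DIR = "src"
POLL_SECONDS = 2


def scan_file(path):
    """Placeholder for a real static-analysis call on a single file."""
    print(f"scanning {path} ...")


def watch(directory):
    last_seen = {}
    while True:
        for root, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                mtime = os.path.getmtime(path)
                if last_seen.get(path) != mtime:   # new or modified since last pass
                    last_seen[path] = mtime
                    scan_file(path)
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    watch(WATCHED_DIR)
```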

Yet as powerful and valuable as automation is, these tools alone will not make you secure.

“Automation is extremely important,” he says. “Everyone who’s doing software should be doing automation. And then manual testing on top of that is needed for anyone who has higher security needs.” He puts the financial industry and government users into that category.

For government agencies that contract for most of their software, understanding what kinds of tools and processes their suppliers have in place to ensure software quality is critical. That could mean hiring a third party to do security testing on software when it’s delivered, or it could mean requiring systems integrators and development firms to demonstrate their security processes and procedures before software is accepted.

“In today’s Agile-driven environment, software vulnerability can be a major source of potential compromise to sprint cadences for some teams,” says GDIT’s Zach. “We can’t build a weeks-long manual test and evaluation cycle into Agile sprints. Automated testing is the only way we can validate the security of our code while still achieving consistent, frequent software delivery.”

According to Veracode’s State of Software Security 2017 report, 36 percent of survey respondents do not run (or were unaware of) automated static analysis on their internally developed software. Nearly half never conduct dynamic testing in a runtime environment. Worst of all, 83 percent acknowledge releasing software before testing for or resolving security issues.

“The bottom line is all software needs to be tested. The real question for teams is what ratio and types of testing will be automated and which will be manual,” Zach says. “By exploiting automation tools and practices in the right ways, we can deliver the best possible software, as rapidly and securely as possible, without compromising the overall mission of government agencies.”


Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments where users’ data and applications can be side by side with that of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “The gun goes off, and some runners have started too quickly, leaving just before the gun fired. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then, afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By inducing the processor to execute code speculatively and then observing the side effects left behind in shared caches and memory mappings, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of different research groups working independently and nearly simultaneously, including Jann Horn of Google’s Project Zero and researchers at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers converged on the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.
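The principle behind these cache side channels can be illustrated with a toy simulation. The sketch below is not an exploit and does not touch real hardware; it simply models a cache in which previously touched lines respond faster, and shows how an attacker could infer a secret value from those timing differences alone.

```python
# Toy model of a cache timing side channel -- purely illustrative, not an
# actual Spectre or Meltdown exploit. A simulated "cache" makes accesses to
# previously touched lines fast; the attacker recovers a secret byte by
# finding which probe line the victim's (simulated) speculative access left
# in the cache.
import random

CACHE_LINES = 256          # one probe line per possible byte value
FAST, SLOW = 1, 100        # simulated access latencies (arbitrary units)


class SimulatedCache:
    def __init__(self):
        self.cached = set()

    def flush(self):
        self.cached.clear()

    def access(self, line):
        latency = FAST if line in self.cached else SLOW
        self.cached.add(line)
        return latency


def victim_speculative_access(cache, secret_byte):
    # Stand-in for the discarded speculative load: its architectural result is
    # thrown away, but the cache line it touched stays cached (the "indirect
    # side-effect" CERT describes).
    cache.access(secret_byte)


def attacker_recover_byte(cache, victim):
    cache.flush()                                                   # FLUSH
    victim(cache)                                                   # victim runs
    timings = [cache.access(line) for line in range(CACHE_LINES)]   # RELOAD & time
    return min(range(CACHE_LINES), key=lambda line: timings[line])  # fastest line


if __name__ == "__main__":
    secret = random.randrange(256)
    cache = SimulatedCache()
    guess = attacker_recover_byte(cache, lambda c: victim_speculative_access(c, secret))
    print(f"secret={secret} recovered={guess}")
```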

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t rely on successfully executing recognizable malicious code, the hackers’ tracks are far harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic stand-point,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”


The ABCs of 2018 Federal IT Modernization: I to Z

In part two of GovTechWorks’ analysis of the Trump Administration’s federal IT modernization plan, we examine the likely guiding impact of the Office of Management and Budget, the manner in which agencies’ infrastructures might change, and the fate of expensive legacy systems.

The White House IT modernization plan released in December seeks a rapid overhaul of IT infrastructure across federal civilian agencies, with an emphasis on redefining the government’s approach to managing its networks and securing its data. Here, in this second part of our two-part analysis, is what you need to know from I to Z (for A-H, click here):

I is for Infrastructure
Modernization boils down to three things: Infrastructure, applications and security. Imagine if every government agency managed its own telephone network or international logistics office, rather than outsourcing such services. IT services are essentially the same. Agencies still need expertise to connect to those services – they still have telecom experts and mail room staff – but they don’t have to manage the entire process.

Special exceptions will always exist for certain military, intelligence (or other specialized) requirements. Increasingly, IT services are becoming commodity services purchased on the open market. Rather than having to own, manage and maintain all that infrastructure, agencies will increasingly buy infrastructure as a service (IaaS) in the cloud — netting faster, perpetually maintained and updated equipment at a lower cost. To bring maximum value – and savings – out of those services, they’ll have to invest in integration and support services to ensure their systems are not only cost effective, but also secure.

J is for JAB, the Joint Authorization Board
The JAB combines expertise from the General Services Administration (GSA), the Department of Homeland Security (DHS) and the Department of Defense (DOD). It issues provisional authority to operate (ATO) designations for widely used cloud services. The JAB will have a definitive role in prioritizing and approving commercial cloud offerings for the highest-risk federal systems.

K is for Keys
The ultimate solution for scanning encrypted data for potential malicious activity is to decrypt that data for a thorough examination. This involves first having access to the encryption keys for federal data and then securing those keys to ensure they don’t fall into the wrong hands. In short, these keys are key to the federal strategy of securing both government data and government networks.

L is for Legacy
The government still spends 70 percent of its IT budget managing legacy systems. That’s down from as much as 85 percent a few years ago, but still too much. In a world where volumes of data continue to expand exponentially and the cost of computer processing power continues to plunge, how long can we afford to overspend on last year’s (or last decade’s) aging (and less secure) technology?

M is for Monopsony
A monopoly occurs when one source controls the supply of a given product, service or commodity. A monopsony occurs when a single customer controls the consumption of products, services or commodities. In a classical monopsony, the sole customer dictates terms to all sellers.

Despite its size, the federal government cannot dictate terms to information technology vendors. It can consolidate its purchasing power to increase leverage, and that’s exactly what the government will do in coming years. The process begins with networking services as agencies transition from the old Networx contract to the new Enterprise Information Services vehicle.

Look for it to continue as agencies consolidate purchasing power for commodity software services, such as email, continuous monitoring and collaboration software.

The government may not ultimately wield the full market power of a monopsony, but it can leverage greater negotiating power by centralizing decision making and consolidating purchase and licensing agreements. Look for that to increase significantly in the years ahead.

N is for Networks
Networks used to be the crown jewels of the government’s information enterprise, providing the glue that held systems together and enabling the government to operate. But if the past few years proved anything, it’s that you can’t keep the bad guys out. They’re already in, looking around, waiting for an opportunity.

Networks remain essential infrastructure, but they will increasingly be virtualized, existing in software and carrying encrypted data that travels on commercial fiber and sits, much of the time, in commercial data centers (generically referred to as the cloud). You may not be able to keep the bad guys out, but you can control what they can access.

O is for OMB
The Office of Management and Budget has oversight over much of the modernization plan. The agency is mentioned 127 times in the White House plan, including 47 times in its 50 recommendations. OMB will be either the responsible party or the receiving party for work done by others on 34 of those 50 recommendations.

P is for Prioritization
Given the vast number of technical, manpower and security challenges that weigh down modernization efforts, prioritizing the programs that can deliver the greatest payoff is essential. In addition, agencies are expected to focus their modernization efforts on high-value assets that pose the greatest vulnerabilities and risks. From those lists, DHS must identify six systems by June 30 to receive centralized interventions that include staffing and technical support.

The aim is to prioritize where new investment, talent infusions and security policies will make the greatest difference. To maximize that effort, DHS may choose projects that can expand to include other systems and agencies.

OMB must also review and prioritize any impediments to modernization and cloud adoption.

Q is for Quick Start
Technology is often not the most complicated part of modernization efforts. Finding a viable acquisition strategy that won’t put yesterday’s technology in the government’s hands tomorrow is often harder. That’s why the report directs OMB to assemble an Acquisition Tiger Team to develop a “quick start” acquisition package to help agencies more quickly license technology and migrate to the cloud.

The aim: combine market research, acquisition plans, readily identified sources and templates for both requests for quotes (RFQs) and Independent Government Cost Estimate (IGCE) calculations — which would be based on completed acquisitions. The tiger team will also help identify qualified small and disadvantaged businesses to help agencies meet set-aside requirements.

R is for Recommendations
There are 50 recommendations in the White House IT modernization report with deadlines ranging from February to August, making the year ahead a busy one for OMB, DHS and GSA, the three agencies responsible for most of the work. A complete list of the recommendations is available here.

T is for the TIC
The federal government developed the Trusted Internet Connection as a means of controlling the number of on and off ramps between government networks and the largely unregulated internet. But in a world now dominated by cloud-based software applications, remote cloud data centers, mobile computing platforms and web-based interfaces that may access multiple different systems to deliver information in context, the TIC needs to be rethought.

“The piece that we struggled with is the Trusted Internet Connections (TIC) initiative – that is a model that has to mature and get solved,” former Federal CIO Tony Scott told Federal News Radio. “It’s an old construct that is applied to modern-day cloud that doesn’t work. It causes performance, cost and latency issues. So the call to double down and sort that out is important. There has been a lot of good work that has happened, but the definitive solution has not been figured out yet.”

The TIC policy is the heart and soul of the government’s perimeter-based security model. Already, some agencies have chosen to bypass the TIC for certain cloud-based services, such as Office 365, trusting Microsoft’s security and recognizing that if all that data had to pass through an agency’s TIC, performance would suffer.

To modernize TIC capabilities, policies, reference architectures and associated cloud security authorization baselines, OMB must update TIC policies so agencies have a clear path forward to build out data-level protections and more quickly migrate to commercial cloud solutions. A 90-day sprint is to begin in mid-February, during which projects approved by OMB will pilot proposed changes in TIC requirements.

OMB must determine whether all data traveling to and from agency information systems hosted by commercial cloud providers warrants scanning by DHS, or whether only some information needs to be scanned. Other considerations under review: Expanding the number of TIC access points in each agency and a model for determining how best to implement intrusion detection and prevention capabilities into cloud services.

U is for Updating the Federal Cloud Computing Strategy
The government’s “Cloud First” policy is now seven years old. Updates are in order. By April 15, OMB must provide additional guidance on both appropriate use cases and operational security for cloud environments. All relevant policies on cloud migration, infrastructure consolidation and shared services will be reviewed.

In addition, OMB has until June to develop standardized contract language for cloud acquisition, including clauses that define consistent requirements for security, privacy and access to data. Establishing uniform contract language will make it easier to compare and broker cloud offerings and ensure government requirements are met.

V is for Verification
Verification, or authentication, of users’ identities is at the heart of protecting government information. Are you who you say you are? Key to securing information systems is ensuring that access is granted only to users who can be identified and verified as deserving it.

OMB has until March 1 to issue new identity policy guidance for public comment and to recommend identity service areas suitable for shared services. GSA must provide a business case for consolidating existing identity services to improve usability, drive secure access and enable cloud-based collaboration services that make it easier to share and work across agencies, something that can be cumbersome today.

W, X, Y, Z is for Wrapping it All Up
The Federal Government is shifting to a consolidated IT model that will change the nature of IT departments and the services they buy. Centralized offerings for commodity IT – whether email, office tools and other common software-as-a-service offerings or virtual desktops and web hosting – will be the norm. As much as possible, the objective is to get agencies on the same page, using the same security services, the same collaboration services, the same data services and make those common (or in some cases shared) across multiple agencies.

Doing so promises to reduce manpower needs and licensing costs by eliminating duplication of effort and increasing market leverage to drive down prices. But getting there will not be easy. Integration and security pose unique challenges in a government context, requiring skill, experience and specific expertise. On the government side, policy updates will only solve some of the challenges. Acquisition regulations must also be updated to support wider adoption of commercial cloud products.

Some agencies will need more help than others. Cultural barriers will continue to be major hurdles. Inevitably, staff will have to develop new skills as old ones disappear. Yet even in the midst of all that upheaval, some things don’t change. “In the end, IT modernization is really all about supporting the mission,” says Stan Tyliszczak, chief engineer at systems integrator General Dynamics Information Technology. “It’s about helping government employees complete their work, protecting the privacy of our citizens and ensuring both have timely access to the information and services they need. IT has always made those things better and easier, and modernization is only necessary to continue that process. That much never changes.”

 


The ABCs of 2018 Federal IT Modernization: A to H

The White House issued its IT modernization plan last December and followed it with an ambitious program that could become a proving ground for rapidly overhauling IT infrastructure, data access and customer service. After years of talking about IT modernization, cybersecurity and migration to the cloud, federal agencies are now poised to ramp up the action.

Here, in A-B-C form, is what you need to know from A to H:

A is for Agriculture
The U.S. Department of Agriculture (USDA) will be a sort of proving ground for implementing the Trump administration’s vision for the future high-tech, high-performance, customer-satisfying government. USDA announced in December 2017 it will collapse 39 data centers into one (plus a backup), and consolidate 22 independent chief information officers under a single CIO with seven deputies. The aim: reinvent the agency as a modern, customer-centered organization and provide its leaders with instant access to a wealth of agency data.

B is for Better Citizen Services
“It is imperative for the federal government to leverage … innovations to provide better service for its citizens in the most cost-effective and secure manner,” the report states – in just its third sentence. Yes, modernization should ultimately save money by reducing the billions spent to keep aging systems operational. And yes, it should help overcome the patchwork of cybersecurity point solutions now used to protect federal networks, systems and data.

USDA Secretary Sonny Perdue’s experience modernizing government IT during two terms as governor of Georgia from 2003-2010 convinced him he could achieve similar results at the federal level. “He really saw, in reinventing Georgia government, how IT modernization and delivering better customer service benefitted not only employees, but the people of the state,” Deputy Secretary of Agriculture Steve Censky said in a TV interview.

Among the agency’s goals: Increase access to information throughout the agency by means of online service portals and advanced application program interfaces.

C is for Centers of Excellence
USDA won’t be going it alone. Under the direction of the Office of Science and Technology Policy, the agency will be the first to engage with a new set of experts at the General Services Administration (GSA). GSA is on an accelerated course to create five Centers of Excellence, leveraging both public and private sector expertise to develop best practices and standards that agencies can use for:

  • Cloud adoption
  • IT infrastructure optimization
  • Customer experience
  • Service delivery analytics
  • Contact centers

Jack Wilmer, White House senior advisor for Cybersecurity and IT Modernization, says the idea is to provide each agency’s modernization effort with the same core concepts and approach – and the best available experts. “We’re trying to leverage private sector expertise, bringing them in a centralized fashion, making them available to government agencies as they modernize,” he told Government Matters.

GSA planned to award contracts to industry partners by the end of January – just 45 days after its initial solicitation – but as of March 5, no contracts had been awarded. Phase 1 contracts for assessment, planning and some initial activities should be finalized soon. Phase 2 awards for cloud migration, infrastructure optimization and customer experience are expected by the end of the year, Joanne Collins Smee, acting director of GSA’s Technology Transformation Service and deputy commissioner of the Federal Acquisition Service, said at a March 1 AFCEA event in Washington, D.C.

D is for Data Centers
While not every data center will close, many more will soon disappear. Modernization is about getting the government out of the business of managing big infrastructure investments and instead leveraging commercial cloud infrastructure and technology wherever possible. But don’t think your agency’s data won’t be in a data center somewhere.

“What is the cloud, anyway? Isn’t it really someone else’s data center, available on demand?” says Stan Tyliszczak, chief engineer at systems integrator General Dynamics Information Technology (GDIT). “Moving to the cloud means getting out of the business of running that data center yourself.”

The White House splits its cloud strategy into two buckets:

  • “Bring the government to the cloud.” Put government data and applications in privately-owned and operated infrastructure, where it is protected through encryption and other security technologies. This is public cloud, where government data sits side by side with private data in third-party data centers.
  • “Bring the cloud to the government.” Putting government data and applications on vendor-owned infrastructure, but located in government-owned facilities, as the Intelligence Community Information Technology Enterprise (IC ITE) does with the IC’s Commercial Cloud Services (C2S) contract with Amazon Web Services.

Figuring out what makes sense when depends on the use case and, for most agencies, will mean a combination of on-premises solutions, shared government services and commercial services in public clouds. “That’s the hybrid cloud model everyone’s talking about. But it’s not a trivial exercise. Melding those together is the challenge,” Tyliszczak says. “That’s what integrators are for.”

E is for Encryption
Government cybersecurity efforts have historically focused on defending the network and its perimeter, rather than the data that travels on that network. As cloud services are integrated into conventional on premise IT solutions, securing the data has become essential. At least 47 percent of federal network traffic is encrypted today – frustrating agency efforts to monitor what’s crossing network perimeters.

“Rather than treating Federal networks as trusted entities to be defended at the perimeter,” the modernization report advised, “agencies should shift their focus to placing security protections closer to data.”

To do that, the government must improve the way it authenticates devices and users on its networks, securing who has access and how, and encrypting data both at rest and in transit.
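Encrypting data at rest is the comparatively easy part; managing the keys is where the policy questions arise. The sketch below is purely illustrative, assuming the open-source Python cryptography package and hypothetical file names, not any agency’s actual approach.

```python
# Minimal sketch of encrypting data at rest, assuming the third-party
# "cryptography" package is installed. Key management -- who holds this key
# and how it is protected -- is the hard part the modernization report is
# concerned with; this sketch simply stores the key in a local file.
from cryptography.fernet import Fernet


def create_key(path="data.key"):
    key = Fernet.generate_key()
    with open(path, "wb") as f:
        f.write(key)
    return key


def encrypt_file(key, source, destination):
    with open(source, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(destination, "wb") as f:
        f.write(ciphertext)


def decrypt_file(key, source):
    with open(source, "rb") as f:
        return Fernet(key).decrypt(f.read())


if __name__ == "__main__":
    key = create_key()                                   # hypothetical local key file
    encrypt_file(key, "records.csv", "records.csv.enc")  # hypothetical data file
    print(decrypt_file(key, "records.csv.enc")[:80])
```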

“Now you’re starting to obfuscate whether your sensors can actually inspect the content of that data,” notes Eric White, Cybersecurity program director at GDIT’s Health and Civilian Solutions Division. “Because it’s now encrypted, you add another layer of complexity to know for sure whether it’s the good guys or the bad guys moving data in and out of your network.”

White notes that the Department of Homeland Security (DHS) is charged with solving this encryption dilemma, balancing the millions of dollars in investment in high-end network-monitoring sensors, such as those associated with the Einstein program, against protecting individual privacy. Enabling those sensors to see through or decipher encrypted data without undermining the security of the data – or the privacy of individuals – is a critical priority. DHS has commissioned research to develop potential solutions, including virtualizing sensors for cloud environments; relocating sensors to the endpoints of encrypted tunnels; creating man-in-the-middle solutions that intercept data in motion; or providing the sensors with decryption keys.

F is for FedRAMP

The Federal Risk and Authorization Management Program (FedRAMP) remains the critical process for ensuring private-sector cloud offerings meet government security requirements. Look for updates to FedRAMP baselines that could allow tailoring of security controls for low-risk systems, address new approaches to integrated cloud services with federal Trusted Internet Connection (TIC) services and consider common features or capabilities that could be incorporated into higher-risk systems with FedRAMP “high” baselines.

Importantly, the report directs the General Services Administration (GSA), which manages FedRAMP, to come up with new solutions that make it easier for software-as-a-service (SaaS) products already authorized for use in one agency to be accepted for use in another. Making the process for issuing an authority to operate (ATO) faster and easier to reuse has long been a goal of both cloud providers and government customers. This is particularly critical for shared services, in which one agency provides its approved commercial solution to another agency.

G is for GSA
Already powerfully influential as a buyer and developer for other agencies, GSA stands to become even more influential as the government moves to consolidate networks and other IT services into fewer contracts and licensing agreements, and to increase the commonality of solutions across the government.

This is especially true among smaller agencies that lack the resources, scale and expertise to effectively procure and manage their own IT services.

H is for Homeland Security
DHS is responsible for the overall cybersecurity of all federal government systems. The only federal entity mentioned more frequently in the White House modernization report is the Office of Management and Budget, which is the White House agency responsible for implementing the report’s guidance.

DHS was mandated to issue a report by Feb. 15 identifying the common weaknesses of the government’s highest-value IT assets and recommending solutions for reducing risk and vulnerability government-wide. By May 15, the agency must produce a prioritized list of systems “for government-wide intervention” and will provide a host of advisory and support services to help secure government systems. DHS also owns and manages the National Cybersecurity Protection System (NCPS) and the EINSTEIN sensor suites that capture and analyze network flow, detect intruders and scan the data coming in and out of government systems to identify potentially malicious activity and, in the case of email, block and filter threatening content.

Look for next week’s edition of GovTechWorks for Part 2: Modernization from I to Z. In Part 2, we outline how infrastructure among government agencies will be impacted and streamlined by modernization, as well as discuss the fate of legacy systems and their maintenance budgets, and the major role the Office of Management and Budget will play in overall implementation.

Next week: Part 2, Modernization I-Z.

 


How the Air Force Changed Tune on Cybersecurity

Peter Kim, chief information security officer (CISO) for the U.S. Air Force, calls himself Dr. Doom. Lauren Knausenberger, director of cyberspace innovation for the Air Force, is his opposite. Where he sees trouble, she sees opportunity. Where he sees reasons to say no, she seeks ways to change the question.

For Kim, the dialogue they’ve shared since Knausenberger left her job atop a private-sector tech consultancy to join the Air Force has been transformational.

“I have gone into a kind of rehab for cybersecurity pros,” he says. “I’ve had to admit I have a problem: I can’t lock everything down.” He knows. He’s tried.

The two engage constantly, debating and questioning whether decisions and steps designed to protect Air Force systems and data are having their intended effect, they said, sharing a dais during a recent AFCEA cybersecurity event in Crystal City. “Are the things we’re doing actually making us more secure or just generating a lot of paperwork?” asks Knausenberger. “We are trying to turn everything on its head.”

As for Kim, she added, “Pete’s doing really well on his rehab program.”

One way Knausenberger has turned Kim’s head has been her approach to security certification packages for new software. Instead of developing massive certification packages for every program – documentation that’s hundreds of pages thick and unlikely to ever be read – she wants the Air Force to certify the processes used to develop software, rather than the programs themselves.

“Why don’t we think about software like meat at the grocery?” she asked. “USDA doesn’t look at every individual piece of meat… Our goal is to certify the factory, not the program.”

Similarly, Knausenberger says the Air Force is trying now to apply similar requirements to acquisition contracts, accepting the idea that since finding software vulnerabilities is inevitable, it’s best to have a plan for fixing them rather than hoping to regulate them out of existence. “So you might start seeing language that says, ‘You need to fix vulnerabilities within 10 days.’ Or perhaps we may have to pay bug bounties,” she says. “We know nothing is going to be perfect and we need to accept that. But we also need to start putting a level of commercial expectation into our programs.”

Combining development, security and operations into an integrated process – DevSecOps, in industry parlance – is the new name of the game, they argue together. The aim: Build security in during development, rather than bolting it on at the end.

The takeaway from the “Hack the Air Force” bug bounty programs run so far is that every such effort yields new vulnerabilities – and that thousands of pages of certification didn’t prevent them. As computing power becomes less costly and automation gets easier, hackers can be expected to use artificial intelligence to break through security barriers.

Continuous automated testing is the only way to combat their persistent threat, Kim said.

Michael Baker, CISO at systems integrator General Dynamics Information Technology, agrees. “The best way to find the vulnerabilities is to continuously monitor your environment and challenge your assumptions,” he says. “Hackers already use automated tools and the latest vulnerabilities to exploit systems. We have to beat them to it – finding and patching those vulnerabilities before they can exploit them. Robust and assured endpoint protection, combined with continuous, automated testing to find vulnerabilities and exploits, is the only way to do that.”

“I think we ought to get moving on automated security testing and penetration,” Kim added. “The days of RMF [risk management framework] packages are past. They’re dinosaurs. We’ve got to get to a different way of addressing security controls and the RMF process.”


JOMIS Will Take E-Health Records to the Frontlines

The Defense Department Military Health System Genesis electronic health records (EHR) system went live last October at Madigan Army Medical Center (Wash.), the biggest step so far in modernizing DOD’s vast MHS with a proven commercial solution. Now comes the hard part: Tying that system in with operational medicine for deployed troops around the globe.

War zones, ships at sea and aeromedical evacuations each present a new set of challenges for digital health records. Front-line units lack the bandwidth and digital infrastructure to enable cloud-based health systems like MHS Genesis. Indeed, when bandwidth is constrained, health data ranks last on the priority list, falling below command and control, intelligence and other mission data.

The Joint Operational Medicine Information Systems (JOMIS) program office oversees DOD’s operational medicine initiatives, including the legacy Theater Medical Information Program – Joint system used in today’s operational theaters of Iraq and Afghanistan, as well as aboard ships and in other remote locales.

“One of the biggest pain points we have right now is the issue of moving data from the various roles of care, from the first responder [in the war zone] to the First Aid station to something like Landstuhl (Germany) Regional Medical Center, to something in the U.S.,” Navy Capt. Dr. James Andrew Ellzy told GovTechWorks. He is deputy program executive officer (functional) for JOMIS, under the Program Executive Office, Defense Healthcare Management Systems (PEO DHMS).

PEO DHMS defines four stages, or “roles,” of care once a patient begins to receive treatment: Role One, first responders; Role Two, forward resuscitative care; Role Three, theater hospitals; and Role Four, service-based medical facilities.

“Most of those early roles right now, are still using paper records,” Ellzy said. Electronic documentation begins once medical operators are in an established location. “Good records usually start the first place that has a concrete slab.”

Among the changes MHS Genesis will bring is consolidation. The legacy AHLTA (Armed Forces Health Longitudinal Technology Application) solution and its heavily modified theater-level variant, AHLTA-T, incorporate separate systems for inpatient and outpatient support.

MHS Genesis, however, will provide a single record regardless of patient status.

For deployed medical units, that’s important. Set up and maintenance for AHLTA’s outpatient records and the Joint Composite Health Care System have always been challenging.

“In order to set up the system, you have to have the technical skillset to initialize and sustain these systems,” said Ryan Loving, director of Health IT Solutions for military health services and the VA at General Dynamics Information Technology’s (GDIT) Health and Civilian Solutions Division. “This is a bigger problem for the Army than the other services, because the system is neither operated nor maintained until they go downrange. As a result, they lack the experience to be experts in setup and sustainment.”

JOMIS’ ultimate goal, according to Stacy A. Cummings, who heads PEO DHMS, is to provide a virtually seamless representation of MHS Genesis at deployed locations.

“For the first time, we’re bringing together inpatient and outpatient, medical and dental records, so we’re going to have a single integrated record for the military health system,” Cummings said at the HIMSS 2018 health IT conference in March. Last year, she told Government CIO magazine, “We are configuring the same exact tool for low- and no-communications environments.”

Therein lies the challenge, said GDIT’s Loving. “Genesis wasn’t designed for this kind of austere environment. Adapting to the unique demands of operational medicine will require a lot of collaboration with military health, with service-specific tactical networks, and an intimate understanding of those network environments today and where they’re headed in the future.”

Operating on the tactical edge – whether doing command and control or sharing medical data – is probably the hardest problem to solve, said Tom Sasala, director of the Army Architecture Integration Center and the service’s Chief Data Officer. “The difference between the enterprise environment and the tactical environment, when it comes to some of the more modern technologies like cloud, is that most modern technologies rely on an always-on, low-latency network connection. That simply doesn’t exist in a large portion of the world – and it certainly doesn’t exist in a large portion of the Army’s enterprise.”

Military units deploy into war zones and disaster zones where commercial connectivity is either highly compromised or non-existent. Satellite connectivity is limited at best. “Our challenge is how do we find commercial solutions that we cannot just adopt, but [can] adapt for our special purposes,” Sasala said.

MHS Genesis is like any modern cloud solution in that regard. In fact, it’s based on Cerner Millennium, a popular commercial EHR platform. So while it may be perfect for garrison hospitals and clinics – and ideal for sharing medical records with other agencies, civilian hospitals and health providers – the military’s operational requirements present unique circumstances unimagined by the original system’s architects.

Ellzy acknowledges the concern. “There’s only so much bandwidth,” he said. “So if medical is taking some of it, that means the operators don’t have as much. So how do we work with the operators to get that bandwidth to move the data back and forth?”

Indeed, the bandwidth and latency standards available via satellite links weren’t designed for such systems, nor fast enough to accommodate their requirements. More important, when bandwidth is constrained, military systems must line up for access, and health data is literally last on the priority list. Even ideas like using telemedicine in forward locations aren’t viable. “That works well in a hospital where you have all the connectivity you need,” Sasala said. “But it won’t work so well in an austere environment with limited connectivity.”

The legacy AHLTA-T system has a store-and-forward capability that allows local storage while connectivity is constrained or unavailable, with data forwarded to a central database once it’s back online. Delays mean documentation may not be available at subsequent locations when patients are moved from one level of care to the next.

The challenge for JOMIS will be to find a way to work in theater and then connect and share saved data while overcoming the basic functional challenges that threaten to undermine the system in forward locations.

“I’ll want the ability to go off the network for a period of time,” Ellzy said, “for whatever reason, whether I’m in a place where there isn’t a network, or my network goes down or I’m on a submarine and can’t actually send information out.”

AHLTA-T manages the constrained or disconnected network situation by allowing the system to operate on a stand-alone computer (or network configuration) at field locations, relying on built-in store-and-forward functionality to save medical data locally until it can be forwarded to the Theater Medical Data Store and Clinical Data Repository. There, it can be accessed by authorized medical personnel worldwide.
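The store-and-forward pattern itself is straightforward, as the illustrative sketch below suggests; the hard part is adapting a cloud-native system like MHS Genesis to it. The file paths and functions here are hypothetical placeholders, not the actual AHLTA-T or JOMIS design.

```python
# Minimal sketch of the store-and-forward pattern described above -- not the
# actual AHLTA-T or JOMIS implementation. Records are written to a local
# queue while disconnected and forwarded to the central repository whenever
# connectivity returns.
import json
import os
import time

QUEUE_DIR = "outbox"                      # hypothetical local store


def store_locally(record):
    os.makedirs(QUEUE_DIR, exist_ok=True)
    name = f"{int(time.time() * 1000)}.json"
    with open(os.path.join(QUEUE_DIR, name), "w") as f:
        json.dump(record, f)


def connectivity_available():
    """Placeholder: a real system would probe the theater network link."""
    return False


def send_to_central_repository(record):
    """Placeholder for the upload to the central clinical data store."""
    print("forwarded:", record["patient_id"])


def forward_queued_records():
    if not connectivity_available():
        return                             # stay in local-only mode
    for name in sorted(os.listdir(QUEUE_DIR)):
        path = os.path.join(QUEUE_DIR, name)
        with open(path) as f:
            send_to_central_repository(json.load(f))
        os.remove(path)                    # delete only after a successful forward


if __name__ == "__main__":
    store_locally({"patient_id": "12345", "note": "Role One field treatment"})
    forward_queued_records()               # no-op until the link comes back
```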

Engineering a comparable JOMIS solution will be complex and involve working around and within the MHS Genesis architecture, leveraging innovative warfighter IT infrastructure wherever possible. “We have to adapt Genesis to the store-and-forward architecture without compromising the basic functionality it provides,” said GDIT’s Loving.

Ellzy acknowledges that the compromises necessary to make AHLTA-T work led to unintended consequences.

“When you look at the legacy AHLTA versus the AHLTA-T, there are some significant differences,” he said. Extra training is necessary to use the combat theater version. That shouldn’t be the case with JOMIS. “The desire with Genesis,” Ellzy said, “is that medical personnel will need significantly less training – if any – as they move from the garrison to the deployed setting.”

Reporter Jon Anderson contributed to this report.
