February 2018

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance, who served as special assistant to President Obama and as the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments, where users’ data and applications can sit side by side with those of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “Some runners start too quickly, just before the gun goes off. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then, afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By triggering speculative execution and then observing its side effects through shared memory mappings, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of research groups at once, including Jann Horn, a researcher with Google’s Project Zero, and teams at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers converged on the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database, operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities: When a branch is incorrectly predicted, the speculatively executed instructions are discarded, and their direct side effects are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.
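
To make the mechanics concrete, the sketch below simulates the timing-inference step in Python. It is a toy model, not a working exploit: the latency numbers are invented, and the “cache” is an ordinary Python set standing in for real CPU cache state. What it shows is the core trick CERT describes, recovering a value without ever reading it directly, just by timing probe accesses.

```python
# Toy simulation of a cache side channel (illustration only, not an exploit).
# A speculative access by the "victim" leaves one probe line cached; the
# attacker recovers the secret by timing accesses to all 256 probe lines.
import random

CACHED_NS = 40     # invented latency for a cached line
UNCACHED_NS = 300  # invented latency for an uncached line

secret = 42                # value the victim touches speculatively
cached_lines = {secret}    # side effect left behind in the "cache"

def timed_access(line: int) -> int:
    """Return a simulated access latency in nanoseconds, with jitter."""
    base = CACHED_NS if line in cached_lines else UNCACHED_NS
    return base + random.randint(-5, 5)

# The attacker never reads `secret`; the fastest probe line reveals it.
latencies = {line: timed_access(line) for line in range(256)}
recovered = min(latencies, key=latencies.get)
print(f"recovered value: {recovered}")  # prints 42
```

In a real attack, the same inference runs against actual CPU caches using high-resolution timers and techniques such as flush-and-reload, which is why mitigations focus on isolating memory mappings and, in browsers, reducing timer precision.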

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as the more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products but could potentially leverage JavaScript running in a web browser to access kernel memory. Updates to the Safari browser to mitigate such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short-lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise-class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t rely on malware successfully executing code, the hackers’ tracks are harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic standpoint,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”

How AI Is Transforming Defense and Intelligence Technologies

A Harvard Belfer Center study commissioned by the Intelligence Advanced Research Projects Agency (IARPA), “Artificial Intelligence and National Security,” predicted last May that AI will be as transformative to national defense as nuclear weapons, aircraft, computers and biotech.

Advances in AI will enable new capabilities and make others far more affordable – not only to the U.S., but to adversaries as well, raising the stakes as the United States seeks to preserve its hard-won strategic overmatch in the air, land, sea, space and cyberspace domains.

The Pentagon’s Third Offset Strategy seeks to leverage AI and related technologies in a variety of ways, according to Robert Work, former deputy secretary of defense and one of the strategy’s architects. In a foreword to a new report from the market analytics firm Govini, Work says the strategy “seeks to exploit advances in AI and autonomous systems to improve the performance of Joint Force guided munitions battle networks” through:

  • Deep learning machines, powered by artificial neural networks and trained with big data sets
  • Advanced human-machine collaboration in which AI-enabled learning machines help humans make more timely and relevant combat decisions
  • AI devices that allow operators of all types to “plug into and call upon the power of the entire Joint Force battle network to accomplish assigned missions and tasks”
  • Human-machine combat teaming of manned and unmanned systems
  • Cyber- and electronic warfare-hardened, network-enabled, autonomous and high-speed weapons capable of collaborative attacks

“By exploiting advances in AI and autonomous systems to improve the warfighting potential and performance of the U.S. military,” Work says, “the strategy aims to restore the Joint Force’s eroding conventional overmatch versus any potential adversary, thereby strengthening conventional deterrence.”

Spending is growing, Govini reports, with AI and related defense program spending increasing at a compound annual rate of 14.5 percent from 2012 to 2017, and poised to grow substantially faster in coming years as advanced computing technologies come online, driving down computational costs.

But in practical terms, what does that mean? How will AI change the way defense technology is managed, the way we gather and analyze intelligence or protect our computer systems?

Charlie Greenbacker, vice president of analytics at In-Q-Tel in Arlington, Va., the intelligence community’s strategic investment arm, sees dramatic changes ahead.

“The incredible ability of technology to automate parts of the intelligence cycle is a huge opportunity,” he said at an AI summit produced by the Advanced Technology Academic Research Center and Intel in November. “I want humans to focus on more challenging, high-order problems and not the mundane problems of the world.”

The opportunities are possible because of the advent of new, more powerful processing techniques, whether by distributing those loads across a cloud infrastructure, or using specialty processors purpose-built to do this kind of math. “Under the hood, deep learning is really just algebra,” he says. “Specialized processing lets us do this a lot faster.”
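
Greenbacker’s “really just algebra” point is easy to see in code. Here is a minimal NumPy sketch of a single dense network layer, with illustrative sizes; the entire computation is one matrix multiply plus a bias, which is exactly the operation that GPUs and other specialized processors accelerate.

```python
# A dense neural-network layer reduces to matrix algebra: x @ W + b.
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 32, 512, 128          # illustrative sizes

x = rng.standard_normal((batch, n_in))     # input activations
W = rng.standard_normal((n_in, n_out))     # learned weights
b = rng.standard_normal(n_out)             # learned biases

h = np.maximum(x @ W + b, 0.0)             # matrix multiply, bias, ReLU
print(h.shape)                             # (32, 128)
```

Deep networks stack many such layers, so speeding up the underlying matrix math, whether across a cloud cluster or on purpose-built silicon, speeds up everything built on top of it.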

Computer vision is one focus of interest – learning to identify faces in crowds or objects in satellite or other surveillance images – as is identifying anomalies in cyber security or text-heavy data searches. “A lot of folks spend massive amounts of time sifting through text looking for needles in the haystack,” Greenbacker continued.

The Air Force is looking at AI to help more quickly identify potential cyber attacks, said Frank Konieczny, chief technology officer in the office of the Air Force chief information officer, speaking at CyberCon 2017 in November. “We’re looking at various ways of adjusting the network or adjusting the topology based upon threats, like software-defined network capabilities as well as AI-based analysis,” he said.

Marty Trevino Jr., a former technical director and strategist for the National Security Agency, is now chief data/analytics officer at Red Alpha, an intelligence-focused tech firm based in Annapolis Junction, Md. “We are all familiar with computers beating humans in complex games – chess, Go, and so on,” Trevino says. “But experiments are showing that when humans are mated with those same computers, they beat the computer every time. It’s this unique combination of man and machine – each doing what its brain does best – that will constitute the active cyber defense (ACD) systems of tomorrow.”

Machines best humans when the task is highly defined and performed at speed and scale. “With all the hype around artificial intelligence, it is important to understand that AI is only fantastic at performing the specific tasks for which it is intended,” Trevino says. “Otherwise AI can be very dumb.”

Humans on the other hand, are better than machines when it comes to putting information in context. “While the human brain cannot match AI in specific realms,” he adds, “it is unmatched in its ability to process complex contextual information in dynamic environments. In cyber, context is everything. Context enables data-informed strategic decisions to be made.”

Artificial Intelligence and National Security
To prepare for a future in which artificial intelligence plays a heavy or dominant role in warfare and military strategy, IARPA commissioned the Harvard Belfer Center to study the issue. The center’s August 2017 report, “Artificial Intelligence and National Security,” offers a series of recommendations, including:

  • Wargames and strategy – The Defense Department should conduct AI-focused wargames to identify potentially disruptive military innovations. It should also fund diverse, long-term strategic analyses to better understand the impact and implications of advanced AI technologies
  • Prioritize investment – Building on strategic analysis, defense and intelligence agencies should prioritize AI research and development investment on technologies and applications that will either provide sustainable strategic advantages or mitigate key risks
  • Counter threats – Because others will also have access to AI technology, investing in “counter-AI” capabilities for both offense and defense is critical to long-term security. This includes developing technological solutions for countering AI-enabled forgery, such as faked audio or video evidence
  • Basic research – The speed of AI development in commercial industry does not preclude specific security requirements in which strategic investment can yield substantial returns. Increased investment in AI-related basic research through DARPA, IARPA, the Office of Naval Research and the National Science Foundation, are critical to achieving long-term strategic advantage
  • Commercial development – Although DoD cannot expect to be a dominant investor in AI technology, increased investment through In-Q-Tel and other means can be critical in attaining startup firms’ interest in national security applications

Building Resiliency
Looking at cybersecurity another way, AI can also be used to rapidly identify and repair software vulnerabilities, said Brian Pierce, director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

“We are using automation to engage cyber attackers in machine time, rather than human time,” he said. Using automation developed under DARPA funding, he said machine-driven defenses have demonstrated AI-based discovery and patching of software vulnerabilities. “Software flaws can last for minutes, instead of as long as years,” he said. “I can’t emphasize enough how much this automation is a game changer in strengthening cyber resiliency.”

Such advanced, cognitive ACD systems employ the gamut of detection tools and techniques, from heuristics to characteristic and signature-based identification, says Red Alpha’s Trevino. “These systems will be self-learning and self-healing, and if compromised, will be able to terminate and reconstitute themselves in an alternative virtual environment, having already learned the lessons of the previous engagement, and incorporated the required capabilities to survive. All of this will be done in real time.”

Seen in that context, AI is just the latest in a series of technologies the U.S. has used as a strategic force multiplier. Just as precision weapons enabled the U.S. Air Force to inflict greater damage with fewer bombs – and with less risk – AI can be used to solve problems that might otherwise take hundreds or even thousands of people. The promise is that instead of eyeballing thousands of images a day or scanning millions of network actions, computers can do the first screening, freeing up analysts for the harder task of interpreting results, says Dennis Gibbs, technical strategist, Intelligence and Security programs at General Dynamics Information Technology. “But just because the technology can do that, doesn’t mean it’s easy. Integrating that technology into existing systems and networks and processes is as much art as science. Success depends on how well you understand your customer. You have to understand how these things fit together.”

In a separate project, DARPA collaborated with a Fortune 100 company that was moving more than a terabyte of data per day across its virtual private network, and generating 12 million network events per day – far beyond the human ability to track or analyze. Using automated tools, however, the project team was able to identify a single unauthorized IP address that successfully logged into 3,336 VPN accounts over seven days.

Mathematically speaking, Pierce said, “The activity associated with this address was close to 9 billion network events with about a 1 in 10 chance of discovery.” The tipoff was a flaw in the botnet that attacked the network: Attacks were staged at exactly 57-minute intervals. Not all botnets, of course, will make that mistake. But even pseudo-random timing can be detected. He added: “Using advanced signal processing methods applied to billions of network events over weeks and months-long timelines, we have been successful at finding pseudo random botnets.”
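
A fixed-interval beacon like that 57-minute cadence is exactly the kind of structure basic signal processing can surface. The sketch below, on simulated data, bins event timestamps and uses an FFT to find the dominant period; DARPA’s actual methods operate at far larger scale and handle jittered, pseudo-random timing, but the underlying idea is the same.

```python
# Minimal sketch: find a fixed-interval beacon hidden in noisy event logs.
import numpy as np

rng = np.random.default_rng(1)
week = 7 * 86400  # one week, in seconds

# Simulated traffic: 5,000 random background events plus a 57-minute beacon.
noise = rng.uniform(0, week, 5000)
beacon = np.arange(0, week, 57 * 60)
events = np.sort(np.concatenate([noise, beacon]))

# Bin events per minute, then look for a periodic component with an FFT.
counts, _ = np.histogram(events, bins=np.arange(0, week + 60, 60))
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=60.0)   # cycles per second

peak = np.argmax(spectrum[1:]) + 1             # skip the zero-frequency bin
print(f"dominant period: {1.0 / freqs[peak] / 60:.0f} minutes")  # ~57
```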

On the flip side, however, is the recognition that AI superiority will not be a given in cyberspace. Unlike air, land, sea or space, cyber is a man-made warfare domain. So it’s fitting that the fight there could end up being machine vs. machine.

The Harvard Artificial Intelligence and National Security study notes emphatically that while AI will make it easier to sort through ever greater volumes of intelligence, “it will also be much easier to lie persuasively.” The use of Photoshop and other image editors is well understood and has been for years. But recent advances in video editing have made it reasonably easy to forge audio and video files.

A trio of University of Washington researchers announced in a research paper published in July that they had used AI to synthesize a photorealistic, lip-synced video of former President Barack Obama. While the researchers used real audio, it’s easy to see the dangers posed if audio is also manipulated.

While the authors describe potential positive uses of the technology – such as “the ability to generate high-quality video from audio [which] could significantly reduce the amount of bandwidth needed in video coding/transmission” – potential nefarious uses are just as clear.

Five Federal IT Trends to Watch in 2018

Out with the old, in with the new. As the new year turns, it’s worth looking back on where we’ve been to better grasp where we’re headed tomorrow.

Here are five trends that took off in the year past and will shape the year ahead:

1. Modernization
The White House spent most of 2017 building its comprehensive Report to the President on Federal IT Modernization, and it will spend most of 2018 executing the details of that plan. One part assessment, one part roadmap, the report defines the IT challenges agencies face and lays out a future that will radically alter the way feds manage and acquire information technology.

The plan calls for consolidating federal networks and pooling resources and expertise by adopting common, shared services. Those steps will accelerate cloud adoption and centralize control over many commodity IT services. The payoff, officials argue: “A modern Federal IT architecture where agencies are able to maximize secure use of cloud computing, modernize Government-hosted applications and securely maintain legacy systems.”

What that looks like will play out over the coming months as agencies respond to a series of information requests and leaders at the White House, the Office of Management and Budget, the Department of Homeland Security and the National Institute of Standards and Technology respond to 50 task orders by July 4, 2018.

Among them:

  • A dozen recommendations to prioritize modernization of high-risk, high-value assets (HVAs)
  • 11 recommendations to modernize both Trusted Internet Connections (TIC) and the National Cybersecurity Protection System (NCPS) to improve security while enabling those systems to migrate to cloud-based solutions
  • 8 recommendations to support agencies’ adoption of shared services to accelerate adoption of commercial cloud services and infrastructure
  • 10 recommendations designed to accelerate broad adoption of commercial cloud-based email and collaboration services, such as Microsoft Office 365 or Google’s G-Suite services
  • 8 recommendations to improve existing shared services and expand such offerings, especially to smaller agencies

The devil will be in the details. Some dates to keep in mind: Updating the Federal Cloud Computing Strategy (new report due April 30); a plan to standardize cloud contract language (due from OMB by June 30); a plan to improve the speed, reliability and reuse of authority to operate (ATO) approvals for both software-as-a-service (SaaS) and other shared services (April 1).

2. CDM’s Eye on Cyber
The driving force behind the modernization plan is cybersecurity, and a key to the government’s cyber strategy is the Department of Homeland Security’s (DHS) Continuous Diagnostics and Mitigation (CDM) program. DHS will expand CDM to “enable a layered security architecture that facilitates transition to modern computing in the commercial cloud.”

Doing so means changing gears. CDM’s Phase 1, now being deployed, is designed to identify what’s connected to federal networks. Phase 2 will identify the people on the network. Phase 3 will identify activity on the network and include the ability to identify and analyze anomalies for signs of compromise, and Phase 4 will focus on securing government data.

Now CDM will have to adopt a new charge: securing government systems in commercial clouds, something not included in the original CDM plan.

“A challenge in implementing CDM capabilities in a more cloud-friendly architecture is that security teams and security operations centers may not necessarily have the expertise available to defend the updated architecture,” DHS officials write in the modernization report. “The Federal Government is working to develop this expertise and provide it across agencies through CDM.” The department is developing a security-as-a-service model with the intent of expanding CDM’s reach beyond the 68 agencies currently using the program to include all civilian federal agencies, large and small.

3. Protecting Critical Infrastructure
Securing federal networks and data is one thing, but 85 percent of the nation’s critical infrastructure is in private, not public, hands. Figuring out how best to protect privately owned critical national infrastructure, such as the electric grid, gas and oil pipelines, dams, rivers and levees, and public communications networks, has long been a thorny issue.

The private sector has historically enjoyed the freedom of managing its own security – and privacy. However, growing cyber and terrorist threats, and the potential liability that could stem from such attacks, mean those businesses also like having the guiding hand and cooperation of federal regulators.

To date, this responsibility has taken root in DHS’s National Protection and Programs Directorate (NPPD), which operates largely beneath the public radar. That soon could change: The House voted in December to elevate NPPD into the next operational component within DHS, joining the likes of Customs and Border Protection, the Coast Guard and the Secret Service.

NPPD would become the Cybersecurity and Infrastructure Security Agency, and while the new status would not explicitly expand its portfolio, it would pave the way for increased influence within the department and a greater voice in the national debate.

First, it’s got to clear the Senate. The Cybersecurity and Infrastructure Security Agency Act of 2017 faces an uncertain future in the upper chamber because of complex jurisdictional issues and a gridlocked legislative process that makes passage of any bill an adventure – even when, as in this case, the bill has the active backing of both the White House and DHS leadership.

4. Standards for IoT
The Internet of Things (IoT), the Internet of Everything, the wireless, connected world – call it what you will – is challenging the makers of industrial controls and network-connected technology to rethink security and their entire supply chains.

If a lightbulb, camera, motion detector – or any number of other sensors – can be controlled via networks, they can also be co-opted by bad actors in cyberspace. But while manufacturers have been quick to field network-enabled products, most have been slow to ensure those products are safe from hackers and abuse.

Rep. Jim Langevin (D-R.I.) advocates legislation to mandate better security in connected devices. “We need to ensure we approach the security of the Internet of Things with the techniques that have been successful with the smart phone and desktop computers: The policies of automatic patching, authentication and encryption that have worked in those domains need to be extended to all devices that are connected to the Internet,” he said last summer. “I believe the government can act as a convener to work with private industry in this space.”

The first private standard for IoT devices was approved in July when the American National Standards Institute (ANSI) endorsed UL 2900-1, General Requirements for Software Cybersecurity for Network-Connectable Products. Underwriters Laboratories (UL) plans two additional standards to follow: UL 2900-2-1 for network-connectable healthcare systems and UL 2900-2-2 for industrial control systems.

Sens. Mark R. Warner (D-Va.) and Cory Gardner (R-Colo.), co-chairs of the Senate Cybersecurity Caucus, introduced the Internet of Things Cybersecurity Improvement Act of 2017 in August with an eye toward holding suppliers responsible for providing insecure connected products to the federal government.

The bill would require vendors supplying IoT devices to the U.S. government to ensure their devices are patchable, not hard-coded with unchangeable passwords and are free of known security vulnerabilities. It would also require automatic, authenticated security updates from the manufacturer. The measure has been criticized for its vague definitions and language and for limiting its scope to products sold to the federal government.

Yet in a world where cybersecurity is a growing liability concern for businesses of every stripe – and where there is a dearth of industry standards – such a measure could become a benchmark requirement imposed by non-government customers, as well.

5. Artificial Intelligence
2017 was the year when data analytics morphed into artificial intelligence (AI) in the public mindset. Government agencies are only now making the connection that their massive data troves could fuel a revolution in how they manage, fuse and use data to make decisions, deliver services and interact with the public.

According to market researcher IDC, that realization is not limited to government: “By the end of 2018,” the firm predicts, “half of manufacturers will be using analytics, IoT, and social collaboration tools to extend the integrated planning process across the entire enterprise, in real time.”

Gartner goes even further: “The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025,” the company predicts. More than half of businesses and agencies are still searching for strategies, however.

“Enterprises should focus on business results enabled by applications that exploit narrow AI technologies,” says David Cearley, vice president and Gartner Fellow at Gartner Research. “Leave general AI to the researchers and science fiction writers.”

AI and machine learning will not be stand-alone functions, but rather foundational components that underlie the applications and services agencies employ, Cearley says. For example, natural language processing – think of Amazon’s Alexa or Apple’s Siri – can now handle increasingly complicated tasks, promising more sophisticated, faster interactions when the public calls a government 800 number.

Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology’s Health and Civilian Solutions Division, says today’s challenge with AI is twofold: first, finding the right applications that provide a real return on investment; and second, overcoming security and privacy concerns.

“There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal told GovTechWorks. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

What this Means for Federal IT
Federal agencies face tipping points across the technology spectrum. The pace of change quickens as the pressure to modernize increases. While technology is an enabler, new skills will be needed for cloud integration, shared-services security and agile development. Similarly, the emergence of new products, services and providers greatly expands agencies’ choices. But each of those choices has its own downstream implications and risks, from vendor lock-in to bandwidth and run-time challenges. With each new wrinkle, agency environments become more complex, demanding ever more sophisticated expertise from those pulling those hybrid environments together.

“Cybersecurity will be the linchpin in all this,” says Stan Tyliszczak, vice president and chief engineer at GDIT. “Agencies can no longer afford the cyber risks of NOT modernizing their IT. It’s not whether or not to modernize, but how fast can we get there? How much cyber risk do we have in the meantime?”

Cloud is ultimately a massive integration exercise with no one-size-fits-all answers. Agencies will employ multiple systems in multiple clouds for multiple kinds of users. Engineering solutions to make those systems work harmoniously is essential.

“Turning on a cloud service is easy,” Tyliszczak says. “Integrating it with the other things you do – and getting that integration right – is where agencies will need the greatest help going forward.”
