Automation Critical to Securing Code in an Agile, DevOps World

The world’s biggest hack might have happened to anyone. The same software flaw hackers exploited to expose 145 million identities in the Equifax database – most likely yours included – was also embedded in thousands of other computer systems belonging to all manner of businesses and government agencies.

The software in question was a widely used piece of open-source Java software known as Apache Struts. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) issued a warning on March 8, 2017, detailing the risk posed by a flaw in that code. Like many others, Equifax reviewed the warning and searched its systems for the affected code. Unfortunately, the Atlanta-based credit bureau failed to find it among the millions of lines of code in its systems. Hackers exploited the flaw three days later.

Open source and third-party software components like Apache Struts now make up between 80 and 90 percent of the software produced today, says Derek Weeks, vice president and DevOps advocate at Sonatype, a provider of security tools and manager of The Central Repository, the world’s largest collection of open source software components. Programmers completed nearly 60 billion downloads from the repository in 2017 alone.

“If you are a software developer in any federal agency today, you are very aware that you are using open-source and third-party [software] components in development today,” Weeks says. “The average organization is using 125,000 Java open source components – just Java alone. But organizations aren’t just developing in Java, they’re also developing in JavaScript, .Net, Python and other languages. So that number goes up by double or triple.”

Reusing software saves time and money. It’s also critical to supporting the rapid cycles favored by today’s Agile and DevOps methodologies. Yet while reuse promises time-tested code, it is not without risk: Weeks estimates one in 18 downloads from The Central Repository – 5.5 percent – contains a known vulnerability. Because the repository never deletes anything, it is a user-beware system: it’s up to the software developers themselves, not the repository, to determine whether the components they download are safe.

Manual Review or Automation?

Performing a manual, detailed security analysis of each open-source software component, to ensure it is safe and free of vulnerabilities, can take hours. That, in turn, eats into precious development time, undermining the intended efficiency of reusing code in the first place.

Tools from Sonatype, Black Duck of Burlington, Mass., and others automate most of that work. Sonatype’s Nexus Firewall, for example, scans modules as they enter the development environment and blocks those that contain known flaws. It also suggests safe alternatives, such as newer versions of the same components. Development teams can employ a host of automated tools to simplify or speed other parts of the build, test and secure processes.
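The gate such tools apply can be sketched in a few lines. The sketch below is purely illustrative, not Sonatype’s implementation: the vulnerability table and the safe-version map are stand-ins for the curated feeds real products query (such as the National Vulnerability Database), though the Struts entry mirrors the real CVE-2017-5638 advisory that figured in the Equifax breach.

```python
# Illustrative sketch of an automated dependency "firewall".
# The vulnerability and version data here are hard-coded for demonstration;
# real tools consult continuously updated advisory feeds.

KNOWN_VULNERABILITIES = {
    # (component, version) -> advisory ID
    ("struts2-core", "2.3.31"): "CVE-2017-5638",
    ("example-lib", "1.0.2"): "EXAMPLE-2017-001",  # hypothetical entry
}

SAFE_VERSIONS = {
    "struts2-core": "2.3.32",
    "example-lib": "1.0.3",
}

def screen_component(name: str, version: str) -> dict:
    """Block a component with a known flaw and suggest a safe upgrade."""
    advisory = KNOWN_VULNERABILITIES.get((name, version))
    if advisory is None:
        return {"allowed": True}
    return {
        "allowed": False,
        "advisory": advisory,
        "suggested_version": SAFE_VERSIONS.get(name),
    }

print(screen_component("struts2-core", "2.3.31"))
```

Running the check at download time, rather than weeks later in a governance review, is what lets the gate keep pace with two-week sprints.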

Some of these are commercial products; others, like much of the software itself, are open source. Jenkins, for example, is a popular open-source DevOps tool that helps developers quickly find and fix defects in their codebase. These tools focus on the reused code in a system; static analysis tools, like those from Veracode, focus on the critical custom code that glues that open-source software together into a working system.

“Automation is key to agile development,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s (GDIT) Health Solutions. “The tools now exist to automate everything: the builds, unit tests, functional testing, performance testing, penetration testing and more. Ensuring the code behind new functionality not only works, but is also secure, is critical. We need to know that the stuff we’re producing is of high quality and meets our standards, and we try to automate as much of these reviews as possible.”

But automated screening and testing are still far from universal. Some organizations use them; others don’t. Weeks describes one large financial services firm that prided itself on its software team’s rigorous governance process. Developers were required to ask permission from a security group before using open source components. The security team’s thorough reviews took about 12 weeks for new components and six to seven weeks for new versions of components already in use. Even so, officials estimated some 800 open source components had made it through those reviews and were in use in their 2,000-plus deployed applications.

Then, Sonatype was invited to scan the firm’s deployed software. “We found more than 13,000 open source components were running in those 2,000 applications,” Weeks recalls. “It’s not hard to see what happened. You’ve got developers working on two-week sprints, so what do you think they’re going to do? The natural behavior is, ‘I’ve got a deadline, I have to meet it, I have to be productive.’ They can’t wait 12 weeks for another group to respond.”

Automation, he said, is the answer.

Integration and the Supply Chain

Building software today is a lot like building a car: Rather than manufacture every component, from the screws to the tires to the seat covers, manufacturers focus their efforts on the pieces that differentiate products and outsource the commodity pieces to suppliers.

Chris Wysopal, chief technology officer at Veracode, said the average software application today uses 46 ready-made components. Like Sonatype, Veracode offers a testing tool that scans components for known vulnerabilities; its test suite also includes a static analysis tool to spot problems in custom code and a dynamic analysis tool that tests software in real time.

As development cycles get shorter, the demand for automation is increasing, Wysopal says. The five-year shift from waterfall to Agile shortened typical development cycles from months to weeks. The advent of DevOps and continuous development accelerates that further, from weeks to days or even hours.

“We’re going through this transition ourselves. When we started Veracode 11 years ago, we were a waterfall company. We did four to 10 releases a year,” Wysopal says. “Then we went to Agile and did 12 releases a year and now we’re making the transition to DevOps, so we can deploy on a daily basis if we need or want to. What we see in most of our customers is fragmented methodologies: It might be 50 percent waterfall, 40 percent agile and 10 percent DevOps. So they want tools that can fit into that DevOps pipeline.”

A tool built for speed can support slower development cycles; the opposite, however, is not the case.

One way to enhance testing is to let developers know sooner that they may have a problem. Veracode is developing a product that scans code as it’s written, running a scan every few seconds and alerting the developer as soon as a problem is spotted. This has two effects: it cleans up problems more quickly, and it helps train developers to avoid those problems in the first place. In that sense, it’s like the spell checker in a word processing program.
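The "spell check for code" idea can be approximated with ordinary static analysis run on every edit. The sketch below is an illustration of the concept, not Veracode’s product: it parses a Python snippet and flags calls to `eval` and `exec`, common injection risks, the way an editor plugin might underline a misspelling as you type.

```python
import ast

def flag_risky_calls(source: str) -> list[str]:
    """Return spell-check-style warnings for risky calls in Python source."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag direct calls to eval()/exec(), a common injection risk.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"eval", "exec"}:
                warnings.append(
                    f"line {node.lineno}: avoid {node.func.id}() on untrusted input"
                )
    return warnings

snippet = "user_data = input()\nresult = eval(user_data)\n"
for warning in flag_risky_calls(snippet):
    print(warning)  # prints: line 2: avoid eval() on untrusted input
```

An editor integration would simply re-run a check like this on each keystroke or save, surfacing the warning while the offending line is still on screen.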

“It’s fundamentally changing security testing for a just-in-time programming environment,” Wysopal says.

Yet as powerful and valuable as automation is, these tools alone will not make you secure.

“Automation is extremely important,” he says. “Everyone who’s doing software should be doing automation. And then manual testing on top of that is needed for anyone who has higher security needs.” He puts the financial industry and government users into that category.

For government agencies that contract for most of their software, understanding what tools and processes their suppliers have in place to ensure software quality is critical. That could mean hiring a third party to do security testing on software when it’s delivered, or requiring systems integrators and development firms to demonstrate their security processes and procedures before software is accepted.

“In today’s Agile-driven environment, software vulnerability can be a major source of potential compromise to sprint cadences for some teams,” says GDIT’s Zach. “We can’t build a weeks-long manual test and evaluation cycle into Agile sprints. Automated testing is the only way we can validate the security of our code while still achieving consistent, frequent software delivery.”

According to Veracode’s State of Software Security 2017, 36 percent of the survey’s respondents do not run (or were unaware of) automated static analysis on their internally developed software. Nearly half never conduct dynamic testing in a runtime environment. Worst of all, 83 percent acknowledge releasing software before resolving security issues.

“The bottom line is all software needs to be tested. The real question for teams is what ratio and types of testing will be automated and which will be manual,” Zach says. “By exploiting automation tools and practices in the right ways, we can deliver the best possible software, as rapidly and securely as possible, without compromising the overall mission of government agencies.”

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices are exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems, is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance, who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments, where users’ data and applications can sit side by side with those of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “Some runners jump the start, setting off just before the gun goes off. We have two options: stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then, afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By executing code speculatively and then exploiting what happens by means of shared memory mapping, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”
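The "timing and other indirect signals" Gadwale describes can be illustrated with a toy model. The sketch below is a simulation of the idea only, not a working exploit: a speculative access pulls one secret-dependent cache entry in before being discarded, and the attacker then times a probe of every possible entry, recovering the secret from which probe is fast.

```python
# Toy model of a cache side channel. Illustrative only; real attacks
# measure actual CPU cache latencies, not a simulated set.

CACHE_HIT_NS, CACHE_MISS_NS = 10, 100  # simulated access latencies

def speculative_access(secret_byte: int, cache: set) -> None:
    """Speculatively touch a probe-array slot indexed by the secret.
    The architectural result is discarded, but the cache state remains."""
    cache.add(secret_byte)

def probe_latency(index: int, cache: set) -> int:
    """Attacker times how long one probe-array slot takes to load."""
    return CACHE_HIT_NS if index in cache else CACHE_MISS_NS

def recover_secret(cache: set) -> int:
    """The one fast (cached) slot reveals the secret value."""
    latencies = [probe_latency(i, cache) for i in range(256)]
    return latencies.index(min(latencies))

cache = set()
speculative_access(secret_byte=42, cache=cache)
print(recover_secret(cache))  # prints 42, never read directly
```

The key point the model captures is that no rule of the architecture is broken at the moment of the leak: the attacker only ever reads data it is allowed to read, and infers the secret from timing alone.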

The technique was exposed by several research groups at once, including Jann Horn, a researcher with Google’s Project Zero, and teams at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers were researching the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity defenses. Because the exploits don’t depend on successfully executing malicious code, the hackers’ tracks are far harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic standpoint,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”

Recognizing the Need for Innovation in Acquisition

The President’s Management Agenda lays out ambitious plans for the federal government to modernize information technology, prepare its future workforce and improve the way it manages major acquisitions.

These are among 14 cross-agency priority goals on which the administration is focused as it seeks to jettison outdated legacy systems and embrace less cumbersome ways of doing business.

Increasingly, federal IT managers are recognizing the need for innovation in acquisition, not just technology modernization. What exactly will it take to modernize an acquisition system bound by the 1,917-page Federal Acquisition Regulation? Federal acquisition experts say the challenges have less to do with changing those rules than with human behavior – the incentives, motivations and fears of people who touch federal acquisition – from the acquisition professionals themselves to mission owners and government executives and overseers.

“If you want a world-class acquisition system that is responsive to customer needs, you have to be able to use the right tool at the right time,” says Mathew Blum, associate administrator in the Office of Federal Procurement Policy at the Office of Management and Budget. The trouble isn’t a lack of options, he said at the American Council for Technology’s ACT/IAC Acquisition Excellence conference March 27. Rather, he said, it is a lack of bandwidth and a fear of failure that conspire to keep acquisition pros from trying different acquisition strategies.

Risk aversion is a critical issue, agreed Greg Capella, deputy director of the National Technology Information Service at the Department of Commerce. “If you look at what contracting officers get evaluated on, it’s the number of protests, or the number of small business awards [they make],” he said. “It’s not how many successful procurements they’ve managed or what were the results for individual customers.”

Yet there are ways to break through the fear of failure, protests and blame that can paralyze acquisition shops and at the same time save time, save money and improve mission outcomes. Here are four:

  1. Outside Help

The General Services Administration’s (GSA) 18F digital services organization focuses on improving public facing services and internal systems using commercial-style development approaches. Its agile software development program employs a multidisciplinary team incentivized to work together and produce results quickly, said Alla Goldman Seifert, acting director of GSA’s Office of Acquisition in the Technology Transformation Service.

Her team helps other federal agencies tackle problems quickly and incrementally using an agile development approach. “We bring in a cross-functional team of human-centered design and technical experts, as well as acquisition professionals — all of whom work together to draft a statement of work and do the performance-based contracting for agile software acquisition,” she said.

Acquisition planning may be the most important part of that process. Seifert said 18F has learned a lot since launching its Agile Blanket Purchase Agreement. The group suffered seven protests in three venues. “But since then, every time we iterate, we make sure we right-size the scope and risk we are taking.” She added that by approaching projects in a modular way, risks are diminished and outcomes improved. That’s a best practice that can be replicated throughout government.

“We’re really looking at software and legacy IT modernization: How do you get a mission critical program off of a mainframe? How do you take what is probably going to be a five-year modernization effort and program for it, plan for it and budget for it?” Seifert asked.

GSA experiments in other ways, as well. For example, 18F helped agencies leverage the government’s Challenge.gov platform, publishing needs and offering prizes to the best solutions. The Defense Advanced Research Projects Agency (DARPA) currently seeks ideas for more efficient use of the radio frequency spectrum in its Spectrum Collaboration Challenge. DARPA will award up to $3.5 million to the best ideas. “Even [intelligence community components] have really enjoyed this,” Seifert said. “It really is a good way to increase competition and lower barriers to entry.”

  2. Coaching and Assistance

Many program acquisition officers cite time pressure and lack of bandwidth to learn new tools as barriers to innovation. It’s a classic chicken-and-egg problem: How do you find the time to learn and try something new?

The Department of Homeland Security’s Procurement Innovation Lab (PIL) was created to help program offices do just that, and then capture and share their experience so others in DHS can leverage the results. The PIL provides coaching and advice, asking only that the accumulated knowledge be shared through webinars and other internal means.

“How do people find time to do innovative stuff?” asked Eric Cho, project lead for PIL. “Either one: find ways to do less, or two: borrow from someone else’s work.” Having a coach to help is also critical, and that’s where his organization comes in.

In less than 100 days, the PIL recently helped a Customs and Border Protection team acquire a system to locate contraband such as drugs hidden in walls by using a high-end stud finder, Cho said. The effort was completed in less than half the time of an earlier, unsuccessful effort.

Acquisition cycle time can be saved in many ways, from capturing impressions immediately, via group evaluations after oral presentations, to narrowing the competitive field by means of a down-select before trade-off analyses on qualified finalists. Reusing language from similar solicitations can also save time, he said. “This is not an English class.”

Even so, the successful PIL program still left middle managers in program offices a little uncomfortable, DHS officials acknowledged – the natural result of trying something new. Key to success is having high-level commitment and support for such experiments. DHS’s Chief Procurement Officer Soraya Correa has been an outspoken advocate of experimentation and the PIL. That makes a difference.

“It all comes back to the culture of rewarding compliance, rather than creativity,” said OMB’s Blum. “We need to figure out how we build incentives to encourage the workforce to test and adopt new and better ways to do business.”

  3. Outsourcing for Innovation

Another approach is to outsource the heavy-lifting to another better skilled or better experienced government entity to execute on a specialized need, such as hiring GSA’s 18F to manage agile software development.

Similarly, outsourcing to GSA’s FEDSIM is a proven strategy for efficiently managing and executing complex, enterprise-scale programs with price tags approaching $1 billion or more. FEDSIM combines both acquisition and technical expertise to manage such large-scale projects, and execute quickly by leveraging government-wide acquisition vehicles such as Alliant or OASIS, which have already narrowed the field of viable competitors.

“The advantage of FEDSIM is that they have experience executing these large-scale complex IT programs — projects that they’ve done dozens of times — but that others may only face once in a decade,” says Michael McHugh, staff vice president within General Dynamics IT’s Government Wide Acquisition Contract (GWAC) Center. The company supports Alliant and OASIS among other GWACs. “They understand that these programs shouldn’t be just about price, but in identifying the superior technical solution within a predetermined reasonable price range. There’s a difference.”

For program offices looking for guidance rather than to outsource procurement, FEDSIM is developing an “Express Platform” with pre-defined acquisition paths that depend on the need, and acquisition templates designed to streamline and accelerate processes, reduce costs and enable innovation. It’s another example of sharing best practices across government agencies.

  4. Minimizing Risk

OMB’s Blum said he doesn’t blame program managers for feeling anxious. He gets that while they like the concept of innovation, they’d rather someone else take the risk. He also believes the risks are lower than they think.

“If you’re talking about testing something new, the downside risk is much less than the upside gain,” Blum said. “Testing shouldn’t entail any more risk than a normal acquisition if you’re applying good acquisition practices — if you’re scoping it carefully, sharing information readily with potential sources so they understand your goals, and by giving participants a robust debrief,” he added. Risks can be managed.

Properly defining the scope, sounding out experts, defining goals and sharing information cannot happen in a vacuum, of course. Richard Spires, former chief information officer at DHS, and now president of Learning Tree International, said he could tell early if projects were likely to succeed or fail based on the level of teamwork exhibited by stakeholders.

“If we had a solid programmatic team that worked well with the procurement organization and you could ask those probing questions, I’ll tell you what: That’s how you spawn innovation,” Spires said. “I think we need to focus more on how to build the right team with all the right stakeholders: legal, security, the programmatic folks, the IT people running the operations.”

Tony Cothron, vice president with General Dynamics IT’s Intelligence portfolio, agreed, saying it takes a combination of teamwork and experience to produce results.

“Contracting and mission need to go hand-in-hand,” Cothron said. “But in this community, mission is paramount. The things everyone should be asking are what other ways are there to get the job done? How do you create more capacity? Deliver analytics to help the mission? Improve continuity of operations? Get more for each dollar? These are hard questions, and they require imaginative solutions.”

For example, Cothron said, bundling services may help reduce costs. Likewise, contractors might accept lower prices in exchange for a longer term. “You need to develop a strategy going in that’s focused on the mission, and then set specific goals for what you want to accomplish,” he added. “There are ways to improve quality. How you contract is one of them.”

Risk of failure doesn’t have to be a disincentive to innovation. Like any risk, it can be managed – and savvy government professionals are discovering they can mitigate risks by leveraging experienced teams, sharing best practices and building on lessons learned. When they do those things, risk decreases – and the odds of success improve.

How AI Is Transforming Defense and Intelligence Technologies

A Harvard Belfer Center study commissioned by the Intelligence Advanced Research Projects Agency (IARPA), Artificial Intelligence and National Security, predicted last May that AI will be as transformative to national defense as nuclear weapons, aircraft, computers and biotech.

Advances in AI will enable new capabilities and make others far more affordable – not only to the U.S., but to adversaries as well, raising the stakes as the United States seeks to preserve its hard-won strategic overmatch in the air, land, sea, space and cyberspace domains.

The Pentagon’s Third Offset Strategy seeks to leverage AI and related technologies in a variety of ways, according to Robert Work, former deputy secretary of defense and one of the strategy’s architects. In a foreword to a new report from the market analytics firm Govini, Work says the strategy “seeks to exploit advances in AI and autonomous systems to improve the performance of Joint Force guided munitions battle networks” through:

  • Deep learning machines, powered by artificial neural networks and trained with big data sets
  • Advanced human-machine collaboration in which AI-enabled learning machines help humans make more timely and relevant combat decisions
  • AI devices that allow operators of all types to “plug into and call upon the power of the entire Joint Force battle network to accomplish assigned missions and tasks”
  • Human-machine combat teaming of manned and unmanned systems
  • Cyber- and electronic warfare-hardened, network-enabled, autonomous and high-speed weapons capable of collaborative attacks

“By exploiting advances in AI and autonomous systems to improve the warfighting potential and performance of the U.S. military,” Work says, “the strategy aims to restore the Joint Force’s eroding conventional overmatch versus any potential adversary, thereby strengthening conventional deterrence.”

Spending is growing, Govini reports, with AI and related defense program spending increasing at a compound annual rate of 14.5 percent from 2012 to 2017, and poised to grow substantially faster in coming years as advanced computing technologies come on line, driving down computational costs.

But in practical terms, what does that mean? How will AI change the way defense technology is managed, the way we gather and analyze intelligence or protect our computer systems?

Charlie Greenbacker, vice president of analytics at In-Q-Tel in Arlington, Va., the intelligence community’s strategic investment arm, sees dramatic changes ahead.

“The incredible ability of technology to automate parts of the intelligence cycle is a huge opportunity,” he said at an AI summit produced by the Advanced Technology Academic Research Center and Intel in November. “I want humans to focus on more challenging, high-order problems and not the mundane problems of the world.”

The opportunities are possible because of the advent of new, more powerful processing techniques, whether by distributing those loads across a cloud infrastructure, or using specialty processors purpose-built to do this kind of math. “Under the hood, deep learning is really just algebra,” he says. “Specialized processing lets us do this a lot faster.”
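Greenbacker's "just algebra" point can be made concrete with a minimal sketch. The layer sizes and random values below are purely illustrative; a single dense neural-network layer is a matrix multiply plus a bias, followed by a nonlinearity:

```python
import numpy as np

# One dense layer of a neural network -- "really just algebra."
rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input features
W = rng.standard_normal((3, 4))   # learned weights
b = np.zeros(3)                   # learned biases

hidden = np.maximum(0, W @ x + b)  # ReLU activation

# Specialty processors accelerate exactly this operation,
# batched over thousands of inputs at once.
batch = rng.standard_normal((1000, 4))
hidden_batch = np.maximum(0, batch @ W.T + b)
print(hidden_batch.shape)  # (1000, 3)
```

Deep learning stacks many such layers; the speedup from specialized hardware comes from doing these matrix products in parallel rather than from any change to the math itself.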

Computer vision is one focus of interest – learning to identify faces in crowds or objects in satellite or other surveillance images – as is identifying anomalies in cyber security or text-heavy data searches. “A lot of folks spend massive amounts of time sifting through text looking for needles in the haystack,” Greenbacker continued.

The Air Force is looking at AI to help more quickly identify potential cyber attacks, said Frank Konieczny, chief technology officer in the office of the Air Force chief information officer, speaking at CyberCon 2017 in November. “We’re looking at various ways of adjusting the network or adjusting the topology based upon threats, like software-defined network capabilities as well as AI-based analysis,” he said.

Marty Trevino Jr., a former technical director and strategist for the National Security Agency, is now chief data/analytics officer at Red Alpha, an intelligence-focused tech firm based in Annapolis Junction, Md. “We are all familiar with computers beating humans in complex games – chess, Go, and so on,” Trevino says. “But experiments are showing that when humans are mated with those same computers, they beat the computer every time. It’s this unique combination of man and machine – each doing what its brain does best – that will constitute the active cyber defense (ACD) systems of tomorrow.”

Machines best humans when a highly defined task must be performed at speed and scale. “With all the hype around artificial intelligence, it is important to understand that AI is only fantastic at performing the specific tasks for which it is intended,” Trevino says. “Otherwise AI can be very dumb.”

Humans on the other hand, are better than machines when it comes to putting information in context. “While the human brain cannot match AI in specific realms,” he adds, “it is unmatched in its ability to process complex contextual information in dynamic environments. In cyber, context is everything. Context enables data-informed strategic decisions to be made.”

Artificial Intelligence and National Security
To prepare for a future in which artificial intelligence plays a heavy or even dominant role in warfare and military strategy, IARPA commissioned the Harvard Belfer Center to study the issue. The center’s August 2017 report, “Artificial Intelligence and National Security,” offers a series of recommendations, including:

  • Wargames and strategy – The Defense Department should conduct AI-focused wargames to identify potentially disruptive military innovations. It should also fund diverse, long-term strategic analyses to better understand the impact and implications of advanced AI technologies
  • Prioritize investment – Building on strategic analysis, defense and intelligence agencies should prioritize AI research and development investment on technologies and applications that will either provide sustainable strategic advantages or mitigate key risks
  • Counter threats – Because others will also have access to AI technology, investing in “counter-AI” capabilities for both offense and defense is critical to long-term security. This includes developing technological solutions for countering AI-enabled forgery, such as faked audio or video evidence
  • Basic research – The speed of AI development in commercial industry does not preclude specific security requirements in which strategic investment can yield substantial returns. Increased investment in AI-related basic research through DARPA, IARPA, the Office of Naval Research and the National Science Foundation, are critical to achieving long-term strategic advantage
  • Commercial development – Although DoD cannot expect to be a dominant investor in AI technology, increased investment through In-Q-Tel and other means can be critical in attaining startup firms’ interest in national security applications

Building Resiliency
Looking at cybersecurity another way, AI can also be used to rapidly identify and repair software vulnerabilities, said Brian Pierce, director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

“We are using automation to engage cyber attackers in machine time, rather than human time,” he said. Using automation developed under DARPA funding, he said machine-driven defenses have demonstrated AI-based discovery and patching of software vulnerabilities. “Software flaws can last for minutes, instead of as long as years,” he said. “I can’t emphasize enough how much this automation is a game changer in strengthening cyber resiliency.”

Such advanced, cognitive ACD systems employ the gamut of detection tools and techniques, from heuristics to characteristic and signature-based identification, says Red Alpha’s Trevino. “These systems will be self-learning and self-healing, and if compromised, will be able to terminate and reconstitute themselves in an alternative virtual environment, having already learned the lessons of the previous engagement, and incorporated the required capabilities to survive. All of this will be done in real time.”

Seen in that context, AI is just the latest in a series of technologies the U.S. has used as a strategic force multiplier. Just as precision weapons enabled the U.S. Air Force to inflict greater damage with fewer bombs – and with less risk – AI can be used to solve problems that might otherwise take hundreds or even thousands of people. The promise is that instead of eyeballing thousands of images a day or scanning millions of network actions, computers can do the first screening, freeing up analysts for the harder task of interpreting results, says Dennis Gibbs, technical strategist, Intelligence and Security programs at General Dynamics Information Technology. “But just because the technology can do that, doesn’t mean it’s easy. Integrating that technology into existing systems and networks and processes is as much art as science. Success depends on how well you understand your customer. You have to understand how these things fit together.”

In a separate project, DARPA collaborated with a Fortune 100 company that was moving more than a terabyte of data per day across its virtual private network, and generating 12 million network events per day – far beyond the human ability to track or analyze. Using automated tools, however, the project team was able to identify a single unauthorized IP address that successfully logged into 3,336 VPN accounts over seven days.

Mathematically speaking, Pierce said, “The activity associated with this address was close to 9 billion network events with about a 1 in 10 chance of discovery.” The tipoff was a flaw in the botnet that attacked the network: Attacks were staged at exactly 57-minute intervals. Not all botnets, of course, will make that mistake, but even pseudo-random timing can be detected. He added: “Using advanced signal processing methods applied to billions of network events over weeks and months-long timelines, we have been successful at finding pseudo random botnets.”
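The fixed 57-minute staging interval is exactly the kind of periodicity simple signal processing can surface. A hypothetical sketch, using synthetic timestamps in place of real network events (the event counts, candidate periods and tolerance are invented for illustration):

```python
import numpy as np

# Synthetic event log: random background traffic over one week (in minutes),
# plus a botnet firing on a strict 57-minute schedule.
rng = np.random.default_rng(1)
week = 7 * 24 * 60
noise = rng.uniform(0, week, 5000)
botnet = np.arange(0, week, 57, dtype=float)
events = np.sort(np.concatenate([noise, botnet]))

def periodicity_score(times, period, tol=0.5):
    """Count events landing within `tol` minutes of a grid of the given period."""
    phase = times % period
    return int(np.sum((phase < tol) | (phase > period - tol)))

# Score each candidate period; a strictly periodic botnet piles all its
# events onto one phase, so its true period stands out above the noise.
scores = {p: periodicity_score(events, p) for p in range(40, 80)}
best = max(scores, key=scores.get)
print(best)  # the 57-minute period scores highest
```

Real detections, as Pierce notes, run over billions of events and must also handle jittered, pseudo-random schedules, but the core idea is the same: fold timestamps onto candidate periods and look for an improbable pileup.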

On the flip side, however, AI superiority will not be a given in cyberspace. Unlike air, land, sea or space, cyber is a man-made warfare domain. So it’s fitting that the fight there could end up being machine vs. machine.

The Harvard Artificial Intelligence and National Security study notes emphatically that while AI will make it easier to sort through ever greater volumes of intelligence, “it will also be much easier to lie persuasively.” The use of Photoshop and other image editors is well understood and has been for years. But recent advances in video editing have made it reasonably easy to forge audio and video files.

A trio of University of Washington researchers announced in a research paper published in July that they had used AI to synthesize a photorealistic, lip-synced video of former President Barack Obama. While the researchers used real audio, it’s easy to see the dangers posed if audio is also manipulated.

While the authors describe potential positive uses of the technology – such as “the ability to generate high-quality video from audio [which] could significantly reduce the amount of bandwidth needed in video coding/transmission” – potential nefarious uses are just as clear.

Five Federal IT Trends to Watch in 2018

Out with the old, in with the new. As the new year turns, it’s worth looking back on where we’ve been to better grasp where we’re headed tomorrow.

Here are five trends that took off in the year past and will shape the year ahead:

1. Modernization
The White House spent most of 2017 building its comprehensive Report to the President on Federal IT Modernization, and it will spend most of 2018 executing the details of that plan. One part assessment, one part roadmap, the report defines the IT challenges agencies face and lays out a future that will radically alter the way feds manage and acquire information technology.

The plan calls for consolidating federal networks and pooling resources and expertise by adopting common, shared services. Those steps will accelerate cloud adoption and centralize control over many commodity IT services. The payoff, officials argue: “A modern Federal IT architecture where agencies are able to maximize secure use of cloud computing, modernize Government-hosted applications and securely maintain legacy systems.”

What that looks like will play out over the coming months as agencies respond to a series of information requests and leaders at the White House, the Office of Management and Budget, the Department of Homeland Security and the National Institute of Standards and Technology respond to 50 task orders by July 4, 2018.

Among them:

  • A dozen recommendations to prioritize modernization of high-risk, high-value assets (HVAs)
  • 11 recommendations to modernize both Trusted Internet Connections (TIC) and the National Cybersecurity Protection System (NCPS) to improve security while enabling those systems to migrate to cloud-based solutions
  • 8 recommendations to support agencies’ adoption of shared services to accelerate adoption of commercial cloud services and infrastructure
  • 10 recommendations designed to accelerate broad adoption of commercial cloud-based email and collaboration services, such as Microsoft Office 365 or Google’s G-Suite services
  • 8 recommendations to improve existing shared services and expand such offerings, especially to smaller agencies

The devil will be in the details. Some dates to keep in mind: Updating the Federal Cloud Computing Strategy (new report due April 30); a plan to standardize cloud contract language (due from OMB by June 30); a plan to improve the speed, reliability and reuse of authority to operate (ATO) approvals for both software-as-a-service (SaaS) and other shared services (April 1).

2. CDM’s Eye on Cyber
The driving force behind the modernization plan is cybersecurity, and a key to the government’s cyber strategy is the Department of Homeland Security’s (DHS) Continuous Diagnostics and Mitigation (CDM) program. DHS will expand CDM to “enable a layered security architecture that facilitates transition to modern computing in the commercial cloud.”

Doing so means changing gears. CDM’s Phase 1, now being deployed, is designed to identify what’s connected to federal networks. Phase 2 will identify the people on the network. Phase 3 will identify activity on the network and include the ability to identify and analyze anomalies for signs of compromise, and Phase 4 will focus on securing government data.

Now CDM will have to adopt a new charge: securing government systems in commercial clouds, something not included in the original CDM plan.

“A challenge in implementing CDM capabilities in a more cloud-friendly architecture is that security teams and security operations centers may not necessarily have the expertise available to defend the updated architecture,” DHS officials write in the modernization report. “The Federal Government is working to develop this expertise and provide it across agencies through CDM.” The department is developing a security-as-a-service model with the intent of expanding CDM’s reach beyond the 68 agencies currently using the program to include all civilian federal agencies, large and small.

3. Protecting Critical Infrastructure
Securing federal networks and data is one thing, but 85 percent of the nation’s critical infrastructure is in private, not public hands. Figuring out how best to protect privately owned critical national infrastructure, such as the electric grid, gas and oil pipelines, dams and levees, and public communications networks, has long been a thorny issue.

The private sector has historically enjoyed the freedom of managing its own security – and privacy. However, growing cyber and terrorist threats – and the potential liability that could stem from such attacks – mean those businesses also like having the guiding hand and cooperation of federal regulators.

To date, this responsibility has taken root in the DHS’s National Protection and Programs Directorate (NPPD), which operates largely beneath the public radar. That could soon change: The House voted in December to elevate NPPD to be the next operational component within DHS, joining the likes of Customs and Border Protection, the Coast Guard and the Secret Service.

NPPD would become the Cybersecurity and Infrastructure Security Agency and while the new status would not explicitly expand its portfolio, it would pave the way for increased influence within the agency and a greater voice in the national debate.

First, it’s got to clear the Senate. The Cybersecurity and Infrastructure Security Agency Act of 2017 faces an uncertain future in the upper chamber because of complex jurisdictional issues, and a gridlocked legislative process that makes passage of any bill an adventure — even if as in this case, that bill has the active backing of both the White House and DHS leadership.

4. Standards for IoT
The Internet of Things (IoT), the Internet of Everything, the wireless, connected world – call it what you will – is challenging the makers of industrial controls and network-connected technology to rethink security and their entire supply chains.

If a lightbulb, camera, motion detector – or any number of other sensors – can be controlled via networks, they can also be co-opted by bad actors in cyberspace. But while manufacturers have been quick to field network-enabled products, most have been slow to ensure those products are safe from hackers and abuse.

Rep. Jim Langevin (D-R.I.) advocates legislation to mandate better security in connected devices. “We need to ensure we approach the security of the Internet of Things with the techniques that have been successful with the smart phone and desktop computers: The policies of automatic patching, authentication and encryption that have worked in those domains need to be extended to all devices that are connected to the Internet,” he said last summer. “I believe the government can act as a convener to work with private industry in this space.”

The first private standard for IoT devices was approved in July when the American National Standards Institute (ANSI) endorsed UL 2900-1, General Requirements for Software Cybersecurity for Network-Connectable Products. Underwriters Laboratories (UL) plans two additional standards to follow: UL 2900-2-1, network-connectable healthcare systems, and UL 2900-2- for industrial controls.

Sens. Mark R. Warner (D-Va.) and Cory Gardner (R-Colo.), co-chairs of the Senate Cybersecurity Caucus, introduced The Internet of Things Cybersecurity Improvement Act of 2017 in August with an eye toward holding suppliers responsible for providing insecure connected products to the federal government.

The bill would require vendors supplying IoT devices to the U.S. government to ensure their devices are patchable, not hard-coded with unchangeable passwords and are free of known security vulnerabilities. It would also require automatic, authenticated security updates from the manufacturer. The measure has been criticized for its vague definitions and language and for limiting its scope to products sold to the federal government.

Yet in a world where cybersecurity is a growing liability concern for businesses of every stripe – and where there is a dearth of industry standards – such a measure could become a benchmark requirement imposed by non-government customers, as well.

5. Artificial Intelligence
2017 was the year when data analytics morphed into artificial intelligence (AI) in the public mindset. Government agencies are only now making the connection that their massive data troves could fuel a revolution in how they manage, fuse and use data to make decisions, deliver services and interact with the public.

According to market researcher IDC, that realization is not limited to government: “By the end of 2018,” the firm predicts, “half of manufacturers will be using analytics, IoT, and social collaboration tools to extend the integrated planning process across the entire enterprise, in real time.”

Gartner goes even further: “The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025,” the company predicts. More than half of businesses and agencies are still searching for strategies, however.

“Enterprises should focus on business results enabled by applications that exploit narrow AI technologies,” says David Cearley, vice president and Gartner Fellow, Gartner Research. “Leave general AI to the researchers and science fiction writers.”

AI and machine learning will not be stand-alone functions, but rather foundational components that underlie the applications and services agencies employ, Cearley says. For example, natural language processing – think of Amazon’s Alexa or Apple’s Siri – can now handle increasingly complicated tasks, promising more sophisticated, faster interactions when the public calls a government 800 number.

Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology’s Health and Civilian Solutions Division, says today’s challenge with AI is twofold: first, finding the right applications that provide a real return on investment, and second, overcoming security and privacy concerns.

“There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal told GovTechWorks. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

What this Means for Federal IT
Federal agencies face tipping points across the technology spectrum. The pace of change quickens as the pressure to modernize increases. While technology is an enabler, new skills will be needed for cloud integration, shared-services security and agile development. Similarly, the emergence of new products, services and providers greatly expands agencies’ choices. But each of those choices has its own downstream implications and risks, from vendor lock-in to bandwidth and run-time challenges. With each new wrinkle, agency environments become more complex, demanding ever more sophisticated expertise from those pulling those hybrid environments together.

“Cybersecurity will be the linchpin in all this,” says Stan Tyliszczak, vice president and chief engineer at GDIT. “Agencies can no longer afford the cyber risks of NOT modernizing their IT. It’s not whether or not to modernize, but how fast can we get there? How much cyber risk do we have in the meantime?”

Cloud is ultimately a massive integration exercise with no one-size-fits-all answers. Agencies will employ multiple systems in multiple clouds for multiple kinds of users. Engineering solutions to make those systems work harmoniously is essential.

“Turning on a cloud service is easy,” Tyliszczak says. “Integrating it with the other things you do – and getting that integration right – is where agencies will need the greatest help going forward.”

In the Age of Agile, Feds Rally

Agile software development is now the dominant approach to software engineering, with adoption rates reaching 55 percent in late 2016, according to research from Computer Economics of Irvine, Calif.

Today, the incremental development trend is everywhere – even in places you wouldn’t expect. “I was surprised at the degree to which agile software development and agile DevOps are being used in programs,” said Maj. Gen. Sarah Zabel, who recently became the Air Force’s director of IT acquisition process development to help the Air Force acquire systems faster. “F-35 is doing agile. F-22 is doing agile. There are so many projects doing some type of agile development.… I’ve seen between 25 and 30 in the past couple of months.”

And she wants to see more.

Zabel’s job was created for her with a specific mission in mind: “It was an expression of the frustration from our secretary and chief of staff,” she said at the Defense Systems Summit in November. “Why does it take us eight to 10 years to develop systems that will be wickedly expensive and which we know won’t be what we need when it’s finally delivered?”

There are a host of reasons, of course. Risk-averse procurement officers, arcane procurement rules and grindingly slow requirements processes are prime causes, but so are old-fashioned waterfall development processes that separate users from developers and assume that nothing will change between the time the requirements are set and the system is delivered.

Agile development, by contrast, breaks down those requirements into smaller, more manageable pieces that can be completed in short “sprints” of one to four weeks. Daily scrums bring all interested parties together to share current task information and discuss any impediments. At the conclusion of a sprint, customers get an early look at functionality and the opportunity not only to approve, but to see possibilities they hadn’t imagined before. Other times, that early access affords the opportunity to “de-scope” requirements and eliminate waste; everyone becomes more vested – and accountable – in the development process.

“I’d have a very hard time going back to waterfall,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s Health Solutions, who has been working in an agile environment since 2009. “Once you transition teams to this, they like it. The customer likes it, too.”

The 2001 Agile Manifesto launched a revolution in customer-centered software development. In the process, its authors laid out 12 underlying principles that can be applied to software development and, indeed, to many other kinds of projects, as well:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity – the art of maximizing the amount of work not done – is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Teams, in fact, are a key to the agile philosophy. Close-knit teams work together like well-oiled machines. “If one person’s tasks for a sprint are done,” Zach said, “we want them to look at the board and see how they can help others on the team. It’s a high-trust, high-competence situation. We trust the team will produce. It all revolves around the team: the overall team succeeds or stumbles. Individual heroics are not as prevalent or necessary.”

The board is a means of communicating status and progress to the group. Agile practitioners may follow one of several methodologies to structure their projects. The most common of these, Scrum, divides project tickets into three categories – Ready, Doing and Done. Tickets move across the board showing progress from conception through testing and completion.

A second agile methodology is Kanban, which is derived from Toyota’s just-in-time delivery process of the same name. In Kanban, work flows through four stages: In progress, testing, ready for release and released. Only a certain number of items can be in each state at any one time. By limiting work in progress (WIP), the team can see when bottlenecks occur and rally to solve those problems, while otherwise managing a steady workflow.
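The WIP-limit mechanics described above can be sketched as a toy Kanban board. The stage names follow the flow in the text; the limits and ticket names are illustrative:

```python
# A minimal Kanban board: work items flow through stages, and each stage
# enforces a work-in-progress (WIP) limit so bottlenecks become visible.
class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                      # stage -> max items
        self.stages = {stage: [] for stage in wip_limits}

    def add(self, item, stage):
        if len(self.stages[stage]) >= self.wip_limits[stage]:
            raise ValueError(f"WIP limit reached in '{stage}' -- clear the bottleneck first")
        self.stages[stage].append(item)

    def move(self, item, src, dst):
        # Check the destination's limit before pulling the item forward.
        if len(self.stages[dst]) >= self.wip_limits[dst]:
            raise ValueError(f"WIP limit reached in '{dst}' -- clear the bottleneck first")
        self.stages[src].remove(item)
        self.stages[dst].append(item)

board = KanbanBoard({"in progress": 3, "testing": 2,
                     "ready for release": 2, "released": 99})
board.add("ticket-1", "in progress")
board.add("ticket-2", "in progress")
board.move("ticket-1", "in progress", "testing")
print({stage: items for stage, items in board.stages.items()})
```

When a `move` fails because a downstream stage is full, the team sees the bottleneck immediately and rallies to clear it rather than starting new work, which is the behavior the methodology is designed to force.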

“Teams using Kanban are typically mature teams. They can produce at such a predictable fashion, that they abandon the sprint concept and simply pull work off a prioritized list. These teams produce a small but continual stream of business value. Teams operating at this level have a high amount of automation built into the process to maximize efficiency and quality,” Zach said. “Engaged customers groom the backlog and ensure the most needed functionality is at the top of the list. They can then trust those items will be completed first.”

Customers on Board
That may be the biggest difference of all: Instead of ordering a completed product and then going away until it’s done, the product is in a continuous state of development and the customer is likewise continuously returning to the table, reviewing results, and offering feedback. That means fewer surprises and disappointments and more opportunities to refine ideas and clarify intent.

For the customer, the big change is near constant involvement. Customers are involved at the start so developers can better understand requirements, which are contained in user stories that describe the necessary task, and they are there, as well, at the end of each sprint to see, approve and comment on the work thus far.

It can be a hard adjustment for mission-focused customer agencies. “Officials from the Office of the CIO at three agencies (Environmental Protection Agency, General Services Administration and the Department of Labor) independently reported challenges related to organizational changes, such as staff adapting to the culture shift from being business customers to taking on a more active role as product owners and project managers in the software development process,” noted the Government Accountability Office (GAO) in a November report on incremental development.

Formal Processes
The agile movement started in 2001 with the creation of the Agile Manifesto, in which a group of software visionaries sought to change the way developers and their customers approached product development by placing people and interactions ahead of processes and tools; valuing working software over comprehensive documentation; encouraging customer collaboration over contract negotiation; and embracing the ability to respond to changes over slavish adherence to plans. The group laid out 12 principles to developing better software (see box).

But agile is not all touchy-feely soft skills. At its root is order and organization. The up-front planning, daily scrum sessions, progress boards and one-, two- or three-week sprints are all structures intended to enforce discipline and make teams work more efficiently.

“Think of it this way: If I want to make a low-cal birthday cake, I can’t bake the cake, frost it and then say, ‘Oh, I want to make it low-calorie.’ It doesn’t work that way,” says Paul Black, computer scientist at the National Institute of Standards and Technology. “I have to decide to replace oil with apple sauce before I bake the cake for it to turn out right. It’s the same with software. The planning up front is crucial.”

Yet that should not be construed as trying to plan everything all at once, notes GDIT’s Zach. The critical difference is that agile development teams break down those plans into small, manageable pieces.

Robert Binder, senior engineer in the Software Solutions Division at Carnegie-Mellon University’s Software Engineering Institute (SEI), says agile is here to stay. “You might say it’s all over but the shouting,” he told GovTechWorks. “Agile development is now pretty much the way most software gets done. There are some parts of the software development universe where it’s taken a while for that change to take root: Program offices working on systems that are very long-lived, [which] tend to be on the trailing end of that.”

Now comes the hard part, at least for those arriving late to the party: actually implementing agile processes and managing the changes they entail.

“The cultural piece does represent significant barriers, just because of the way we have traditionally executed software and system acquisitions,” says Eileen Wrubel, technical lead for agile in government at SEI’s Software Solutions Division. “In the interest of being good stewards of taxpayer dollars, [government program managers] have historically tried to nail down all the requirements up front, when, in reality, we operate in a world where the ground changes under us rather frequently. However, those behaviors are still engrained in the culture: The idea that we need to lock down as much as possible because all variability may be interpreted as bad is a cultural mindset that is changing over time.”

Every small program that adopts an agile process is a step in that direction, she says. “There’s been a lot of work to look at smaller programs and engaging testing and cyber security organizations earlier and more often. There’s been a lot of guidance to look at prototyping and innovative means of acquisition. But it has been a shift due to the inertia of how the acquisition system has operated in the past.”
