
Wanted: Metrics for Measuring Cyber Performance and Effectiveness

Chief information security officers (CISOs) face a dizzying array of cybersecurity tools to choose from, each loaded with features and promised capabilities that are hard to measure or judge.

That leaves CISOs trying to balance unknown risks against growing costs, without a clear ability to justify the return on their cybersecurity investment. Not surprisingly, today’s high-threat environment makes it preferable to choose safe over sorry – regardless of cost. But is there a better way?

Some cyber insiders believe there is.

Margie Graves
Acting U.S. Federal Chief Information Officer

Acting U.S. Federal Chief Information Officer (CIO) Margie Graves acknowledges the problem.

“Defining the measure of success is hard sometimes, because it’s hard to measure things that don’t happen,” Graves said. President Trump’s Executive Order on Cybersecurity asks each agency to develop its own risk management plan, she noted. “It should be articulated on that plan how every dollar will be applied to buying down that risk.”

There is a difference, though, between a plan and an actual measure. A plan can justify an investment intended to reduce risk. But judgment, rather than hard knowledge, will determine how much risk is mitigated by any given tool.

The Defense Information Systems Agency (DISA) and the National Security Agency (NSA) have been trying to develop a methodology for measuring the actual value of a given cyber tool’s performance. Their NIPRNet/SIPRNet Cyber Security Architecture Review (NSCSAR – pronounced “NASCAR”) is a classified effort to define a framework for measuring cybersecurity performance, said DISA CIO and Risk Management Executive John Hickey.

“We just went through a drill of ‘what are those metrics that are actually going to show us the effectiveness of those tools,’ because a lot of times we make an investment, people want a return on that investment,” he told GovTechWorks in June. “Security is a poor example of what you are going after. It is really the effectiveness of the security tools or compliance capabilities.”

The NSCSAR review, conducted in partnership with NSA and the Defense Department, may point to a future means of measuring cyber defense capability. “It is a framework that actually looks at the kill chain, how the enemy will move through that kill chain and what defenses we have in place,” Hickey said, adding that NSA is working with DISA on an unclassified version of the framework that could be shared with other agencies or the private sector to measure cyber performance.

“It is a methodology,” Hickey explained. “We look at the sensors we have today and measure what functionality they perform against the threat.… We are tracking the effectiveness of the tools and capabilities to get after that threat, and then making our decisions on what priorities to fund.”

Measuring Security
NSS Labs Inc. independently tests the cybersecurity performance of firewalls and other cyber defenses, annually scoring products’ performance. The Austin, Texas, company evaluated 11 next-generation firewall (NGFW) products from 10 vendors in June 2017, comparing the effectiveness of their security performance, as well as the firewalls’ stability, reliability and total cost of ownership.

In the test, products were presumed to be able to provide basic packet filtering, stateful multi-layer inspection, network address translation, virtual private network capability, application awareness controls, user/group controls, integrated intrusion prevention, reputation services, anti-malware capabilities and SSL inspection. Among the findings:

  • Eight of 11 products tested scored “above average” in terms of both performance and cost-effectiveness; three scored below average
  • Overall security effectiveness ranged from as low as 25.8 percent up to 99.9 percent; average security effectiveness was 67.3 percent
  • Four products scored below 78.5 percent
  • Total cost of ownership ranged from $5 to $105 per protected megabit per second, with an average of $22
  • Nine products failed to detect at least one evasion attempt, while only two detected all evasion attempts
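
To see how figures like these might inform a purchase decision, consider a toy comparison. The products and numbers below are hypothetical (only the value ranges echo the NSS findings above); the sketch ranks notional firewalls by effectiveness per dollar of total cost of ownership:

```python
# Illustrative only: hypothetical products, not actual NSS Labs results.
# Each entry: (name, security effectiveness in percent,
#              total cost of ownership in $ per protected Mbps).
products = [
    ("Firewall A", 99.9, 105.0),
    ("Firewall B", 78.5, 22.0),
    ("Firewall C", 25.8, 5.0),
]

def value_score(effectiveness_pct, tco_per_mbps):
    """Effectiveness points bought per dollar of TCO per protected Mbps."""
    return effectiveness_pct / tco_per_mbps

# Rank products by value, best first.
ranked = sorted(products, key=lambda p: value_score(p[1], p[2]), reverse=True)
for name, eff, tco in ranked:
    print(f"{name}: {eff:.1f}% effective, ${tco:.0f}/Mbps, "
          f"value {value_score(eff, tco):.2f} pts/$")
```

A single ratio like this obviously cannot capture stability, reliability or evasion resistance, which is why NSS reports those dimensions separately.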

NSS conducted similar tests of advanced endpoint protection tools, data center firewalls, and web application firewalls earlier this year.

But point-in-time performance tests don’t provide a reliable measure of ongoing performance. And measuring the effectiveness of a single tool does not necessarily indicate how well it performs its particular duties as part of a suite of tools, notes Robert J. Carey, vice president within the Global Solutions division at General Dynamics Information Technology (GDIT). The former U.S. Navy CIO and Defense Department principal deputy CIO says that though these tests are valuable, they still make it hard to quantify and compare the performance of different products in an organization’s security stack.

The evolution and blurring of lines between different cybersecurity tools – firewalls, intrusion detection and prevention, gateways, traffic analysis tools, threat intelligence, anomaly detection and so on – mean it’s easy to add another tool to one’s stack. But like any multivariate function, the stack makes it hard to isolate each tool’s individual contribution to threat protection, and hard to know what you could do without.

“We don’t know what an adequate cyber security stack looks like. What part of the threat does the firewall protect against, the intrusion detection tool, and so on?” Carey says. “We perceive that the tools are part of the solution. But it’s difficult to quantify the benefit. There’s too much marketing fluff about features and not enough facts.”

Mike Spanbauer, vice president of research strategy at NSS, says this is a common concern, especially in large, managed environments — as is the case in many government instances. One way to address it is to replicate the security stack in a test environment and experiment to see how tools perform against a range of known, current threats while under different configurations and settings.

Another solution is to add one more tool to monitor and measure performance. NSS’ Cyber Advanced Warning System (CAWS) provides continuous security validation monitoring by capturing live threats and then injecting them into a test environment mirroring customers’ actual security stacks. New threats are identified and tested non-stop. If they succeed in penetrating the stack, system owners are notified so they can update their policies to stop that threat in the future.

“We harvest the live threats and capture those in a very careful manner and preserve the complete properties,” Spanbauer said. “Then we bring those back into our virtual environment and run them across the [cyber stack] and determine whether it is detected.”

Adding more tools and solutions isn’t necessarily what Carey had in mind. While that monitoring may reduce risk, it also adds another expense.

And measuring value in terms of return on investment is a challenge when every new tool adds real cost and results are so difficult to define. In cybersecurity, managing risk has become the name of the game, but actually calculating risk is hard.

The National Institute of Standards and Technology (NIST) created the 800-53 security controls and the cybersecurity risk management framework that encompass today’s best practices. Carey worries that risk management delivers an illusion of security by accepting some level of vulnerability depending on level of investment. The trouble with that is that it drives a compliance culture in which security departments focus on following the framework more than defending the network and securing its applications and data.

“I’m in favor of moving away from risk management,” GDIT’s Carey says. “It’s what we’ve been doing for the past 25 years. It’s produced a lot of spend, but no measurable results. We should move to effects-based cyber. Instead of 60 shades of gray, maybe we should have just five well defined capability bands.”

The ultimate goal: Bring compliance into line with security so that doing the former delivers the latter. But the evolving nature of cyber threats suggests that may never be possible.

Automated tools will only be as good as the data and intelligence built into them. True, automation improves speed and efficiency, Carey says. “But it doesn’t necessarily make me better.”

System owners should be able to look at their cyber stack and determine exactly how much better security performance would be if they added another tool or upgraded an existing one. If that were the case, they could spend most of their time focused on stopping the most dangerous threats – zero-day vulnerabilities that no tool can identify because they’ve never been seen before – rather than ensuring all processes and controls are in place to minimize risk in the event of a breach.

Point-in-time measures based on known vulnerabilities and available threats help, but may be blind to new or emerging threats of the sort that the NSA identifies and often keeps secret.

The NSCSAR tests DISA and NSA perform include that kind of advanced threat. Rather than trying to measure overall security, they’ve determined that breaking it down into the different levels of security makes sense. Says DISA’s Hickey: “You’ve got to tackle ‘what are we doing at the perimeter, what are we doing at the region and what are we doing at the endpoint.’” A single overall picture isn’t really possible, he says. Rather, one has to ask: “What is that situational awareness? What are those gaps and seams? What do we stop [doing now] in order to do something else? Those are the types of measurements we are looking at.”

How Employers Try to Retain Tech Talent

As soon as Scott Algeier hires a freshly minted IT specialist out of college, a little clock starts ticking inside his head.

It’s not that he doesn’t have plenty to offer new hires in his role as director of the non-profit Information Technology-Information Sharing and Analysis Center (IT-ISAC) in Manassas, Va., nor that IT-ISAC cannot pay a fair wage. The issue is that Algeier is in an all-out war for talent – and experience counts. Contractors, government agencies – indeed, virtually every other employer across the nation – value experience almost as much as education and certifications.

As employees gain that experience, they see their value grow. “If I can get them to stay for at least three years, I consider that a win,” says Algeier. “We have one job where it never lasts more than two years. The best I can do is hire quality people right out of college, train them and hope they stick around for three years.”

The Military Context
An October 2016 white paper from the Air Force University’s Research Institute says churn is even more dire among those in the military, particularly in the Air Force, which is undergoing a massive expansion of its cyber operations units.

The present demand for cybersecurity specialists in both the public and private sectors could undoubtedly lead the Air Force to be significantly challenged in retaining its most developed and experienced cyber Airmen in the years ahead, writes Air Force Major William Parker IV, author of the study.

“In the current environment, shortages in all flavors of cyber experts will increase, at least in the foreseeable future. Demand for all varieties of cybersecurity-skilled experts in both the private and public sectors is only rising.”

Meanwhile, Parker writes, there are an estimated 30,000 or more unfilled cybersecurity jobs across the federal government. According to the International Information System Security Certification Consortium (ISC2), demand for cyber-certified professionals will continue to increase at 11 percent per year for the foreseeable future. Some estimates place the global cyber workforce shortage at close to a million.

The military – both a primary trainer and employer in cyber – offers some interesting insight. A recent survey of Air Force cyber specialists choosing between re-enlistment and pursuit of opportunities in the civilian world indicates those who chose to reenlist were primarily influenced by job security and benefits, including health, retirement, education and training.

“For those Airmen who intended to separate, civilian job opportunities, pay and allowances, bonuses and special pays, promotion opportunities and the evaluation system contributed most heavily to their decisions [to leave the military],” Parker’s paper concluded.

Indeed, several airmen who expressed deep pride and love of serving in the Air Force stated they chose to separate because they felt their skills were not being fully utilized.

“Also, they were aware they had the ability to earn more income for their families in the private sector,” adds Parker. The re-enlistment bonuses the Air Force offered were not enough to make up the pay differences these airmen saw.

“It is also interesting that many of those who say that they will reenlist, included optimistic comments that they hope ‘someday’ they may be able to apply the cyber skills they have attained in the service of the nation.”

Tech companies present a different set of competitive stresses, competing with high pay, industrial glamor and attractive perks. Apple’s new Cupertino, Calif., headquarters epitomizes the age: an airy glass donut that looks like it just touched down from a galaxy far, far away, filled with cafés, restaurants, a wellness center, a child care facility and even an Eden-like garden inside the donut hole. Amazon’s $4 billion urban campus is anchored by the improbable “spheres”: three interlocking, multistory glass structures housing treehouse meeting rooms, offices and collaborative spaces filled with trees, rare plants, waterfalls and a river that runs through it all.

While Washington, D.C., contractors and non-profits do not have campus rivers or stock option packages, they do have other ways to compete. At the forefront are the high-end missions they and their customers perform. They also offer professional development, certifications, job flexibility and, sometimes, the ability to work from home.

“We work with the intelligence community and the DoD,” says Chris Hiltbrand, vice president of Human Resources for General Dynamics Information Technology’s Intelligence Solutions Division. “Our employees have the opportunity to apply cutting-edge technologies to interesting and important missions that truly make a difference to our nation. It’s rewarding work.”

While sometimes people leave for pay packages from Silicon Valley, he admits, pay is rarely the only issue employees consider. Work location, comfort and familiarity, quality of work, colleagues, career opportunities and the impact of working on a worthwhile mission, all play a role.

“It’s not all about maximizing earning potential,” Hiltbrand says. “In terms of money, people want to be compensated fairly – relative to the market – for the work they do. We also look at other aspects of what we can offer, and that is largely around the customer missions we support and our reputation with customers and the industry.”

Especially for veterans, mission, purpose and service to the nation are real motivators. GDIT then goes a step further, supporting staff who are members of the National Guard or military reservists with extra benefits, such as paying the difference in salary when staff go on active duty.

Mission also factors into the equation at IT-ISAC, Algeier says. “Our employees get to work with some of the big hitters in the industry, and that experience definitely keeps them here longer than they might otherwise stay. But over time, that also has an inevitable effect.

“I get them here by saying: ‘Hey, look who you get to work with,’” he says. “And then within a few years, it’s ‘Hey, look who they’re going to go work with.’”

Perks and Benefits
Though automation may seem like a way to replace people rather than entice them to stay, it can be a valuable, if unlikely retention tool.

Automated tools spare staff from the tedious work some find demoralizing (or boring), and save hours or even days for higher-level work, Algeier says. “That means they can now go do far more interesting work instead.” More time doing interesting work leads to happier employees, which in turn makes staff more likely to stay put.

Fitness and wellness programs are two other creative ways employers invest in keeping the talent they have. Gyms, wellness centers, in-house yoga studios, exercise classes and even CrossFit boxes are some components. Exercise helps relieve stress, and stress can trigger employees to start looking elsewhere for work – so it stands to reason that reducing stress can ease the strain of work and boost retention. Keeping people motivated helps keep them from the negative feelings that might lead them to seek satisfaction elsewhere.

Providing certified life coaches is another popular way employers can help staff, focusing on both personal and professional development. Indeed, Microsoft deployed life coaches at its Redmond headquarters more than a decade ago. They specialize in working with adults with Attention Deficit Hyperactivity Disorder (ADHD), and can help professionals overcome weaknesses and increase performance.

Such benefits used to be the domain of Silicon Valley alone, but not anymore. Fairfax, Va.-based boutique security company MKACyber, was launched by Mischel Kwon after posts as director of the Department of Homeland Security’s U.S. Computer Emergency Response Team and as vice president of public sector security solutions for Bedford, Mass.-based RSA. Kwon built her company with what she calls “a West Coast environment.”

The company provides breakfast, lunch and snack foods, private “chill” rooms, and operates a family-first environment, according to a job posting. It also highlights the company’s strong commitment to diversity and helps employees remain “life-long learners.”

Kwon says diversity is about more than just hiring the right mix of people. How you treat them is the key to how long they stay.

“There are a lot of things that go on after the hire that we have to concern ourselves with,” she said at a recent RSA conference.

Retention is a challenging problem for everyone in IT, Kwon says, but managers can do more to think differently about how to hire and keep new talent, beginning by focusing not just on raw technical knowledge, but also on soft skills that make a real difference when working on projects and with teams.

“We’re very ready to have people take tests, have certifications, and look at the onesy-twosy things that they know,” says Kwon. “What we’re finding though, is just as important as the actual content that they know, is their actual work ethic, their personalities. Do they fit in with other people? Do they work well in groups? Are they life-long learners? These types of personal skills are as important as technical skills,” Kwon says. “We can teach the technical skills. It’s hard to teach the work ethic.”

Flexible Work Schedules
Two stereotypes define the modern tech age. One is the all-night coder working in a perk-laden office, fueled by free food, lattes and energy drinks. The other is the virtual meeting populated by individuals spread out across the nation or the globe, sitting in home offices or bedrooms, working on their laptops. For many, working from home is no longer a privilege. It’s either a right or, at least, an opportunity to make work and life balance out. Have to wait for a plumber to fix the leaky sink? No problem: Dial in remotely. In the District of Columbia, the government and many employers encourage regular telework as a means to reduce traffic and congestion – as well as for convenience.

Working from home also inevitably draws questions, however. IBM, for years one of the staunchest supporters of telework, is now backtracking on the culture it built, telling workers they need to be in the office regularly if they want to stay employed. The policy shift follows similar moves by Yahoo!, among others.

GDIT’s Hiltbrand says because its staff works at company locations as well as on government sites, remote work is common.

“We have a large population of people who have full or part-time teleworking,” he says. “We are not backing away from that model. If anything, we’re trying to expand on that culture of being able to work from anywhere, anytime and on any device.”

Of course, that’s not possible for everyone. Staff working at military and intelligence agencies don’t typically have that flexibility. “But aside from that,” adds Hiltbrand, “we’re putting a priority on the most flexible work arrangements possible to satisfy employee needs.”

Automation Critical to Securing Code in an Agile, DevOps World

The world’s biggest hack might have happened to anyone. The same software flaw hackers exploited to expose 145 million identities in the Equifax database – most likely yours included – was also embedded in thousands of other computer systems belonging to all manner of businesses and government agencies.

The software in question was a commonly used open-source piece of Java code known as Apache Struts. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) issued a warning March 8 detailing the risk posed by a flaw in that code. Like many others, Equifax reviewed the warning and searched its systems for the affected code. Unfortunately, the Atlanta-based credit bureau failed to find it among the millions of lines of code in its systems. Hackers exploited the flaw three days later.

Open source and third-party software components like Apache Struts now make up between 80 and 90 percent of software produced today, says Derek Weeks, vice president and DevOps advocate at Sonatype, a provider of security tools and manager of one of the world’s largest open source software collections, The Central Repository. Programmers completed nearly 60 billion software downloads from the repository in 2017 alone.

“If you are a software developer in any federal agency today, you are very aware that you are using open-source and third-party [software] components in development today,” Weeks says. “The average organization is using 125,000 Java open source components – just Java alone. But organizations aren’t just developing in Java, they’re also developing in JavaScript, .Net, Python and other languages. So that number goes up by double or triple.”

Reusing software saves time and money. It’s also critical to supporting the rapid cycles favored by today’s Agile and DevOps methodologies. Yet while reuse promises time-tested code, it is not without risk: Weeks estimates one in 18 downloads from The Central Repository – 5.5 percent – contains a known vulnerability. Because it never deletes anything, the repository is a user-beware system. It’s up to software developers themselves – not the repository – to determine whether or not the software components they download are safe.

Manual Review or Automation?

Performing a manual, detailed security analysis of each open-source software component – to ensure it is safe and free of vulnerabilities – takes hours. That, in turn, consumes precious development time, undermining the intended efficiency of reusing code in the first place.

Tools from Sonatype, Black Duck of Burlington, Mass., and others automate most of that work. Sonatype’s Nexus Firewall for example, scans modules as they come into the development environment and stops them if they contain flaws. It also suggests alternative solutions, such as newer versions of the same components, that are safe. Development teams can employ a host of automated tools to simplify or speed other parts of the build, test and secure processes.
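
The gating technique described above can be sketched generically. The following is a minimal illustration only, not Nexus Firewall’s actual API; the vulnerability feed and upgrade map here are hypothetical stand-ins (though CVE-2017-5638 is the real Struts advisory and 2.3.32 its fixed release):

```python
# Minimal sketch of a dependency "firewall" gate, assuming a local feed of
# known-vulnerable component versions. Illustrates the general technique
# only; this is not the Nexus Firewall API.
KNOWN_VULNERABLE = {
    # (component, version) -> advisory id
    ("org.apache.struts:struts2-core", "2.3.31"): "CVE-2017-5638",
}

SAFE_UPGRADES = {
    # hypothetical mapping from component to its patched release
    "org.apache.struts:struts2-core": "2.3.32",
}

def screen_component(component, version):
    """Block a component with a known flaw and suggest a safer version."""
    advisory = KNOWN_VULNERABLE.get((component, version))
    if advisory is None:
        return {"allowed": True}
    return {
        "allowed": False,
        "advisory": advisory,
        "suggested_version": SAFE_UPGRADES.get(component),
    }

print(screen_component("org.apache.struts:struts2-core", "2.3.31"))
```

A real tool would query a continuously updated vulnerability database rather than a static dictionary, but the decision logic – intercept the download, check the version, block and suggest – is the same.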

Some of these are commercial products; others, like the software itself, are open-source tools. For example, Jenkins is a popular open-source DevOps tool that helps developers quickly find and solve defects in their codebase. These tools focus on the reused code in a system; static analysis tools, like those from Veracode, focus on the critical custom code that glues that open-source software together into a working system.

“Automation is key to agile development,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s (GDIT) Health Solutions. “The tools now exist to automate everything: the builds, unit tests, functional testing, performance testing, penetration testing and more. Ensuring the code behind new functionality not only works, but is also secure, is critical. We need to know that the stuff we’re producing is of high quality and meets our standards, and we try to automate as much of these reviews as possible.”

But automated screening and testing is still far from universal. Some use it; others don’t. Weeks describes one large financial services firm that prided itself on its software team’s rigorous governance process. Developers were required to ask permission from a security group before using open source components. The security team’s thorough reviews took about 12 weeks for new components and six to seven weeks for new versions of components already in use. Even so, officials estimated some 800 open source components had made it through those reviews and were in use in their 2,000-plus deployed applications.

Then, Sonatype was invited to scan the firm’s deployed software. “We found more than 13,000 open source components were running in those 2,000 applications,” Weeks recalls. “It’s not hard to see what happened. You’ve got developers working on two-week sprints, so what do you think they’re going to do? The natural behavior is, ‘I’ve got a deadline, I have to meet it, I have to be productive.’ They can’t wait 12 weeks for another group to respond.”

Automation, he said, is the answer.

Integration and the Supply Chain

Building software today is a lot like building a car: Rather than manufacture every component, from the screws to the tires to the seat covers, manufacturers focus their efforts on the pieces that differentiate products and outsource the commodity pieces to suppliers.

Chris Wysopal, chief technology officer at Veracode, said the average software application today uses 46 ready-made components. Like Sonatype, Veracode offers a testing tool that scans components for known vulnerabilities; its test suite also includes a static analysis tool to spot problems in custom code and a dynamic analysis tool that tests software in real time.

As development cycles get shorter, demand for automation is increasing, Wysopal says. The shift from waterfall to Agile over roughly five years shortened typical development cycles from months to weeks. The advent of DevOps and continuous development accelerates that further, from weeks to days or even hours.

“We’re going through this transition ourselves. When we started Veracode 11 years ago, we were a waterfall company. We did four to 10 releases a year,” Wysopal says. “Then we went to Agile and did 12 releases a year and now we’re making the transition to DevOps, so we can deploy on a daily basis if we need or want to. What we see in most of our customers is fragmented methodologies: It might be 50 percent waterfall, 40 percent agile and 10 percent DevOps. So they want tools that can fit into that DevOps pipeline.”

A tool built for speed can support slower development cycles; the opposite, however, is not the case.

One way to enhance testing is to let developers know sooner that they may have a problem. Veracode is developing a product that will scan code as it’s written, running a scan every few seconds and alerting the developer as soon as a problem is spotted. This has two effects: First, it cleans up problems more quickly; second, it helps train developers to avoid those problems in the first place. In that sense, it’s like spell check in a word processing program.
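
The spell-check analogy can be sketched in a few lines. This toy example is purely illustrative – real analyzers parse code rather than pattern-match it, and this is not Veracode’s product – but it shows the rescan-on-edit idea:

```python
import re

# Toy "spell check for security": rescan a source buffer after each edit
# and flag risky patterns immediately. The patterns below are hypothetical
# examples, not a real rule set.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on possibly untrusted input",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan_buffer(source):
    """Return (line_number, warning) pairs for each risky pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

# Simulate a rescan after the developer types a new line:
draft = "user = input()\npassword = 'hunter2'\n"
for lineno, warning in scan_buffer(draft):
    print(f"line {lineno}: {warning}")
```

Because each scan is cheap, it can run every few seconds without interrupting the developer – the feedback loop, not the pattern list, is the point.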

“It’s fundamentally changing security testing for a just-in-time programming environment,” Wysopal says.

Yet as powerful and valuable as automation is, these tools alone will not make you secure.

“Automation is extremely important,” he says. “Everyone who’s doing software should be doing automation. And then manual testing on top of that is needed for anyone who has higher security needs.” He puts the financial industry and government users into that category.

For government agencies that contract for most of their software, understanding what kinds of tools and processes their suppliers have in place to ensure software quality, is critical. That could mean hiring a third-party to do security testing on software when it’s delivered, or it could mean requiring systems integrators and development firms to demonstrate their security processes and procedures before software is accepted.

“In today’s Agile-driven environment, software vulnerability can be a major source of potential compromise to sprint cadences for some teams,” says GDIT’s Zach. “We can’t build a weeks-long manual test and evaluation cycle into Agile sprints. Automated testing is the only way we can validate the security of our code while still achieving consistent, frequent software delivery.”

According to Veracode’s State of Software Security 2017, 36 percent of the survey’s respondents do not run (or were unaware of) automated static analysis on their internally developed software. Nearly half never conduct dynamic testing in a runtime environment. Worst of all, 83 percent acknowledge releasing software before resolving security issues.

“The bottom line is all software needs to be tested. The real question for teams is what ratio and types of testing will be automated and which will be manual,” Zach says. “By exploiting automation tools and practices in the right ways, we can deliver the best possible software, as rapidly and securely as possible, without compromising the overall mission of government agencies.”

How AI Is Transforming Defense and Intelligence Technologies

A Harvard Belfer Center study commissioned by the Intelligence Advanced Research Projects Agency (IARPA), Artificial Intelligence and National Security, predicted last May that AI will be as transformative to national defense as nuclear weapons, aircraft, computers and biotech.

Advances in AI will enable new capabilities and make others far more affordable – not only to the U.S., but to adversaries as well, raising the stakes as the United States seeks to preserve its hard-won strategic overmatch in the air, land, sea, space and cyberspace domains.

The Pentagon’s Third Offset Strategy seeks to leverage AI and related technologies in a variety of ways, according to Robert Work, former deputy secretary of defense and one of the strategy’s architects. In a foreword to a new report from the market analytics firm Govini, Work says the strategy “seeks to exploit advances in AI and autonomous systems to improve the performance of Joint Force guided munitions battle networks” through:

  • Deep learning machines, powered by artificial neural networks and trained with big data sets
  • Advanced human-machine collaboration in which AI-enabled learning machines help humans make more timely and relevant combat decisions
  • AI devices that allow operators of all types to “plug into and call upon the power of the entire Joint Force battle network to accomplish assigned missions and tasks”
  • Human-machine combat teaming of manned and unmanned systems
  • Cyber- and electronic warfare-hardened, network-enabled, autonomous and high-speed weapons capable of collaborative attacks

“By exploiting advances in AI and autonomous systems to improve the warfighting potential and performance of the U.S. military,” Work says, “the strategy aims to restore the Joint Force’s eroding conventional overmatch versus any potential adversary, thereby strengthening conventional deterrence.”

Spending is growing, Govini reports, with AI and related defense program spending increasing at a compound annual rate of 14.5 percent from 2012 to 2017, and poised to grow substantially faster in coming years as advanced computing technologies come on line, driving down computational costs.

But in practical terms, what does that mean? How will AI change the way defense technology is managed, the way we gather and analyze intelligence or protect our computer systems?

Charlie Greenbacker, vice president of analytics at In-Q-Tel in Arlington, Va., the intelligence community’s strategic investment arm, sees dramatic changes ahead.

“The incredible ability of technology to automate parts of the intelligence cycle is a huge opportunity,” he said at an AI summit produced by the Advanced Technology Academic Research Center and Intel in November. “I want humans to focus on more challenging, high-order problems and not the mundane problems of the world.”

The opportunities are possible because of the advent of new, more powerful processing techniques, whether by distributing those loads across a cloud infrastructure, or using specialty processors purpose-built to do this kind of math. “Under the hood, deep learning is really just algebra,” he says. “Specialized processing lets us do this a lot faster.”
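
Greenbacker’s point that deep learning is “really just algebra” can be seen in a minimal sketch of a single neural-network layer: a dot product, a bias and a squashing function. The weights and inputs below are made up purely for illustration:

```python
import math

def dense_layer(x, weights, biases):
    """One fully connected layer: y = sigmoid(W.x + b), written as plain algebra."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(w * xi for w, xi in zip(w_row, x)) + b  # dot product plus bias
        out.append(1.0 / (1.0 + math.exp(-z)))          # sigmoid nonlinearity
    return out

# Toy 2-input, 2-neuron layer with arbitrary weights.
x = [1.0, 0.5]
W = [[0.4, -0.2], [0.1, 0.9]]
b = [0.0, -0.3]
print(dense_layer(x, W, b))
```

The specialized processors Greenbacker mentions accelerate exactly these multiply-and-add operations, performed billions of times over.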

Computer vision is one focus of interest – learning to identify faces in crowds or objects in satellite or other surveillance images – as is identifying anomalies in cyber security or text-heavy data searches. “A lot of folks spend massive amounts of time sifting through text looking for needles in the haystack,” Greenbacker continued.

The Air Force is looking at AI to help more quickly identify potential cyber attacks, said Frank Konieczny, chief technology officer in the office of the Air Force chief information officer, speaking at CyberCon 2017 in November. “We’re looking at various ways of adjusting the network or adjusting the topology based upon threats, like software-defined network capabilities as well as AI-based analysis,” he said.

Marty Trevino Jr., a former technical director and strategist for the National Security Agency, is now chief data/analytics officer at Red Alpha, an intelligence-focused tech firm based in Annapolis Junction, Md. “We are all familiar with computers beating humans in complex games – chess, Go, and so on,” Trevino says. “But experiments are showing that when humans are mated with those same computers, they beat the computer every time. It’s this unique combination of man and machine – each doing what its brain does best – that will constitute the active cyber defense (ACD) systems of tomorrow.”

Machines best humans when a task is highly defined and performed at speed and scale. “With all the hype around artificial intelligence, it is important to understand that AI is only fantastic at performing the specific tasks to which it is intended,” Trevino says. “Otherwise AI can be very dumb.”

Humans on the other hand, are better than machines when it comes to putting information in context. “While the human brain cannot match AI in specific realms,” he adds, “it is unmatched in its ability to process complex contextual information in dynamic environments. In cyber, context is everything. Context enables data-informed strategic decisions to be made.”

Artificial Intelligence and National Security
To prepare for a future in which artificial intelligence plays a heavy or dominant role in warfare and military strategy, IARPA commissioned the Harvard Belfer Center to study the issue. The center’s August 2017 report, “Artificial Intelligence and National Security,” offers a series of recommendations, including:

  • Wargames and strategy – The Defense Department should conduct AI-focused wargames to identify potentially disruptive military innovations. It should also fund diverse, long-term strategic analyses to better understand the impact and implications of advanced AI technologies
  • Prioritize investment – Building on strategic analysis, defense and intelligence agencies should prioritize AI research and development investment on technologies and applications that will either provide sustainable strategic advantages or mitigate key risks
  • Counter threats – Because others will also have access to AI technology, investing in “counter-AI” capabilities for both offense and defense is critical to long-term security. This includes developing technological solutions for countering AI-enabled forgery, such as faked audio or video evidence
  • Basic research – The speed of AI development in commercial industry does not preclude specific security requirements in which strategic investment can yield substantial returns. Increased investment in AI-related basic research through DARPA, IARPA, the Office of Naval Research and the National Science Foundation, are critical to achieving long-term strategic advantage
  • Commercial development – Although DoD cannot expect to be a dominant investor in AI technology, increased investment through In-Q-Tel and other means can be critical in attaining startup firms’ interest in national security applications

Building Resiliency
Looking at cybersecurity another way, AI can also be used to rapidly identify and repair software vulnerabilities, said Brian Pierce, director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

“We are using automation to engage cyber attackers in machine time, rather than human time,” he said. Using automation developed under DARPA funding, he said machine-driven defenses have demonstrated AI-based discovery and patching of software vulnerabilities. “Software flaws can last for minutes, instead of as long as years,” he said. “I can’t emphasize enough how much this automation is a game changer in strengthening cyber resiliency.”

Such advanced, cognitive ACD systems employ the gamut of detection tools and techniques, from heuristics to characteristic and signature-based identification, says Red Alpha’s Trevino. “These systems will be self-learning and self-healing, and if compromised, will be able to terminate and reconstitute themselves in an alternative virtual environment, having already learned the lessons of the previous engagement, and incorporated the required capabilities to survive. All of this will be done in real time.”

Seen in that context, AI is just the latest in a series of technologies the U.S. has used as a strategic force multiplier. Just as precision weapons enabled the U.S. Air Force to inflict greater damage with fewer bombs – and with less risk – AI can be used to solve problems that might otherwise take hundreds or even thousands of people. The promise is that instead of eyeballing thousands of images a day or scanning millions of network actions, computers can do the first screening, freeing up analysts for the harder task of interpreting results, says Dennis Gibbs, technical strategist, Intelligence and Security programs at General Dynamics Information Technology. “But just because the technology can do that, doesn’t mean it’s easy. Integrating that technology into existing systems and networks and processes is as much art as science. Success depends on how well you understand your customer. You have to understand how these things fit together.”

In a separate project, DARPA collaborated with a Fortune 100 company that was moving more than a terabyte of data per day across its virtual private network, and generating 12 million network events per day – far beyond the human ability to track or analyze. Using automated tools, however, the project team was able to identify a single unauthorized IP address that successfully logged into 3,336 VPN accounts over seven days.

Mathematically speaking, Pierce said, “The activity associated with this address was close to 9 billion network events with about a 1 in 10 chance of discovery.” The tipoff was a flaw in the botnet that attacked the network: Attacks were staged at exactly 57-minute intervals. Not all botnets, of course, will make that mistake, but even pseudo-random timing can be detected. He added: “Using advanced signal processing methods applied to billions of network events over weeks and months-long timelines, we have been successful at finding pseudo random botnets.”
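
A hedged sketch of the simplest version of the analysis Pierce describes: given a stream of event timestamps, look for one dominant inter-arrival interval. The tolerance and threshold values below are illustrative assumptions, not DARPA’s actual method:

```python
from collections import Counter

def detect_fixed_interval(timestamps, tolerance=5):
    """Flag an event stream whose inter-arrival times (in seconds) cluster
    around a single value -- the signature of a botnet beaconing on a fixed
    schedule. Returns the interval, or None if no dominant interval exists."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return None
    # Bucket the gaps to absorb small jitter, then look for one dominant bucket.
    buckets = Counter(round(g / tolerance) * tolerance for g in gaps)
    interval, count = buckets.most_common(1)[0]
    return interval if count / len(gaps) > 0.9 else None

# Simulated logins every 57 minutes (3,420 seconds), with slight jitter.
events = [i * 3420 + (i % 3) for i in range(50)]
print(detect_fixed_interval(events))  # 3420
```

Truly pseudo-random beacons defeat this naive histogram, which is why the DARPA work Pierce cites moves to signal-processing methods applied over weeks of data.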

On the flip side, however, AI superiority will not be a given in cyberspace. Unlike air, land, sea or space, cyber is a man-made warfare domain, so it’s fitting that the fight there could end up being machine vs. machine.

The Harvard Artificial Intelligence and National Security study notes emphatically that while AI will make it easier to sort through ever greater volumes of intelligence, “it will also be much easier to lie persuasively.” The use of Photoshop and other image editors is well understood and has been for years. But recent advances in video editing have made it reasonably easy to forge audio and video files.

A trio of University of Washington researchers announced in a research paper published in July that they had used AI to synthesize a photorealistic, lip-synced video of former President Barack Obama. While the researchers used real audio, it’s easy to see the dangers posed if audio is also manipulated.

While the authors describe potential positive uses of the technology – such as “the ability to generate high-quality video from audio [which] could significantly reduce the amount of bandwidth needed in video coding/transmission” – potential nefarious uses are just as clear.

Five Federal IT Trends to Watch in 2018

Out with the old, in with the new. As the new year turns, it’s worth looking back on where we’ve been to better grasp where we’re headed tomorrow.

Here are five trends that took off in the year past and will shape the year ahead:

1. Modernization
The White House spent most of 2017 building its comprehensive Report to the President on Federal IT Modernization, and it will spend most of 2018 executing the details of that plan. One part assessment, one part roadmap, the report defines the IT challenges agencies face and lays out a future that will radically alter the way feds manage and acquire information technology.

The plan calls for consolidating federal networks and pooling resources and expertise by adopting common, shared services. Those steps will accelerate cloud adoption and centralize control over many commodity IT services. The payoff, officials argue: “A modern Federal IT architecture where agencies are able to maximize secure use of cloud computing, modernize Government-hosted applications and securely maintain legacy systems.”

What that looks like will play out over the coming months as agencies respond to a series of information requests and leaders at the White House, the Office of Management and Budget, the Department of Homeland Security and the National Institute of Standards and Technology respond to 50 task orders by July 4, 2018.

Among them:

  • A dozen recommendations to prioritize modernization of high-risk, high-value assets (HVAs)
  • 11 recommendations to modernize both Trusted Internet Connections (TIC) and the National Cybersecurity Protection System (NCPS) to improve security while enabling those systems to migrate to cloud-based solutions
  • 8 recommendations to support agencies’ adoption of shared services, accelerating their move to commercial cloud services and infrastructure
  • 10 recommendations designed to accelerate broad adoption of commercial cloud-based email and collaboration services, such as Microsoft Office 365 or Google’s G-Suite services
  • 8 recommendations to improve existing shared services and expand such offerings, especially to smaller agencies

The devil will be in the details. Some dates to keep in mind: Updating the Federal Cloud Computing Strategy (new report due April 30); a plan to standardize cloud contract language (due from OMB by June 30); a plan to improve the speed, reliability and reuse of authority to operate (ATO) approvals for both software-as-a-service (SaaS) and other shared services (April 1).

2. CDM’s Eye on Cyber
The driving force behind the modernization plan is cybersecurity, and a key to the government’s cyber strategy is the Department of Homeland Security’s (DHS) Continuous Diagnostics and Mitigation (CDM) program. DHS will expand CDM to “enable a layered security architecture that facilitates transition to modern computing in the commercial cloud.”

Doing so means changing gears. CDM’s Phase 1, now being deployed, is designed to identify what’s connected to federal networks. Phase 2 will identify the people on the network. Phase 3 will identify activity on the network and include the ability to identify and analyze anomalies for signs of compromise, and Phase 4 will focus on securing government data.

Now CDM will have to adopt a new charge: securing government systems in commercial clouds, something not included in the original CDM plan.

“A challenge in implementing CDM capabilities in a more cloud-friendly architecture is that security teams and security operations centers may not necessarily have the expertise available to defend the updated architecture,” DHS officials write in the modernization report. “The Federal Government is working to develop this expertise and provide it across agencies through CDM.” The department is developing a security-as-a-service model with the intent of expanding CDM’s reach beyond the 68 agencies currently using the program to include all civilian federal agencies, large and small.

3. Protecting Critical Infrastructure
Securing federal networks and data is one thing, but 85 percent of the nation’s critical infrastructure is in private, not public hands. Figuring out how best to protect privately owned critical national infrastructure – the electric grid, gas and oil pipelines, dams and levees, public communications networks – has long been a thorny issue.

The private sector has historically enjoyed the freedom of managing its own security – and privacy.  However, growing cyber and terrorist threats and the potential liability that could stem from such attacks means those businesses also like having the guiding hand and cooperation of federal regulators.

To date, this responsibility has taken root in the DHS’s National Protection and Programs Directorate (NPPD), which operates largely beneath the public radar. That could soon change: The House voted in December to elevate NPPD to be the next operational component within DHS, joining the likes of Customs and Border Protection, the Coast Guard and the Secret Service.

NPPD would become the Cybersecurity and Infrastructure Security Agency and while the new status would not explicitly expand its portfolio, it would pave the way for increased influence within the agency and a greater voice in the national debate.

First, it’s got to clear the Senate. The Cybersecurity and Infrastructure Security Agency Act of 2017 faces an uncertain future in the upper chamber because of complex jurisdictional issues, and a gridlocked legislative process that makes passage of any bill an adventure — even if as in this case, that bill has the active backing of both the White House and DHS leadership.

4. Standards for IoT
The Internet of Things (IoT), the Internet of Everything, the wireless, connected world – call it what you will – is challenging the makers of industrial controls and network-connected technology to rethink security and their entire supply chains.

If a lightbulb, camera, motion detector – or any number of other sensors – can be controlled via networks, they can also be co-opted by bad actors in cyberspace. But while manufacturers have been quick to field network-enabled products, most have been slow to ensure those products are safe from hackers and abuse.

Rep. Jim Langevin (D-R.I.) advocates legislation to mandate better security in connected devices. “We need to ensure we approach the security of the Internet of Things with the techniques that have been successful with the smart phone and desktop computers: The policies of automatic patching, authentication and encryption that have worked in those domains need to be extended to all devices that are connected to the Internet,” he said last summer. “I believe the government can act as a convener to work with private industry in this space.”

The first private standard for IoT devices was approved in July when the American National Standards Institute (ANSI) endorsed UL 2900-1, General Requirements for Software Cybersecurity for Network-Connectable Products. Underwriters Laboratories (UL) plans two additional standards to follow: UL 2900-2-1, network-connectable healthcare systems, and UL 2900-2- for industrial controls.

Sens. Mark R. Warner (D-Va.) and Cory Gardner (R-Colo.), co-chairs of the Senate Cybersecurity Caucus, introduced The Internet of Things Cybersecurity Improvement Act of 2017 in August with an eye toward holding suppliers responsible for providing insecure connected products to the federal government.

The bill would require vendors supplying IoT devices to the U.S. government to ensure their devices are patchable, not hard-coded with unchangeable passwords and are free of known security vulnerabilities. It would also require automatic, authenticated security updates from the manufacturer. The measure has been criticized for its vague definitions and language and for limiting its scope to products sold to the federal government.
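
A minimal sketch of the kind of compliance audit the bill implies. The device-configuration format and default-credential list here are hypothetical, invented for illustration; real audits draw on vendor-specific credential databases:

```python
# Hypothetical default-credential list; real audits use far larger,
# vendor-specific databases of factory credentials.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_device(config):
    """Return a list of findings for one device-configuration dict.
    The keys used here (username, password, patchable) are illustrative."""
    findings = []
    if (config.get("username"), config.get("password")) in DEFAULT_CREDENTIALS:
        findings.append("hard-coded or default credentials")
    if not config.get("patchable", False):
        findings.append("firmware cannot be updated")
    return findings

camera = {"username": "admin", "password": "admin", "patchable": False}
print(audit_device(camera))
```

A camera shipped with factory credentials and no update path fails on both counts – precisely the class of device the bill would bar from federal procurement.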

Yet in a world where cybersecurity is a growing liability concern for businesses of every stripe – and where there is a dearth of industry standards – such a measure could become a benchmark requirement imposed by non-government customers, as well.

5. Artificial Intelligence
2017 was the year when data analytics morphed into artificial intelligence (AI) in the public mindset. Government agencies are only now making the connection that their massive data troves could fuel a revolution in how they manage, fuse and use data to make decisions, deliver services and interact with the public.

According to market researcher IDC, that realization is not limited to government: “By the end of 2018,” the firm predicts, “half of manufacturers will be using analytics, IoT, and social collaboration tools to extend the integrated planning process across the entire enterprise, in real time.”

Gartner goes even further: “The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025,” the company predicts. More than half of businesses and agencies are still searching for strategies, however.

“Enterprises should focus on business results enabled by applications that exploit narrow AI technologies,” says David Cearley, vice president and Gartner Fellow, Gartner Research. “Leave general AI to the researchers and science fiction writers.”

AI and machine learning will not be stand-alone functions, but rather foundational components that underlie the applications and services agencies employ, Cearley says. For example, natural language processing – think of Amazon’s Alexa or Apple’s Siri – can now handle increasingly complicated tasks, promising more sophisticated, faster interactions when the public calls a government 800 number.

Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology’s Health and Civilian Solutions Division, says today’s challenge with AI is twofold: first, finding the right applications that provide a real return on investment, and second, overcoming security and privacy concerns.

“There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal told GovTechWorks. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

What this Means for Federal IT
Federal agencies face tipping points across the technology spectrum. The pace of change quickens as the pressure to modernize increases. While technology is an enabler, new skills will be needed for cloud integration, shared services security and agile development. Similarly, the emergence of new products, services and providers greatly expands agencies’ choices. But each of those choices has its own downstream implications and risks, from vendor lock-in to bandwidth and run-time challenges. With each new wrinkle, agency environments become more complex, demanding ever more sophisticated expertise from those pulling those hybrid environments together.

“Cybersecurity will be the linchpin in all this,” says Stan Tyliszczak, vice president and chief engineer at GDIT. “Agencies can no longer afford the cyber risks of NOT modernizing their IT. It’s not whether or not to modernize, but how fast can we get there? How much cyber risk do we have in the meantime?”

Cloud is ultimately a massive integration exercise with no one-size-fits-all answers. Agencies will employ multiple systems in multiple clouds for multiple kinds of users. Engineering solutions to make those systems work harmoniously is essential.

“Turning on a cloud service is easy,” Tyliszczak says. “Integrating it with the other things you do – and getting that integration right – is where agencies will need the greatest help going forward.”

In the Age of Agile, Feds Rally

Agile software development is now the dominant approach to software engineering, with adoption rates reaching 55 percent in late 2016, according to research from Computer Economics of Irvine, Calif.

Today, the incremental development trend is everywhere – even in places you wouldn’t expect. “I was surprised at the degree to which agile software development and agile DevOps are being used in programs,” said Maj. Gen. Sarah Zabel, who recently became the Air Force’s director of IT acquisition process development to help the Air Force acquire systems faster. “F-35 is doing agile. F-22 is doing agile. There are so many projects doing some type of agile development.… I’ve seen between 25 and 30 in the past couple of months.”

And she wants to see more.

Zabel’s job was created for her with a specific mission in mind: “It was an expression of the frustration from our secretary and chief of staff,” she said at the Defense Systems Summit in November. “Why does it take us eight to 10 years to develop systems that will be wickedly expensive and which we know won’t be what we need when it’s finally delivered?”

There are a host of reasons, of course. Risk-averse procurement officers, arcane procurement rules and grindingly slow requirements processes are prime causes, but so are old-fashioned waterfall development processes that separate users from developers and assume that nothing will change between the time the requirements are set and the system is delivered.

Agile development, by contrast, breaks down those requirements into smaller, more manageable pieces that can be completed in short “sprints” of one to four weeks. Daily scrums bring all interested parties together to share current task information and discuss any impediments. At the conclusion of a sprint, customers get an early look at functionality and the opportunity not only to approve, but to see possibilities they hadn’t imagined before. Other times, that early access affords the opportunity to “de-scope” requirements and eliminate waste; everyone becomes more vested – and accountable – in the development process.

“I’d have a very hard time going back to waterfall,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s Health Solutions, who has been working in an agile environment since 2009. “Once you transition teams to this, they like it. The customer likes it, too.”

The 2001 Agile Manifesto launched a revolution in customer-centered software development. In the process, its authors laid out 12 underlying principles that can be applied to software development and, indeed, to many other kinds of projects, as well:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity – the art of maximizing the amount of work not done – is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Teams, in fact, are a key to the agile philosophy. Close-knit teams work together like well-oiled machines. “If one person’s tasks for a sprint are done,” Zach said, “we want them to look at the board and see how they can help others on the team. It’s a high-trust, high-competence situation. We trust the team will produce. It all revolves around the team: the overall team succeeds or stumbles. Individual heroics are not as prevalent or necessary.”

The board is a means of communicating status and progress to the group. Agile practitioners may follow one of several methodologies to structure their projects. The most common of these, Scrum, divides project tickets into three categories – Ready, Doing and Done. Tickets move across the board showing progress from conception through testing and completion.

A second agile methodology is Kanban, which is derived from Toyota’s just-in-time delivery process of the same name. In Kanban, work flows through four stages: In progress, testing, ready for release and released. Only a certain number of items can be in each state at any one time. By limiting work in progress (WIP), the team can see when bottlenecks occur and rally to solve those problems, while otherwise managing a steady workflow.
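
The WIP-limit mechanic can be sketched in a few lines. The stage names follow the article’s description of Kanban, but the limit values and the API below are illustrative assumptions:

```python
class KanbanBoard:
    """Minimal Kanban sketch: four stages with work-in-progress (WIP) limits."""
    STAGES = ["in progress", "testing", "ready for release", "released"]

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits          # e.g. {"testing": 2}
        self.columns = {s: [] for s in self.STAGES}

    def pull(self, ticket, stage):
        """Move a ticket into a stage only if that stage's WIP limit allows it."""
        limit = self.wip_limits.get(stage)
        if limit is not None and len(self.columns[stage]) >= limit:
            return False                      # bottleneck: the team swarms to clear it
        for col in self.columns.values():     # remove the ticket from any prior stage
            if ticket in col:
                col.remove(ticket)
        self.columns[stage].append(ticket)
        return True

board = KanbanBoard({"testing": 1})
board.pull("T1", "in progress")
board.pull("T1", "testing")
print(board.pull("T2", "testing"))  # False: testing is at its WIP limit
```

The refused pull is the point: rather than piling more work into testing, the blocked state makes the bottleneck visible so the team rallies to clear it.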

“Teams using Kanban are typically mature teams. They can produce at such a predictable fashion, that they abandon the sprint concept and simply pull work off a prioritized list. These teams produce a small but continual stream of business value. Teams operating at this level have a high amount of automation built into the process to maximize efficiency and quality,” Zach said. “Engaged customers groom the backlog and ensure the most needed functionality is at the top of the list. They can then trust those items will be completed first.”

Customers on Board
That may be the biggest difference of all: Instead of ordering a completed product and then going away until it’s done, the product is in a continuous state of development and the customer is likewise continuously returning to the table, reviewing results, and offering feedback. That means fewer surprises and disappointments and more opportunities to refine ideas and clarify intent.

For the customer, the big change is near constant involvement. Customers are involved at the start so developers can better understand requirements, which are contained in user stories that describe the necessary task, and they are there, as well, at the end of each sprint to see, approve and comment on the work thus far.

It can be a hard adjustment for mission-focused customer agencies. “Officials from the Office of the CIO at three agencies (Environmental Protection Agency, General Services Administration and the Department of Labor) independently reported challenges related to organizational changes, such as staff adapting to the culture shift from being business customers to taking on a more active role as product owners and project managers in the software development process,” noted the Government Accountability Office (GAO) in a November report on incremental development.

Formal Processes
The agile movement started in 2001 with the creation of the Agile Manifesto, in which a group of software visionaries sought to change the way developers and their customers approached product development by placing people and interactions ahead of processes and tools; valuing working software over comprehensive documentation; encouraging customer collaboration over contract negotiation; and embracing the ability to respond to changes over slavish adherence to plans. The group laid out 12 principles to developing better software (see box).

But agile is not all touchy-feely soft skills. At its root is order and organization. The up-front planning, daily scrum sessions, progress boards and one-, two- or three-week sprints are all structures intended to enforce discipline and make teams work more efficiently.

“Think of it this way: If I want to make a low-cal birthday cake, I can’t bake the cake, frost it and then say, ‘Oh, I want to make it low-calorie.’ It doesn’t work that way,” says Paul Black, computer scientist at the National Institute of Standards and Technology. “I have to decide to replace oil with apple sauce before I bake the cake for it to turn out right. It’s the same with software. The planning up front is crucial.”

Yet that should not be construed as trying to plan everything all at once, notes GDIT’s Zach. The critical difference is that agile development teams break down those plans into small, manageable pieces.

Robert Binder, senior engineer in the Software Solutions Division at Carnegie-Mellon University’s Software Engineering Institute (SEI), says agile is here to stay. “You might say it’s all over but the shouting,” he told GovTechWorks. “Agile development is now pretty much the way most software gets done. There are some parts of the software development universe where it’s taken a while for that change to take root: Program offices working on systems that are very long-lived, [which] tend to be on the trailing end of that.”

Now comes the hard part, at least for those arriving late to the party: actually implementing agile processes and managing the changes they entail.

“The cultural piece does represent significant barriers, just because of the way we have traditionally executed software and system acquisitions,” says Eileen Wrubel, technical lead for agile in government at SEI’s Software Solutions Division. “In the interest of being good stewards of taxpayer dollars, [government program managers] have historically tried to nail down all the requirements up front, when, in reality, we operate in a world where the ground changes under us rather frequently. However, those behaviors are still engrained in the culture: The idea that we need to lock down as much as possible because all variability may be interpreted as bad is a cultural mindset that is changing over time.”

Every small program that adopts an agile process is a step in that direction, she says. “There’s been a lot of work to look at smaller programs and engaging testing and cyber security organizations earlier and more often. There’s been a lot of guidance to look at prototyping and innovative means of acquisition. But it has been a shift due to the inertia of how the acquisition system has operated in the past.”
