How the Air Force Changed Tune on Cybersecurity

Peter Kim, chief information security officer (CISO) for the U.S. Air Force, calls himself Dr. Doom. Lauren Knausenberger, director of cyberspace innovation for the Air Force, is his opposite. Where he sees trouble, she sees opportunity. Where he sees reasons to say no, she seeks ways to change the question.

For Kim, the dialogue the two have shared since Knausenberger left her job atop a private-sector tech consultancy to join the Air Force has been transformational.

“I have gone into a kind of rehab for cybersecurity pros,” he says. “I’ve had to admit I have a problem: I can’t lock everything down.” He knows. He’s tried.

The two engage constantly, debating and questioning whether the decisions and steps designed to protect Air Force systems and data are having their intended effect, they said while sharing a dais at a recent AFCEA cybersecurity event in Crystal City. “Are the things we’re doing actually making us more secure or just generating a lot of paperwork?” asks Knausenberger. “We are trying to turn everything on its head.”

As for Kim, she added, “Pete’s doing really well on his rehab program.”

One way Knausenberger has turned Kim’s head has been her approach to security certification packages for new software. Instead of developing massive cert packages for every program – documentation that’s hundreds of pages thick and unlikely to ever be read – she wants the Air Force to certify the processes used to develop software, rather than the programs themselves.

“Why don’t we think about software like meat at the grocery?” she asked. “USDA doesn’t look at every individual piece of meat… Our goal is to certify the factory, not the program.”

Similarly, Knausenberger says the Air Force is trying now to apply similar requirements to acquisition contracts, accepting the idea that since finding software vulnerabilities is inevitable, it’s best to have a plan for fixing them rather than hoping to regulate them out of existence. “So you might start seeing language that says, ‘You need to fix vulnerabilities within 10 days.’ Or perhaps we may have to pay bug bounties,” she says. “We know nothing is going to be perfect and we need to accept that. But we also need to start putting a level of commercial expectation into our programs.”

Combining development, security and operations into an integrated process – DevSecOps, in industry parlance – is the new name of the game, they argue together. The aim: Build security in during development, rather than bolting it on at the end.

The takeaway from the “Hack-the-Air-Force” bug bounty programs run so far is that every such effort yields new vulnerabilities – and that thousands of pages of certification didn’t prevent them. As computing power becomes less costly and automation gets easier, hackers can be expected to use artificial intelligence to break through security barriers.

Continuous automated testing is the only way to combat their persistent threat, Kim said.

Michael Baker, CISO at systems integrator General Dynamics Information Technology, agrees. “The best way to find the vulnerabilities is to continuously monitor your environment and challenge your assumptions,” he says. “Hackers already use automated tools and the latest vulnerabilities to exploit systems. We have to beat them to it – finding and patching those vulnerabilities before they can exploit them. Robust and assured endpoint protection, combined with continuous, automated testing to find vulnerabilities and exploits, is the only way to do that.”

“I think we ought to get moving on automated security testing and penetration,” Kim added. “The days of RMF [risk management framework] packages are past. They’re dinosaurs. We’ve got to get to a different way of addressing security controls and the RMF process.”
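
As a minimal illustration of the continuous, automated checking Kim and Baker describe, the sketch below compares a software inventory against an advisory feed and assigns each finding a fix-by date. The inventory, advisories and 10-day window are hypothetical stand-ins, not Air Force or GDIT tooling.

```python
"""Illustrative sketch only: continuous, automated vulnerability checking.
The inventory, advisory feed and 10-day remediation window are hypothetical
stand-ins for real sensors, CVE feeds and contract requirements."""
from datetime import datetime, timedelta

# Hypothetical software inventory: host -> {package: installed version}
INVENTORY = {
    "web-01": {"openssl": (1, 1, 1), "nginx": (1, 18, 0)},
    "app-02": {"openssl": (3, 0, 2), "log4j": (2, 14, 1)},
}

# Hypothetical advisory feed: package -> first fixed version
ADVISORIES = {"openssl": (3, 0, 2), "log4j": (2, 17, 1)}

REMEDIATION_WINDOW = timedelta(days=10)  # e.g. "fix vulnerabilities within 10 days"


def scan(now: datetime) -> list[dict]:
    """Compare every installed package against the advisory feed."""
    findings = []
    for host, packages in INVENTORY.items():
        for pkg, installed in packages.items():
            fixed = ADVISORIES.get(pkg)
            if fixed and installed < fixed:
                findings.append({
                    "host": host,
                    "package": pkg,
                    "installed": ".".join(map(str, installed)),
                    "fix_by": (now + REMEDIATION_WINDOW).date().isoformat(),
                })
    return findings


if __name__ == "__main__":
    # A real pipeline would run this on a schedule and open remediation tickets.
    for finding in scan(datetime.utcnow()):
        print("VULNERABLE:", finding)
```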

JOMIS Will Take E-Health Records to the Frontlines

The Defense Department Military Health System Genesis electronic health records (EHR) system went live last October at Madigan Army Medical Center (Wash.), the biggest step so far in modernizing DOD’s vast MHS with a proven commercial solution. Now comes the hard part: Tying that system in with operational medicine for deployed troops around the globe.

War zones, ships at sea and aeromedical evacuations each present a new set of challenges for digital health records. Front-line units lack the bandwidth and digital infrastructure to enable cloud-based health systems like MHS Genesis. Indeed, when bandwidth is constrained, health data ranks last on the priority list, falling below command and control, intelligence and other mission data.

The Joint Operational Medicine Information Systems (JOMIS) program office oversees DOD’s operational medicine initiatives, including the legacy Theater Medical Information Program – Joint system used in today’s operational theaters of Iraq and Afghanistan, as well as aboard ships and in other remote locales.

“One of the biggest pain points we have right now is the issue of moving data from the various roles of care, from the first responder [in the war zone] to the First Aid station to something like Landstuhl (Germany) Regional Medical Center, to something in the U.S.,” Navy Capt. Dr. James Andrew Ellzy told GovTechWorks. He is deputy program executive officer (functional) for JOMIS, under the Program Executive Office, Defense Healthcare Management Systems (PEO DHMS).

PEO DHMS defines four stages, or “roles,” of care once a patient begins treatment: Role One, first responders; Role Two, forward resuscitative care; Role Three, theater hospitals; and Role Four, service-based medical facilities.

“Most of those early roles right now are still using paper records,” Ellzy said. Electronic documentation begins once medical operators are in an established location. “Good records usually start the first place that has a concrete slab.”

Among the changes MHS Genesis will bring is consolidation. The legacy AHLTA (Armed Forces Health Longitudinal Technology Application) solution and its heavily modified theater-level variant, AHLTA-T, incorporate separate systems for inpatient and outpatient support.

MHS Genesis, however, will provide a single record regardless of patient status.

For deployed medical units, that’s important. Set up and maintenance for AHLTA’s outpatient records and the Joint Composite Health Care System have always been challenging.

“In order to set up the system, you have to have the technical skillset to initialize and sustain these systems,” said Ryan Loving, director of Health IT Solutions for military health services and the VA at General Dynamics Information Technology’s (GDIT) Health and Civilian Solutions Division. “This is a bigger problem for the Army than the other services, because the system is neither operated nor maintained until they go downrange. As a result, they lack the experience to be experts in setup and sustainment.”

JOMIS’ ultimate goal, according to Stacy A. Cummings, who heads PEO DHMS, is to provide a virtually seamless representation of MHS Genesis at deployed locations.

“For the first time, we’re bringing together inpatient and outpatient, medical and dental records, so we’re going to have a single integrated record for the military health system,” Cummings said at the HIMSS 2018 health IT conference in March. Last year, she told Government CIO magazine, “We are configuring the same exact tool for low- and no-communications environments.”

Therein lies the challenge, said GDIT’s Loving. “Genesis wasn’t designed for this kind of austere environment. Adapting to the unique demands of operational medicine will require a lot of collaboration with military health, with service-specific tactical networks, and an intimate understanding of those network environments today and where they’re headed in the future.”

Operating on the tactical edge – whether doing command and control or sharing medical data – is probably the hardest problem to solve, said Tom Sasala, director of the Army Architecture Integration Center and the service’s Chief Data Officer. “The difference between the enterprise environment and the tactical environment, when it comes to some of the more modern technologies like cloud, is that most modern technologies rely on an always-on, low-latency network connection. That simply doesn’t exist in a large portion of the world – and it certainly doesn’t exist in a large portion of the Army’s enterprise.”

Military units deploy into war zones and disaster zones where commercial connectivity is either highly compromised or non-existent. Satellite connectivity is limited at best. “Our challenge is how do we find commercial solutions that we cannot just adopt, but [can] adapt for our special purposes,” Sasala said.

MHS Genesis is like any modern cloud solution in that regard. In fact, it’s based on Cerner Millennium, a popular commercial EHR platform. So while it may be perfect for garrison hospitals and clinics – and ideal for sharing medical records with other agencies, civilian hospitals and health providers – the military’s operational requirements present unique circumstances unimagined by the original system’s architects.

Ellzy acknowledges the concern. “There’s only so much bandwidth,” he said. “So if medical is taking some of it, that means the operators don’t have as much. So how do we work with the operators to get that bandwidth to move the data back and forth?”

Indeed, satellite links weren’t designed for such systems; their bandwidth and latency aren’t sufficient to accommodate the systems’ requirements. More important, when bandwidth is constrained, military systems must line up for access, and health data is literally last on the priority list. Even ideas like using telemedicine in forward locations aren’t viable. “That works well in a hospital where you have all the connectivity you need,” Sasala said. “But it won’t work so well in an austere environment with limited connectivity.”

The legacy AHLTA-T system has a store-and-forward capability that allows local storage while connectivity is constrained or unavailable, with data forwarded to a central database once it’s back online. Delays mean documentation may not be available at subsequent locations when patients are moved from one level of care to the next.

The challenge for JOMIS will be to find a way to work in theater and then connect and share saved data while overcoming the basic functional challenges that threaten to undermine the system in forward locations.

“I’ll want the ability to go off the network for a period of time,” Ellzy said, “for whatever reason, whether I’m in a place where there isn’t a network, or my network goes down or I’m on a submarine and can’t actually send information out.”

AHLTA-T manages the constrained or disconnected network situation by allowing the system to operate on a stand-alone computer (or network configuration) at field locations, relying on built-in store-and-forward functionality to save medical data locally until it can be forwarded to the Theater Medical Data Store and Clinical Data Repository. There, it can be accessed by authorized medical personnel worldwide.
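
The store-and-forward pattern AHLTA-T relies on can be illustrated with a short sketch: records are always written to a local queue first and pushed to the central repository only when a link is available. The storage layout and send() hook below are hypothetical, offered only to show the pattern, not how AHLTA-T or JOMIS are actually built.

```python
"""Illustrative sketch only: a minimal store-and-forward queue in the spirit
of AHLTA-T's approach. Local storage is a SQLite table; forward_records()
stands in for replication to a central repository."""
import json
import sqlite3


def open_local_store(path: str = "field_records.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS outbox ("
        "id INTEGER PRIMARY KEY, record TEXT NOT NULL, forwarded INTEGER DEFAULT 0)"
    )
    return conn


def store_record(conn: sqlite3.Connection, record: dict) -> None:
    """Always write locally first, regardless of connectivity."""
    conn.execute("INSERT INTO outbox (record) VALUES (?)", (json.dumps(record),))
    conn.commit()


def forward_records(conn: sqlite3.Connection, uplink_available: bool, send) -> int:
    """Push unsent records to the central store when a link is available."""
    if not uplink_available:
        return 0
    rows = conn.execute("SELECT id, record FROM outbox WHERE forwarded = 0").fetchall()
    for row_id, payload in rows:
        send(json.loads(payload))  # e.g. a call to the central repository
        conn.execute("UPDATE outbox SET forwarded = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)


if __name__ == "__main__":
    conn = open_local_store(":memory:")
    store_record(conn, {"patient": "anon-001", "role": 2, "note": "initial treatment"})
    # No connectivity yet: nothing leaves the local store.
    assert forward_records(conn, uplink_available=False, send=print) == 0
    # Link restored: queued records are forwarded and marked as sent.
    forward_records(conn, uplink_available=True, send=print)
```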

Engineering a comparable JOMIS solution will be complex and involve working around and within the MHS Genesis architecture, leveraging innovative warfighter IT infrastructure wherever possible. “We have to adapt Genesis to the store-and-forward architecture without compromising the basic functionality it provides,” said GDIT’s Loving.

Ellzy acknowledges that the compromises necessary to make AHLTA-T work led to unintended consequences.

“When you look at the legacy AHLTA versus the AHLTA-T, there are some significant differences,” he said. Extra training is necessary to use the combat theater version. That shouldn’t be the case with JOMIS. “The desire with Genesis,” Ellzy said, “is that medical personnel will need significantly less training – if any – as they move from the garrison to the deployed setting.”

Reporter Jon Anderson contributed to this report.

Recognizing the Need for Innovation in Acquisition

The President’s Management Agenda lays out ambitious plans for the federal government to modernize information technology, prepare its future workforce and improve the way it manages major acquisitions.

These are among 14 cross-agency priority goals on which the administration is focused as it seeks to jettison outdated legacy systems and embrace less cumbersome ways of doing business.

Increasingly, federal IT managers are recognizing the need for innovation in acquisition, not just technology modernization. What exactly will it take to modernize an acquisition system bound by the 1,917-page Federal Acquisition Regulation? Federal acquisition experts say the challenges have less to do with changing those rules than with human behavior – the incentives, motivations and fears of people who touch federal acquisition – from the acquisition professionals themselves to mission owners and government executives and overseers.

“If you want a world-class acquisition system that is responsive to customer needs, you have to be able to use the right tool at the right time,” says Mathew Blum, associate administrator in the Office of Federal Procurement Policy at the Office of Management and Budget. The trouble isn’t a lack of options, he said at the American Council for Technology’s ACT/IAC Acquisition Excellence conference March 27. Rather, he said, it is a lack of bandwidth and a fear of failure that conspire to keep acquisition pros from trying different acquisition strategies.

Risk aversion is a critical issue, agreed Greg Capella, deputy director of the National Technical Information Service at the Department of Commerce. “If you look at what contracting officers get evaluated on, it’s the number of protests, or the number of small business awards [they make],” he said. “It’s not how many successful procurements they’ve managed or what were the results for individual customers.”

Yet there are ways to break through the fear of failure, protests and blame that can paralyze acquisition shops and at the same time save time, save money and improve mission outcomes. Here are four:

  1. Outside Help

The General Services Administration’s (GSA) 18F digital services organization focuses on improving public facing services and internal systems using commercial-style development approaches. Its agile software development program employs a multidisciplinary team incentivized to work together and produce results quickly, said Alla Goldman Seifert, acting director of GSA’s Office of Acquisition in the Technology Transformation Service.

Her team helps other federal agencies tackle problems quickly and incrementally using an agile development approach. “We bring in a cross-functional team of human-centered design and technical experts, as well as acquisition professionals — all of whom work together to draft a statement of work and do the performance-based contracting for agile software acquisition,” she said.

Acquisition planning may be the most important part of that process. Seifert said 18F has learned a lot since launching its Agile Blanket Purchase Agreement, which drew seven protests in three venues. “But since then, every time we iterate, we make sure we right-size the scope and risk we are taking.” She added that by approaching projects in a modular way, risks are diminished and outcomes improved – a best practice that can be replicated throughout government.

“We’re really looking at software and legacy IT modernization: How do you get a mission critical program off of a mainframe? How do you take what is probably going to be a five-year modernization effort and program for it, plan for it and budget for it?” Seifert asked.

GSA experiments in other ways, as well. For example, 18F helped agencies leverage the government’s Challenge.gov platform, publishing needs and offering prizes to the best solutions. The Defense Advanced Research Projects Agency (DARPA) currently seeks ideas for more efficient use of the radio frequency spectrum in its Spectrum Collaboration Challenge. DARPA will award up to $3.5 million to the best ideas. “Even [intelligence community components] have really enjoyed this,” Seifert said. “It really is a good way to increase competition and lower barriers to entry.”

  2. Coaching and Assistance

Many program acquisition officers cite time pressure and lack of bandwidth to learn new tools as barriers to innovation. It’s a classic chicken-and-egg problem: How do you find the time to learn and try something new?

The Department of Homeland Security’s Procurement Innovation Lab (PIL) was created to help program offices do just that – and then capture and share their experience so others in DHS can leverage the results. The PIL provides coaching and advice, asking only that the accumulated knowledge be shared via webinars and other internal means.

“How do people find time to do innovative stuff?” asked Eric Cho, project lead for PIL. “Either one: find ways to do less, or two: borrow from someone else’s work.” Having a coach to help is also critical, and that’s where his organization comes in.

In less than 100 days, the PIL recently helped a Customs and Border Protection team acquire a system – essentially a high-end stud finder – to locate contraband such as drugs hidden in walls, Cho said. The effort was completed in less than half the time of an earlier, unsuccessful effort.

Acquisition cycle time can be saved in many ways, from capturing impressions immediately through group evaluations after oral presentations, to narrowing the competitive field with a down-select before running trade-off analyses on qualified finalists. Reusing language from similar solicitations can also save time, he said. “This is not an English class.”

Even so, the successful PIL program still left middle managers in program offices a little uncomfortable, DHS officials acknowledged – the natural result of trying something new. Key to success is having high-level commitment and support for such experiments. DHS’s Chief Procurement Officer Soraya Correa has been an outspoken advocate of experimentation and the PIL. That makes a difference.

“It all comes back to the culture of rewarding compliance, rather than creativity,” said OMB’s Blum. “We need to figure out how we build incentives to encourage the workforce to test and adopt new and better ways to do business.”

  3. Outsourcing for Innovation

Another approach is to outsource the heavy lifting for a specialized need to another, better-skilled or more experienced government entity, such as hiring GSA’s 18F to manage agile software development.

Similarly, outsourcing to GSA’s FEDSIM is a proven strategy for efficiently managing and executing complex, enterprise-scale programs with price tags approaching $1 billion or more. FEDSIM combines both acquisition and technical expertise to manage such large-scale projects, and execute quickly by leveraging government-wide acquisition vehicles such as Alliant or OASIS, which have already narrowed the field of viable competitors.

“The advantage of FEDSIM is that they have experience executing these large-scale complex IT programs — projects that they’ve done dozens of times — but that others may only face once in a decade,” says Michael McHugh, staff vice president within General Dynamics IT’s Government Wide Acquisition Contract (GWAC) Center. The company supports Alliant and OASIS among other GWACs. “They understand that these programs shouldn’t be just about price, but about identifying the superior technical solution within a predetermined, reasonable price range. There’s a difference.”

For program offices looking for guidance rather than to outsource procurement, FEDSIM is developing an “Express Platform” with pre-defined acquisition paths keyed to the need, and acquisition templates designed to streamline and accelerate processes, reduce costs and enable innovation. It’s another example of sharing best practices across government agencies.

  4. Minimizing Risk

OMB’s Blum said he doesn’t blame program managers for feeling anxious. He gets that while they like the concept of innovation, they’d rather someone else take the risk. He also believes the risks are lower than they think.

“If you’re talking about testing something new, the downside risk is much less than the upside gain,” Blum said. “Testing shouldn’t entail any more risk than a normal acquisition if you’re applying good acquisition practices — if you’re scoping it carefully, sharing information readily with potential sources so they understand your goals, and giving participants a robust debrief,” he added. Risks can be managed.

Properly defining the scope, sounding out experts, defining goals and sharing information cannot happen in a vacuum, of course. Richard Spires, former chief information officer at DHS, and now president of Learning Tree International, said he could tell early if projects were likely to succeed or fail based on the level of teamwork exhibited by stakeholders.

“If we had a solid programmatic team that worked well with the procurement organization and you could ask those probing questions, I’ll tell you what: That’s how you spawn innovation,” Spires said. “I think we need to focus more on how to build the right team with all the right stakeholders: legal, security, the programmatic folks, the IT people running the operations.”

Tony Cothron, vice president with General Dynamics IT’s Intelligence portfolio, agreed, saying it takes a combination of teamwork and experience to produce results.

“Contracting and mission need to go hand-in-hand,” Cothron said. “But in this community, mission is paramount. The things everyone should be asking are what other ways are there to get the job done? How do you create more capacity? Deliver analytics to help the mission? Improve continuity of operations? Get more for each dollar? These are hard questions, and they require imaginative solutions.”

For example, Cothron said, bundling services may help reduce costs. Likewise, contractors might accept lower prices in exchange for a longer term. “You need to develop a strategy going in that’s focused on the mission, and then set specific goals for what you want to accomplish,” he added. “There are ways to improve quality. How you contract is one of them.”

Risk of failure doesn’t have to be a disincentive to innovation. Like any risk, it can be managed – and savvy government professionals are discovering they can mitigate risks by leveraging experienced teams, sharing best practices and building on lessons learned. When they do those things, risk decreases – and the odds of success improve.

Design Thinking and DevOps Combine for Better Customer Experience

How citizens interact with government websites tells you much about how to improve – as long as you’re paying attention, said Aaron Wieczorek, digital services expert on the U.S. Digital Service’s team at the Department of Veterans Affairs.

“At VA we will literally sit down with veterans, watch them work with the website and apply for benefits,” he said. The aim is to make sure the experience is what users want and expect, he said, not “what we think they want.”

Taking copious notes on their observations, the team then sets to work on programming improvements that can be quickly put to the test. “Maybe some of the buttons were confusing or some of the way things work is confusing – so we immediately start reworking,” Wieczorek explained.

Applying a modern agile development approach means digital services can immediately put those tweaks to the test in their development environment. “If it works there, good. Then it moves to staging. If that’s acceptable, it deploys into production,” Wieczorek said.
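
A minimal sketch of that promotion flow appears below: a build moves from development to staging to production only while each environment’s checks keep passing. The environment names and checks are hypothetical stand-ins for a real CI/CD pipeline, not the VA’s actual tooling.

```python
"""Illustrative sketch only: dev -> staging -> production promotion gated on
per-environment checks. The checks here are hypothetical stand-ins."""
from typing import Callable, Dict, List

ENVIRONMENTS = ["development", "staging", "production"]


def promote(build: str, checks: Dict[str, List[Callable[[str], bool]]]) -> str:
    """Deploy to each environment in order; stop at the first failed check."""
    deployed_to = "none"
    for env in ENVIRONMENTS:
        if all(check(build) for check in checks.get(env, [])):
            deployed_to = env  # a real pipeline would trigger the deployment here
            print(f"{build}: deployed to {env}")
        else:
            print(f"{build}: checks failed in {env}, promotion stopped")
            break
    return deployed_to


if __name__ == "__main__":
    # Hypothetical checks: automated tests in dev, user acceptance in staging.
    checks = {
        "development": [lambda b: True],       # unit and integration tests pass
        "staging": [lambda b: "1010ez" in b],  # stand-in for user acceptance review
        "production": [lambda b: True],
    }
    promote("vets-gov-1010ez-form-42", checks)
```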

That process can happen in days. Vets.gov deploys software updates into production 40 times per month, Wieczorek said, and, agency-wide, software is deployed to all kinds of environments 600 times per month.

Case in point: Vets.gov’s digital Form 1010 EZ, which allows users to apply for VA healthcare online.

“We spent hundreds of hours watching veterans, and in the end we were able to totally revamp everything,” Wieczorek said. “It’s actually so easy now, you can do it all on your phone.” More than 330,000 veterans have applied that way since the digital form was introduced. “I think that’s how you scale things.”

Of course, one problem remains: Vets.gov is essentially a veteran-friendly alternative site to VA.gov, which may not be obvious to search engines or veterans looking for the best way in the door. Search Google for “VA 1010ez” and the old, mobile-unfriendly PDF form still shows as the top result. The new mobile-friendly application? It’s the third choice.

At the National Geospatial-Intelligence Agency, developers take a similar approach, but focus hard on balancing speed, quality and design for maximum results. “We believe that requirements and needs should be seen like a carton of milk: The longer they sit around, the worse they get,” said Corry Robb, product design lead in the agency’s Office of GEOINT Services. “We try to handle that need as quickly as we can and deliver that minimally viable product to the user’s hands as fast as we can.”

DevOps techniques, where development and production processes take place simultaneously, increase speed. But speed alone is not the measure of success, Robb said. “Our agency needs to focus on delivering the right thing, not just the wrong thing faster.” So in addition to development sprints, his team has added “design sprints to quickly figure out the problem-solution fit.”

Combining design thinking – which focuses on using design to solve specific user problems – with DevOps is critical to the methodology, he said. “Being hand in hand with the customer – that’s one of the core values our group has.”

“Iterative development is a proven approach,” said Dennis Gibbs, who established the agile development practice in General Dynamics Information Technology’s Intelligence Solutions Division. “Agile and DevOps techniques accelerate the speed of convergence on a better solution.  We continually incorporate feedback from the user into the solution, resulting in a better capability delivered faster to the user.”

Unpleasant Design Could Encourage Better Cyber Hygiene

Recent revelations that service members and intelligence professionals are inadvertently giving up their locations and fitness patterns via mobile apps caught federal agencies by surprise.

The surprise wasn’t that Fitbits, smartphones or workout apps try to collect information, nor that some users ignore policies reminding them to watch their privacy and location settings. The real surprise is that many IT policies aren’t doing more to help stop such inadvertent fitness data leaks.

If even fitness-conscious military and intelligence personnel are unknowingly trading security and privacy for convenience, how can IT security managers increase security awareness and compliance?

One answer: Unpleasant design.

Unpleasant design is a proven technique for using design to discourage unwanted behavior. Ever get stuck in an airport and long for a place to lie down — only to find every bench or row of seats is fitted with armrests? That’s no accident. Airports and train terminals don’t want people sleeping across benches. Or consider the decorative metalwork sometimes placed on urban windowsills or planter walls — designed expressly to keep loiterers from sitting down. It’s the same with harsh lights in suburban parking lots, which discourage people from hanging out and make it harder for criminals to lurk in the shadows.

As federal agencies and their IT security leaders investigate these inadvertent disclosures, can they employ those same concepts to encourage better cyber behavior?

Here’s how unpleasant design might apply to federally furnished Wi-Fi networks: Rather than allow access with only a password, users instead might be required to have their Internet of Things (IoT) devices pass a security screening that requires certain security settings. That screening could include ensuring location services are disabled while such devices are connected to government-provided networks.

Employees would then have to choose between the convenience of free Wi-Fi for personal devices and the risks of inadequate operations security (OPSEC) via insecure device settings.
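
A toy sketch of that screening idea follows: a device reports its settings, and Wi-Fi access is granted only when every required setting matches policy. The field names and required settings are hypothetical, not any agency’s actual network access control.

```python
"""Illustrative sketch only: device posture screening before Wi-Fi access.
Required settings and field names are hypothetical."""
from dataclasses import dataclass

# Hypothetical policy: location services off, operating system patched.
REQUIRED_SETTINGS = {"location_services": False, "os_patched": True}


@dataclass
class DevicePosture:
    device_id: str
    settings: dict


def may_join_wifi(device: DevicePosture) -> bool:
    """Grant network access only when every required setting matches policy."""
    return all(device.settings.get(k) == v for k, v in REQUIRED_SETTINGS.items())


if __name__ == "__main__":
    fitness_watch = DevicePosture("watch-17", {"location_services": True, "os_patched": True})
    phone = DevicePosture("phone-03", {"location_services": False, "os_patched": True})
    for device in (fitness_watch, phone):
        print(device.device_id, "allowed" if may_join_wifi(device) else "blocked")
```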

This, of course, only works where users have access to such networks. At facilities where personal devices must be deposited in lockers or left in cars, it won’t make a difference. But for users working (and living) on installations where personnel routinely access Wi-Fi networks, this could be highly effective.

Screening – and even blocking – certain apps or domains could be managed through a cloud access security broker, network security management software that can enforce locally set rules governing apps actively using location data or posing other security risks. Network managers could whitelist acceptable apps and settings, while blocking those deemed unacceptable. If agencies already do that for their wired networks, why not for wireless?

Inconvenient? Absolutely. That’s the point.
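
As a toy illustration of the broker-enforced whitelist described above, the sketch below sorts apps into allow, block and review decisions. The app names and rules are invented for illustration; real cloud access security brokers apply far richer policy and traffic inspection.

```python
"""Illustrative sketch only: whitelist/blocklist rule evaluation of the kind a
cloud access security broker might enforce. Names and rules are hypothetical."""
WHITELIST = {"webmail", "collaboration-suite"}
BLOCK_IF_USING_LOCATION = {"fitness-tracker", "social-photo-app"}


def decide(app: str, uses_location: bool) -> str:
    """Allow known-good apps, block risky location-aware apps, review the rest."""
    if app in WHITELIST:
        return "allow"
    if app in BLOCK_IF_USING_LOCATION and uses_location:
        return "block"
    return "review"  # unknown apps go to the security team for classification


if __name__ == "__main__":
    for app, uses_location in [("webmail", False), ("fitness-tracker", True), ("new-app", False)]:
        print(app, decide(app, uses_location))
```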

IT security staffs are constantly navigating the optimal balance between security and convenience. Perfect security is achievable only when nothing is connected to anything else. Each new connection and additional convenience introduces another dent in the network’s armor.

Employing cloud-access security as a condition of Wi-Fi network access will impinge on some conveniences. In most cases, truly determined users can work around those rules by using local cellular data access instead. In many parts of the world, however – often the places where the need for OPSEC is greatest – that access comes with a direct cash cost. When users pay for data by the megabyte, they’re much more likely to give up some convenience, check security and privacy settings, and limit their data consumption.

This too, is unpleasant design at work. Cellular network owners must balance network capacity with use. Lower-capacity networks control demand by raising prices, knowing that higher priced data discourages unbridled consumption.

Training and awareness will always be the most important factors in securing privacy and location data, because few users are willing to wade through pages-long user agreements to discover what’s hidden in the fine print and legalese they contain. More plain language and simpler settings for opting in or out of certain kinds of data sharing are needed – and app makers must recognize that failing to heed such requirements only increases the risk that government steps in with new regulations.

But training and awareness only go so far. People still click on bad links, which is why some federal agencies automatically disable them. It makes users take a closer, harder look and think twice before clicking. That too, is unpleasant design.

So is requiring users to wear a badge that doubles as a computer access card (as is the case with the Pentagon’s Common Access Card and most Personal Identity Verification cards). Yet, knowing that some will inevitably leave the cards in their computers, such systems automatically log off after only a few minutes of inactivity. It’s inconvenient, but more secure.

We know this much: Human nature is such that people will take the path of least resistance. If that means accepting security settings that aren’t safe, that’s what’s going to happen. Interrupting that convenience and turning it on its head by means of Wi-Fi security won’t stop everyone. But it might have prevented Australian undergrad Nathan Ruser – and who knows who else – from identifying the regular jogging routes of military members (among other examples) from Strava’s heat map, built from 13 trillion GPS points collected from its users.

“If soldiers use the app like normal people do,” Ruser tweeted Jan. 27, “it could be especially dangerous … I shouldn’t be able to establish any pattern of life info from this far away.”

Exactly.

CDM Program Starts to Tackle Complexities of Cloud

The Trump administration’s twin priorities for federal information technology – improved cybersecurity and modernized federal systems – impose a natural tension: How to protect a federal architecture that is rapidly changing as agencies push more and more systems into the cloud.

The Department of Homeland Security’s (DHS) Continuous Diagnostics and Mitigation (CDM) program’s early phases focus on understanding what systems are connected to federal networks and who has access to those systems. The next phases – understanding network activity and protecting federal data itself – will pose stiffer challenges for program managers, chief information security officers and systems integrators developing CDM solutions.

Figuring out how to monitor systems in the cloud – and how to examine and protect data there – is a major challenge that is still being worked out, even as more and more federal systems head that way.

“Getting that visibility into the cloud is critical,” says DHS’s CDM Program Manager Kevin Cox. Establishing a Master Device Record, which recognizes all network systems, and establishing a Master User Record, which identifies all network users, were essentially first steps, he told a gathering of government security experts at the ATARC Chief Information Security Officer Summit Jan. 25. “Where we’re headed is to expand out of the on-premise network and go out to the boundary.”
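
A minimal sketch of those two foundations might look like the following: a Master Device Record that catalogs everything seen on the network and a Master User Record that catalogs every account and its access. Field names are hypothetical and greatly simplified relative to CDM’s actual data requirements.

```python
"""Illustrative sketch only: simplified Master Device Record and Master User
Record structures. Field names are hypothetical."""
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MasterDeviceRecord:
    devices: Dict[str, dict] = field(default_factory=dict)

    def observe(self, mac: str, hostname: str, os_name: str) -> None:
        """Record (or update) every device seen on the network."""
        self.devices[mac] = {"hostname": hostname, "os": os_name}


@dataclass
class MasterUserRecord:
    users: Dict[str, dict] = field(default_factory=dict)

    def grant(self, user_id: str, privileges: List[str]) -> None:
        """Record every account and the access it has been granted."""
        self.users[user_id] = {"privileges": privileges}


if __name__ == "__main__":
    mdr, mur = MasterDeviceRecord(), MasterUserRecord()
    mdr.observe("aa:bb:cc:dd:ee:ff", "lab-laptop-07", "Windows 10")
    mur.grant("jdoe", ["email", "vpn"])
    print(len(mdr.devices), "devices known;", len(mur.users), "users known")
```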

As federal systems move into the cloud, DHS wants CDM to follow – and to have just as much visibility and understanding of that part of the federal information technology ecosystem as it has for systems in government data centers. “We need to make sure we know where that data is, and understand how it is protected,” Cox says.

Eric White, cybersecurity program director at General Dynamics Information Technology (GDIT) Health and Civilian Solutions Division, has been involved with CDM almost from its inception. “As agencies move their data and infrastructures from on premise into these virtualized cloud environments, frequently what we see is the complexity of managing IT services and capabilities increasing between on-premise legacy systems and the new cloud solutions. It creates additional challenges for cybersecurity writ large, but also specifically, CDM.”

Combining virtualized and conventional legacy systems is an integration challenge, “not just to get the two to interact effectively, but also to achieve the situational awareness you want in both environments,” White says. “That complexity is something that can impact an organization.”

The next phase of CDM starts with monitoring a network of sensors to identify “what is happening on the network,” including watching for defects between the “desired state” and the “actual state” of the device configurations that govern network health and security. In a closed, on-premise environment, it’s relatively easy to monitor all those activities, because a network manager controls all the settings.
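
The desired-state/actual-state comparison can be sketched in a few lines: a sensor reports a device’s configuration, and anything that differs from policy is flagged as a defect. The settings below are hypothetical examples, not actual CDM configuration baselines.

```python
"""Illustrative sketch only: comparing a reported 'actual state' against a
'desired state' baseline. Settings and values are hypothetical."""
DESIRED_STATE = {
    "firewall_enabled": True,
    "tls_min_version": "1.2",
    "auto_logoff_minutes": 15,
}


def find_defects(actual_state: dict) -> dict:
    """Return every setting that differs from the desired configuration."""
    return {
        key: {"desired": want, "actual": actual_state.get(key)}
        for key, want in DESIRED_STATE.items()
        if actual_state.get(key) != want
    }


if __name__ == "__main__":
    reported = {"firewall_enabled": True, "tls_min_version": "1.0", "auto_logoff_minutes": 60}
    for setting, delta in find_defects(reported).items():
        print(f"defect: {setting} desired={delta['desired']} actual={delta['actual']}")
```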

But as agencies incorporate virtualized services, such as cloud-based email or office productivity software, new complexities are introduced. Those services can incorporate their own set of security and communications standards and protocols. They may be housed in multi-tenant environments and implemented with proprietary security capabilities and tools. In some cases, these implementations may not be readily compatible with federal continuous monitoring solutions.

The Report to the President on Federal IT Modernization describes the challenges faced in trying to combine existing cyber defenses with new cloud and mobile architectures. DHS’s National Cybersecurity Protection System (NCPS) capabilities, which include both the EINSTEIN cyber sensors and a range of cyber analytic tools and protection technologies, provide value, the report said, “but are not enough to combat the full spectrum of advanced persistent threats that rapidly change the attack vectors, tactics, techniques and procedures.”

DHS began a cybersecurity architectural review of federal systems last year, building on a similar Defense Department effort by the Defense Information Systems Agency, which conducted the NIPRNET/SIPRNET Cybersecurity Architecture Review (NSCSAR) in 2016 and 2017. Like NSCSAR, the new .Gov Cybersecurity Architecture Review (.GovCAR) intends to take an adversary’s-eye-view of federal networks in order to identify and fix exploitable weaknesses in the overall architecture. In a massively federated arrangement like the federal government’s IT system, that will be a monumental effort.

Cox says the .GovCAR review will also “layer in threat intelligence, so we can evaluate the techniques and technologies we use to see how those technologies are helping us respond to the threat.”

“Ultimately, if the analysis shows our current approach is not optimal, they will look at proposing more optimal approaches,” he says. “We’re looking to be nimble with the CDM program to support that effort.”

The rush to implement CDM as a centrally funded but locally deployed system of systems means the technology varies from agency to agency and implementation to implementation. Meanwhile, agencies have also proceeded with their own modernization and consolidation efforts. So among the pressing challenges is figuring out how to get those sensors and protection technologies to look at federal networks holistically. The government’s network perimeter is no longer a contiguous line. Cloud-based systems are still part of the network, but the security architecture may be completely different, with complex encryption that presents challenges to CDM monitoring technologies almost as effectively as it blocks adversaries.

“Some of these sensors on the network don’t operate too well when they see data in the wrong format,” White explains. “If you’re encrypting data and the sensors aren’t able to decipher it, those sensors won’t return value.”

There won’t be a single answer to solving that riddle. “What you’re trying to do is gather visibility in the cloud, and this requires that you be proactive in working with your cloud service providers,” White says. “You have to understand what they provide, what you are responsible for, what you will have a view of and what you might not be able to see. You’re going to have to negotiate to be compliant with federal FISMA requirements and local security risk thresholds and governance.”

Indeed, Cox points out, “There’s a big push to move more federal data out to the cloud; we need to make sure we know where that data is, and understand how it is protected.” Lapses do occur.

“There have been cases where users have moved data out to the cloud, there was uncertainty as to who is configuring the protections on that data, whether the cloud service provider or the user, and because of that uncertainty, the data was left open for others – or adversaries – to view it,” Cox says.

Addressing that issue will be a critical piece of CDM’s Phase 3, and Phase 4 will go further in data protection, Cox says: “It gets into technologies like digital rights management, data loss prevention, architecturally looking at things like microsegmentation, to ensure that – if there is a compromise – we can keep it isolated.”

Critics have questioned the federal government’s approach, focusing on the network first rather than the data. But Cox defends the strategy: “There was such a need to get some of these foundational capabilities in place – to get the basic visibility – that we had to start with Phase 1 and Phase 2, we had to understand what the landscape looked like, what the user base looked like, so we would then know how to protect the data wherever it was.”

“Now we’re really working to get additional protections to make sure that we will have better understanding if there is an incident and we need to respond, and better yet, keep the adversary off the network completely.”

The CDM program changed its approach last year, rolling out a new acquisition vehicle dubbed CDM DEFEND, which leverages task orders under the Alliant government-wide acquisition contract (GWAC), rather than the original “peanut butter spread” concept. “Before, we had to do the full scope of all the deployments everywhere in a short window,” he says, adding that now, “We can turn new capabilities much more quickly.”

Integrators are an essential partner in all of this, White says, because they have experience with the tools, experience with multiple agencies and the technical experience, skills and knowledge to help ensure a successful deployment. “The central tenet of CDM is to standardize how vulnerabilities are managed across the federal government, how they’re prioritized and remediated, how we manage the configuration of an enterprise,” he says. “It’s important to not only have a strategy at the enterprise level, but also at the government level, and to have an understanding of the complexity beyond your local situation.”

Ultimately, a point solution is always easier than an enterprise solution, and an enterprise solution is always easier than a multi-enterprise solution. Installing cyber defense tools for an installation of 5,000 people is relatively easy – until you have to make that work with a government-wide system that aims to collect and share threat data in a standardized way, as CDM aims to do.

“You have to take a wider, broader view,” says Stan Tyliszczak, chief engineer at GDIT. “You can’t ignore the complex interfaces with other government entities because when you do, you risk opening up a whole lot of back doors into sensitive networks. It’s not that hard to protect the core of the network – the challenge is in making sure the seams are sewn shut. It’s the interfaces between the disparate systems that pose great risk. Agencies have been trying to solve this thing piece by piece, but when you do that you’re going to have cracks and gaps. And cracks and gaps lead to vulnerabilities. You need to take a holistic approach.”

Agency cyber defenders are all in. Mittal Desai, CISO at the Federal Energy Regulatory Commission (FERC), says his agency is in the process of implementing CDM Phase 2, and looks forward to the results. “We’re confident that once we implement those dashboards,” he says, “it’s going to help us reduce our meantime to detect and our meantime to respond to threats.”
