Why a Long-Term Data Strategy is Essential to Stopping Insider Threats

Harold T. Martin III was arrested in August 2016 for allegedly stealing and hoarding terabytes of highly classified documents taken from multiple intelligence agencies, where he’d worked as a contractor for more than 20 years. Around the same time, the Shadow Brokers hacker group exposed a number of sophisticated hacking tools developed by the National Security Agency (NSA) and now believed to have been downloaded by an agency insider. Those tools were later tied to a pair of major international cyberattacks in spring 2017. In a third incident from the same time frame, a retired DuPont engineer was accused by the company of stealing proprietary trade secrets and leveraging them for his independent consulting business.

In all three cases, trusted individuals with direct access to highly sensitive information violated that trust and evaded internal safeguards before making off with data that was never supposed to leave the premises.

Today, all government organizations and most large private sector firms operate insider threat programs to detect when insiders begin going rogue. The systems collect vast volumes of user data and sift through it looking for anomalies that indicate a change in behavior patterns, such as sudden copying, downloading or printing of unusual numbers or types of files. What began as a simple quest to track and correlate information is becoming a costly challenge. Tracking each user’s digital behavior means capturing everything from when and where they log on and off, to what applications they use, which data they touch and what and when they download, print or copy. Saving all that data – which can amount to terabytes daily and petabytes over time – isn’t free.
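
To make the collection problem concrete, here is a minimal sketch of what a single user-activity audit record might capture and how quickly the volumes compound. The schema, field names and traffic figures are illustrative assumptions, not any agency’s actual format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserActivityEvent:
    """One endpoint audit record; fields are illustrative, not a standard schema."""
    user_id: str
    host: str
    action: str   # e.g. "logon", "file_copy", "print", "download"
    target: str   # file path, printer queue or URL touched
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

event = UserActivityEvent("jdoe", "WKSTN-042", "file_copy", "/share/designs/spec.pdf")
print(event)

# Each record is tiny, but the math compounds: 100,000 users generating
# 10,000 events a day at ~2 KB per enriched record is 2 TB of raw audit
# data daily -- a petabyte in well under two years.
print(f"~{100_000 * 10_000 * 2_000 / 1e12:.0f} TB per day")
```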

“You need to have a long-term data strategy that optimizes that balance between access and cost,” says David Sarmanian, an enterprise architect with systems integrator General Dynamics Information Technology (GDIT), which builds and manages IT systems for a range of government customers in both the classified and unclassified worlds. “Then you need to revisit that strategy annually to make sure it’s current and up-to-date with advancing technology.”

As network speeds and encryption use have grown, insider threat protection has focused on monitoring user activity at the endpoint, said Mike Crouse, director of data and insider threat business solutions for ForcePoint in Austin, Texas.

For agencies with little sensitive data, the collection logs are relatively small. But where national security systems are involved – or data is highly sensitive because it contains personal or perhaps proprietary industry information – the volumes needing collection ramp up quickly.

Minimum standards developed for the National Insider Threat Policy call for “monitoring of user activity on U.S. government networks” to detect, monitor and analyze anomalous user behavior. How much monitoring is necessary – including which data to collect – is left to the discretion of individual communities and agencies.

The Intelligence Community Standard for collecting audit data is the most extensive, calling for the recording and collection of almost every user action: logging on and off; creating, accessing, deleting or modifying files; uploading, downloading, printing or copying files; changes to access or privilege levels; and application use. Even here, decisions must be made. Should the agency collect the actual files and the specific changes made in every instance, or just the fact that the files were accessed?

The latest best practices from the CERT division of Carnegie Mellon University’s Software Engineering Institute begin in the hiring process and continue throughout an employee’s tenure. Necessary steps include:

  • Monitoring and responding to suspicious or disruptive behavior
  • Monitoring and controlling remote access from all endpoints, including mobile devices
  • Establishing a baseline of normal network device behavior
  • Employing a log correlation engine or security information and event management (SIEM) system to log, monitor, and audit employee actions

Agencies must decide for themselves how long a period of data to maintain and review, said Michael Albrethsen, information system security analyst for the CERT division of the Carnegie Mellon Software Engineering Institute.

A baseline profile for each user tracks log-ins and log-outs to IT systems and applications, locations and devices used and specific data accessed or manipulated. Combined with job titles and functions, network and application permissions and physical access to facilities, this provides a portrait against which anomalous behavior can be detected.
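
At its simplest, detecting an anomaly against that baseline is a statistical comparison: is today’s activity far outside the user’s own history? The sketch below uses a basic z-score as a stand-in for the much richer behavioral models real insider threat tools employ; the three-standard-deviation threshold is an arbitrary assumption.

```python
import statistics

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations above
    this user's own historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against a perfectly flat history
    return (today - mean) / stdev > threshold

# A user who normally touches ~20 files a day suddenly copies 400 of them.
daily_file_touches = [18, 22, 25, 19, 21, 24, 20, 23]
print(is_anomalous(daily_file_touches, today=400))   # True -> route to an analyst
```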

But that’s really the easy part, says ForcePoint’s Crouse. “We have visibility into anything the user does on the endpoint or in the cloud,” he says. “But the ability to collect doesn’t mean we should collect it. It isn’t an issue of Big Data or small data, it’s the right data.”

The right data depends on the organization’s risk tolerance, mission, size and budget. An online app for reserving a National Park camping spot does not require the same level of scrutiny as a classified database.

In the least critical applications, log data may be retained for as little as a few months. In typical government applications, it may be a year or more; in classified environments, audit data for forensic investigations might be kept for the user’s entire career (and even beyond). The DuPont engineer’s theft wasn’t apparent until after he left the company. That’s when data really stacks up and portability issues come into play. Data formats and media used today may be obsolete in a decade or two. Information security officers can’t afford to simply stack up disks or tapes for future use.

Even when an organization is selective about the information gathered, it can quickly accumulate to the petabyte range (1 petabyte equals 1 million gigabytes) in the law enforcement, defense and intelligence communities, Albrethsen said. “That is where the state of the art is moving.”

Containing Data Volumes
Managing this takes strategy. By identifying which pieces of data must be saved from each event and how that data is stored, the volumes become more manageable.

“You’re not going to collect every bit and byte that comes from the endpoint. You don’t want to pay for data that you never use,” says Sarmanian. “And not all data being collected needs to be available for immediate access.” Records from different sources arrive in different formats, so these logs and other data must be normalized before they can be stored and used. Typically, this is done at the point of ingestion, often by a SIEM solution designed for this purpose.
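
A sketch of what normalization at ingestion means in practice: records from two differently formatted sources are mapped onto one common shape before storage. The source names, formats and field names here are invented for illustration; real SIEM pipelines do this with configurable parsers.

```python
import json
import re

def normalize(raw: str, source: str) -> dict:
    """Map raw records from different sources onto one common schema."""
    if source == "proxy_json":        # hypothetical JSON-emitting web proxy
        rec = json.loads(raw)
        return {"user": rec["user"], "time": rec["ts"], "action": rec["act"]}
    if source == "auth_syslog":       # hypothetical "<timestamp> <user> <action>" log
        ts, user, action = re.match(r"(\S+) (\S+) (\S+)", raw).groups()
        return {"user": user, "time": ts, "action": action.lower()}
    raise ValueError(f"unknown source: {source}")

# Two very different inputs, one storable, queryable shape:
print(normalize('{"user": "jdoe", "ts": "2017-06-01T09:14:02Z", "act": "download"}', "proxy_json"))
print(normalize("2017-06-01T09:14:02Z jdoe LOGON", "auth_syslog"))
```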

The next step is storage. Storage requirements determine the long-term cost of a program: while per-unit storage costs are perpetually declining, those savings are eaten up by the increasing volume of saved data.

“Storage is cheap, but you still have to buy a lot of it,” said James Cemelli, a GDIT project manager. “Most agencies now are storing data in the low petabyte range, but that will grow exponentially as long-term data collection continues.”

Fortunately, not all such data need be treated the same. Security officials should check network logs regularly for unusual activity and examine files for malware and malicious IP addresses. Network Operations teams will want to have 12 to 13 months of log data available. Tiered storage models provide lower-cost options for data only rarely needed, while still maintaining instant access to high-value current data.

Seldom-used data can be maintained in a storage-area network. Rarely used data needed only for forensic investigations following an event could be kept on less expensive media such as hard disk drives, optical discs or tape.
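
Expressed as code, such a tiering policy is little more than an age-based lookup. The cut-offs below echo the 12-to-13-month figure above but are otherwise illustrative assumptions; real retention periods come out of the cost-benefit-risk analysis discussed next.

```python
from datetime import date, timedelta

# (age cut-off, storage tier) -- example numbers only
TIERS = [
    (timedelta(days=90),   "hot: SIEM index / SSD, instant search"),
    (timedelta(days=395),  "warm: storage-area network, minutes to query"),
    (timedelta(days=3650), "cold: tape or optical archive, hours to restore"),
]

def tier_for(record_date: date, today: date) -> str:
    age = today - record_date
    for cutoff, tier in TIERS:
        if age <= cutoff:
            return tier
    return "review for deletion under retention policy"

today = date(2017, 6, 1)
print(tier_for(date(2017, 5, 1), today))    # 31 days old  -> hot
print(tier_for(date(2016, 11, 1), today))   # ~7 months    -> warm
print(tier_for(date(2012, 6, 1), today))    # ~5 years     -> cold
```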

There is a tradeoff between cost and availability. Data collection and storage policies for an insider threat program should be based on cost-benefit-risk analysis. “Creating a long-term data storage strategy helps agencies maximize the benefits of advancing technology while still being able to support their mission of defeating insider threat,” said Cemelli.

Proactive Resilience: The Future of Cybersecurity

Today’s state of the art in cybersecurity is operational resilience – an organization’s ability to continue its mission despite disruptions to its IT enterprise. Summer Fowler, technical director of Carnegie Mellon University’s CERT Division, proposes moving beyond this to proactive resilience – what she calls “prosilience.”

It is not enough to remain operational during an attack, Fowler argues. She believes the next step is to anticipate attacks and prepare for those strikes before they hit.

“Prosilience is resilience with consciousness of environment, self-awareness and the capacity to evolve,” Fowler wrote on the Insider Threat Blog, a product of Carnegie Mellon’s Software Engineering Institute. “It is not about being able to operate through disruption,” she says. “It is about anticipating disruption and adapting before it even occurs.”

Disruptions, whether malicious or merely unexpected, can flare up in an instant to take down servers. Take the recent incident involving the Federal Communications Commission (FCC): After comedian John Oliver told his viewers to register their disapproval of the agency’s move to rescind so-called net neutrality rules, FCC’s servers were overwhelmed and its website crashed. The agency called it a denial of service attack.

A prosilient architecture might have anticipated that threat and reconfigured itself to remain operational during the surge in traffic.

Prosilience aims to leverage emerging capabilities such as artificial intelligence, machine learning and self-healing to help networks adapt in near real time. “This is something I don’t think we will be ready for several years yet,” Fowler told GovTechWorks in an interview. “It’s very recent.”

Full prosilience might be as much as a decade away, but some commercial cybersecurity offerings are moving in that direction. Area 1 Security is a young cybersecurity company founded by former National Security Agency (NSA) employees that is developing technology to scan “everything interesting about the Internet.”

“That’s ambitious,” said Phil Syme, chief technology officer at Area 1 Security in Redwood City, Calif., and a former member of the National Security Agency’s engineering organization. But even incomplete information about criminal sites and malicious activity could “really put a dent in the problem” of security breaches by alerting customers to what is coming their way.

Moving Beyond Resilience
Resilience is an extension of risk management, which requires that organizations accept and prepare for risks that cannot be eliminated. A properly prepared organization should be able to continue operations with minimal disruption in the face of a security incident. Carnegie Mellon CERT’s Resilience Management Model lays out best practices for effectively managing security, business continuity and information technology operations.

Prosilience takes that concept one step further, driving organizations to become “smarter about resilience activity” and anticipate, rather than simply respond to events, Fowler said. By leveraging the distributed sensing capability of the Internet of Things, practitioners would be able to accurately spot trends and predict threats. Machine learning technologies would enable networks to respond within milliseconds, reconfiguring themselves if needed to repulse attacks and isolate threats.

At Carnegie Mellon, government, industry and academic experts are teaming up to develop the prosilience concept, beginning by establishing metrics to measure how well security budgets are used in order to develop standardized measures for return on investment. Is the budget performing according to plan? Is the plan correct for the organization? What will it take to achieve the agility needed for prosilience? “All of these roll into budget,” Fowler said.

Building out such models will take time, she added, explaining that simply establishing metrics could take up to five years.

Once an efficiency baseline can be established, developers can design and test a prosilient architecture to leverage those baseline capabilities. Fowler said a workable architecture is probably five to 10 years away.

For government agencies, achieving prosilience poses particular challenges. Many legacy systems still in use today lack the adaptability such an environment demands. Modernization is a necessary first step to making prosilience a reality.

Threat prediction
While Carnegie Mellon develops that formal prosilience architecture, operators in the trenches are working on their own proactive resilience efforts. A critical element is old-fashioned human learning, according to Dan VanBelleghem, cybersecurity program director with General Dynamics Information Technology.

“You can’t start practicing breach response after the fact,” VanBelleghem said. “You need to exercise your cyber teams on a monthly or quarterly basis to prepare, presenting them with relevant threat-based scenarios and developing playbooks so they learn how to respond when different threats arise.

“You can’t figure this out when you’re in a crisis – that’s the worst time to try to learn what to do,” he said. “It’s the same approach the Defense Department takes with its cyber teams.”

Still, the sheer volume of threat data and the speed with which attacks can mount mean humans alone cannot keep up. Machine learning, therefore, is critical to identifying and predicting threats. Area 1’s wide-scale Internet crawling identifies many sites engaged in such malicious activity as credential harvesting or hosting exploit kits. The company works with small hosting services that may not have their own Security Operations Centers (SOCs) to locate and prevent compromises.

“Traditionally you do a take-down” of compromised servers, Syme said. But when that happens, the bad guys just move to a different server. Area 1 takes a different approach: It first monitors activity to understand how the compromise works, then blocks it in such a way as to avoid tipping off the perpetrators.

Machine learning enhances that capability. Area 1 integrates with its customers’ edge equipment to automate responses, creating a powerful force multiplier, Syme said. Provided the base information is strong, it can be highly effective.

“Automation is not free,” he added. “It’s expensive and quite difficult.”

Automated tools must be customized for each enterprise; the programming is only as good as the quality and accuracy of the information it builds upon.

While developing a definitive approach to prosilience may be a long and slow process, Fowler said, that doesn’t mean government organizations or private institutions should sit back and wait.

“We always want to start where we are,” Fowler said, even if that is not where we want to be. “We can’t sit on our heels.”

Cyber Alert Overload: 3 Steps to Regaining Control

Industry’s response to the proliferation of cyber attacks is a growing array of technologies and services designed to address them. Network owners add these products as new attack vectors emerge. One result: A growing cybersecurity stack with overlapping tools that produce so many alerts it is difficult for analysts to sift the signal from the noise.

“The administrator becomes numb to the alerts,” said Curtis Dukes, executive vice president of the Center for Internet Security (CIS) and the National Security Agency’s former director of information assurance. That means significant threats can go unaddressed.

“It’s an old problem that has been dealt with periodically and that comes back again,” said John Pescatore, director of emerging security trends at the SANS Institute, who previously designed secure communications systems for the NSA and the Secret Service.

Standardizing technology, prioritizing risks and automating processes are each critical to developing the right solution for an organization.

“It’s well known that most enterprises use only 15 percent to 20 percent of the technical capability already available within their toolsets,” said Chris Barnett, chief technology officer in General Dynamics Information Technology’s (GDIT) Intelligence Solutions Division. “It takes both time and expertise to implement the more advanced capabilities found in many of today’s tools. Standardizing tools across the enterprise gives security engineers the opportunity to leverage those sophisticated capabilities and provides opportunities for process automation and event correlation.”

The problem is less false positives than repeat offenders: multiple products can each flag alerts for the same threat or incident.

Security Information and Event Management (SIEM) tools were created in the 1990s in response to information and alerts being generated by perimeter security products such as antivirus and firewalls. This helped reduce the alert volume to a dull roar, Pescatore said. But products eventually fall behind the flood of alerts produced by new security tools, and administrators are again facing alert overload.

The increasing complexity of the Defense Department’s cybersecurity toolset “is driving inefficiencies,” Col. Brian Lytell, the Defense Information Systems Agency’s (DISA) deputy director of cyber development, said in December. “I’m going to have to eliminate some things within the architecture itself to try to simplify it and reduce it down.”

DISA has been evaluating each component in its security stack to determine which it will keep and which it will phase out. The agency’s problem is not unique: IT security stacks tend to grow ad hoc, so periodically modernizing and streamlining to create a more coherent cybersecurity environment is a good idea. But unwinding a complex security solution is time-consuming and complicated. Few enterprises can match the kind of enterprise-wide reach DISA possesses, and even DISA does not control all DOD IT systems.

Chriss Knisley, executive vice president at the security analytics company Haystax Technologies, said a study for one customer found that its systems generated 35,000 alerts over a three-month period, about 390 per day. The 2016 State of Monitoring survey by Big Panda found that only 17 percent of organizations receiving 100 or more alerts a day were able to address all of them within 24 hours.

Fortunately, it is not necessary to address every alert. Many alerts are duplicates resulting from the same incident or activity. Of those that remain, some are low risk and can be assigned a lower priority in an effective risk management program.
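
Collapsing those duplicates can be as simple as fingerprinting each alert by what happened and where, regardless of which tool reported it. A toy illustration; the alert fields and values are invented:

```python
from collections import Counter

# Several products independently flagging the same two incidents.
alerts = [
    {"tool": "ids",      "rule": "CVE-2017-0144", "host": "srv-12"},
    {"tool": "firewall", "rule": "CVE-2017-0144", "host": "srv-12"},
    {"tool": "endpoint", "rule": "CVE-2017-0144", "host": "srv-12"},
    {"tool": "ids",      "rule": "beacon-to-known-bad-ip", "host": "wkstn-7"},
]

# Fingerprint on (what, where); ignore (which tool).
fingerprints = Counter((a["rule"], a["host"]) for a in alerts)
for (rule, host), count in fingerprints.items():
    print(f"{rule} on {host}: {count} alert(s) collapsed into one ticket")
# Four raw alerts become two distinct incidents in the analyst queue.
```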

SIEM tools provide a significant capability for data collection, correlation and risk management, GDIT’s Barnett said. “We’ve built applications using existing SIEM tools to automate, track and report performance-based metrics and dashboards to support risk-based prioritization,” he explained. “We’ve even been able to include logic that automatically changes colors based upon thresholds and service level agreements. Leveraging existing tools this way builds a strategic, scalable capability within the customer space that enables the agency to leverage its existing tool investments to replace time-consuming, manual methods.”

There are several other practical steps for addressing alert overload and improving overall security. “My advice to DISA is to standardize on consensus-based security benchmarks,” Dukes said. “That would go a long way.” This can help prioritize threats and alerts, automate analysis and response, and reduce the burden on personnel.

Pescatore, Dukes and Barnett outline three essential steps to address alert overload:

1 Standardize

The bible for federal cybersecurity is the National Institute of Standards and Technology’s (NIST) Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations. It contains a 233-page catalog of security controls that agencies can use. But not every agency will need every control; each agency is responsible for selecting the controls that meet its needs.

To jumpstart this task, the NSA in 2008 commissioned a list of controls that would help the DOD address “known bads” – the most pervasive and dangerous threats. The result was the 20 Critical Security Controls, developed through a consensus of industry and government experts and maintained by CIS.

This list is not a complete cybersecurity program; it reflects the 80/20 principle that a small number of actions – if they are the right actions – can address a large percentage of threats. “Organizations that apply just the first five CIS Controls can reduce their risk of cyberattack by around 85 percent,” according to CIS. “Implementing all 20 CIS Controls increases the risk reduction to around 94 percent.” Using a standardized set of controls makes it easier for security teams to focus on alerts that represent the most serious threats.

Standards-based security tools make it easier to implement third-party analytics and automation solutions. The Security Content Automation Protocol (SCAP), developed by NIST, standardizes how security information is generated, allowing automated management of alerts. When security content is standardized, redundant alerts from multiple products can be eliminated, reducing the number of alerts.

2 Prioritize

The total number of alerts and threats you address is less important than their seriousness. “You don’t have to fix everything, but you should do the business-critical things first,” Pescatore said. “Focus on the mission, not the nuisances.”

Prioritization is a force-multiplier, enabling limited manpower to focus on the things that pose the greatest threat to operations. To ensure that you are using the right controls and getting the right alerts, you need to understand your enterprise and its mission. This requires full discovery of the network and attached systems and collaboration with lines-of-business officials. These officials can identify the agency crown jewels in terms of processes and data so that alerts are aligned with high-value and high-impact resources.

When you know what is important, you can configure and tune the tools in your security stack to provide the information you need. You don’t have to ignore lower-priority events, but these can be dealt with on a different schedule or assigned for automated response.
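
One way to encode that risk-based triage: weight each alert’s severity by how critical the affected asset is to the mission, then work the queue in score order. The scales and asset names below are example assumptions standing in for the lines-of-business review described above, not fixed values.

```python
# Criticality ratings (1-10) would come from the lines-of-business review above.
ASSET_CRITICALITY = {"payroll-db": 10, "public-web": 6, "test-vm": 1}

def priority(severity: int, asset: str) -> int:
    """Tool-reported severity (1-5) weighted by mission criticality (1-10)."""
    return severity * ASSET_CRITICALITY.get(asset, 3)   # unknown assets get a middle weight

queue = [
    ("malware-beacon", 4, "test-vm"),
    ("sql-injection",  3, "payroll-db"),
    ("port-scan",      2, "public-web"),
]
for name, sev, asset in sorted(queue, key=lambda a: priority(a[1], a[2]), reverse=True):
    print(f"{priority(sev, asset):>3}  {name} @ {asset}")
# The crown-jewel database outranks a technically 'louder' event on a throwaway VM.
```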

3 Automate

Automation is not a silver bullet. Letting tools automatically respond to security and threat alerts “almost never works” because of the complexity of IT systems, Pescatore said. Security fixes, patches and configurations often must be tested before they are applied. Intrusion Prevention Systems (IPS) can automatically block suspect activity, but this is impractical in critical environments where false positives cannot be tolerated. IPSs often are used to alert rather than respond, creating another source of alerts.

But automated tools can be effective for sorting and evaluating alerts, eliminating duplicate information and identifying the most serious threats. SIEM tools are helpful here, but they work with proprietary products and protocols, Dukes said. They work through product APIs, and in a multi-vendor environment the number of SIEMs can multiply, adding complexity.

This is where SCAP comes in. Federal agencies are required to use SCAP-compliant security products when they are available. By creating an environment in which security information is standardized for automation, administrators can come closer to the “single pane of glass” that gives full visibility into the status of and activity on the network, while reducing the number of alerts.

Each of these activities supports the other two. Together they can reduce and sort through the growing volume of alerts being generated in an increasingly complex threat and security environment. The necessary humans in the loop are better informed so that they can focus on the most important tasks. “If I can do that, I’m ahead of the game,” Pescatore said. “I’m winning the battle.”

Contractors Get More Time to Meet New Security Regs

The Defense Department has given contractors two years to meet new requirements for securing sensitive DOD data on non-Federal IT systems, responding to industry concerns over moving too quickly to the new standards.

The new Defense Federal Acquisition Regulation Supplement (DFARS) rules were supposed to go into effect Dec. 31. But DoD backed off its initial plan after industry objections surfaced last fall.

The new DFARS was published in August 2015 to reflect the “urgent need to increase the cyber security requirements” on information held by contractors, said DOD spokeswoman Lt. Col. Valerie Henderson.

The new rules require contractors to comply with National Institute of Standards and Technology (NIST) Special Publication 800-171 to protect Controlled Unclassified Information (CUI).

The 77-page document establishes a streamlined set of controls drawn from the much larger Special Publication 800-53, a 462-page catalog of NIST security controls developed for federal IT systems.

“Changing NIST standards is not a simple switch for contractors,” wrote the Council of Defense and Space Industry Associations (CODSIA) in a November letter objecting to the new rules. The group also complained of vague language it wants clarified.

David M. Wennergren, executive vice president of operations and technology at the Professional Services Council, helped draft the CODSIA letter. He said industry supports the requirements but needs time to put them into effect.

Wennergren said CODSIA members don’t object to the standards, but are concerned instead about the way they were being applied. “I believe that the NIST security controls are good,” said Wennergren, a former Navy deputy CIO and Pentagon official. “They make sense.”

But it’s too soon to put the requirement into contract language, he said. “We need to be a little more thoughtful.”

New requirements for using government-approved cryptography and for two-factor authentication, for instance, “are good and noble things,” he said, but cannot be implemented immediately.

Indeed, most Federal agencies are still struggling to meet government requirements to implement multi-factor authentication.

As a result, the Pentagon pulled back on its initial requirement in late December and published interim rules extending the compliance deadline to Dec. 31, 2017, and opening the new rules for public comment.

The extension gives contractors time to make an orderly move to a more streamlined set of standards, and gives DoD time to ensure that DFARS requirements are aligned with civilian Federal Acquisition Regulations (FARs) now being developed by the Office of Management and Budget (OMB). Both will incorporate SP 800-171.

The new requirements clarify rules now in place that draw on NIST SP 800-53, which is also the basis for the Federal Risk and Authorization Management Program (FedRAMP), the program that defines security requirements for vendors providing commercial cloud services to government agencies.

Ron Ross, a NIST Fellow and computer scientist who helped create both documents, said the new guidelines address only one leg of the cybersecurity tripod – information confidentiality. Unlike the broader SP 800-53, the regulations do not deal with information integrity or availability.

“It looks a lot different from SP 800-53,” Ross said. “It’s a lot lighter.”

Although industry asked for more time to make the transition, compliance should not be difficult, Ross said. “This is not a stretch. This is pretty much best practices.”

The new guidelines aim to clarify which rules apply to contractors who use or store sensitive government information for their own use and on their own systems. The Federal Information Security Management Act (FISMA) – now the Federal Information Security Modernization Act – applies to government data stored on contractor-furnished equipment, but for government use.

“OMB has been struggling with this for a long time,” Ross said.

OMB initially ruled in 2014 that FISMA applies to all federal information, Ross said. “But that’s never been tested.”

Then in October, OMB Director Shaun Donovan revised that position with new guidance on federal information security and privacy management requirements, acknowledging that there had been multiple “incidents impacting government information that resides on or is connected to contractor systems” and that the government needed “to improve cybersecurity protections in Federal acquisitions.”

According to Ross, “That was the driver for SP 800-171.”

The National Archives and Records Administration (NARA) developed a standard defining CUI, which was to be protected at the “moderate” impact level defined in the Federal Information Processing Standards (FIPS) publication 200. NIST tailored its guidelines for contractors and published them in June 2015.

NARA will follow with final FARs rules for protecting CUI on contractor-owned systems later this year, after approval by OMB. But to date, there has been no coordination in the development of the FARs and DFARS rules, PSC’s Wennergren said.

“This is really good stuff. Moving to a common set of security controls is really powerful and helpful,” he said. But contractors want a common set of expectations for compliance, not one-off requirements for different agencies or government branches. “We need to raise the bar, and we need to raise it together.”

Government contractors are hoping that the civilian FARs and the DoD DFARS will comprise a single, coherent set of requirements for them to deal with.

Both government and industry officials believe that two years will be adequate for contractors to move their cybersecurity to the new requirements. For most, the change will not be drastic, Wennergren said. Many large organizations already are in compliance, and many smaller subcontractors will not fall under the new requirements because they do not hold government CUI on their systems.

For those companies that find they do need help, smaller subcontractors will be able to turn to their larger prime partners for mentoring and advice. Many large security vendors also provide professional services to help their clients ensure regulatory compliance. As new FARs and DFARS language emerges, those requirements will be folded into these compliance service portfolios.

Although DoD has no formal program to provide guidance, there are other government options. The NIST Cybersecurity Framework, a set of voluntary guidelines for protecting private sector critical infrastructure, provides valuable guidance, Wennergren said.

“They can also get in touch with us,” Ross said. “We are a resource for the entire nation.”

Ross said the agency takes its responsibility to provide cybersecurity guidance seriously. “We really care that it is implementable.”

Is Your Agency Falling Behind on IPv6?

The American Registry for Internet Numbers (ARIN) announced in September that it had issued its final full allotment of IPv4 addresses, making it the latest of the world’s five regional registries to exhaust its supply. This was a moment the U.S. government had been anticipating since 2005, when agencies were first told to acquire only networking equipment that was already IPv6-ready.

Under a 2010 Office of Management and Budget directive, civilian agencies were supposed to make all public-facing resources IPv6 accessible by 2012, and to begin using the new protocols on all internal networks by 2014. In fact, adoption has fallen well short. According to the National Institute of Standards and Technology, only 59 percent of 2,841 public services tested on October 11 were IPv6 enabled. The Office of Management and Budget’s Federal IT Dashboard shows compliance at less than 55 percent. Only the Social Security Administration and NASA had achieved 100 percent compliance. Scoring lowest were the Agriculture Department (4 percent), the Defense Department (10 percent) and the U.S. Agency for International Development (0 percent).

Chart: IPv6 Adoption for Public Secondary Domains

Slow adoption is a result of several factors:

  • The IPv4 infrastructure continues to work, so in a constrained fiscal environment it’s easy to put off such upgrades.
  • Millions of still-unused addresses are in the hands of large enterprises, including Federal agencies, so there’s no rush to add addresses. NASA, for example, estimated in 2010 that it would never need more than 10 percent of its allotted addresses.

Experts say networks and enterprises unprepared to use the new protocols could find their networks becoming less efficient, locked into technical workarounds and unable to take advantage of the security, automation, and scalability benefits of IPv6.

Emerging mobile and cloud computing technologies – and the advent of the Internet of Things – are fueling growth in demand for addresses that would not be possible had IPv6 not already made inroads across the Internet. And experts credit the government with helping make that possible by pushing for early adoption, even if it didn’t fully meet its goals.

Doug Montgomery, manager of Internet and scalable systems research at NIST, says the government has been a clear leader in adoption.

“I am unaware of any other enterprise deployment as large as the government’s to date,” Montgomery said. “There are few if any enterprises that have better adoption stories than the government.”

Twin Goals

The government’s transition to IPv6 has two objectives:

  • Ensure that everyone has access to the government’s resources as use of IPv6 traffic outside government increases
  • Spur adoption of the new protocols in the private sector to support the Internet as an integral driver of both the national economy and national security.

In addition, delaying adoption has a cost. Even though IPv4 still works, the engineering effort necessary to keep IPv4 working uses up resources that could be better spent on learning the new protocols, Montgomery said.

“The Internet can’t continue to grow without IPv6,” Montgomery said. “We need to stop rearranging the deck chairs.”

Network Address Translation is one of the hidden taxes on networks that don’t upgrade. According to a 2010 report by the Federal Communications Commission: “Some of these fixes break end-to-end connectivity, impairing innovation and hampering applications, degrading network performance, and resulting in an inferior version of the Internet.” The report continued: “These kludges require capital investment and ongoing operational costs by network service providers, diverting investment from other business objectives.”

So even with its transition incomplete, the 2005 mandate made a huge difference. It “was an incredible stick to compel vendors to mainline their IPv6 plans” in networking and end-user hardware and software, said Tom Coffeen, chief IPv6 evangelist for networking technology company Infoblox.

James Lyne, head of security research at the data security company Sophos, agreed. “The government drew a line in the sand,” he said. “The job is far from done, but they have had more of an impact than most organizations.”

Sorting Out Differences

The best known difference between IPv4 and IPv6 is the number of possible addresses: 4.3 billion addresses exist under IPv4, while IPv6, with a 128-bit address space, supports about 340 undecillion – 2^128, or roughly 3.4 x 10^38 (the short sketch after the list below does the arithmetic). The new protocol also includes other significant improvements:

  • While IPv4 has little built-in security, IPsec is integral to IPv6. IPsec supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection
  • IPv4 requires Network Address Translation (NAT) as a workaround to stretch IPv4 addresses. That’s not necessary for IPv6 addresses, all of which will be publicly accessible
  • Quality of service. IPv6 distinguishes delay-sensitive packets, while IPv4 cannot
  • There are no unnecessary fields in the IPv6 address header
  • Stateless addressing auto configuration helps automate address assignment.
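
For a feel of the scale involved, a standard library can do the arithmetic directly. A minimal sketch in Python, using the reserved documentation prefix 2001:db8::/64:

```python
import ipaddress

print(2 ** 32)    # 4,294,967,296 -- the entire IPv4 address space
print(2 ** 128)   # ~3.4 x 10**38 -- the entire IPv6 address space

# A single standard /64 subnet already dwarfs all of IPv4:
subnet = ipaddress.ip_network("2001:db8::/64")
print(subnet.num_addresses)   # 18,446,744,073,709,551,616 addresses in one subnet
```
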
Tips for Transitioning to IPv6
  • Don’t slight training. This is a technical changeover. Your network team needs to become knowledgeable and familiar with IPv6
  • Leverage available tools. The government has had an IPv6 testing program since 2008; ask vendors for test results and use products approved by accredited labs under the testing program
  • Question vendors about their roadmap for supporting IPv6 security. Even if parity with IPv4 performance cannot yet be demonstrated, vendors should have a commitment to the transition
  • Begin renegotiating service contracts to ensure they provide support for IPv6 – not only for Internet access, but for all services, including DNS, email, cloud, and Web content delivery
  • Engage all segments of the enterprise in the transition, including application and content owners.

So what’s holding back a faster migration to the new protocols? Issues with legacy applications, service contracts, and security all play a role, industry experts say. But mostly it’s a matter of network managers preferring the devil they know.

“IPv4 sucks,” Lyne said. “But we know how it sucks and industry has gotten good at fixing it.”

For many network administrators, switching to IPv6 means going back to network school. It’s extra work for people who already have too much to do.

Legacy applications are another hindrance. Enabling a mission-critical app to work with IPv6 is not a trivial task, Coffeen said. Multiply that effort by thousands of apps, and it’s easy to see why administrators put off such upgrades for later. Application owners certainly aren’t itching to change things that are working.

“It ends up being less of a technology challenge than an organizational challenge,” Coffeen said.

The fact that the technology is still evolving means there’s no reason to worry if your agency didn’t get on the IPv6 bandwagon early. “Don’t panic,” Coffeen said. With the technology still maturing, he notes, “the late adopters have an advantage here.”

But that advantage won’t endure forever, experts say. The time to move forward is now.

William Jackson has covered virtually every technology sector. He has focused on government telecommunications, networking, and cybersecurity for more than 20 years.
