IoT Security Risks Begin With Supply Chains

The explosion of network-enabled devices embodied in the Internet of Things (IoT) promises amazing advances in convenience, efficiency and even security. But every promising new device generates new seams and potential opportunities for hackers to worm their way into networks and exploit network weaknesses.

Figuring out which IoT devices are safe – and which aren’t – and how to safely leverage the promise of that technology will require looking beyond traditional supply chain and organizational boundaries and developing new ways to approve, monitor and review products that until recently weren’t even on the radar of information security officials.

Dean Souleles
Chief technology officer at the National Counterintelligence & Security Center

Conventional product definitions have fundamentally changed, said Dean Souleles, chief technology officer at the National Counterintelligence & Security Center, part of the Office of the Director of National Intelligence. To illustrate his point, he held up a light bulb during the recent Institute for Critical Infrastructure Technology Forum, noting that looks can be deceiving.

“This is not a light bulb,” he said. “It does produce light – it has an LED in it. But what controls the LED is a microcircuit at the base.” That microcircuit is controlled by software code that can be accessed and executed, via a local WiFi network, by another device. In the wrong hands – and without the proper controls – that light bulb becomes a medium through which bad actors can access and exploit a network and any system or device connected to it.

“When my light bulb may be listening to me,” Souleles says, “we have a problem.”

What’s On Your Network?
Indeed, the whole government has a problem. Asset management software deployed across 70 federal agencies under the Department of Homeland Security’s Continuous Diagnostics and Mitigation program has uncovered the surprising extent of unknown software and systems connected to government networks: At least 44 percent more assets are on agency networks than were previously known, according to CDM program documents, and in some agencies, that number exceeded 200 percent.
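
The gap CDM uncovered is, at bottom, a set difference between what a scan finds and what the inventory of record says should be there. A minimal sketch of that calculation in Python, with hypothetical host names standing in for real discovery and inventory data:

```python
# Minimal sketch: compute how many more assets a scan finds than the
# inventory of record lists. Host names are hypothetical placeholders.

def unknown_asset_ratio(discovered: set, inventory: set) -> float:
    """Percentage of additional assets found beyond the known inventory."""
    return 100.0 * len(discovered - inventory) / len(inventory)

inventory = {"ws-001", "ws-002", "printer-01", "srv-mail"}      # assets of record
discovered = inventory | {"smart-bulb-7f", "ip-camera-02"}      # scan finds two more

print(f"{unknown_asset_ratio(discovered, inventory):.0f}% more assets than previously known")
# -> 50% more assets than previously known
```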

IoT will only make such problems worse, because IT leaders rarely have a comprehensive view of the many products acquired and installed in their buildings and campuses. It’s hard enough to keep track of computers, laptops, printers and phones. Reining in facilities managers who may not be fully aware of cyber concerns or properly equipped to make informed decisions is a whole different story.

“You have to have a different way of thinking about the internet of things and security,” Souleles said. “When you think about infrastructure today, you have to think beyond your servers and devices.”

Securing IoT is now a supply chain risk management issue, greatly expanding the definition of what constitutes the IT supply chain. “That risk management has got to [focus on] software risk management,” Souleles said. “You have to begin with the fact that software now includes your light bulbs. It’s a different way of thinking than we have had before. And things are moving so quickly that we really have to stay on top of this.”

The Intelligence Community and the technology companies that support it may be best equipped to define the necessary best practices and procedures for ensuring a safe and secure supply chain. Chris Turner, solutions architect at General Dynamics Information Technology, said the supply chain attack surface is vast – and that the risks among technology products can be just as large if left unattended.

Indeed, Jon Boyens, a senior advisor for information security at the National Institute of Standards and Technology (NIST), says as much as 80 percent of cyber breaches originate in the supply chain, citing a 2015 study by the SANS Institute.

Boyens cites two notorious examples of supply chain failures: In the first, supplier-provided keyboard software gave hackers access to the personal data of 600,000 Samsung Galaxy smartphones; in the second, supplier-provided advertising software let attackers snoop on browser traffic on Lenovo computers. Counterfeit products, devices compromised in transit and component-level vulnerabilities are other supply chain risks that can lead to devastating consequences.

Maintaining sufficient controls to minimize risk and maximize transparency requires close relationships with vendors, clear understanding of the risks involved and strict adherence to procedure. Organizations should be able to identify their lower-tier sub-contractors as well as the extent to which their suppliers retain access to internal systems and technology.

For companies that routinely support highly classified programs, this kind of diligence is routine. But many organizations lack the experience to be so well equipped, says GDIT’s Turner, whose company considers supply chain risk management a core competency.

“How do you protect the supply chain when everything comes from overseas?” Turner asks rhetorically. “You can’t know everything. But you can minimize risk. Experienced government contractors know how to do this: We know how to watch every single component.”

That’s not just hyperbole. For the most classified military systems, source materials may be tracked all the way back to where ore was mined from the Earth. Technology components must be understood in exhaustive detail, including subcomponents, source code and embedded firmware. Minimizing the number of suppliers involved and the number of times products change hands is one way to reduce risk, he said.

Certified Cyber Safe
Making it easier to secure that supply chain and the software that drives IoT-connected devices is what’s behind a two-year-old standards effort at Underwriters Laboratories (UL), the independent safety and testing organization. UL has worked with the American National Standards Institute (ANSI) and the Standards Council of Canada (SCC) to develop a series of security standards that can be applied to IoT devices from lights, sensors and medical devices to access and industrial controls.

The first of these standards will be published in July 2017, and a few products have already been tested against draft versions of the initial standard, UL 2900-1, Software Cybersecurity for Network-Connectable Products, according to Ken Modeste, leader of cybersecurity services at UL. The standard covers access controls, authentication, encryption and remote communication, and requires penetration testing, malware testing and code analysis.

Now it’s up to users, manufacturers and regulators – the market – to either buy into the UL standard or develop an alternative.

Mike Buchwald, a career attorney in the Department of Justice’s National Security Division, believes the federal government can help drive that process. “As we look to connected devices, the government can have a lot of say in how those devices should be secured,” he said at the ICIT Forum. As one of the world’s biggest consumers, he argues, the government should leverage the power of its purse “to change the market place to get people to think about security.”

Whether the government has that kind of market power – or needs to add legislative or regulatory muscle to the process – is still unclear. The United States may be the world’s single largest buyer of nuclear-powered submarines or aircraft carriers, but its consumption of commercial technology is small compared to global markets, especially given the global scale of IoT connections.

Steven Walker, acting director of the Defense Advanced Research Projects Agency (DARPA), believes the government’s role could be to encourage industry.

“What if a company that produces a software product receives something equivalent to a Good Housekeeping Seal of Approval for producing a secure product?” he said in June at the AFCEA Defensive Cyber Operations Conference in Baltimore. “What if customers were made aware of unsecure products and the companies that made them? I’m pretty sure customers would buy the more secure products – in today’s world especially.”

How the government might get involved is unclear, but there are already proven models in place in which federal agencies took an active role in encouraging industry standards for measurement and performance of consumer products.

“Philosophically, I’m opposed to government overregulating any industry,” Walker said. “In my view, overregulation stifles innovation – and invention. But the government does impose some regulation to keep Americans safe: Think of crash tests for automobiles. So should the government think about the equivalent of a crash test for cyber before a product – software or hardware – is put out on the Internet? I don’t know. I’m just asking the question.”

Among those asking the same question is Rep. Jim Langevin (D-R.I.), an early and outspoken proponent of cybersecurity legislation. “We need to ensure we approach the security of the Internet of Things with the techniques that have been successful with the smart phone and desktop computers: The policies of automatic patching, authentication and encryption that have worked in those domains need to be extended to all devices that are connected to the Internet,” he told the ICIT Forum. “I believe the government can act as a convener to work with private industry in this space.”

What might that look like? “Standard labeling for connected devices: Something akin to a nutritional label, if you will, for IoT,” he said.

Langevin agrees that “the pull of federal procurement dollars” can be an incentive in some cases to get the private sector to buy in to that approach.

But the key to rapid advancement in this area will be getting the public and private sectors to work together and buy into the idea that security is not the sole purview of a manufacturer or a customer or someone in IT, but rather everyone involved in the entire process, from product design and manufacture through software development, supply chain management and long-term system maintenance.

As IoT expands the overall attack surface, it’s up to everyone to manage the risks.

“Pure technological solutions will never achieve impenetrable security,” says Langevin. “It’s just not possible. And pure policy solutions can never keep up with technology.”

Secure, High-Quality Software Doesn’t Happen By Accident

Time, cost and security are all critical factors in developing new software. Late delivery can undermine the mission; rising costs can jeopardize programs; security breaches and system failures can disrupt entire institutions. Yet systematically reviewing system software for quality and security is far from routine.

“People get in a rush to get things built,” says Bill Curtis, founding executive director of Consortium of IT Software Quality (CISQ), where he leads the development of automatable standards that measure software size and quality. “They’re either given schedules they can’t meet or the business is running around saying … ‘The cost of the damage on an outage or a breach is less than what we’ll lose if we don’t get this thing out to market.’”

In the government context, pressures can arise from politics and public attention, as well as contract and schedule.

It shouldn’t take “a nine-digit defect – a defect that goes over 100 million bucks – to change attitudes,” Curtis says. But sometimes that’s what it takes.

Software defects and vulnerabilities come in many forms. The Common Weakness Enumeration (CWE) lists more than 700 types of security weaknesses organized into categories such as “Insecure Interaction Between Components” or “Risky Resource Management.” CWE’s list draws on contributions from participants ranging from Apple and IBM to the National Security Agency and the National Institute of Standards and Technology.

By defining these weaknesses, CWE – and its sponsor, the Department of Homeland Security’s Office of Cybersecurity and Communications – seek to raise awareness about bad software practices by:

  • Defining common language for describing software security weaknesses in architecture, design, or code
  • Developing a standard measuring stick for software security tools targeting such weaknesses
  • Providing a common baseline for identifying, mitigating and preventing weaknesses

Software weaknesses can include inappropriate linkages, defunct code that remains in place (at the risk of being accidentally reactivated later) and avoidable flaws – known vulnerabilities that nonetheless find their way into source code.

“We’ve known about SQL injections [as a security flaw] since the 1990s,” Curtis says. “So why are we still seeing them? It’s because people are in a rush. They don’t know. They weren’t trained.”
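
The flaw Curtis cites is cataloged as CWE-89. A minimal Python illustration of the difference between building SQL from user input and using a parameterized query (the table and data here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # attacker-supplied value

# Vulnerable: concatenating input into the SQL string (CWE-89).
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, not as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('admin',)] -- the injection matched every row
print(safe)        # []           -- no user is literally named "' OR '1'='1"
```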

Educated Approach
Whether students today get enough rigor and process drilled into them while they’re learning computer languages and logic is open to debate. Curtis, for one, favors a more rigorous engineering approach, worrying that too many self-taught programmers lack critical underlying skills. Indeed, a 2016 survey of 56,033 developers conducted by Stack Overflow, a global online programmer community, found 13 percent claimed to be entirely self-taught. Even among the 62.5 percent who had studied computer science and earned a bachelor’s or master’s degree, a majority said some portion of their training was self-taught. The result is that underlying elements of structure, discipline or understanding can be lost, increasing the risk of problems.

Having consistent, reliable processes and tools for examining and ensuring software quality could make a big difference.

Automated software developed to identify weak or risky architecture and code can help overcome that, says Curtis, a 38-year veteran of software engineering and development in industry and academia. Through a combination of static and dynamic reviews, developers can obtain a sense of the overall quality of their code and alerts about potential system weaknesses and vulnerabilities. The lower the score, the riskier the software.

CISQ is not a panacea, but it can screen for 22 of the 25 Most Dangerous Software Errors as defined by CWE and the SANS Institute, identifying both code-level and architectural-level errors.

By examining system architecture, Curtis says, CISQ delivers a comprehensive review. “We’ve got to be able to do system-level analysis,” Curtis says. “It’s not enough just to find code-level bugs or code-unit-level bugs. We’ve got to find the architectural issues, where somebody comes in through the user interface and slips all the way around the data access or authentication routines. And to do that you have to be able to analyze the overall stack.”

Building on ISO/IEC 25010, an international standard for stating and evaluating software quality requirements, CISQ establishes a process for measuring software quality against four sets of characteristics: security, reliability, performance efficiency and maintainability. These are “nonfunctional requirements,” in that they are peripheral to the actual mission of any given system, yet they are also the source of many of the most damaging security breaches and system failures.
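
In practice, that kind of measurement rolls individual weakness findings up into a score for each characteristic. The sketch below illustrates the idea only – it is not CISQ’s actual scoring formula, and the weakness-to-characteristic mapping and weights are hypothetical:

```python
# Illustrative only: roll weakness findings up into per-characteristic scores.
CHARACTERISTICS = ("security", "reliability", "performance efficiency", "maintainability")

FINDINGS = [
    {"cwe": "CWE-89",  "characteristic": "security",               "severity": 3},  # SQL injection
    {"cwe": "CWE-561", "characteristic": "maintainability",        "severity": 1},  # dead code
    {"cwe": "CWE-400", "characteristic": "performance efficiency", "severity": 2},  # resource exhaustion
]

def characteristic_scores(findings, baseline=100):
    """Start each characteristic at a perfect score and subtract weighted findings.
    Lower scores indicate riskier software."""
    scores = {c: baseline for c in CHARACTERISTICS}
    for f in findings:
        scores[f["characteristic"]] -= 10 * f["severity"]
    return scores

print(characteristic_scores(FINDINGS))
# {'security': 70, 'reliability': 100, 'performance efficiency': 80, 'maintainability': 90}
```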

Consider, for example, a failed 2012 software update to servers belonging to Knight Capital Group, a Jersey City, N.J., financial services firm. The update was supposed to replace old code that had remained in the system – unused – for eight years. The new code, which updated and repurposed a “flag” from the old code, was tested and proven to work correctly and reliably. Then the trouble started.

According to a Securities and Exchange Commission filing, a Knight technician copied the new code to only seven of the eight required servers. No one realized the old code had not been removed from the eighth server, nor that the new code had not been added to it. While the seven updated servers operated correctly, the repurposed flag caused the eighth server to execute the outdated, defective software, which instantly generated millions of “buy” orders totaling 397 million shares in just 45 minutes. Total loss as a result: $460 million.

“A disciplined software configuration management approach would have stopped that failed deployment on two fronts,” said Andy Ma, senior software architect with General Dynamics Information Technology. “Disciplined configuration management means making sure dead code isn’t waiting in hiding to be turned on by surprise, and that strong control mechanisms are in place to ensure that updates are applied to all servers, not just some. That kind of discipline has to be instilled throughout the IT organization. It’s got to be part of the culture.”

Indeed, had the dead code been deleted, the entire episode would never have happened, Curtis says. Yet it is still common to find dead code hidden in system software. Indeed, as systems grow in complexity, such events could become more frequent. Large systems today utilize three to six computer languages and have constant interaction between different system components.

“We’re past the point where a single person can understand these large complex systems,” he says. “Even a team cannot understand the whole thing.”

As with other challenges where large data sets are beyond human comprehension, automation promises better performance than humans can muster. “Automating the deployment process would have avoided the problem Knight had – if they had configured their tools to update all eight servers,” said GDIT’s Ma. “Automated tools also can perform increasingly sophisticated code analysis to detect flaws. But they’re only as good as the people who use them. You have to spend the time and effort to set them up correctly.”
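
Ma’s point about automated deployment lends itself to a simple safeguard: confirm that every target server reports the fingerprint of the new release before anything goes live. A minimal sketch, with hypothetical server names and hashes:

```python
import hashlib

def release_fingerprint(path: str) -> str:
    """SHA-256 hash of the release artifact being rolled out."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_rollout(expected: str, reported: dict) -> list:
    """Return the servers whose deployed fingerprint does not match the release.
    `reported` maps server name -> fingerprint that server reports back."""
    return [host for host, digest in reported.items() if digest != expected]

# Hypothetical rollout to eight servers; server-8 never received the update.
expected = "a3f1..."                                        # e.g. release_fingerprint("build.tar.gz")
reported = {f"server-{i}": "a3f1..." for i in range(1, 8)}  # servers 1-7 report the new build
reported["server-8"] = "09bc..."                            # still running the old code

stragglers = verify_rollout(expected, reported)
if stragglers:
    raise SystemExit(f"Halt deployment: {stragglers} not updated")
```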

Contracts and Requirements
For acquisition professionals, such tools could be valuable in measuring quality performance. Contracts can be written to incorporate such measures, with contractors reporting on quality reviews on an ongoing basis. Indeed, the process lends itself to agile development, says Curtis, who recommends using the tools at least once every sprint. That way, risks are flagged and can be fixed immediately. “Some folks do it every week,” he says.

J. Brian Hall, principal director, Developmental Test and Evaluation in the Office of the Secretary of Defense, said at a conference in March that the concept of adding a security quality review early in the development process is still a relatively new idea. But Pentagon operational test and evaluation officials have determined systems to be unsurvivable in the past – specifically because of cyber vulnerabilities discovered during operational testing. So establishing routine testing earlier in the process is essential.

The Joint Staff updated its system survivability performance parameters earlier this year to include a cybersecurity component, Hall said in March. “This constitutes the first real cybersecurity requirements for major defense programs,” he explained. “Those requirements ultimately need to translate into contract specifications so cybersecurity can be engineered in from program inception.”

Building cyber into the requirements process is important because requirements drive funding, Hall said. If testing for cybersecurity is to be funded, it must be reflected in requirements.

The Defense Department will update its current guidance on cyber testing in the development, test and evaluation environment by year’s end, he said.

All this follows the November 2016 publication of Special Publication 800-160, a NIST/ISO standard that is “the playbook for how to integrate security into the systems engineering process,” according to one of its principal authors, Ron Ross, a senior fellow at NIST. That standard covers all aspects of systems development, requirements and life-cycle management.

Securing Health Data Means Going Well Beyond HIPAA

A two-decade-old law designed to protect patients’ privacy may be preventing health care organizations from doing more to protect vulnerable health care data from theft or abuse.

The Health Insurance Portability and Accountability Act (HIPAA) established strict rules for how health data can be stored and shared. But in making health care providers vigilant about privacy protection, HIPAA may inadvertently distract providers from focusing on something just as important: overall information security.

“Unfortunately I think HIPAA has focused healthcare organizations too much on data privacy and not enough on data integrity, data loss, disrupted operations and patient safety. You can get your identity back at some point, but not your life,” warns Denise Anderson, president of the National Health Information Sharing and Analysis Center (NH-ISAC). “Many of the attacks we are seeing, such as WannaCry, are disruptive attacks and are not data theft attacks. Organizations should be driven to focus on enterprise risk management and it should come from the Board and CEO level on down.”

“Cybersecurity in Health Care crosses a wide spectrum of issues,” adds Sallie Sweeney, principal cyber solutions architect in the Health and Civilian Solutions Division of systems integrator General Dynamics Information Technology (GDIT). “It’s not just protecting patient data. It includes protecting their financial data and making sure the medical equipment works the way it’s supposed to, when it’s supposed to, without potential for error. Think about the consequences of a Denial of Service attack aimed at the systems monitoring patient vital signs in the ICU. You have to look at the whole picture.”

Many public health agencies and smaller businesses are under-resourced or under-skilled in cyber defense, leaving them reliant on products and service solutions they may not fully understand themselves.

NH-ISAC members have access to support and services, such as Cyber-Fit, a non-profit set of services ranging from simple information services to benchmarking assessments of organizations’ cyber health and security posture; shared risk assessments; and cyber services, including penetration testing, vulnerability management and incident response.

Maggie Amato
HHS

Maggie Amato, deputy director of security, design and innovation at the Department of Health and Human Services (HHS), believes increased sharing is at least part of the answer.

“We have to build alliances of threat-sharing capabilities,” Amato says. “The speed, ferocity and depth of attack cannot be dealt with by individual agencies alone.”

Indeed, improved sharing of information on threats, weaknesses and mitigations is one of the key recommendations of the June 2017 Health Care Industry Cybersecurity Task Force report.

But getting companies to share threat data is a challenge. Built-in financial incentives drive some firms to minimize publicity and the potential risk it might pose to their businesses. But Anderson says she can see progress.

“I think the public and private sector came together well during the WannaCry incident,” Amato says. Though gaps clearly still exist, the swift response was encouraging.

Anderson’s NH-ISAC could play a key role in improving that response further and narrowing the gaps. NH-ISAC is a non-profit, member-driven organization linking private and public hospitals, providers, health insurance firms, pharmaceutical and biotech manufacturers, laboratories, medical device manufacturers, medical schools and others.

The group is one of 21 non-profit information sharing centers designed to help protect specific industries against cyber threats.

“I think within the NH-ISAC the membership did a phenomenal job of sharing indicators, snort signatures, hashes, mitigation strategies, malware analysis, patching issues and other best practice information. We tried as well to get the information out broadly beyond our membership,” she says. “NH-ISAC is a stellar example of how a community can pull together during an incident to help each other out.”

What HIPAA’s Security Rule Requires

The Office of the National Coordinator for Health Information Technology, which is responsible for overseeing the standards and rules applying to electronic health records, writes in its Guide to Security of Electronic Health Information that the HIPAA Security Rule requires:

  • Administrative actions, policies and procedures to prevent, detect, contain and correct security violations and ensure development, implementation and maintenance of security measures to protect electronic personal health information (ePHI).
  • Physical measures, policies and procedures to protect electronic information systems and related buildings and equipment from natural and environmental hazards and unauthorized intrusion to protect and control access to ePHI.
  • Reasonable and appropriate policies and procedures to comply with government requirements, including requirements for contracting with IT services providers, for maintaining data over time and for periodically reviewing policies and procedures.

Anderson has a long way to go, however. While health care represents one of the largest sectors, the NH-ISAC has garnered only about 200 members since its founding in 2010. By contrast, the financial services ISAC has more than 6,000 members.

Anderson joined the health ISAC from the finance sector ISAC in part to help drum up participation.

“One of the greatest challenges for the NH-ISAC and all ISACs is the lack of awareness amongst the critical infrastructure owners and operators – particularly the smaller owners and operators – that the ISACs exist and are a valuable tool,” Anderson told the House Energy and Commerce subcommittee on oversight and investigations in April. “Numerous incidents have shown that effective information sharing amongst robust, trusted networks of members works in combating cyber threats.” She suggests tax breaks for new members might help encourage wider participation.

“Protecting highly sensitive information – whether it’s patient records, financial data or sensitive government information – is something that has to be baked into every information system,” said GDIT’s Sweeney. “Too often, we have a health care IT system where security is an afterthought – and trying to bolt on the kinds of protections we need becomes painful and expensive.” Sweeney, whose background includes securing large-scale health care information databases and systems for government clients, concluded: “Health care systems should be no less secure than financial systems in banks.”

Another new tool for promoting intelligence and threat sharing among health providers is the Healthcare Cybersecurity and Communications Integration Center (HCCIC), launched by HHS in May.

Modeled after the Department of Homeland Security’s National Cybersecurity and Communications Integration Center (NCCIC), the new HCCIC (pronounced “Aych-Kick”) has been criticized as potentially duplicating the NCCIC and other organizations. But Anderson defends the new center as a valuable community tool for funneling information from the many fragmented parts of HHS into a central health care information security clearinghouse.

She concedes, however, that HCCIC will have to prove itself.

“One potential downside of pulling together HHS components into one floor could be a slowdown of sharing from the private sector as ‘government’ is involved,” she wrote in a written follow-up to questions posed by Rep. Tim Murphy (R-PA). “Another downside could be that even though all of the components are brought together, sharing could still take place in a fragmented, unproductive manner. There could be risk of inadvertent disclosure or risk of post-hoc regulatory penalties for a reported breach. Finally, if efforts are not effectively differentiated from the NCCIC environment, duplication of effort and additional costs for staffing and resources can result.”

HCCIC, in fact, played a key role in the government’s response to May’s WannaCry ransomware attacks. “HCCIC analysts provided early warning of the potential impact of the attack and HHS responded by putting the secretary’s operations center on alert,” testified Leo Scanlon, deputy chief information security officer at HHS, before a House Energy and Commerce subcommittee June 8. “This was the first time that a cyber-attack was the focus of such a mobilization,” he said. HCCIC was able to provide “real-time cyber situation awareness, best practices guidance and coordination” with the NCCIC.

Anderson sees further upside potential. Based on her prior experience with the financial services ISAC, “the HCCIC should be successful if carried out as envisioned and if it is voluntary and non-regulatory in nature,” she told GovTechWorks. “This will result in improved dissemination within the sector. In addition, by bringing all of the components of HHS under one roof, increased situational awareness and cyber security efficiencies will result.”

Automated License Plate Readers on the U.S. Border


When U.S. Border Patrol agents stopped a vehicle at the border checkpoint in Douglas, Ariz., it wasn’t a lucky break. They had been on the lookout for the driver’s vehicle and it had been spotted by an automated license plate reader (ALPR). The driver, attempting to escape into Mexico, was arrested on suspicion of murder.

All along U.S. borders, ALPRs changed the face and pace of security and enforcement – although not in ways most people might expect.

While ALPRs may occasionally catch individuals with a criminal record trying to come into the United States, they play a much greater role in stopping criminals trying to leave. The systems have driven a dramatic drop in vehicle thefts in U.S. border towns. They’ve also been instrumental in finding missing persons and stopping contraband.

“Recognition technology has become very powerful,” says Mark Prestoy, lead systems engineer in General Dynamics Information Technology’s Video Surveillance Lab. “Capturing an image – whether a license plate or something more complex, such as a face – can be successful when you have a well-placed sensor, network connection and video analytics. Once you have the image, you can process it to enhance and extract information. License plate recognition is similar to optical character recognition used in a printed document.”
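
Prestoy’s comparison to optical character recognition can be illustrated with off-the-shelf tools. A minimal sketch using OpenCV and Tesseract – the file name is a placeholder, and production ALPR systems rely on far more sophisticated detection and recognition models:

```python
import cv2
import pytesseract

# Load a cropped plate image (placeholder file name), convert to grayscale
# and threshold it to boost character contrast before OCR.
image = cv2.imread("plate_crop.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Run Tesseract in single-line mode, restricted to plate-style characters.
text = pytesseract.image_to_string(
    binary,
    config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
)
print(text.strip())
```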

“It’s an enforcement tool,” says Efrain Perez, acting director of field operations and readiness for Customs and Border Protection (CBP). “They help us identify high-risk vehicles.”

The agency has about 500 ALPR systems deployed at 91 locations to process passenger vehicles coming into the United States. It also has ALPRs on all 110 outbound lanes to Mexico, which were added in 2009 after the U.S. committed to trying to interrupt the flow of cash and weapons from the U.S. into Mexico. CBP is slowly adding the devices to outbound lanes on the Canadian border, as well.

For ALPRs surveilling inbound traffic, the primary purpose is to eliminate the need for border officers to manually enter license plate numbers, allowing them to maintain a steady gaze on travelers so they can spot suspicious behavior and maintain situational awareness. ALPRs trained on outbound traffic are used to identify high-risk travelers, help track the movement of stolen vehicles and support other U.S. law enforcement agencies through the National Law Enforcement Telecommunications System.

Along the southern U.S. border, most ALPRs are fixed units at ports of entry and cover both inbound and outbound vehicles. Along the Canadian border, most ALPRs are handheld units. CBP officials hope to install fixed readers at northern ports of entry in the future. “The hand-held readers are not as robust,” points out Rose Marie Davis, acquisition program manager of the Land Border Integration Program (LBIP).

The first generation of readers was deployed in the 1997-98 timeframe. Today, LBIP incorporates the technology, experience and lessons learned from that initial effort. Another effort, under the Western Hemisphere Travel Initiative in 2008 and 2009, extended those lessons learned to all other aspects of inspection processing.

The readers serve three purposes. First, information gathered from vehicles transiting checkpoints is checked against a variety of law enforcement databases for outstanding warrants or other alerts. Second, once a vehicle is through, the readers allow CBP officers who conducted the primary inspection to maintain observation of it after passage.
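
Conceptually, that first purpose is a lookup of every plate read against law enforcement watchlists, with hits flagged before the vehicle clears the checkpoint. The sketch below is purely illustrative – real queries run against systems such as NLETS and TECS, not an in-memory table:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str
    state: str
    lane: str
    timestamp: datetime

# Hypothetical in-memory watchlist; real lookups run against law enforcement systems.
WATCHLIST = {
    ("ABC1234", "AZ"): "stolen vehicle",
    ("XYZ9876", "TX"): "outstanding warrant",
}

def check_read(read):
    """Return the alert reason for a plate read, or None if there is no hit."""
    return WATCHLIST.get((read.plate, read.state))

read = PlateRead("ABC1234", "AZ", "outbound-3", datetime.now())
alert = check_read(read)
if alert:
    print(f"ALERT lane {read.lane}: {read.plate} ({read.state}) - {alert}")
```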

CBP operates both fixed and mobile border checkpoints in addition to ports of entry.

The third purpose – and one of the least publicly appreciated – is the ALPRs’ facilitation of legitimate travel and processing, Davis noted. “That automation facilitates legitimate travel. On our land borders it’s used to keep up the flow.”

With roughly 100 million privately owned vehicles entering through land borders in fiscal 2016 and 24 million processed at inland Border Patrol checkpoints each year, the ALPRs significantly reduce the need to manually enter license plate information – which takes up to 12 seconds per vehicle – on top of entering numerous other data points and documents, according to Davis.

Those extra seconds add up. CBP says it averages 65.5 seconds to process each vehicle entering the country, or 55 vehicles per lane per hour. That number drops to 46.5 vehicles per lane per hour without ALPR.

“For a 12-lane port like Paso Del Norte in El Paso, Texas, the throughput loss without ALPRs [would be] equivalent to closing two lanes,” CBP said in a statement. The technology is even more critical to CBP’s Trusted Traveler Programs (NEXUS and SENTRI), which allow participants express border-crossing privileges. Those highly efficient lanes now process vehicles in just 36 seconds, so adding 12 seconds processing time to each vehicle would result in a 33 percent decline in throughput.
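
The arithmetic behind those throughput figures is easy to reproduce; the short calculation below simply restates CBP’s published averages:

```python
SECONDS_PER_HOUR = 3600

def vehicles_per_lane_hour(cycle_seconds):
    return SECONDS_PER_HOUR / cycle_seconds

with_alpr = vehicles_per_lane_hour(65.5)            # ~55 vehicles/lane/hour
without_alpr = vehicles_per_lane_hour(65.5 + 12)    # ~46.5 vehicles/lane/hour

# A 12-lane port loses roughly two lanes' worth of throughput without ALPRs.
lost_per_hour = (with_alpr - without_alpr) * 12     # ~102 vehicles/hour
lanes_equivalent = lost_per_hour / without_alpr     # ~2.2 lanes

# In Trusted Traveler lanes, 12 extra seconds on a 36-second cycle is a
# one-third increase in per-vehicle processing time.
nexus_slowdown = 12 / 36                            # ~0.33

print(round(with_alpr, 1), round(without_alpr, 1),
      round(lanes_equivalent, 1), round(nexus_slowdown, 2))
```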

“At the most congested ports, where wait times exceed 30 minutes daily, even a 5 to 10 second increase in cycle time could result in a doubling of border delays for inbound vehicle travelers,” CBP said.

ALPR data is managed and stored in CBP’s TECS system, which allows users to input, access and maintain records for law enforcement, inspection, intelligence-gathering and operations.

Privacy advocates like the Electronic Frontier Foundation have expressed concern about potential abuse and commercialization of data gathered by law enforcement ALPRs around the country. Border ALPR data, however, is held by CBP, is law enforcement sensitive and is shared only with other federal, state and local law enforcement agencies under the Department of Homeland Security’s strict privacy rules. In practice, most of the sharing flows the other way: state and local agencies send information on stolen or missing vehicles and missing persons to CBP and the Border Patrol.

The sharing pays off in numerous ways. For example, a young girl kidnapped in Pennsylvania was found in Arizona, thanks to ALPR border data. Armed and dangerous individuals from Indio, Calif., to Laredo, Texas have been apprehended thanks to border ALPRs. Missing and abducted children have been found and major drug busts have captured volumes of illegal drugs, including 2,827 pounds of marijuana in Falfurrias, Texas, and 60 pounds of cocaine in Las Cruces, N.M., all thanks to ALPR data.

One of the most startling reader successes on the border is the dramatic reduction in vehicle thefts in U.S. border towns. Thieves who steal cars in the United States and attempt to drive them into Mexico now have a much higher chance of being caught.

Laredo, Texas, led American cities in car thefts in 2009. In 2015, it ranked 137th. Similar declines were seen in San Diego (from 13th to 45th), Phoenix (from 40th to 80th) and Brownsville, Texas (from 75th to 217th).

Funding for ALPR purchases comes from the Treasury Executive Office of Asset Forfeiture. While CBP makes an annual request to expand its outbound program, officials are now seeking a complete technology refresh to update second-generation readers installed between 2008 and 2011.

Improvements include higher-resolution day and night cameras, faster processing times, improved data security, lighter, more covert readers, mobile device connectivity, new audio and visual alarms, improved durability and reduced power consumption.

Officials would like to expand ALPR use along the northern border, reading the plates of vehicles leaving the U.S., starting in metro Detroit.

“We’ve requested the funding for the tech refresh,” says Davis. “We have a new contract and it has been priced out, but we’re not funded to do that refresh yet,” she says. However, officials are hopeful that funding will be found and an even more effective generation of readers can be deployed.

Wanted: Metrics for Measuring Cyber Performance and Effectiveness

Chief information security officers (CISOs) face a dizzying array of cybersecurity tools to choose from, each loaded with features and promised capabilities that are hard to measure or judge.

That leaves CISOs trying to balance unknown risks against growing costs, without a clear ability to justify the return on their cybersecurity investment. Not surprisingly, today’s high-threat environment makes it preferable to choose safe over sorry – regardless of cost. But is there a better way?

Some cyber insiders believe there is.

Margie Graves
Acting U.S. Federal Chief Information Officer

Acting U.S. Federal Chief Information Officer (CIO) Margie Graves acknowledges the problem.

“Defining the measure of success is hard sometimes, because it’s hard to measure things that don’t happen,” Graves said. President Trump’s Executive Order on Cybersecurity asks each agency to develop its own risk management plan, she noted. “It should be articulated on that plan how every dollar will be applied to buying down that risk.”

There is a difference though, between a plan and an actual measure. A plan can justify an investment intended to reduce risk. But judgment, rather than hard knowledge, will determine how much risk is mitigated by any given tool.

The Defense Information Systems Agency (DISA) and the National Security Agency (NSA) have been trying to develop a methodology for measuring the actual value of a given cyber tool’s performance. Their NIPRNet/SIPRNet Cyber Security Architecture Review (NSCSAR – pronounced “NASCAR”) is a classified effort to define a framework for measuring cybersecurity performance, said DISA CIO and Risk Management Executive John Hickey.

“We just went through a drill of ‘what are those metrics that are actually going to show us the effectiveness of those tools,’ because a lot of times we make an investment, people want a return on that investment,” he told GovTechWorks in June. “Security is a poor example of what you are going after. It is really the effectiveness of the security tools or compliance capabilities.”

The NSCSAR review, conducted in partnership with NSA and the Defense Department, may point to a future means of measuring cyber defense capability. “It is a framework that actually looks at the kill chain, how the enemy will move through that kill chain and what defenses we have in place,” Hickey said, adding that NSA is working with DISA on an unclassified version of the framework that could be shared with other agencies or the private sector to measure cyber performance.

“It is a methodology,” Hickey explained. “We look at the sensors we have today and measure what functionality they perform against the threat.… We are tracking the effectiveness of the tools and capabilities to get after that threat, and then making our decisions on what priorities to fund.”
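
NSCSAR itself is classified, but the gap-finding idea Hickey describes can be illustrated with a simple coverage matrix that maps deployed defenses to kill-chain phases. Everything in the sketch below is hypothetical:

```python
# Hypothetical coverage matrix: which deployed defenses address which phase
# of an attacker's kill chain. This only illustrates the gap-finding idea.
KILL_CHAIN = ["reconnaissance", "delivery", "exploitation",
              "lateral movement", "exfiltration"]

COVERAGE = {
    "perimeter firewall": {"reconnaissance", "delivery"},
    "email gateway":      {"delivery"},
    "endpoint agent":     {"exploitation"},
}

for phase in KILL_CHAIN:
    defenses = [tool for tool, phases in COVERAGE.items() if phase in phases]
    status = ", ".join(defenses) if defenses else "GAP - no deployed defense"
    print(f"{phase:>18}: {status}")
```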

Measuring Security
NSS Labs Inc. independently tests the cybersecurity performance of firewalls and other cyber defenses, annually scoring products’ performance. The Austin, Texas, company evaluated 11 next-generation firewall (NGFW) products from 10 vendors in June 2017, comparing the effectiveness of their security, as well as the firewalls’ stability, reliability and total cost of ownership.

In the test, products were presumed to be able to provide basic packet filtering, stateful multi-layer inspection, network address translation, virtual private network capability, application awareness controls, user/group controls, integrated intrusion prevention, reputation services, anti-malware capabilities and SSL inspection. Among the findings:

  • Eight of 11 products tested scored “above average” in terms of both performance and cost-effectiveness; three scored below average
  • Overall security effectiveness ranged from as low as 25.8 percent to as high as 99.9 percent; average security effectiveness was 67.3 percent
  • Four products scored below 78.5 percent
  • Total cost of ownership ranged from $5 per protected megabit/second to $105, with an average of $22
  • Nine products failed to detect at least one evasion, while only two detected all evasion attempts
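
One simple way to compare results like those above is to relate each product’s security effectiveness to its total cost of ownership per protected megabit per second. A minimal sketch, using hypothetical products with figures in the ranges NSS reported:

```python
# Hypothetical products: security effectiveness (percent of attacks blocked)
# and total cost of ownership in dollars per protected Mbps.
products = [
    {"name": "NGFW-A", "effectiveness": 99.9, "tco_per_mbps": 22},
    {"name": "NGFW-B", "effectiveness": 78.5, "tco_per_mbps": 5},
    {"name": "NGFW-C", "effectiveness": 25.8, "tco_per_mbps": 105},
]

def value_score(p):
    """Effectiveness points per dollar of TCO per protected Mbps."""
    return p["effectiveness"] / p["tco_per_mbps"]

for p in sorted(products, key=value_score, reverse=True):
    print(f"{p['name']}: {value_score(p):.1f} effectiveness points per $/Mbps")
```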

NSS conducted similar tests of advanced endpoint protection tools, data center firewalls, and web application firewalls earlier this year.

But point-in-time performance tests don’t provide a reliable measure of ongoing performance. And measuring the effectiveness of a single tool does not necessarily indicate how well it performs its particular duties as part of a suite of tools, notes Robert J. Carey, vice president within the Global Solutions division at General Dynamics Information Technology (GDIT). The former U.S. Navy CIO and Defense Department principal deputy CIO says that though these tests are valuable, they still make it hard to quantify and compare the performance of different products in an organization’s security stack.

The evolution and blurring of the lines between different cybersecurity tools – from firewalls to intrusion detection and prevention, gateways, traffic analysis, threat intelligence, anomaly detection and so on – means it’s easy to add another tool to the stack. But as with any multivariate function, it is hard to isolate each tool’s individual contribution to threat protection – or to know which tools you could do without.

“We don’t know what an adequate cyber security stack looks like. What part of the threat does the firewall protect against, the intrusion detection tool, and so on?” Carey says. “We perceive that the tools are part of the solution. But it’s difficult to quantify the benefit. There’s too much marketing fluff about features and not enough facts.”

Mike Spanbauer, vice president of research strategy at NSS, says this is a common concern, especially in large, managed environments — as is the case in many government instances. One way to address it is to replicate the security stack in a test environment and experiment to see how tools perform against a range of known, current threats while under different configurations and settings.
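
A sketch of what that experiment might look like in miniature: model each tool as the set of threats it catches, then measure how the stack’s detection rate changes when each layer is removed. The detection data below are entirely hypothetical; in practice they would come from replaying a threat corpus through the stack in a test environment:

```python
# Each tool is modeled as the set of threat IDs it would catch (hypothetical data).
STACK = {
    "firewall":            {"t01", "t02", "t03", "t07"},
    "intrusion_detection": {"t02", "t03", "t04", "t05"},
    "endpoint_protection": {"t03", "t05", "t06"},
}
THREATS = {f"t{i:02d}" for i in range(1, 11)}   # ten test threats

def detection_rate(tools):
    caught = set().union(*tools.values()) if tools else set()
    return len(caught & THREATS) / len(THREATS)

baseline = detection_rate(STACK)
for name in STACK:
    remaining = {k: v for k, v in STACK.items() if k != name}
    marginal = baseline - detection_rate(remaining)
    print(f"{name}: adds {marginal:.0%} detection beyond the rest of the stack")
```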

Another solution is to add one more tool to monitor and measure performance. NSS’ Cyber Advanced Warning System (CAWS) provides continuous security validation monitoring by capturing live threats and then injecting them into a test environment mirroring customers’ actual security stacks. New threats are identified and tested non-stop. If they succeed in penetrating the stack, system owners are notified so they can update their policies to stop that threat in the future.

“We harvest the live threats and capture those in a very careful manner and preserve the complete properties,” Spanbauer said. “Then we bring those back into our virtual environment and run them across the [cyber stack] and determine whether it is detected.”

Adding more tools and solutions isn’t necessarily what Carey had in mind. While that monitoring may reduce risk, it also adds another expense.

And measuring value in terms of return on investment is a challenge when every new tool adds real cost and results are so difficult to define. In cybersecurity, managing risk has become the name of the game, yet actually calculating risk is hard.

The National Institute of Standards and Technology (NIST) created the 800-53 security controls and the cybersecurity risk management framework that encompass today’s best practices. Carey worries that risk management delivers an illusion of security by accepting some level of vulnerability depending on level of investment. The trouble with that is that it drives a compliance culture in which security departments focus on following the framework more than defending the network and securing its applications and data.

“I’m in favor of moving away from risk management,” GDIT’s Carey says. “It’s what we’ve been doing for the past 25 years. It’s produced a lot of spend, but no measurable results. We should move to effects-based cyber. Instead of 60 shades of gray, maybe we should have just five well defined capability bands.”

The ultimate goal: Bring compliance into line with security so that doing the former delivers the latter. But the evolving nature of cyber threats suggests that may never be possible.

Automated tools will only be as good as the data and intelligence built into them. True, automation improves speed and efficiency, Carey says. “But it doesn’t necessarily make me better.”

System owners should be able to look at their cyber stack and determine exactly how much better security performance would be if they added another tool or upgraded an existing one. If that were the case, they could spend most of their time focused on stopping the most dangerous threats – zero-day vulnerabilities that no tool can identify because they’ve never been seen before – rather than ensuring all processes and controls are in place to minimize risk in the event of a breach.

Point-in-time measures based on known vulnerabilities and available threats help, but may be blind to new or emerging threats of the sort that the NSA identifies and often keeps secret.

The NSCSAR tests DISA and NSA perform include that kind of advanced threat. Rather than trying to measure overall security, they’ve determined that breaking it down into the different levels of security makes sense. Says DISA’s Hickey: “You’ve got to tackle ‘what are we doing at the perimeter, what are we doing at the region and what are we doing at the endpoint.’” A single overall picture isn’t really possible, he says. Rather, one has to ask: “What is that situational awareness? What are those gaps and seams? What do we stop [doing now] in order to do something else? Those are the types of measurements we are looking at.”

Close the Data Center, Skip the TIC – How One Agency Bought Big Into Cloud

It’s no longer a question of whether the federal government is going to fully embrace cloud computing. It’s how fast.

With the White House pushing for cloud services as part of its broader cybersecurity strategy and budgets getting squeezed by the administration and Congress, chief information officers (CIOs) are coming around to the idea that the faster they can modernize their systems, the faster they’ll be able to meet security requirements. And that once in the cloud, market dynamics will help them drive down costs.

“The reality is, our data-center-centric model of computing in the federal government no longer works,” says Chad Sheridan, CIO at the Department of Agriculture’s Risk Management Agency. “The evidence is that there is no way we can run a federal data center at the moderate level or below better than what industry can do for us. We don’t have the resources, we don’t have the energy and we are going to be mired with this millstone around our neck of modernization for ever and ever.”

Budget pressure, demand for modernization and concern about security all combine as a forcing function that should be pushing most agencies rapidly toward broad cloud adoption.

Joe Paiva, CIO at the International Trade Administration (ITA), agrees. He used an expiring lease as leverage to force his agency into the cloud soon after he joined ITA three years ago. Time and again the lease was presented to him for a signature and time and again, he says, he tore it up and threw it away.

Finally, with the clock ticking on his data center, Paiva’s staff had to perform a massive “lift and shift” operation to keep services running. Systems were moved to the Amazon cloud. It was not a pretty transition, he admits, but it was good enough to make the move without incident.

“Sometimes lift and shift actually makes sense,” Paiva told federal IT specialists at the Advanced Technology Academic Research Center’s (ATARC) Cloud and Data Center Summit. “Lift and shift actually gets you there, and for me that was the key – we had to get there.”

At first, he said, “we were no worse off or no better off.” With systems and processes that hadn’t been designed for cloud, however, costs were high. “But then we started doing the rationalization and we dropped our bill 40 percent. We were able to rationalize the way we used the service, we were able to start using more reserve things instead of ad hoc.”

That rationalization included cutting out software and services licenses that duplicated other enterprise solutions. Microsoft Office 365, for example, provided every user with a OneDrive account in the cloud. Getting users to save their work there meant his team no longer had to support local storage and backup, and the move to shared virtual drives instead of local ones improved worker productivity.

With 226 offices around the world, offloading all that backup was significant. To date, all but a few remote locations have made the switch. Among the surprise benefits: happier users. Once they saw how much easier things were with shared drives that were accessible from anywhere, he says, “they didn’t even care how much money we were saving or how much more secure they were – they cared about how much more functional they suddenly became.”

Likewise, Office 365 provided Skype for Business – meaning the agency could eliminate expensive stand-alone conferencing services, yielding additional savings.

Cost savings matter. Operating in the cloud, ITA’s annual IT costs per user are about $15,000 – less than half the average for the Commerce Department as a whole ($38,000/user/year), or the federal government writ large ($39,000/user/year), he said.

“Those are crazy high numbers,” Paiva says. “That is why I believe we all have to go to the cloud.”

In addition to Office 365, ITA uses Amazon Web Services (AWS) for infrastructure and Salesforce to manage the businesses it supports, along with several other cloud services.

“Government IT spending is out of freaking control,” Paiva says, noting that budget cuts provide incentive for driving change that might not come otherwise. “No one will make the big decisions if they’re not forced to make them.”

Architecture and Planning
If getting to the cloud is now a common objective, figuring out how best to make the move is unique to every user.

“When most organizations consider a move to the cloud, they focus on the ‘front-end’ of the cloud experience – whether or not they should move to the cloud, and if so, how will they get there,” says Srini Singaraju, chief cloud architect at General Dynamics Information Technology, a systems integrator. “However, organizations commonly don’t give as much thought to the ‘back-end’ of their cloud journey: the new operational dynamics that need to be considered in a cloud environment or how operations can be optimized for the cloud, or what cloud capabilities they can leverage once they are there.”

Rather than lift and shift and then start looking for savings, Singaraju advocates planning carefully what to move and what to leave behind. Designing systems and processes to take advantage of its speed and avoiding some of the potential pitfalls not only makes things go more smoothly, it saves money over time.

“Sometimes it just makes more sense to retire and replace an application instead of trying to lift and shift,” Singaraju says. “How long can government maintain and support legacy applications that can pose security and functionality related challenges?”

The challenge is getting there. The number of cloud providers that have won provisional authority to operate under the 5-year-old Federal Risk and Authorization Management Program (FedRAMP) is still relatively small: just 86 with another 75 still in the pipeline. FedRAMP’s efforts to speed up the process are supposed to cut the time it takes to earn a provisional authority to operate (P-ATO) from as much as two years to as little as four months. But so far only three cloud providers have managed to get a product through FedRAMP Accelerated – the new, faster process, according to FedRAMP Director Matt Goodrich. Three more are in the pipeline with a few others lined up behind those, he said.

Once an agency or the FedRAMP Joint Authorization Board has authorized a cloud solution, other agencies can leverage their work with relatively little effort. But even then, moving an application from its current environment is an engineering challenge. Determining how to manage workflow and the infrastructure needed to make a massive move to the cloud work is complicated.

At ITA, for example, Paiva determined that cloud providers like AWS, Microsoft Office 365 and Salesforce had sufficient security controls in place that they could be treated as a part of his internal network. That meant user traffic could be routed directly to them, rather than through his agency’s Trusted Internet Connection (TIC). That provided a huge infrastructure savings because he didn’t have to widen that TIC gateway to accommodate all that routine work traffic, all of which in the past would have stayed inside his agency’s network.

Rather than a conventional “castle-and-moat” architecture, Paiva said he had to interpret the mandate to use the TIC “in a way that made sense for a borderless network.”

“I am not violating the mandate,” he said. “All my traffic that goes to the wild goes through the TIC. I want to be very clear about that. If you want to go to www-dot-name-my-whatever-dot-com, you’re going through the TIC. Office 365? Salesforce? Service Now? Those FedRAMP-approved, fully ATO’d applications that I run in my environment? They’re not external. My Amazon cloud is not external. It is my data center. It is my network. I am fulfilling the intent and letter of the mandate – it’s just that the definition of what is my network has changed.”
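
Paiva’s redefined perimeter boils down to a routing decision: traffic bound for FedRAMP-authorized services the agency treats as part of its own network bypasses the TIC, while everything else still goes through it. A conceptual sketch – the domain list is hypothetical, and the real policy lives in routing and security infrastructure, not application code:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of FedRAMP-authorized services the agency treats as
# an extension of its own network.
INTERNAL_CLOUD_DOMAINS = {
    "agency.sharepoint.com",
    "agency.my.salesforce.com",
    "agency.service-now.com",
}

def route_for(url):
    host = urlparse(url).hostname or ""
    if any(host == d or host.endswith("." + d) for d in INTERNAL_CLOUD_DOMAINS):
        return "direct"   # treated as internal: no TIC transit
    return "tic"          # "the wild": route through the Trusted Internet Connection

print(route_for("https://agency.my.salesforce.com/reports"))  # direct
print(route_for("https://www.example.com/"))                  # tic
```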

Todd Gagorik, senior manager for federal services at AWS, said this approach is starting to take root across the federal government. “People are beginning to understand this clear reality: If FedRAMP has any teeth, if any of this has any meaning, then let’s embrace it and actually use it as it’s intended to be used most efficiently and most securely. If you extend your data center into AWS or Azure, those cloud environments already have these certifications. They’re no different than your data center in terms of the certifications that they run under. What’s important is to separate that traffic from the wild.”

ATARC has organized a working group of government technology leaders to study the network boundary issue and recommend possible changes to the policy, said Tom Suder, ATARC president. “When we started the TIC, that was really kind of pre-cloud, or at least the early stages of cloud,” he said. “It was before FedRAMP. So like any policy, we need to look at that again.” Acting Federal CIO Margie Graves is a reasonable player, he said, and will be open to changes that make sense, given how much has changed since then.

Indeed, the whole concept of a network’s perimeter has been changed by the introduction of cloud services, Office of Management and Budget’s Grant Schneider, the acting federal chief information security officer (CISO), told GovTechWorks earlier this year.

Limiting what needs to go through the TIC and what does not could have significant implications for cost savings, Paiva said. “It’s not chump change,” he said. “That little architectural detail right there could be billions across the government that could be avoided.”

But changing the network perimeter isn’t trivial. “Agency CIOs and CISOs must take into account the risks and sensitivities of their particular environment and then ensure their security architecture addresses all of those risks,” says GDIT’s Singaraju. “A FedRAMP-certified cloud is a part of the solution, but it’s only that – a part of the solution. You still need to have a complete security architecture built around it. You can’t just go to a cloud service provider without thinking all that through first.”

Sheridan and others involved in the nascent Cloud Center of Excellence see the continued drive to the cloud as inevitable. “The world has changed,” he says. “It’s been 11 years since these things first appeared on the landscape. We are in exponential growth of technology, and if we hang on to our old ideas we will not continue. We will fail.”

His ad-hoc, unfunded group includes some 130 federal employees from 48 agencies and sub-agencies that operate independent of vendors, think tanks, lobbyists or others with a political or financial interest in the group’s output. “We are a group of people who are struggling to drive our mission forward and coming together to share ideas and experience to solve our common problems and help others to adopt the cloud,” Sheridan says. “It’s about changing the culture.”
