
Automated License Plate Readers on the U.S. Border

When U.S. Border Patrol agents stopped a vehicle at the border checkpoint in Douglas, Ariz., it wasn’t a lucky break. They had been on the lookout for the driver’s vehicle and it had been spotted by an automated license plate reader (ALPR). The driver, attempting to escape into Mexico, was arrested on suspicion of murder.

All along U.S. borders, ALPRs have changed the face and pace of security and enforcement – although not in the ways most people might expect.

While ALPRs may occasionally catch individuals with a criminal record trying to come into the United States, they play a much greater role in stopping criminals trying to leave. The systems have driven a dramatic drop in vehicle thefts in U.S. border towns. They’ve also been instrumental in finding missing persons and stopping contraband.

“Recognition technology has become very powerful,” says Mark Prestoy, lead systems engineer in General Dynamics Information Technology’s Video Surveillance Lab. “Capturing an image – whether a license plate or something more complex, such as a face – can be successful when you have a well-placed sensor, network connection and video analytics. Once you have the image, you can process it to enhance and extract information. License plate recognition is similar to optical character recognition used in a printed document.”
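
The OCR comparison can be made concrete. The sketch below is a minimal illustration of the recognition step only, assuming the OpenCV and Tesseract (pytesseract) libraries and a pre-cropped plate image named "plate.jpg"; production ALPR systems pair specialized plate-detection and character-recognition models with the sensors, networks and analytics Prestoy describes.

```python
# Minimal sketch: treat a cropped license-plate image as an OCR problem.
# Assumes OpenCV (cv2) and Tesseract via pytesseract are installed; "plate.jpg"
# is a stand-in for a frame captured by a roadside sensor.
import cv2
import pytesseract

def read_plate(image_path: str) -> str:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    # Grayscale plus thresholding boosts character contrast, as with scanned documents.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Restrict Tesseract to a single line of plate-style characters.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(binary, config=config).strip()

if __name__ == "__main__":
    print(read_plate("plate.jpg"))
```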

“It’s an enforcement tool,” says Efrain Perez, acting director of field operations and readiness for Customs and Border Protection (CBP). “They help us identify high-risk vehicles.”

The agency has about 500 ALPR systems deployed at 91 locations to process passenger vehicles coming into the United States. It also has ALPRs on all 110 outbound lanes to Mexico, which were added in 2009 after the U.S. committed to trying to interrupt the flow of cash and weapons from the U.S. into Mexico. CBP is slowly adding the devices to outbound lanes on the Canadian border, as well.

For ALPRs surveilling inbound traffic, the primary purpose is to eliminate the need for border officers to manually enter license plate numbers, allowing them to keep a steady gaze on travelers so they can spot suspicious behavior and maintain situational awareness. ALPRs trained on outbound traffic are used to identify high-risk travelers, help track the movement of stolen vehicles and support other U.S. law enforcement agencies through the National Law Enforcement Telecommunications System.

Along the southern U.S. border, most ALPRs are fixed units at ports of entry and cover both inbound and outbound vehicles. Along the Canadian border, most ALPRs are handheld units. CBP officials hope to install fixed readers at northern ports of entry in the future. “The handheld readers are not as robust,” points out Rose Marie Davis, acquisition program manager of the Land Border Integration Program (LBIP).

The first generation of readers was deployed in the 1997-98 timeframe. Today, LBIP incorporates the technology, experience and lessons learned from that initial effort. A second effort, under the Western Hemisphere Travel Initiative in 2008 and 2009, extended those lessons to all other aspects of inspection processing.

The readers serve three purposes. First, information gathered from vehicles transiting checkpoints is checked against a variety of law enforcement databases for outstanding warrants or other alerts. Second, once a vehicle is through, the readers allow the CBP officers who conducted the primary inspection to maintain observation of it after passage.

CBP operates both fixed and mobile border checkpoints in addition to ports of entry.

The third role – facilitating legitimate travel and processing – is one of the most telling and least publicly appreciated, Davis noted. “That automation facilitates legitimate travel. On our land borders it’s used to keep up the flow.”

With roughly 100 million privately owned vehicles entering through land borders in fiscal 2016 and 24 million processed at inland Border Patrol checkpoints each year, the ALPRs significantly reduce the need to manually enter license plate information – which takes up to 12 seconds per vehicle – on top of entering numerous other data points and documents, according to Davis.

Those extra seconds add up. CBP says it averages 65.5 seconds to process each vehicle entering the country, or 55 vehicles per lane per hour. That number drops to 46.5 vehicles per lane per hour without ALPR.

“For a 12-lane port like Paso Del Norte in El Paso, Texas, the throughput loss without ALPRs [would be] equivalent to closing two lanes,” CBP said in a statement. The technology is even more critical to CBP’s Trusted Traveler Programs (NEXUS and SENTRI), which allow participants express border-crossing privileges. Those highly efficient lanes now process vehicles in just 36 seconds, so adding 12 seconds of processing time per vehicle would increase cycle time by a third and cut throughput by roughly a quarter.
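
Those figures are easy to sanity-check. The short worked example below is just the arithmetic behind the numbers cited above, converting seconds per vehicle into vehicles per lane per hour; it is an illustration, not a CBP model.

```python
# Worked example using the processing times cited above (seconds per vehicle).
def vehicles_per_lane_hour(seconds_per_vehicle: float) -> float:
    return 3600.0 / seconds_per_vehicle

with_alpr = vehicles_per_lane_hour(65.5)          # ~55 vehicles per lane per hour
without_alpr = vehicles_per_lane_hour(65.5 + 12)  # ~46.5 vehicles per lane per hour

trusted_with = vehicles_per_lane_hour(36)          # NEXUS/SENTRI lanes today
trusted_without = vehicles_per_lane_hour(36 + 12)  # if plates were keyed by hand

print(f"General lanes: {with_alpr:.1f} vs {without_alpr:.1f} vehicles/hour")
print(f"Trusted Traveler lanes: {trusted_with:.0f} vs {trusted_without:.0f} vehicles/hour")
```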

“At the most congested ports, where wait times exceed 30 minutes daily, even a 5 to 10 second increase in cycle time could result in a doubling of border delays for inbound vehicle travelers,” CBP said.

ALPR data is managed and stored in CBP’s TECS system, which allows users to input, access and maintain records for law enforcement, inspection, intelligence-gathering and operations.

Privacy advocates like the Electronic Frontier Foundation have expressed concern about potential abuse and commercialization of data collected by law enforcement ALPRs around the country. Border ALPR data, however, is held by CBP, treated as law enforcement sensitive and shared only with other federal, state and local law enforcement agencies under the Department of Homeland Security’s strict privacy rules. In practice, most of the sharing flows the other way: state and local agencies send information on stolen or missing vehicles and missing persons to CBP and the Border Patrol, rather than CBP sending data outward.

The sharing pays off in numerous ways. A young girl kidnapped in Pennsylvania was found in Arizona thanks to ALPR border data. Armed and dangerous individuals from Indio, Calif., to Laredo, Texas, have been apprehended. Missing and abducted children have been found, and major drug busts have captured volumes of illegal drugs – including 2,827 pounds of marijuana in Falfurrias, Texas, and 60 pounds of cocaine in Las Cruces, N.M. – all traced back to ALPR data.

One of the most startling reader successes on the border is the dramatic reduction in vehicle thefts in U.S. border towns. Thieves who steal cars in the United States and attempt to drive them into Mexico now have a much higher chance of being caught.

Laredo, Texas, led American cities in car thefts in 2009. In 2015, it was 137th. Similar drops were seen in San Diego, which dropped from 13th to 45th, Phoenix, which dropped from 40th to 80th and Brownsville, Texas, which dropped from 75th to 217th.

Funding for ALPR purchases comes from the Treasury Executive Office for Asset Forfeiture. While CBP makes an annual request to expand its outbound program, officials are now seeking a complete technology refresh to update second-generation readers installed between 2008 and 2011.

Improvements include higher-resolution day and night cameras, faster processing times, improved data security, lighter and more covert readers, mobile device connectivity, new audio and visual alarms, improved durability and reduced power consumption.

Officials would also like to expand ALPR use along the northern border, reading the plates of vehicles leaving the U.S., starting in metro Detroit.

“We’ve requested the funding for the tech refresh,” says Davis. “We have a new contract and it has been priced out, but we’re not funded to do that refresh yet.” Still, officials are hopeful that funding will be found and an even more effective generation of readers can be deployed.

Automation Critical to Securing Code in an Agile, DevOps World

The world’s biggest hack might have happened to anyone. The same software flaw hackers exploited to expose 145 million identities in the Equifax database – most likely yours included – was also embedded in thousands of other computer systems belonging to all manner of businesses and government agencies.

The software in question was a widely used piece of open-source Java code known as Apache Struts. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) issued a warning March 8 detailing the risk posed by a flaw in that code. Like many others, Equifax reviewed the warning and searched its systems for the affected code. Unfortunately, the Atlanta-based credit bureau failed to find it among the millions of lines of code in its systems. Hackers exploited the flaw three days later.

Open source and third-party software components like Apache Struts now make up between 80 and 90 percent of software produced today, says Derek Weeks, vice president and DevOps advocate at Sonatype, a provider of security tools and manager of The Central Repository, the world’s largest collection of open source components. Programmers completed nearly 60 billion software downloads from the repository in 2017 alone.

“If you are a software developer in any federal agency today, you are very aware that you are using open-source and third-party [software] components in development today,” Weeks says. “The average organization is using 125,000 Java open source components – just Java alone. But organizations aren’t just developing in Java, they’re also developing in JavaScript, .Net, Python and other languages. So that number goes up by double or triple.”

Reusing software saves time and money. It’s also critical to supporting the rapid cycles favored by today’s Agile and DevOps methodologies. Yet while reuse promises time-tested code, it is not without risk: Weeks estimates one in 18 downloads from The Central Repository – 5.5 percent – contains a known vulnerability. Because it never deletes anything, the repository is a user-beware system. It’s up to software developers themselves – not the repository – to determine whether or not the software components they download are safe.

Manual Review or Automation?

Performing a manual, detailed security analysis of each open-source software component, to ensure it is safe and free of known vulnerabilities, takes hours. That, in turn, eats into precious development time and undermines the efficiency that motivated reusing code in the first place.

Tools from Sonatype, Black Duck of Burlington, Mass., and others automate most of that work. Sonatype’s Nexus Firewall, for example, scans components as they come into the development environment and stops those that contain known flaws. It also suggests safe alternatives, such as newer versions of the same components. Development teams can employ a host of other automated tools to simplify or speed the build, test and secure processes.
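
The core idea behind such a gate is simple, even though commercial products add curated vulnerability intelligence and policy enforcement. The sketch below is a hypothetical, stripped-down version of the concept: it reads a project’s pinned dependencies and flags any that appear in a locally maintained advisory list. The file name and advisory entries are illustrative assumptions, not any vendor’s format or feed.

```python
# Hypothetical sketch: flag declared dependencies that match a local advisory list.
# "requirements.txt" and the KNOWN_BAD entries are illustrative only; real tools
# (Nexus Firewall, OWASP Dependency-Check, etc.) rely on curated vulnerability feeds.
from typing import Dict, List, Tuple

# Component name -> versions with known vulnerabilities (made-up examples).
KNOWN_BAD: Dict[str, set] = {
    "examplelib": {"1.0.0", "1.0.1"},
    "legacy-parser": {"2.3"},
}

def parse_requirements(path: str) -> List[Tuple[str, str]]:
    deps = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments and unpinned entries in this toy example
            name, version = line.split("==", 1)
            deps.append((name.lower(), version))
    return deps

def audit(path: str) -> List[str]:
    findings = []
    for name, version in parse_requirements(path):
        if version in KNOWN_BAD.get(name, set()):
            findings.append(f"{name}=={version} has a known vulnerability")
    return findings

if __name__ == "__main__":
    problems = audit("requirements.txt")
    for p in problems:
        print("BLOCK:", p)
    raise SystemExit(1 if problems else 0)  # non-zero exit lets a CI job fail the build
```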

Some of these are commercial products; others, like much of the software itself, are open source. Jenkins, for example, is a popular open-source DevOps tool that helps developers quickly find and fix defects in their codebase. These tools focus on the reused code in a system; static analysis tools, like those from Veracode, focus on the critical custom code that glues that open-source software together into a working system.

“Automation is key to agile development,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s (GDIT) Health Solutions. “The tools now exist to automate everything: the builds, unit tests, functional testing, performance testing, penetration testing and more. Ensuring the code behind new functionality not only works, but is also secure, is critical. We need to know that the stuff we’re producing is of high quality and meets our standards, and we try to automate as much of these reviews as possible.”

But automated screening and testing is still far from universal. Some organizations use it; others don’t. Weeks describes one large financial services firm that prided itself on its software team’s rigorous governance process. Developers were required to ask permission from a security group before using open source components. The security team’s thorough reviews took about 12 weeks for new components and six to seven weeks for new versions of components already in use. Even so, officials estimated some 800 open source components had made it through those reviews and were in use in their 2,000-plus deployed applications.

Then, Sonatype was invited to scan the firm’s deployed software. “We found more than 13,000 open source components were running in those 2,000 applications,” Weeks recalls. “It’s not hard to see what happened. You’ve got developers working on two-week sprints, so what do you think they’re going to do? The natural behavior is, ‘I’ve got a deadline, I have to meet it, I have to be productive.’ They can’t wait 12 weeks for another group to respond.”

Automation, he said, is the answer.

Integration and the Supply Chain

Building software today is a lot like building a car: Rather than manufacture every component, from the screws to the tires to the seat covers, manufacturers focus their efforts on the pieces that differentiate products and outsource the commodity pieces to suppliers.

Chris Wysopal, chief technology officer at Veracode, said the average software application today uses 46 ready-made components. Like Sonatype, Veracode offers a testing tool that scans components for known vulnerabilities; its test suite also includes a static analysis tool to spot problems in custom code and a dynamic analysis tool that tests software in real time.

As development cycles get shorter, the demand for automation is increasing, Wysopal says. The shift from waterfall to Agile over the past five years shortened typical development cycles from months to weeks. The advent of DevOps and continuous development accelerates that further, from weeks to days or even hours.

“We’re going through this transition ourselves. When we started Veracode 11 years ago, we were a waterfall company. We did four to 10 releases a year,” Wysopal says. “Then we went to Agile and did 12 releases a year and now we’re making the transition to DevOps, so we can deploy on a daily basis if we need or want to. What we see in most of our customers is fragmented methodologies: It might be 50 percent waterfall, 40 percent agile and 10 percent DevOps. So they want tools that can fit into that DevOps pipeline.”

A tool built for speed can support slower development cycles; the opposite, however, is not the case.

One way to enhance testing is to let developers know sooner that they may have a problem. Veracode is developing a product that will scan code as it’s written, running a scan every few seconds and alerting the developer as soon as a problem is spotted. This has two effects: Problems get cleaned up more quickly, and developers learn to avoid those problems in the first place. In that sense, it’s like spell check in a word processing program.

“It’s fundamentally changing security testing for a just-in-time programming environment,” Wysopal says.
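
Developers can already script a rough version of that feedback loop themselves. The sketch below is a hypothetical illustration, not Veracode’s product: it watches a source file and re-runs a scanner whenever the file changes. The polling loop and the choice of Bandit, an open-source Python security linter, are assumptions made for the example; any analyzer with a command-line interface could take its place.

```python
# Hypothetical "scan as you write" loop: re-run a static analysis tool whenever
# a source file changes. Bandit (an open-source Python security linter) is used
# purely as an example scanner.
import subprocess
import sys
import time
from pathlib import Path

def watch_and_scan(source: str, interval: float = 2.0) -> None:
    path = Path(source)
    last_mtime = 0.0
    while True:
        mtime = path.stat().st_mtime
        if mtime != last_mtime:
            last_mtime = mtime
            print(f"--- change detected in {source}, scanning ---")
            # Run the scanner as a subprocess and surface its findings immediately.
            subprocess.run(["bandit", "-q", str(path)])
        time.sleep(interval)

if __name__ == "__main__":
    watch_and_scan(sys.argv[1] if len(sys.argv) > 1 else "app.py")
```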

Yet as powerful and valuable as automation is, these tools alone will not make you secure.

“Automation is extremely important,” he says. “Everyone who’s doing software should be doing automation. And then manual testing on top of that is needed for anyone who has higher security needs.” He puts the financial industry and government users into that category.

For government agencies that contract for most of their software, understanding what kinds of tools and processes their suppliers have in place to ensure software quality is critical. That could mean hiring a third party to do security testing on software when it’s delivered, or it could mean requiring systems integrators and development firms to demonstrate their security processes and procedures before software is accepted.

“In today’s Agile-driven environment, software vulnerability can be a major source of potential compromise to sprint cadences for some teams,” says GDIT’s Zach. “We can’t build a weeks-long manual test and evaluation cycle into Agile sprints. Automated testing is the only way we can validate the security of our code while still achieving consistent, frequent software delivery.”

According to Veracode’s State of Software Security 2017, 36 percent of the survey’s respondents do not run (or were unaware of) automated static analysis on their internally developed software. Nearly half never conduct dynamic testing in a runtime environment. Worst of all, 83 percent acknowledge releasing software before testing or resolving security issues.

“The bottom line is all software needs to be tested. The real question for teams is what ratio and types of testing will be automated and which will be manual,” Zach says. “By exploiting automation tools and practices in the right ways, we can deliver the best possible software, as rapidly and securely as possible, without compromising the overall mission of government agencies.”

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance, who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments where users’ data and applications can be side by side with that of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “Some runners start too quickly, just before the gun goes off. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then, afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By executing code speculatively and then exploiting what happens by means of shared memory mapping, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of different research groups at roughly the same time, including Jann Horn, a researcher with Google’s Project Zero, and researchers at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers converged on the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.
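
Spectre and Meltdown exploit real CPU caches, which a few lines of high-level code cannot reproduce; demonstrating the actual attacks requires native code and careful timing measurements. Purely to illustrate the principle CERT describes – that access latency can reveal which memory a discarded speculative instruction touched – the sketch below simulates a “cache” as a set of recently touched slots and shows how an attacker could recover a secret value it was never directly given. It is a simulation of the concept, not a working exploit.

```python
# Pure simulation of the cache side-channel principle (not a working exploit).
# A "victim" touches one slot of a probe array based on a secret byte; the
# "attacker" never sees the secret but infers it by timing which slot is fast.
import random

CACHE_LINE = 64
PROBE_SLOTS = 256
cache = set()  # stands in for CPU cache state

def victim_speculative_access(secret_byte: int) -> None:
    # In a real attack this access happens speculatively and is later discarded,
    # but the cache line it pulled in stays resident.
    cache.add(secret_byte * CACHE_LINE)

def timed_access(index: int) -> int:
    # Simulated latency: "cached" lines are fast, everything else is slow.
    return 10 if index in cache else 200

def attacker_recover_secret() -> int:
    timings = [timed_access(i * CACHE_LINE) for i in range(PROBE_SLOTS)]
    return timings.index(min(timings))  # the fast slot reveals the secret byte

if __name__ == "__main__":
    secret = random.randrange(PROBE_SLOTS)
    victim_speculative_access(secret)
    guess = attacker_recover_secret()
    print(f"secret={secret}, recovered={guess}, match={secret == guess}")
```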

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t depend on successfully executing malicious code, the hackers’ tracks are much harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic standpoint,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”

How Feds Are Trying to Bring Order to Blockchain Mania

Blockchain hype is at a fever pitch. The distributed ledger technology is hailed as a cure for everything from identity management to electronic health records and securing the Internet of Things. Blockchain provides a secure, reliable record of transactions between independent parties, entities or companies. There are industry trade groups, a Congressional Blockchain Caucus and frequent panel discussions to raise awareness.

Federal agencies are plunging ahead, both on their own and in concert with the General Services Administration’s Emerging Citizen Technology Office (GSA ECTO). The office groups blockchain with artificial intelligence and robotic automation, social and collaborative technologies, and virtual and augmented reality as its four most critical technologies. Its goal: Develop use cases and roadmaps to hasten government adoption and success with these new technologies.

“There’s a number of people who assume that fed agencies aren’t looking at things like blockchain,” Justin Herman, emerging technology lead and evangelist at GSA ECTO, told a gathering at the State of the Net Conference held Jan. 29 in Washington, D.C. “We got involved in blockchain because there were so many federal agencies coming to the table demanding government wide programs to explore the technology. People had already done analysis on what specific use cases they thought they had and wanted to be able to invest in it.”

Now his office is working with more than 320 federal, state and local agencies interested in one or more of its four emerging tech categories. “A lot of that is blockchain,” Herman said. “Some have already done successful pilots. We hear identity management, supply chain management…. We should be exploring those things together, not in little silos, not in walled gardens, but in public.”

Among those interested:

  • The Joint Staff’s J4 Logistics Directorate and the Deputy Assistant Secretary of Defense for Maintenance, Policy and Programs are collaborating on a project to create a digital supply chain, enabled by additive manufacturing (also known as 3-D Printing). Blockchain’s role would be to secure the integrity of 3-D printing files, seen as “especially vulnerable to cyber threats and intrusions.” The Navy is looking at the same concept. “The ability to secure and securely share data throughout the manufacturing process (from design, prototyping, testing, production and ultimately disposal) is critical to Additive Manufacturing and will form the foundation for future advanced manufacturing initiatives,” writes Lt. Cmdr. Jon McCarter, a member of the Fiscal 2017 Secretary of the Navy Naval Innovation Advisory Council (NIAC).
  • The Office of the Undersecretary of Defense for Acquisition, Technology and Logistics (OUSD (AT&L)) Rapid Reaction Technology Office (RRTO) has similar designs on blockchain, seeing it as a potential solution for ensuring data provenance, according to a solicitation published Jan. 29.
  • The Centers for Disease Control’s Center for Surveillance, Epidemiology and Laboratory Services is interested in using blockchain for public health tracking, such as maintaining a large, reliable, current and shared database of opioid abuse or managing health data during crises. Blockchain’s distributed ledger system ensures that when one user updates the chain, everyone sees the same data, solving a major shortfall today, when researchers are often working with different versions of the same or similar data sets, rather than the same, unified data.
  • The U.S. Food and Drug Administration has similar interests in sharing health data for large-scale clinical trials.
  • The Office of Personnel Management last fall sought ideas for how to create a new consolidated Employee Digital Record that would track an employee’s skills, performance and experience over the course of an entire career, using blockchain as a means to ensure records are up to date and to speed the process of transfers from one agency to another.

Herman sees his mission as bringing agencies together so they can combine expertise and resources and more quickly make progress. “There are multiple government agencies right now exploring electronic health records with blockchain,” he said. “But we can already see the hurdles with this because they are separate efforts, so we’re adding barriers. We’ve got to design new and better ways to move across agencies, across bureaucracies and silos, to test, evaluate and adopt this technology. It should be eight agencies working together on one pilot, not eight separate pilots on one particular thing.”

The Global Blockchain Business Council (GBBC) is an industry group advocating for blockchain technology and trying to take a similar approach in the commercial sector to what GSA is doing in the federal government. “We try to break down these traditionally siloed communities,” said Mercina Tilleman-Dick, chief operating officer for the GBBC.

These days, that means trying to get people together to talk about standards and regulation and connecting those who are having success with others just beginning to think about such issues. “Blockchain is not going to solve every problem,” Tilleman-Dick said. It could prove effective in a range of use cases where secure, up-to-date, public records are essential.

Take property records, for example. The Republic of Georgia moved all its land titles onto a blockchain-based system in 2017, Sweden is exploring the idea and the city of South Burlington, Vt., is working on a blockchain pilot for local real estate transactions. Patrick Byrne, founder of Overstock.com and its subsidiary Medici Ventures, announced in December he’s funding a startup expressly to develop a global property registry system using blockchain technology.

“I think over the next decade it will fundamentally alter many of the systems that power everyday life,” GBBC’s Tilleman-Dick said.

“Blockchain has the potential to revolutionize all of our supply chains. From machine parts to food safety,” said Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology. “We will be able to look up the provenance and history of an item, ensuring it is genuine and tracing the life of its creation along the supply chain.

“Secure entries, immutable and created throughout the life of an object, allow for secure sourcing, eliminate fraud, forgeries and ensure food safety,” Gadwale said. “Wal-Mart has already begun trials of blockchain with food safety in mind.”
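
Gadwale’s point about immutable entries is easier to see with a toy example. The sketch below, a minimal hash chain built with Python’s standard hashlib library, shows why blockchain records are hard to alter quietly: changing any earlier entry breaks the hashes of every block after it. Real distributed ledgers layer consensus protocols, digital signatures and peer-to-peer replication on top of this basic idea.

```python
# Toy hash chain illustrating blockchain-style tamper evidence.
# Real distributed ledgers add consensus, signatures and replication.
import hashlib
import json
from typing import List

def block_hash(index: int, prev_hash: str, data: str) -> str:
    payload = json.dumps({"index": index, "prev": prev_hash, "data": data}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: List[str]) -> List[dict]:
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        h = block_hash(i, prev, data)
        chain.append({"index": i, "prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify(chain: List[dict]) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["index"], prev, block["data"]):
            return False
        prev = block["hash"]
    return True

if __name__ == "__main__":
    ledger = build_chain(["title issued", "title transferred", "lien recorded"])
    print("valid:", verify(ledger))                   # True
    ledger[1]["data"] = "title forged"                # tamper with an earlier record
    print("valid after tampering:", verify(ledger))   # False
```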

Hilary Swab Gawrilow, legislative director and counsel in the office of Rep. Jared Polis (D-Colo.), who is among the Congressional Blockchain Caucus leaders, said the government needs to do more to facilitate understanding of the technology. The rapid rise in the value of bitcoin, and the wild fluctuations and speculation in digital cryptocurrencies generally, have done much to raise awareness. Yet that does not necessarily instill confidence in the concepts behind blockchain and distributed ledger technology.

“There are potential government applications or programs that deserve notice and study,” Swab Gawrilow said.

Identity management is a major challenge for agencies today: Citizens may have accounts with multiple agencies, and finding a way to verify status without building complicated links between disparate systems to enable benefits or confirm program eligibility would be valuable. The same is true for program accountability. “Being able to verify transactions would be another great way to use blockchain technology,” she said.

That’s where the caucus is coming from: Much of its work is around education. Lawmakers have all heard of bitcoin, whether in a positive or negative way. “They understand what it is,” Gawrilow said. “But they don’t necessarily understand the underlying technology.” The caucus’ mission is to help inform the community.

Like GSA’s Herman, Gawrilow favors agency collaboration on new technology projects and pilots. “HHS did a hackathon on blockchain. The Postal Service put out a paper, and State is doing something. DHS is doing something. It’s every agency almost,” she said. “We’ve kicked around the idea of asking the administration to start a commission around blockchain.”

That, in turn, might surface issues requiring legislative action – “tweaks to the laws” that underlie programs, such as specifications on information access or a prescribed means of sharing or verifying data. That’s where lawmakers could be most helpful.

Herman, for his part, sees GSA as trying to fill that role, and to fill it in such a way that his agency can tie together blockchain and other emerging and maturing technologies. “It’s not the technology, it’s the culture,” he said. “So much in federal tech is approached as some zero-sum game, that if an agency is dedicating time to focus and investigate a technology like blockchain, people freak out because they’re not paying attention to cloud or something else.”

Agencies need to pool resources and intelligence, think in terms of shared services and shared approaches, break down walls and look holistically at their challenges to find common ground.

That’s where the payoff will come. Otherwise, Herman asks, “What does it matter if the knowledge developed isn’t shared?”

Relocatable Video Surveillance Systems Give CBP Flexibility on Border

Illegal border crossings fell to their lowest level in at least five years in 2017, but after plunging through April, the numbers have risen each of the past eight months, according to U.S. Customs and Border Protection (CBP).

Meanwhile, the debate continues: Build a physical wall spanning from the Gulf of Mexico to the Pacific Ocean, add more Border Patrol agents or combine better physical barriers with technology to stop drug trafficking, smuggling and illegal immigration?

Increasingly, however, it’s clear no one solution is right for everyplace. Ron Vitiello, acting deputy commissioner at CBP, said the agency intends to expand on the existing 652 miles of walls and fencing now in place – but not necessarily extend the wall the entire length of the border.

“We’re going to add to fill some of the gaps we didn’t get in the [previous] laydown, and then we’re going to prioritize some new wall [construction] across the border in places where we need it the most,” he said in a Jan. 12 TV interview.

Walls and barriers are a priority, Vitiello said in December at a CBP press conference. “In this society and all over our lives, we use walls and fences to protect things,” he said. “It shouldn’t be any different on the border.…  But we’re still challenged with access, we’re still challenged with situational awareness and we’re still challenged with security on that border. We’re still arresting nearly 1,000 people a day.

“So we want to have more capability: We want more agents, we want more technology and we want that barrier to have a safer and more secure environment.”

Among the needs: Relocatable Remote Video Surveillance Systems (R-RVSS) that can be picked up and moved to where they’re needed most as border activity ebbs and flows in response to CBP’s border actions.

CBP mapped its fencing against its 2017 apprehension record in December, finding that areas with physical fencing, such as the metropolitan centers of San Diego/Tijuana, Tucson/Nogales and El Paso/Juarez, are just as likely to see illegal migration activity as unfenced areas such as Laredo/Nuevo Laredo.


Rep. Will Hurd (R-Tex.), vice chairman of the House Homeland Security subcommittee on Border and Maritime Security, is an advocate for technology as both a complement to and an alternative to physical walls and fences. “A wall from sea to shining sea is the least effective and most expensive solution for border security,” he argued Jan. 16. “This is especially true in areas like Big Bend National Park, where rough terrain, natural barriers and the remoteness of a location render a wall or other structure impractical and ineffective.”

CBP has successfully tested and deployed video surveillance systems to enhance situational awareness on the border and help Border Patrol agents track and respond to incursions. These RVSS systems use multiple day and night sensors mounted on poles to create an advance warning and tracking system that identifies potential border-crossing activity. Officers monitor the sensor feeds remotely and dispatch agents as needed.

Savvy smugglers are quick to adjust when CBP installs new technologies, shifting their routes to less-monitored areas. The new, relocatable RVSS systems (R-RVSS) make it easy for CBP to respond in kind, forcing smugglers and traffickers to constantly adapt.

Robert Gilbert, a former Border Patrol sector chief at CBP and now a senior program director for RVSS at systems integrator General Dynamics Information Technology (GDIT), says relocatable systems will give CBP new tools and tactics. “Over the past 20 or 30 years, DOJ and then CBP always deployed technology into the busiest areas along the border, the places with the most traffic. In reality, because of the long procurement process, we usually deployed too late, as the traffic had shifted to other locations on the border. The big difference with this capability is you can pick it up and move it to meet the evolving threat. The technology can be relocated within days.”

GDIT fielded a three-tower system in CBP’s Laredo (Texas) West area last summer and a similar setup in McAllen, Texas, in December. The towers – set two to five miles apart – were so effective, CBP is now preparing to buy up to 50 more units to deploy in the Rio Grande sector, where the border follows the river through rugged terrain. There, a physical wall may not be viable, while a technology-based virtual wall could prove highly effective.

Each tower includes an 80-foot-tall collapsible pole that can support a sensor and communications payload weighing up to 2,000 pounds. While far in excess of current needs, it provides a growth path to hanging additional sensors or communications gear if requirements change later on.

When CBP wants to move a unit, the pole is collapsed, the sensors are packed away and a standard 3/4- or 1-ton pickup truck can haul it to its next location.

Roughly two-thirds of the U.S.-Mexico border runs through land not currently owned by the federal government, a major hurdle when it comes to building permanent infrastructure like walls or even fixed-site towers. Land acquisition would add billions to the cost even if owners agree to the sale. Where owners decline, the government might still be able to seize the land under the legal procedure known as eminent domain, but such cases can take years to resolve.

By contrast, R-RVSS requires only a temporary easement from the land owner. Site work is bare bones: no concrete pad, just a cleared area measuring roughly 40 feet by 40 feet. It need not be level – the R-RVSS system is designed to accommodate slopes up to 10 degrees. Where grid power is unavailable – likely in remote areas – a generator or even a hydrogen fuel cell can produce needed power.

What’s coming next
CBP is seeking concepts for a Modular Mobile Surveillance System (M2S2) similar to RVSS, which would provide the Border Patrol with an even more rapidly deployable system for detecting, identifying, classifying and tracking “vehicles, people and animals suspected of unlawful border crossing activities.”

More ambitiously, CBP also wants such systems to incorporate data science and artificial intelligence to add a predictive capability. The system would “detect, identify, classify, and track equipment, vehicles, people, and animals used in or suspected of unlawful border crossing activities,” and employ AI to help agents anticipate their direction so they can quickly respond and resolve each situation.

At the same time, CBP is investigating RVSS-like systems for coastal areas. Pole-mounted sensors would monitor coastal waters, where smugglers in small boats seek to exploit the shallows by operating close to shore rather than in the deeper waters patrolled by Coast Guard and Navy ships.

In a market research request CBP floated last June, the agency described a Remote Surveillance System Maritime (RSS-M) as “a subsystem in an overall California Coastal Surveillance demonstration.” The intent: to detect, track, identify, and classify surface targets of interest, so the Border Patrol and partner law enforcement agencies can interdict such threats.

Legislating Tech
Rep. Hurd, Rep. Peter Aguilar (D-Calif.) and a bipartisan group of 49 other members of Congress support the “Uniting and Securing America Act of 2017,” or “USA Act.” The measure includes a plan to evaluate every mile of the U.S.-Mexico border to determine the best security solution for each. After weeks of Senate wrangling over immigration matters, Sens. John McCain (R-Ariz.) and Chris Coons (D-Del.) offered a companion bill in the Senate on Feb. 5.

With 820 miles of border in his district, Hurd says, few in Congress understand the border issue better than he – or feel it more keenly.

“I’m on the border almost every weekend,” he said when unveiling the proposal Jan. 16. The aim: “Full operational control of our border by the year 2020,” Hurd told reporters. “We should be able to know who’s going back and forth across our border. The only way we’re going to do that is by border technologies.” And in an NPR interview that day, he added: “We should be focused on outcomes. How do we get operational control of that border?”

The USA Act would require the Department of Homeland Security to “deploy the most practical and effective technology available along the United States border for achieving situational awareness and operational control of the border” by Inauguration Day 2021, including radar surveillance systems; Vehicle and Dismount Exploitation Radars (VADER); three-dimensional, seismic acoustic detection and ranging border tunneling detection technology; sensors; unmanned cameras; drone aircraft and anything else that proves more effective or advanced. The technology is seen as complementing and supporting hard infrastructure.

The ABCs of 2018 Federal IT Modernization: I to Z

In part two of GovTechWorks’ analysis of the Trump Administration’s federal IT modernization plan, we examine the likely guiding impact of the Office of Management and Budget, the manner in which agencies’ infrastructures might change, and the fate of expensive legacy systems.

The White House IT modernization plan released in December seeks a rapid overhaul of IT infrastructure across federal civilian agencies, with an emphasis on redefining the government’s approach to managing its networks and securing its data. Here, in this second part of our two-part analysis, is what you need to know from I to Z (for A-H, click here):

I is for Infrastructure
Modernization boils down to three things: Infrastructure, applications and security. Imagine if every government agency managed its own telephone network or international logistics office, rather than outsourcing such services. IT services are essentially the same. Agencies still need expertise to connect to those services – they still have telecom experts and mail room staff – but they don’t have to manage the entire process.

Special exceptions will always exist for certain military, intelligence (or other specialized) requirements. Increasingly, IT services are becoming commodity services purchased on the open market. Rather than having to own, manage and maintain all that infrastructure, agencies will increasingly buy infrastructure as a service (IaaS) in the cloud — netting faster, perpetually maintained and updated equipment at a lower cost. To bring maximum value – and savings – out of those services, they’ll have to invest in integration and support services to ensure their systems are not only cost effective, but also secure.

J is for JAB, the Joint Authorization Board
The JAB combines expertise from the General Services Administration (GSA), the Department of Homeland Security (DHS) and the Department of Defense (DOD). It issues provisional authorities to operate (ATOs) for widely used cloud services and will have a definitive role in prioritizing and approving commercial cloud offerings for the highest-risk federal systems.

K is for Keys
The ultimate solution for scanning encrypted data for potential malicious activity is to decrypt that data for a thorough examination. That means first having access to the encryption keys for federal data and then securing those keys to ensure they don’t fall into the wrong hands. In short, these keys are key to the federal strategy of securing both government data and government networks.
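
A toy example shows why key custody is decisive. The sketch below uses symmetric encryption from the third-party Python "cryptography" package purely as an illustration, not as the government’s actual tooling: whoever holds the key can read (and therefore scan) the data; without it, the ciphertext is opaque.

```python
# Toy illustration: the key holder can inspect encrypted data; no one else can.
# Assumes the "cryptography" package; this is not DHS's actual scanning tooling.
from cryptography.fernet import Fernet, InvalidToken

agency_key = Fernet.generate_key()       # the key the agency must manage and protect
record = b"traffic destined for an agency system"

ciphertext = Fernet(agency_key).encrypt(record)

# A scanner that has been given the key can examine the plaintext.
print(Fernet(agency_key).decrypt(ciphertext))

# A scanner (or an adversary) without the key cannot.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("no key, no visibility")
```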

L is for Legacy
The government still spends 70 percent of its IT budget managing legacy systems. That’s down from as much as 85 percent a few years ago, but it is still too much. In a world where data volumes continue to expand exponentially and the cost of computer processing power continues to plunge, how long can we afford to overspend on last year’s (or last decade’s) aging, less secure technology?

M is for Monopsony
A monopoly occurs when one source controls the supply of a given product, service or commodity. A monopsony occurs when a single customer controls the consumption of products, services or commodities. In a classical monopsony, the sole customer dictates terms to all sellers.

Despite its size, the federal government cannot dictate terms to information technology vendors. It can, however, consolidate its purchasing power to increase leverage, and that’s exactly what it will do in coming years. The process begins with networking services as agencies transition from the old Networx contract to the new Enterprise Infrastructure Solutions vehicle.

Look for it to continue as agencies consolidate purchasing power for commodity software services, such as email, continuous monitoring and collaboration software.

The government may not ultimately wield the full market power of a monopsony, but it can leverage greater negotiating power by centralizing decision making and consolidating purchase and licensing agreements. Look for that to increase significantly in the years ahead.

N is for Networks
Networks used to be the crown jewels of the government’s information enterprise, providing the glue that held systems together and enabling the government to operate. But if the past few years proved anything, it’s that you can’t keep the bad guys out. They’re already in, looking around, waiting for an opportunity.

Networks remain essential infrastructure, but they will increasingly be virtualized, existing in software and protecting encrypted data that travels on commercial fiber and is stored, much of the time, in commercial data centers (generically referred to as the cloud). You may not keep the bad guys out, but you can control what they get access to.

O is for OMB
The Office of Management and Budget has oversight over much of the modernization plan. The agency is mentioned 127 times in the White House plan, including 47 times in its 50 recommendations. OMB will be either the responsible party or the recipient of work done by others on 34 of those 50 recommendations.

P is for Prioritization
Given the vast number of technical, manpower and security challenges that weigh down modernization efforts, prioritizing programs that can deliver the greatest payoff is essential. Agencies are expected to focus their modernization efforts on high-value assets that pose the greatest vulnerabilities and risks. From those lists, DHS must identify six by June 30 to receive centralized interventions that include staffing and technical support.

The aim is to prioritize where new investment, talent infusions and security policies will make the greatest difference. To maximize that effort, DHS may choose projects that can expand to include other systems and agencies.

OMB must also review and prioritize any impediments to modernization and cloud adoption.

Q is for Quick Start
Technology is often not the most complicated part of a modernization effort. Finding a viable acquisition strategy that won’t put yesterday’s technology in the government’s hands tomorrow is often harder. That’s why the report directs OMB to assemble an Acquisition Tiger Team to develop a “quick start” acquisition package to help agencies more quickly license technology and migrate to the cloud.

The aim: combine market research, acquisition plans, readily identified sources and templates for both requests for quotes (RFQs) and Independent Government Cost Estimate (IGCE) calculations — which would be based on completed acquisitions. The tiger team will also help identify qualified small and disadvantaged businesses to help agencies meet set-aside requirements.

R is for Recommendations
There are 50 recommendations in the White House IT modernization report with deadlines ranging from February to August, making the year ahead a busy one for OMB, DHS and GSA, the three agencies responsible for most of the work. A complete list of the recommendations is available here.

T is for the TIC
The federal government developed the Trusted Internet Connection as a means of controlling the number of on and off ramps between government networks and the largely unregulated internet. But in a world now dominated by cloud-based software applications, remote cloud data centers, mobile computing platforms and web-based interfaces that may access multiple different systems to deliver information in context, the TIC needs to be rethought.

“The piece that we struggled with is the Trusted Internet Connections (TIC) initiative – that is a model that has to mature and get solved,” former Federal CIO Tony Scott told Federal News Radio. “It’s an old construct that is applied to modern-day cloud that doesn’t work. It causes performance, cost and latency issues. So the call to double down and sort that out is important. There has been a lot of good work that has happened, but the definitive solution has not been figured out yet.”

The TIC policy is the heart and soul of the government’s perimeter-based security model. Already, some agencies have chosen to bypass the TIC for certain cloud-based services, such as Office 365, trusting Microsoft’s security and recognizing that performance would suffer if all that data had to pass through an agency TIC.

To modernize TIC capabilities, policies, reference architectures and associated cloud security authorization baselines, OMB must update TIC policies so agencies have a clear path forward to build out data-level protections and more quickly migrate to commercial cloud solutions. A 90-day sprint is to begin in mid-February, during which projects approved by OMB will pilot proposed changes in TIC requirements.

OMB must determine whether all data traveling to and from agency information systems hosted by commercial cloud providers warrants scanning by DHS, or whether only some information needs to be scanned. Other considerations under review: Expanding the number of TIC access points in each agency and a model for determining how best to implement intrusion detection and prevention capabilities into cloud services.

U is for Updating the Federal Cloud Computing Strategy
The government’s “Cloud First” policy is now seven years old. Updates are in order. By April 15, OMB must provide additional guidance on both appropriate use cases and operational security for cloud environments. All relevant policies on cloud migration, infrastructure consolidation and shared services will be reviewed.

In addition, OMB has until June to develop standardized contract language for cloud acquisition, including clauses that define consistent requirements for security, privacy and access to data. Establishing uniform contract language will make it easier to compare and broker cloud offerings and ensure government requirements are met.

V is for Verification
Verification or authentication of users’ identities is at the heart of protecting government information. Are you who you say you are? Key to securing information systems is ensuring that access is granted only to users who can be identified and verified as deserving it.

OMB has until March 1 to issue new identity policy guidance for public comment and to recommend identity service areas suitable for shared services. GSA must provide a business case for consolidating existing identity services to improve usability, drive secure access and enable cloud-based collaboration services that make it easier to share and work across agencies – something that can be cumbersome today.

W, X, Y, Z is for Wrapping it All Up
The federal government is shifting to a consolidated IT model that will change the nature of IT departments and the services they buy. Centralized offerings for commodity IT – whether email, office tools and other common software-as-a-service offerings or virtual desktops and web hosting – will be the norm. As much as possible, the objective is to get agencies on the same page, using the same security, collaboration and data services, and to make those common (or in some cases shared) across multiple agencies.

Doing so promises to reduce manpower and licensing costs by eliminating duplication of effort and increasing market leverage to drive down prices. But getting there will not be easy. Integration and security pose unique challenges in a government context, requiring skill, experience and specific expertise. On the government side, policy updates will solve only some of the challenges. Acquisition regulations must also be updated to support wider adoption of commercial cloud products.

Some agencies will need more help than others. Cultural barriers will continue to be major hurdles. Inevitably, staff will have to develop new skills as old ones disappear. Yet even in the midst of all that upheaval, some things don’t change. “In the end, IT modernization is really all about supporting the mission,” says Stan Tyliszczak, chief engineer at systems integrator General Dynamics Information Technology. “It’s about helping government employees complete their work, protecting the privacy of our citizens and ensuring both have timely access to the information and services they need. IT has always made those things better and easier, and modernization is only necessary to continue that process. That much never changes.”

 
