Is Identity the New Perimeter? In a Zero-Trust World, More CISOs Think So

As the network perimeter morphs from physical to virtual, the old Tootsie Pop security model – hard shell on the outside with a soft and chewy center – no longer works. The new mantra, as Mittal Desai, chief information security officer (CISO) at the Federal Energy Regulatory Commission, said at the ATARC CISO Summit: “Never trust, double verify.”

The zero-trust model modernizes conventional network-based security for a hybrid cloud environment. As agencies move systems and storage into the cloud, networks are virtualized and security naturally shifts to users and data. That’s easy enough to do in small organizations, but rapidly grows harder with the scale and complexity of an enterprise.

The notion of zero-trust security first surfaced five years ago in a Forrester Research report prepared for the National Institute of Standards and Technology (NIST). “The zero-trust model is simple,” Forrester posited then. “Cybersecurity professionals must stop trusting packets as if they were people. Instead, they must eliminate the idea of a trusted network (usually the internal network) and an untrusted network (external networks). In zero-trust, all network traffic is untrusted.”

Cloud adoption by its nature is forcing the issue, said Department of Homeland Security Chief Technology Officer Mike Hermus, speaking at a recent Tech + Tequila event: “It extends the data center,” he explained. “The traditional perimeter security model is not working well for us anymore. We have to work toward a model where we don’t trust something just because it’s within our boundary. We have to have strong authentication, strong access control – and strong encryption of data across the entire application life cycle.”

Indeed, as other network security features mature, identity – and the access that goes with it – is now the most common cybersecurity attack vector. Hackers favor phishing and spear-phishing attacks because they’re inexpensive and effective – and the passwords they yield are like the digital keys to an enterprise.

About 65 percent of breaches cited in Verizon’s 2017 Data Breach Investigations Report made use of stolen credentials.

Interestingly, however, identity and access management represents only a small fraction of cybersecurity investment – less than 5 percent – according to Gartner’s market analysts. Network security equipment, by contrast, constitutes more than 12 percent. Enterprises continue to invest in the Tootsie Pop model even as its weaknesses become more evident.

“The future state of commercial cloud computing makes identity and role-based access paramount,” said Rob Carey, vice president for cybersecurity and cloud solutions within the Global Solutions division at General Dynamics Information Technology (GDIT). Carey recommends creating both a framework for better understanding the value of identity management tools, and metrics to measure that impact. “Knowing who is on the network with a high degree of certainty has tremendous value.”

Tom Kemp, chief executive officer at Centrify, which provides cloud-based identity services, has a vested interest in changing that mix. Centrify, based in Sunnyvale, Calif., combines identity data with location and other information to help ensure only authorized, verified users access sensitive information.

“At the heart of zero-trust is the realization that an internal user should be treated just like an external user, because your internal network is just as polluted as your outside network,” Kemp said at the Feb. 7 Institute for Critical Infrastructure Technology (ICIT) Winter Summit. “You need to move to constant verification.” Reprising former President Ronald Reagan’s “trust but verify” mantra, he added: “Now it’s no trust and always verify. That’s the heart of zero-trust.”

The Google Experience
When Google found itself hacked in 2009, the company launched an internal project to find a better way to keep hackers out of its systems. Instead of beefing up firewalls and tightening virtual private network settings, Google’s BeyondCorp architecture dispensed with the Tootsie Pop model in which users logged in and then gained access to all manner of systems and services.

In its place, Google chose to implement a zero-trust model that challenges every user and every device on every data call – regardless of how that user accessed the internet in the first place.

While that flies in the face of conventional wisdom, Google reasoned that by tightly controlling the device and user permissions to access data, it had found a safer path.

Here’s an example of how that works when an engineer with a corporate-issued laptop wants to access an application from a public Wi-Fi connection (a simplified sketch in code follows the list):

  1. The laptop provides its device certificate to an access proxy.
  2. The access proxy confirms the device, then redirects to a single-sign-on (SSO) system to verify the user.
  3. The engineer provides primary and second-factor authentication credentials and, once authenticated by the SSO system, is issued a token.
  4. Now, with the device certificate to identify the device and the SSO token to identify the user, an Access Control Engine can perform a specific authorization check for every data access. The user must be confirmed to be in the engineering group; to possess a sufficient trust level; and to be using a managed device in good standing with a sufficient trust level.
  5. If all checks pass, the request is passed to an appropriate back-end system and the data access is allowed. If any of the checks fail however, the request is denied. This is repeated every time the engineer tries to access a data item.
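
The same per-request logic can be expressed compactly in code. Below is a minimal Python sketch of that kind of check, following the flow described above; the class and field names are illustrative and are not Google’s actual BeyondCorp interfaces.

```python
# Minimal sketch of a zero-trust, per-request authorization check, loosely
# modeled on the flow described above. All names are illustrative, not
# Google's actual BeyondCorp APIs.

from dataclasses import dataclass

@dataclass
class DeviceCert:            # presented by the laptop to the access proxy
    device_id: str
    managed: bool            # corporate-managed and in good standing?
    trust_level: int         # e.g. 0 (untrusted) .. 3 (high)

@dataclass
class SsoToken:              # issued after primary + second-factor authentication
    user: str
    groups: set
    trust_level: int

class AccessControlEngine:
    def __init__(self, required_group: str, min_trust: int):
        self.required_group = required_group
        self.min_trust = min_trust

    def authorize(self, cert: DeviceCert, token: SsoToken) -> bool:
        """Re-evaluated on every data access -- nothing is trusted by default."""
        return all([
            cert.managed,                          # managed device in good standing
            cert.trust_level >= self.min_trust,    # device trust is sufficient
            self.required_group in token.groups,   # user is in the required group
            token.trust_level >= self.min_trust,   # user trust is sufficient
        ])

# An engineer on a managed laptop, connecting over public Wi-Fi:
engine = AccessControlEngine(required_group="engineering", min_trust=2)
cert = DeviceCert(device_id="laptop-42", managed=True, trust_level=3)
token = SsoToken(user="engineer", groups={"engineering"}, trust_level=2)
print(engine.authorize(cert, token))   # True only if every check passes
```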

“That’s easy enough when those attributes are simple and clear cut, as with the notional Google engineer,” said GDIT’s Carey, who spent three decades managing defense information systems. “But it gets complicated in a hurry if you’re talking about an enterprise on the scale of the Defense Department or Intelligence community.”

Segmenting the Sprawling Enterprise
A takeaway from 9/11 was that intelligence agencies needed to be better and faster at sharing threat data across agency boundaries. Opening databases across agency divisions, however, had consequences: Chelsea Manning, at the time Pfc. Bradley Manning, delivered a treasure trove of stolen files to WikiLeaks, and a few years later Edward Snowden stole countless intelligence documents, exposing a program designed to collect metadata from domestic phone and email records.

“The more you want to be sure each user is authorized to see and access only the specific data they have a ‘need-to-know,’ the more granular the identity and access management schema need to be,” Carey said. “Implementing role-based access is complicated because you’ve got to develop ways to both tag data and code users based on their authorized need. Absent a management schema, that can quickly become difficult to manage for all but the smallest applications.”

Consider a scenario of a deployed military command working in a multinational coalition with multiple intelligence agencies represented in the command’s intelligence cell. The unit commands air and ground units from all military services, as well as civilians from defense, intelligence and possibly other agencies. Factors determining individual access to data might include the person’s job, rank, nationality, location and security clearance. Some missions might use geographic location as one of those factors, but others can’t rely on it because some members of the task force are located thousands of miles away, or operating from covert locations.
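
To make that concrete, here is a small, hypothetical attribute-based access check built from the factors named above. It is a sketch only; real defense and coalition schemas are far more elaborate, and the attribute names and values here are invented.

```python
# Hypothetical attribute-based access check using the kinds of factors named
# above (job, clearance, nationality). Attribute names and values are invented
# for illustration, not drawn from any real schema.

def attributes_satisfy(policy: dict, user_attrs: dict) -> bool:
    """Grant access only if every attribute the policy names has an allowed value."""
    return all(user_attrs.get(attr) in allowed for attr, allowed in policy.items())

# Per-mission policy: location is deliberately omitted, since some task force
# members operate from distant or covert locations.
intel_report_policy = {
    "clearance":   {"TS/SCI"},
    "nationality": {"USA", "GBR", "AUS"},      # releasable to selected partners
    "role":        {"intel_analyst", "targeting_officer"},
}

analyst = {"clearance": "TS/SCI", "nationality": "GBR",
           "role": "intel_analyst", "location": "forward_hq"}

print(attributes_satisfy(intel_report_policy, analyst))   # True
```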

That scenario gets even more complicated in a hybrid cloud environment where some systems are located on premises and others are far away. Managing identity-based access gets harder anywhere distance or bandwidth limitations cause delays. Other integration challenges include implementing a single-sign-on solution across multiple clouds, or sharing data by means of an API.

Roles and Attributes
To organize access across an enterprise – whether in a small agency or a vast multi-agency system such as the Intelligence Community Information Technology Enterprise (IC ITE) – information managers must make choices. Access controls can be based on individual roles – such as job level, function and organization – or data attributes – such as type, source, classification level and so on.

“Ultimately, these are two sides of the same coin,” Carey said. “The real challenge is the mechanics of developing the necessary schema to a level of granularity that you can manage, and then building the appropriate tools to implement it.”

For example, the Defense Department intends to use role-based access controls for its Joint Information Enterprise (JIE), using the central Defense Manpower Data Center (DMDC) personnel database to connect names with jobs. The available fields in that database are, in effect, the limiting factors on just how granular role-based access controls will be under JIE.

Access controls will only be one piece of JIE’s enterprise security architecture. Other features, ranging from encryption to procedural controls that touch everything from the supply chain to system security settings, will also contribute to overall security.

Skeptical of Everything
Trust – or the lack of it – plays out in each of these areas, and requires healthy skepticism at every step. Rod Turk, CISO at the Department of Commerce, said CISOs need to be skeptical of everything. “I’m talking about personnel, I’m talking about relationships with your services providers,” he told the ATARC CISO Summit.  “We look at the companies we do business with and we look at devices, and we run them through the supply chain.  And I will tell you, we have found things that made my hair curl.”

Commerce’s big push right now is the Decennial Census, which will collect volumes of personal information (PI) and personally identifiable information (PII) on almost every living person in the United States. Conducting a census every decade is like doing a major system reset each time. The next census will be no different, employing mobile devices for census takers and, for the first time, allowing individuals to fill out census surveys online. Skepticism is essential because the accuracy of the data depends on the public’s trust in the census.

In a sense, that’s the riddle of the whole zero-trust concept: In order to achieve a highly trusted outcome, CISOs have to start with no trust at all.

Yet trust also cuts in the other direction. Today’s emphasis on modernization and migration to the cloud means agencies face tough choices. “Do we in the federal government trust industry to have our best interests in mind to keep our data in the cloud secure?” Turk asked rhetorically.

In theory, the Federal Risk and Authorization Management Program (FedRAMP) sets baseline requirements for establishing trust, but doubts persist. What satisfies one agency’s requirements may not satisfy another’s. Compliance with FedRAMP or NIST controls equates to risk management rather than actual security, GDIT’s Carey points out. They’re not the same thing.

Identity and Security
Beau Houser, CISO at the Small Business Administration, is encouraged by the improvements he’s seen as compartmentalized legacy IT systems are replaced with centralized, enterprise solutions in a Microsoft cloud.

“As we move to cloud, as we roll out Windows 10, Office 365 and Azure, we’re getting all this rich visibility of everything that’s happening in the environment,” he said. “We can now see all logins on every web app, whether that’s email or OneDrive or what have you, right on the dashboard. And part of that view is what’s happening over that session: What are they doing with email, where are they moving files.… That’s visibility we didn’t have before.”

Leveraging that visibility effectively extends that notion of zero-trust one step further, or at least shifts it into the realm of a watchful parent rather than one who blindly trusts his teenage children. The watchful parent believes trust is not a right, but an earned privilege.

“Increased visibility means agencies can add behavioral models to their security controls,” Carey said. “Behavioral analysis tools that can match behavior to what people’s roles are supposed to be, and trigger warnings if people deviate from expected norms, are the next big hurdle in security.”
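
The mechanics of that kind of check can be illustrated with a toy example. The sketch below compares observed actions against a per-role profile of expected behavior and flags anything outside it; the roles, actions and events are invented, and real user-behavior analytics rely on much richer statistical baselines.

```python
# Toy illustration of role-based behavioral monitoring: flag actions that fall
# outside what a role is expected to do. Role profiles and events are invented.

EXPECTED_ACTIONS = {
    "hr_specialist":    {"read_personnel_file", "update_personnel_file", "send_email"},
    "contract_officer": {"read_contract", "approve_invoice", "send_email"},
}

def flag_deviations(role: str, events: list) -> list:
    """Return every event whose action is not in the role's expected set."""
    expected = EXPECTED_ACTIONS.get(role, set())
    return [e for e in events if e["action"] not in expected]

observed = [
    {"user": "jdoe", "action": "send_email"},
    {"user": "jdoe", "action": "bulk_download_personnel_files"},   # outside the HR profile
]

for alert in flag_deviations("hr_specialist", observed):
    print("WARNING: unexpected action for role:", alert["action"])
```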

As Christopher Wlaschin, CISO at the Department of Health and Human Services, says: “A healthy distrust is a good thing.”

Unpleasant Design Could Encourage Better Cyber Hygiene

Recent revelations that service members and intelligence professionals are inadvertently giving up their locations and fitness patterns via mobile apps caught federal agencies by surprise.

The surprise wasn’t that Fitbits, smartphones or workout apps try to collect information, nor that some users ignore policies reminding them to watch their privacy and location settings. The real surprise is that many IT policies aren’t doing more to help stop such inadvertent fitness data leaks.

If even fitness-conscious military and intelligence personnel are unknowingly trading security and privacy for convenience, how can IT security managers increase security awareness and compliance?

One answer: Unpleasant design.

Unpleasant design is a proven technique for using design to discourage unwanted behavior. Ever get stuck in an airport and long for a place to lie down — only to find every bench or row of seats is fitted with armrests? That’s no accident. Airports and train terminals don’t want people sleeping across benches. Or consider the decorative metalwork sometimes placed on urban windowsills or planter walls — designed expressly to keep loiterers from sitting down. It’s the same with harsh lights in suburban parking lots, which discourage people from hanging out and make it harder for criminals to lurk in the shadows.

As the federal government and other agency IT security leaders investigate these inadvertent disclosures, can they employ those same concepts to encourage better cyber behavior?

Here’s how unpleasant design might apply to federally furnished Wi-Fi networks: Rather than allow access with only a password, users might be required to have their Internet of Things (IoT) devices pass a screening that enforces certain security settings. That screening could include ensuring location services are disabled while such devices are connected to government-provided networks.

Employees would then have to choose between the convenience of free Wi-Fi for personal devices and the risks of inadequate operations security (OPSEC) via insecure device settings.
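
Here is a minimal sketch of such a screening check, assuming an agency-defined posture policy; the required settings below are examples, not an actual policy.

```python
# Hypothetical "unpleasant design" Wi-Fi admission check: a personal device is
# screened against required settings before it gets network access. The
# required values are examples only, not an actual agency policy.

REQUIRED_POSTURE = {
    "location_services": False,   # must be off while on the government network
    "os_patched": True,
    "screen_lock": True,
}

def admit_to_wifi(device_report: dict) -> bool:
    """Deny access unless every required setting matches the policy."""
    return all(device_report.get(k) == v for k, v in REQUIRED_POSTURE.items())

fitness_watch = {"location_services": True, "os_patched": True, "screen_lock": True}
print(admit_to_wifi(fitness_watch))   # False -- location sharing must be disabled first
```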

This, of course, only works where users have access to such networks. At facilities where personal devices must be deposited in lockers or left in cars, it won’t make a difference. But for users working (and living) on installations where personnel routinely access Wi-Fi networks, this could be highly effective.

Screening – and even blocking – certain apps or domains could be managed through a cloud access security broker, network security management software that can enforce locally set rules governing apps actively using location data or posing other security risks. Network managers could whitelist acceptable apps and settings, while blocking those deemed unacceptable. If agencies already do that for their wired networks, why not for wireless?

Inconvenient? Absolutely. That’s the point.

IT security staffs are constantly navigating the optimal balance between security and convenience. Perfect security is achievable only when nothing is connected to anything else. Each new connection and additional convenience introduces another dent in the network’s armor.

Employing cloud-access security as a condition of Wi-Fi network access will impinge on some conveniences. In most cases, truly determined users can work around those rules by using local cellular data access instead. In much of the world, however – including the places where the need for OPSEC is greatest – that access comes with a direct cash cost. When users pay for data by the megabyte, they’re much more likely to give up some convenience, check security and privacy settings, and limit their data consumption.

This too, is unpleasant design at work. Cellular network owners must balance network capacity with use. Lower-capacity networks control demand by raising prices, knowing that higher priced data discourages unbridled consumption.

Training and awareness will always be the most important factors in securing privacy and location data, because few users are willing to wade through pages-long user agreements to discover what’s hidden in the fine print and legalese they contain. More plain language and simpler settings for opting in or out of certain kinds of data sharing are needed – and app makers must recognize that failing to provide them only increases the risk that government steps in with new regulations.

But training and awareness only go so far. People still click on bad links, which is why some federal agencies automatically disable them. It makes users take a closer, harder look and think twice before clicking. That too, is unpleasant design.

So is requiring users to wear a badge that doubles as a computer access card (as is the case with the Pentagon’s Common Access Card and most Personal Identity Verification cards). Yet, knowing that some will inevitably leave the cards in their computers, such systems automatically log off after only a few minutes of inactivity. It’s inconvenient, but more secure.

We know this much: Human nature is such that people will take the path of least resistance. If that means accepting security settings that aren’t safe, that’s what’s going to happen. Interrupting that convenience and turning it on its head by means of Wi-Fi security won’t stop everyone. But it might have prevented Australian undergrad Nathan Ruser – and who knows who else – from identifying the regular jogging routes of military members (among other examples) from Strava’s in-house heat map and the 13 trillion GPS points it collected from users.

“If soldiers use the app like normal people do,” Ruser tweeted Jan. 27, “it could be especially dangerous … I shouldn’t be able to establish any pattern of life info from this far away.”

Exactly.

How Feds Are Trying to Bring Order to Blockchain Mania

Blockchain hype is at a fever pitch. The distributed ledger technology is hailed as a cure for everything from identity management to electronic health records and securing the Internet of Things. Blockchain provides a secure, reliable record of transactions between independent parties, entities or companies. There are industry trade groups, a Congressional Blockchain Caucus and frequent panel discussions to raise awareness.
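
The core idea behind that distributed ledger is hash chaining: each block commits to the hash of the block before it, so no one can quietly rewrite an earlier record. The toy Python sketch below shows only that mechanism; it is not a production blockchain and omits consensus, networking and digital signatures.

```python
# Toy hash-chained ledger: each block includes the previous block's hash, so
# any retroactive edit breaks every later link. Teaching sketch only -- no
# consensus protocol, peer-to-peer networking or signatures.

import hashlib, json, time

def block_hash(block: dict) -> str:
    body = {k: block[k] for k in ("timestamp", "records", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(records: list, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):                       # record altered later
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:     # broken link
            return False
    return True

genesis = make_block(["ledger opened"], prev_hash="0" * 64)
block1 = make_block(["part 1234 shipped to depot A"], prev_hash=genesis["hash"])
chain = [genesis, block1]
print(chain_is_valid(chain))        # True
genesis["records"] = ["tampered"]   # any retroactive edit...
print(chain_is_valid(chain))        # ...is detected: False
```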

Federal agencies are plunging ahead, both on their own and in concert with the General Services Administration’s Emerging Citizen Technology Office (GSA ECTO). The office groups blockchain with artificial intelligence and robotic automation, social and collaborative technologies, and virtual and augmented reality as its four most critical technologies. Its goal: Develop use cases and roadmaps to hasten government adoption and success with these new technologies.

“There’s a number of people who assume that fed agencies aren’t looking at things like blockchain,” Justin Herman, emerging technology lead and evangelist at GSA ECTO, told a gathering at the State of the Net Conference held Jan. 29 in Washington, D.C. “We got involved in blockchain because there were so many federal agencies coming to the table demanding government wide programs to explore the technology. People had already done analysis on what specific use cases they thought they had and wanted to be able to invest in it.”

Now his office is working with more than 320 federal, state and local agencies interested in one or more of its four emerging tech categories. “A lot of that is blockchain,” Herman said. “Some have already done successful pilots. We hear identity management, supply chain management…. We should be exploring those things together, not in little silos, not in walled gardens, but in public.”

Among those interested:

  • The Joint Staff’s J4 Logistics Directorate and the Deputy Assistant Secretary of Defense for Maintenance, Policy and Programs are collaborating on a project to create a digital supply chain, enabled by additive manufacturing (also known as 3-D printing). Blockchain’s role would be to secure the integrity of 3-D printing files, seen as “especially vulnerable to cyber threats and intrusions.” The Navy is looking at the same concept. “The ability to secure and securely share data throughout the manufacturing process (from design, prototyping, testing, production and ultimately disposal) is critical to Additive Manufacturing and will form the foundation for future advanced manufacturing initiatives,” writes Lt. Cmdr. Jon McCarter, a member of the Fiscal 2017 Secretary of the Navy Naval Innovation Advisory Council (NIAC).
  • The Office of the Undersecretary of Defense for Acquisition, Technology and Logistics (OUSD (AT&L)) Rapid Reaction Technology Office (RRTO) has similar designs on blockchain, seeing it as a potential solution for ensuring data provenance, according to a solicitation published Jan. 29.
  • The Centers for Disease Control’s Center for Surveillance, Epidemiology and Laboratory Services is interested in using blockchain for public health tracking, such as maintaining a large, reliable, current and shared database of opioid abuse or managing health data during crises. Blockchain’s distributed ledger system ensures that when one user updates the chain, everyone sees the same data, solving a major shortfall today, when researchers are often working with different versions of the same or similar data sets, rather than the same, unified data.
  • The U.S. Food and Drug Administration has similar interests in sharing health data for large-scale clinical trials.
  • The Office of Personnel Management last fall sought ideas for how to create a new consolidated Employee Digital Record that would track an employee’s skills, performance and experience over the course of an entire career, using blockchain as a means to ensure records are up to date and to speed the process of transfers from one agency to another.

Herman sees his mission as bringing agencies together so they can combine expertise and resources and more quickly make progress. “There are multiple government agencies right now exploring electronic health records with blockchain,” he said. “But we can already see the hurdles with this because they are separate efforts, so we’re adding barriers. We’ve got to design new and better ways to move across agencies, across bureaucracies and silos, to test, evaluate and adopt this technology. It should be eight agencies working together on one pilot, not eight separate pilots on one particular thing.”

The Global Blockchain Business Council (GBBC) is an industry group advocating for blockchain technology and trying to take a similar approach in the commercial sector to what GSA is doing in the federal government. “We try to break down these traditionally siloed communities,” said Mercina Tilleman-Dick, chief operating officer for the GBBC.

These days, that means trying to get people together to talk about standards and regulation and connecting those who are having success with others just beginning to think about such issues. “Blockchain is not going to solve every problem,” Tilleman-Dick said. But it could prove effective in a range of use cases where secure, up-to-date, public records are essential.

Take property records, for example. The Republic of Georgia moved all its land titles onto a blockchain-based system in 2017, Sweden is exploring the idea and the city of South Burlington, Vt., is working on a blockchain pilot for local real estate transactions. Patrick Byrne, founder of Overstock.com and its subsidiary Medici Ventures, announced in December he’s funding a startup expressly to develop a global property registry system using blockchain technology.

“I think over the next decade it will fundamentally alter many of the systems that power everyday life,” GBBC’s Tilleman-Dick said.

“Blockchain has the potential to revolutionize all of our supply chains. From machine parts to food safety,” said Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology. “We will be able to look up the provenance and history of an item, ensuring it is genuine and tracing the life of its creation along the supply chain.

“Secure entries, immutable and created throughout the life of an object, allow for secure sourcing, eliminate fraud, forgeries and ensure food safety,” Gadwale said. “Wal-Mart has already begun trials of blockchain with food safety in mind.”

Hilary Swab Gawrilow, legislative director and counsel in the office of Rep. Jared Polis (D-Colo.), who is among the Congressional Blockchain Caucus leaders, said the government needs to do more to facilitate understanding of the technology. The rapid rise in the value of bitcoin, and the wild fluctuations and speculation in digital cryptocurrencies generally, have done much to raise awareness. Yet that does not necessarily instill confidence in the concepts behind blockchain and distributed ledger technology.

“There are potential government applications or programs that deserve notice and study,” Swab Gawrilow said.

Identity management is a major challenge for agencies today. In citizen engagement, citizens may have accounts with multiple agencies. Finding a way to verify status without having to build complicated links between disparate systems to enable benefits or confirm program eligibility would be valuable. The same is true for program accountability. “Being able to verify transactions would be another great way to use blockchain technology.”

That’s where the caucus is coming from: A lot of this is around education. Lawmakers have all heard of bitcoin, whether in a positive or negative way. “They understand what it is,” Gawrilow said. “But they don’t necessarily understand the underlying technology.” The caucus’ mission is to help inform the community.

Like GSA’s Herman, Gawrilow favors agency collaboration on new technology projects and pilots. “HHS did a hackathon on blockchain. The Postal Service put out a paper, and State is doing something. DHS is doing something. It’s every agency almost,” she said. “We’ve kicked around the idea of asking the administration to start a commission around blockchain.”

That, in turn, might surface issues requiring legislative action – “tweaks to the law” that underlie programs, such as specifications on information access, or a prescribed means of sharing or verifying data. That’s where lawmakers could be most helpful.

Herman, for his part, sees GSA as trying to fill that role, and to fill it in such a way that his agency can tie together blockchain and other emerging and maturing technologies. “It’s not the technology, it’s the culture,” he said. “So much in federal tech is approached as some zero-sum game, that if an agency is dedicating time to focus and investigate a technology like blockchain, people freak out because they’re not paying attention to cloud or something else.”

Agencies need to pool resources and intelligence, think in terms of shared services and shared approaches, break down walls and look holistically at their challenges to find common ground.

That’s where the payoff will come. Otherwise, Herman asks, “What does it matter if the knowledge developed isn’t shared?”

Relocatable Video Surveillance Systems Give CBP Flexibility on Border

Illegal border crossings fell to their lowest level in at least five years in 2017, but after plunging through April, the numbers have risen each of the past eight months, according to U.S. Customs and Border Protection (CBP).

Meanwhile, the debate continues: Build a physical wall spanning from the Gulf of Mexico to the Pacific Ocean, add more Border Patrol agents or combine better physical barriers with technology to stop drug trafficking, smuggling and illegal immigration?

Increasingly, however, it’s clear no one solution is right for everyplace. Ron Vitiello, acting deputy commissioner at CBP, said the agency intends to expand on the existing 652 miles of walls and fencing now in place – but not necessarily extend the wall the entire length of the border.

“We’re going to add to fill some of the gaps we didn’t get in the [previous] laydown, and then we’re going to prioritize some new wall [construction] across the border in places where we need it the most,” he said in a Jan. 12 TV interview.

Walls and barriers are a priority, Vitiello said in December at a CBP press conference. “In this society and all over our lives, we use walls and fences to protect things,” he said. “It shouldn’t be any different on the border.…  But we’re still challenged with access, we’re still challenged with situational awareness and we’re still challenged with security on that border. We’re still arresting nearly 1,000 people a day.

“So we want to have more capability: We want more agents, we want more technology and we want that barrier to have a safer and more secure environment.”

Among the needs: Relocatable Remote Video Surveillance Systems (R-RVSS) that can be picked up and moved to where they’re needed most as border activity ebbs and flows in response to CBP’s border actions.

CBP mapped its fencing against its 2017 apprehension record in December (see map), finding that areas with physical fencing, such as near the metropolitan centers of San Diego/Tijuana, Tucson/Nogales and El Paso/Juarez, are just as likely to see illegal migration activity as unfenced areas like Laredo/Nuevo Laredo.

[Map: CBP border fencing mapped against 2017 apprehensions. Source: U.S. Customs and Border Protection]

Rep. Will Hurd (R-Tex.), vice chairman of the House Homeland Security subcommittee on Border and Maritime Security, is an advocate for technology as both a complement to and an alternative to physical walls and fences. “A wall from sea to shining sea is the least effective and most expensive solution for border security,” he argued Jan. 16. “This is especially true in areas like Big Bend National Park, where rough terrain, natural barriers and the remoteness of a location render a wall or other structure impractical and ineffective.”

CBP has successfully tested and deployed video surveillance systems to enhance situational awareness on the border and help Border Patrol agents track and respond to incursions. These RVSS systems use multiple day and night sensors mounted on poles to create an advance warning and tracking system identifying potential border-crossing activity. Officers can monitor those sensor feeds remotely and dispatch agents as needed.

Savvy smugglers are quick to adjust when CBP installs new technologies, shifting their routes to less-monitored areas. The new, relocatable RVSS systems (R-RVSS) make it easy for CBP to respond in kind, forcing smugglers and traffickers to constantly adapt.

Robert Gilbert, a former Border Patrol sector chief at CBP and now a senior program director for RVSS at systems integrator General Dynamics Information Technology (GDIT), says relocatable systems will empower CBP with new tools and tactics. “Over the past 20 or 30 years, DOJ then CBP has always deployed technology into the busiest areas along the border, the places with the most traffic. In reality, because of the long procurement process, we usually deployed too late as the traffic had shifted to other locations on the border. The big difference with this capability is you can pick it up and move it to meet the evolving threat. The technology can be relocated within days.”

GDIT fielded a three-tower system in CBP’s Laredo (Texas) West area last summer and a similar setup in McAllen, Texas, in December. The towers – set two to five miles apart – were so effective, CBP is now preparing to buy up to 50 more units to deploy in the Rio Grande sector, where the border follows the river through rugged terrain. There, a physical wall may not be viable, while a technology-based virtual wall could prove highly effective.

Each tower includes an 80-foot-tall collapsible pole that can support a sensor and communications payload weighing up to 2,000 pounds. That capacity far exceeds current needs, but it provides a growth path for adding sensors or communications gear if requirements change later on.

When CBP wants to move a unit, the pole is collapsed, the sensors are packed away and a standard 3/4- or 1-ton pickup truck can haul it to its next location.

Roughly two-thirds of the U.S.-Mexico border runs through land not currently owned by the federal government, a major hurdle when it comes to building permanent infrastructure like walls or even fixed-site towers. Land acquisition would add billions to the cost even if owners agree to the sale. Where owners decline, the government might still be able to seize the land under the legal procedure known as eminent domain, but such cases can take years to resolve.

By contrast, R-RVSS requires only a temporary easement from the land owner. Site work is bare bones: no concrete pad, just a cleared area measuring roughly 40 feet by 40 feet. It need not be level – the R-RVSS system is designed to accommodate slopes up to 10 degrees. Where grid power is unavailable – likely in remote areas – a generator or even a hydrogen fuel cell can produce needed power.

What’s Coming Next
CBP seeks concepts for a Modular Mobile Surveillance System (M2S2) similar to RVSS, which would provide the Border Patrol with an even more rapidly deployable system for detecting, identifying, classifying and tracking “vehicles, people and animals suspected of unlawful border crossing activities.”

More ambitiously, CBP also wants such systems to incorporate data science and artificial intelligence to add a predictive capability. The system would “detect, identify, classify, and track equipment, vehicles, people, and animals used in or suspected of unlawful border crossing activities,” and employ AI to help agents anticipate their direction so they can quickly respond, and resolve each situation.

At the same time, CBP is investigating RVSS-like systems for coastal areas. Pole-mounted systems deployed there would train their sensors on coastal waters, where smugglers in small boats seek to exploit the shallows by operating close to shore, rather than the deeper waters patrolled by Coast Guard and Navy ships.

In a market research request CBP floated last June, the agency described a Remote Surveillance System Maritime (RSS-M) as “a subsystem in an overall California Coastal Surveillance demonstration.” The intent: to detect, track, identify, and classify surface targets of interest, so the Border Patrol and partner law enforcement agencies can interdict such threats.

Legislating Tech
Rep. Hurd, Rep. Peter Aguilar (D-Calif.) and a bipartisan group of 49 other members of Congress support the “Uniting and Securing America Act of 2017,” or “USA Act.” The measure includes a plan to evaluate every mile of the U.S.-Mexico border to determine the best security solution for each. After weeks of Senate wrangling over immigration matters, Sens. John McCain (R-Ariz.) and Chris Coons (D-Del.) offered a companion bill in the Senate on Feb. 5.

With 820 miles of border in his district, Hurd says, few in Congress understand the border issue better than he – or feel it more keenly.

“I’m on the border almost every weekend,” he said when unveiling the proposal Jan. 16. The aim: “Full operational control of our border by the year 2020,” Hurd told reporters. “We should be able to know who’s going back and forth across our border. The only way we’re going to do that is by border technologies.” And in an NPR interview that day, he added: “We should be focused on outcomes. How do we get operational control of that border?”

The USA Act would require the Department of Homeland Security to “deploy the most practical and effective technology available along the United States border for achieving situational awareness and operational control of the border” by Inauguration Day 2021, including radar surveillance systems; Vehicle and Dismount Exploitation Radars (VADER); three-dimensional, seismic acoustic detection and ranging border tunneling detection technology; sensors, unmanned cameras, drone aircraft and anything else that proves more effective or advanced. The technology is seen as complementing and supporting hard infrastructure.

CDM Program Starts to Tackle Complexities of Cloud

The Trump administration’s twin priorities for federal information technology – improved cybersecurity and modernized federal systems – impose a natural tension: How to protect a federal architecture that is rapidly changing as agencies push more and more systems into the cloud.

The Department of Homeland Security’s (DHS) Continuous Diagnostics and Mitigation (CDM) program’s early phases focus on understanding what systems are connected to federal networks and who has access to those systems. The next phases – understanding network activity and protecting federal data itself – will pose stiffer challenges for program managers, chief information security officers and systems integrators developing CDM solutions.

Figuring out how to monitor systems in the cloud – and how to examine and protect data there – is a major challenge that is still being worked out, even as more and more federal systems head that way.

“Getting that visibility into the cloud is critical,” says DHS’s CDM Program Manager Kevin Cox. Establishing a Master Device Record, which recognizes all network systems, and establishing a Master User Record, which identifies all network users, were essentially first steps, he told a gathering of government security experts at the ATARC Chief Information Security Officer Summit Jan. 25. “Where we’re headed is to expand out of the on-premise network and go out to the boundary.”

As federal systems move into the cloud, DHS wants CDM to follow – and to have just as much visibility and understanding of that part of the federal information technology ecosystem as it has for systems in government data centers. “We need to make sure we know where that data is, and understand how it is protected,” Cox says.

Eric White, cybersecurity program director at General Dynamics Information Technology (GDIT) Health and Civilian Solutions Division, has been involved with CDM almost from its inception. “As agencies move their data and infrastructures from on premise into these virtualized cloud environments, frequently what we see is the complexity of managing IT services and capabilities increasing between on-premise legacy systems and the new cloud solutions. It creates additional challenges for cybersecurity writ large, but also specifically, CDM.”

Combining virtualized and conventional legacy systems is an integration challenge, “not just to get the two to interact effectively, but also to achieve the situational awareness you want in both environments,” White says. “That complexity is something that can impact an organization.”

The next phase of CDM starts with a network of sensors monitoring “what is happening on the network,” including watching for defects – differences between the “desired state” and the “actual state” of the device configurations that determine network health and security. In a closed, on-premise environment, it’s relatively easy to monitor all those activities, because a network manager controls all the settings.
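
In essence, that desired-state comparison is a diff between a policy baseline and what each device actually reports. The sketch below illustrates the idea; the settings and values are invented and do not reflect any actual CDM baseline.

```python
# Simplified illustration of "desired state" vs. "actual state" checking:
# compare a device's reported configuration against a policy baseline and
# report every defect. Settings and values are invented examples.

DESIRED_STATE = {
    "os_patch_level": "2018-01",
    "disk_encryption": "enabled",
    "antivirus": "running",
    "telnet_service": "disabled",
}

def find_defects(actual_state: dict) -> dict:
    """Return {setting: (desired, actual)} for every mismatch."""
    return {
        setting: (desired, actual_state.get(setting, "missing"))
        for setting, desired in DESIRED_STATE.items()
        if actual_state.get(setting) != desired
    }

reported = {"os_patch_level": "2017-09", "disk_encryption": "enabled",
            "antivirus": "running", "telnet_service": "enabled"}

for setting, (want, got) in find_defects(reported).items():
    print(f"DEFECT {setting}: desired={want}, actual={got}")
```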

But as agencies incorporate virtualized services, such as cloud-based email or office productivity software, new complexities are introduced. Those services can incorporate their own set of security and communications standards and protocols. They may be housed in multi-tenant environments and implemented with proprietary security capabilities and tools. In some cases, these implementations may not be readily compatible with federal continuous monitoring solutions.

The Report to the President on Federal IT Modernization describes the challenges faced in trying to combine existing cyber defenses with new cloud and mobile architectures. DHS’s National Cybersecurity Protection System (NCPS), which includes both the EINSTEIN cyber sensors and a range of cyber analytic tools and protection technologies, provides value, the report said, but its capabilities “are not enough to combat the full spectrum of advanced persistent threats that rapidly change the attack vectors, tactics, techniques and procedures.”

DHS began a cybersecurity architectural review of federal systems last year, building on a similar Defense Department effort by the Defense Information Systems Agency, which conducted the NIPRNET/SIPRNET Cybersecurity Architecture Review (NSCSAR) in 2016 and 2017. Like NSCSAR, the new .Gov Cybersecurity Architecture Review (.GovCAR) intends to take an adversary’s-eye-view of federal networks in order to identify and fix exploitable weaknesses in the overall architecture. In a massively federated arrangement like the federal government’s IT system, that will be a monumental effort.

Cox says the .GovCAR review will also “layer in threat intelligence, so we can evaluate the techniques and technologies we use to see how those technologies are helping us respond to the threat.”

“Ultimately, if the analysis shows our current approach is not optimal, they will look at proposing more optimal approaches,” he says. “We’re looking to be nimble with the CDM program to support that effort.”

The rush to implement CDM as a centrally funded but locally deployed system of systems means the technology varies from agency to agency and implementation to implementation. Meanwhile, agencies have also proceeded with their own modernization and consolidation efforts. So among the pressing challenges is figuring out how to get those sensors and protection technologies to look at federal networks holistically. The government’s network perimeter is no longer a contiguous line. Cloud-based systems are still part of the network, but the security architecture may be completely different, with complex encryption that presents challenges to CDM monitoring technologies almost as effectively as it blocks adversaries.

“Some of these sensors on the network don’t operate too well when they see data in the wrong format,” White explains. “If you’re encrypting data and the sensors aren’t able to decipher it, those sensors won’t return value.”

There won’t be a single answer to solving that riddle. “What you’re trying to do is gather visibility in the cloud, and this requires that you be proactive in working with your cloud service providers,” White says. “You have to understand what they provide, what you are responsible for, what you will have a view of and what you might not be able to see. You’re going to have to negotiate to be compliant with federal FISMA requirements and local security risk thresholds and governance.”

Indeed, Cox points out, “There’s a big push to move more federal data out to the cloud; we need to make sure we know where that data is, and understand how it is protected.” Lapses do occur.
“There have been cases where users have moved data out to the cloud, there was uncertainty as to who is configuring the protections on that data, whether the cloud service provider or the user, and because of that uncertainty, the data was left open for others – or adversaries – to view it,” Cox says.

Addressing that issue will be a critical piece of CDM’s Phase 3, and Phase 4 will go further in data protection, Cox says: “It gets into technologies like digital rights management, data loss prevention, architecturally looking at things like microsegmentation, to ensure that – if there is a compromise – we can keep it isolated.”

Critics have questioned the federal government’s approach, focusing on the network first rather than the data. But Cox defends the strategy: “There was such a need to get some of these foundational capabilities in place – to get the basic visibility – that we had to start with Phase 1 and Phase 2, we had to understand what the landscape looked like, what the user base looked like, so we would then know how to protect the data wherever it was.”

“Now we’re really working to get additional protections to make sure that we will have better understanding if there is an incident and we need to respond, and better yet, keep the adversary off the network completely.”

The CDM program changed its approach last year, rolling out a new acquisition vehicle dubbed CDM DEFEND, which leverages task orders under the Alliant government-wide acquisition contract (GWAC), rather than the original “peanut butter spread” concept. “Before, we had to do the full scope of all the deployments everywhere in a short window,” he says, adding that now, “We can turn new capabilities much more quickly.”

Integrators are an essential partner in all of this, White says, because they have experience with the tools, experience with multiple agencies and the technical experience, skills and knowledge to help ensure a successful deployment. “The central tenet of CDM is to standardize how vulnerabilities are managed across the federal government, how they’re prioritized and remediated, how we manage the configuration of an enterprise,” he says. “It’s important to not only have a strategy at the enterprise level, but also at the government level, and to have an understanding of the complexity beyond your local situation.”

Ultimately, a point solution is always easier than an enterprise solution, and an enterprise solution is always easier than a multi-enterprise solution. Installing cyber defense tools for an installation of 5,000 people is relatively easy – until you have to make that work with a government-wide system that aims to collect and share threat data in a standardized way, as CDM aims to do.

“You have to take a wider, broader view,” says Stan Tyliszczak, chief engineer at GDIT. “You can’t ignore the complex interfaces with other government entities because when you do, you risk opening up a whole lot of back doors into sensitive networks. It’s not that hard to protect the core of the network – the challenge is in making sure the seams are sewn shut. It’s the interfaces between the disparate systems that pose great risk. Agencies have been trying to solve this thing piece by piece, but when you do that you’re going to have cracks and gaps. And cracks and gaps lead to vulnerabilities. You need to take a holistic approach.”

Agency cyber defenders are all in. Mittal Desai, CISO at the Federal Energy Regulatory Commission (FERC), says his agency is in the process of implementing CDM Phase 2, and looks forward to the results. “We’re confident that once we implement those dashboards,” he says, “it’s going to help us reduce our meantime to detect and our meantime to respond to threats.”

Do Spectre, Meltdown Threaten Feds’ Rush to the Cloud?

As industry responds to the Spectre and Meltdown cyber vulnerabilities, issuing microcode patches and restructuring the way high-performance microprocessors handle speculative execution, the broader fallout remains unclear: How will IT customers respond?

The realization that virtually every server installed over the past decade, along with millions of iPhones, laptops and other devices, is exposed is one thing; the risk that hackers can exploit these techniques to leak passwords, encryption keys or other data across virtual security barriers in cloud-based systems is another.

For a federal IT community racing to modernize, shut down legacy data centers and migrate government systems to the cloud, worries about data leaks raise new questions about the security of placing data in shared public clouds.

“It is likely that Meltdown and Spectre will reinforce concerns among those worried about moving to the cloud,” said Michael Daniel, president of the Cyber Threat Alliance, who was a special assistant to President Obama and the National Security Council’s cybersecurity coordinator until January 2017.

“But the truth is that while those vulnerabilities do pose risks – and all clients of cloud service providers should be asking those providers how they intend to mitigate those risks – the case for moving to the cloud remains overwhelming. Overall, the benefits still far outweigh the risks.”

Adi Gadwale, chief enterprise architect for systems integrator General Dynamics Information Technology (GDIT), says the risks are greater in public cloud environments where users’ data and applications can be side by side with that of other, unrelated users. “Most government entities use a government community cloud where there are additional controls and safeguards and the only other customers are public sector entities,” he says. “This development does bring out some of the deepest cloud fears, but the vulnerability is still in the theoretical stage. It’s important not to overreact.”

How Spectre and Meltdown Work
Spectre and Meltdown both take advantage of speculative execution, a technique designed to speed up computer processing by allowing a processor to start executing instructions before completing the security checks necessary to ensure the action is allowed, Gadwale says.

“Imagine we’re in a track race with many participants,” he explains. “Some runners start too quickly, just before the gun goes off. We have two options: Stop the runners, review the tapes and disqualify the early starters, which might be the right thing to do but would be tedious. Or let the race complete and then afterward, discard the false starts.

“Speculative execution is similar,” Gadwale continues. “Rather than leave the processor idle, operations are completed while memory and security checks happen in parallel. If the process is allowed, you’ve gained speed; if the security check fails, the operation is discarded.”

This is where Spectre and Meltdown come in. By executing code speculatively and then exploiting what happens by means of shared memory mapping, hackers can get a sneak peek into system processes, potentially exposing very sensitive data.

“Every time the processor discards an inappropriate action, the timing and other indirect signals can be exploited to discover memory information that should have been inaccessible,” Gadwale says. “Meltdown exposes kernel data to regular user programs. Spectre allows programs to spy on other programs, the operating system and on shared programs from other customers running in a cloud environment.”

The technique was exposed by a number of different research groups all at once, including Jann Horn, a researcher with Google’s Project Zero, and researchers at Cyberus Technology, Graz University of Technology, the University of Pennsylvania, the University of Maryland and the University of Adelaide.

The fact that so many researchers were researching the same vulnerability at once – studying a technique that has been in use for nearly 20 years – “raises the question of who else might have found the attacks before them – and who might have secretly used them for spying, potentially for years,” writes Andy Greenberg in Wired. But speculation that the National Security Agency might have utilized the technique was shot down last week when former NSA offensive cyber chief Rob Joyce (Daniel’s successor as White House cybersecurity coordinator) said NSA would not have risked keeping hidden such a major flaw affecting virtually every Intel processor made in the past 20 years.

The Vulnerability Notes Database operated by the CERT Division of the Software Engineering Institute, a federally funded research and development center at Carnegie Mellon University sponsored by the Department of Homeland Security, calls Spectre and Meltdown “cache side-channel attacks.” CERT explains that Spectre takes advantage of a CPU’s branch prediction capabilities. When a branch is incorrectly predicted, the speculatively executed instructions will be discarded, and the direct side-effects of the instructions are undone. “What is not undone are the indirect side-effects, such as CPU cache changes,” CERT explains. “By measuring latency of memory access operations, the cache can be used to extract values from speculatively-executed instructions.”

Meltdown, on the other hand, leverages an ability to execute instructions out of their intended order to maximize available processor time. If an out-of-order instruction is ultimately disallowed, the processor negates those steps. But the results of those failed instructions persist in cache, providing a hacker access to valuable system information.
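
The timing signal at the heart of these attacks can be illustrated with a conceptual simulation. Real exploits measure actual CPU cache latencies from native code using a flush-and-probe pattern; the Python toy below only simulates the fast-hit/slow-miss difference to show how a secret value can be inferred from timing alone.

```python
# Conceptual simulation of a cache timing side channel. Real Spectre/Meltdown
# exploits time actual CPU cache accesses in native code; this toy simulates
# the fast-hit / slow-miss signal to show how timing alone leaks a value.

import time

class ToyCache:
    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def access(self, line: int) -> float:
        """Return simulated access latency: fast if cached, slow if not."""
        start = time.perf_counter()
        if line not in self.lines:
            time.sleep(0.002)          # simulated cache-miss penalty
            self.lines.add(line)
        return time.perf_counter() - start

cache = ToyCache()
SECRET = 42                            # value the "victim" touches speculatively

cache.flush()                          # 1. attacker flushes the probe lines
cache.access(SECRET)                   # 2. victim's speculative access caches one line
timings = {line: cache.access(line)    # 3. attacker times every line; the fast one
           for line in range(256)}     #    reveals the secret
recovered = min(timings, key=timings.get)
print("recovered value:", recovered)   # 42
```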

Emerging Threat
It’s important to understand that there are no verified instances where hackers actually used either technique. But with awareness spreading fast, vendors and operators are moving as quickly as possible to shut both techniques down.

“Two weeks ago, very few people knew about the problem,” says CTA’s Daniel. “Going forward, it’s now one of the vulnerabilities that organizations have to address in their IT systems. When thinking about your cyber risk management, your plans and processes have to account for the fact that these kinds of vulnerabilities will emerge from time to time and therefore you need a repeatable methodology for how you will review and deal with them when they happen.”

The National Cybersecurity and Communications Integration Center, part of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team, advises close consultation with product vendors and support contractors as updates and defenses evolve.

“In the case of Spectre,” it warns, “the vulnerability exists in CPU architecture rather than in software, and is not easily patched; however, this vulnerability is more difficult to exploit.”

Vendors Weigh In
Closing up the vulnerabilities will impact system performance, with estimates varying depending on the processor, operating system and applications in use. Intel reported Jan. 10 that performance hits were relatively modest – between 0 and 8 percent – for desktop and mobile systems running Windows 7 and Windows 10. Less clear is the impact on server performance.

Amazon Web Services (AWS) recommends customers patch their instance operating systems to prevent the possibility of software running within the same instance leaking data from one application to another.

Apple sees Meltdown as a more likely threat and said its mitigations, issued in December, did not affect performance. It said Spectre exploits would be extremely difficult to execute on its products, but could potentially leverage JavaScript running on a web browser to access kernel memory. Updates to the Safari browser to mitigate against such threats had minimal performance impacts, the company said.

GDIT’s Gadwale said performance penalties may be short lived, as cloud vendors and chipmakers respond with hardware investments and engineering changes. “Servers and enterprise class software will take a harder performance hit than desktop and end-user software,” he says. “My advice is to pay more attention to datacenter equipment. Those planning on large investments in server infrastructure in the next few months should get answers to difficult questions, like whether buying new equipment now versus waiting will leave you stuck with previous-generation technology. Pay attention: If the price your vendor is offering is too good to be true, check the chipset!”

Bypassing Conventional Security
The most ominous element of the Spectre and Meltdown attack vectors is that they bypass conventional cybersecurity approaches. Because the exploits don’t have to successfully execute code, the hackers’ tracks are harder to detect.

Says CTA’s Daniel: “In many cases, companies won’t be able to take the performance degradation that would come from eliminating speculative processing. So the industry needs to come up with other ways to protect against that risk.” That means developing ways to “detect someone using the Spectre exploit or block the exfiltration of information gleaned from using the exploit,” he added.

Longer term, Daniel suggested that these latest exploits could be a catalyst for moving to a whole different kind of processor architecture. “From a systemic stand-point,” he said, “both Meltdown and Spectre point to the need to move away from the x86 architecture that still undergirds most chips, to a new, more secure architecture.”
