Secure, High-Quality Software Doesn’t Happen By Accident

Time, cost and security are all critical factors in developing new software. Late delivery can undermine the mission; rising costs can jeopardize programs; security breaches and system failures can disrupt entire institutions. Yet systematically reviewing system software for quality and security is far from routine.

“People get in a rush to get things built,” says Bill Curtis, founding executive director of the Consortium for IT Software Quality (CISQ), where he leads the development of automatable standards that measure software size and quality. “They’re either given schedules they can’t meet or the business is running around saying … ‘The cost of the damage on an outage or a breach is less than what we’ll lose if we don’t get this thing out to market.’”

In the government context, pressures can arise from politics and public attention, as well as contract and schedule.

It shouldn’t take “a nine-digit defect – a defect that goes over 100 million bucks – to change attitudes,” Curtis says. But sometimes that’s what it takes.

Software defects and vulnerabilities come in many forms. The Common Weakness Enumeration (CWE) lists more than 700 types of security weaknesses, organized into categories such as “Insecure Interaction Between Components” and “Risky Resource Management.” CWE’s list draws on contributions from participants ranging from Apple and IBM to the National Security Agency and the National Institute of Standards and Technology.

By defining these weaknesses, CWE – and its sponsor, the Department of Homeland Security’s Office of Cybersecurity and Communications – seek to raise awareness about bad software practices by:

  • Defining common language for describing software security weaknesses in architecture, design, or code
  • Developing a standard measuring stick for software security tools targeting such weaknesses
  • Providing a common baseline for identifying, mitigating and preventing weaknesses

Software weaknesses can include inappropriate linkages, defunct code that remains in place (at the risk of being accidentally reactivated later) and avoidable flaws – known vulnerabilities that nonetheless find their way into source code.

“We’ve known about SQL injections [as a security flaw] since the 1990s,” Curtis says. “So why are we still seeing them? It’s because people are in a rush. They don’t know. They weren’t trained.”
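The flaw is easy to demonstrate. In this minimal Python sketch (using the standard sqlite3 module; the table and values are illustrative), the first query concatenates user input into the SQL string and can be rewritten by an attacker, while the second binds the input as a parameter so it is treated as data rather than code:

```python
import sqlite3

# Illustrative in-memory database with one privileged user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: concatenation lets the input rewrite the query,
# so it matches every row instead of none.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated:", rows)    # [('alice', 'admin')] -- data leaks

# Safe: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized:", rows)   # [] -- no match, no injection
```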

Educated Approach
Whether students today get enough rigor and process drilled into them while they’re learning computer languages and logic is open to debate. Curtis, for one, favors a more rigorous engineering approach, worrying that too many self-taught programmers lack critical underlying skills. Indeed, a 2016 survey of 56,033 developers conducted by Stack Overflow, a global online programmer community, found that 13 percent claimed to be entirely self-taught. Even among the 62.5 percent who had studied computer science and earned a bachelor’s or master’s degree, the majority said some portion of their training was self-taught. The result is that underlying elements of structure, discipline and understanding can be lost, increasing the risk of problems.

Having consistent, reliable processes and tools for examining and ensuring software quality could make a big difference.

Automated software developed to identify weak or risky architecture and code can help overcome that, says Curtis, a 38-year veteran of software engineering and development in industry and academia. Through a combination of static and dynamic reviews, developers can obtain a quality score for their code, along with alerts about potential system weaknesses and vulnerabilities. The lower the score, the riskier the software.
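Commercial analyzers are far more sophisticated, but the static half of the idea can be sketched in a few lines. As a rough illustration (not CISQ’s actual tooling), Python’s built-in ast module can walk a program’s syntax tree and flag calls to eval(), a classic risky construct (CWE-95):

```python
import ast

# Illustrative source under review; eval() of untrusted input is CWE-95.
SOURCE = """
name = input()
result = eval(input())
"""

# Walk the syntax tree and flag each call to eval().
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"):
        print(f"line {node.lineno}: risky call to eval()")
```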

CISQ is not a panacea, but it can screen for 22 of the 25 Most Dangerous Software Errors as defined by CWE and the SANS Institute, identifying both code-level and architectural-level errors.

By examining system architecture, Curtis says, CISQ delivers a comprehensive review. “We’ve got to be able to do system-level analysis,” Curtis says. “It’s not enough just to find code-level bugs or code-unit-level bugs. We’ve got to find the architectural issues, where somebody comes in through the user interface and slips all the way around the data access or authentication routines. And to do that you have to be able to analyze the overall stack.”
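A hypothetical sketch of the kind of architectural flaw Curtis describes (the layer and function names are invented for illustration): each function looks correct in isolation, but one request path reaches the data-access layer without ever passing through authentication, something only a system-level view of the call graph reveals.

```python
USERS = {"alice": "secret-token"}              # hypothetical auth store
RECORDS = {"alice": ["record-1", "record-2"]}  # hypothetical data store

def authenticate(user: str, token: str) -> bool:
    return USERS.get(user) == token

def fetch_records(user: str) -> list:
    # Data-access layer: trusts whoever calls it.
    return RECORDS.get(user, [])

def handle_request(user: str, token: str) -> list:
    if not authenticate(user, token):
        raise PermissionError("not authenticated")
    return fetch_records(user)

def handle_export(user: str, token: str) -> list:
    # Architectural flaw: this path calls the data layer directly,
    # slipping around the authentication routine. A unit-level scan
    # of fetch_records() alone would find nothing wrong.
    return fetch_records(user)

print(handle_export("alice", "wrong-token"))  # data leaks anyway
```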

Building on ISO/IEC 25010, an international standard for stating and evaluating software quality requirements, CISQ establishes a process for measuring software quality against four sets of characteristics: security, reliability, performance efficiency and maintainability. These are “nonfunctional requirements,” in that they are peripheral to the actual mission of any given system, yet they are also the source of many of the most damaging security breaches and system failures.
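The published CISQ specifications define the actual measures; purely as a hypothetical sketch of the idea, an analyzer’s findings can be bucketed by characteristic and rolled up into a single score (the weights and the 100-point scale here are invented):

```python
from collections import Counter

# Invented severity weights per ISO/IEC 25010-style characteristic.
WEIGHTS = {"security": 5, "reliability": 4,
           "performance efficiency": 2, "maintainability": 1}

# Hypothetical findings emitted by an analyzer, tagged by characteristic.
findings = ["security", "security", "reliability", "maintainability"]

penalty = sum(WEIGHTS[c] * n for c, n in Counter(findings).items())
score = max(0, 100 - penalty)     # lower score = riskier software
print(f"quality score: {score}")  # quality score: 85
```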

Consider, for example, a 2012 failed software update to servers belonging to Knight Capital Group, a Jersey City, N.J., financial services firm. The update was supposed to replace old code that had remained in the system – unused – for eight years. The new code, which updated and repurposed a “flag” from the old code, was tested and shown to work correctly and reliably. Then the trouble started.

According to a Securities and Exchange Commission filing, a Knight technician copied the new code to only seven of the eight required servers. No one realized that the old code remained on the eighth server, or that the new code had never been installed there. While the seven updated servers operated correctly, the repurposed flag caused the eighth server to execute the outdated, defective software, which instantly generated millions of “buy” orders totaling 397 million shares in just 45 minutes. Total loss: $460 million.
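A deliberately simplified Python sketch of that failure mode – not Knight’s actual system – shows how a repurposed flag can reactivate dead code on the one server an update missed:

```python
def old_handler(order):
    # Dead "Power Peg" code left on the server: the flag once
    # triggered aggressive repeated child orders.
    return [order] * 1000

def new_handler(order):
    # New code repurposes the same flag to mean "route normally."
    return [order]

def server(handler, flag, order):
    return handler(order) if flag == "POWER_PEG" else []

# Seven servers received the new code; the eighth still runs the old.
fleet = [new_handler] * 7 + [old_handler]
orders_sent = [len(server(h, "POWER_PEG", "BUY-XYZ")) for h in fleet]
print(orders_sent)  # [1, 1, 1, 1, 1, 1, 1, 1000] -- one server floods
```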

“A disciplined software configuration management approach would have stopped that failed deployment on two fronts,” said Andy Ma, senior software architect with General Dynamics Information Technology. “Disciplined configuration management means making sure dead code isn’t waiting in hiding to be turned on by surprise, and that strong control mechanisms are in place to ensure that updates are applied to all servers, not just some. That kind of discipline has to be instilled throughout the IT organization. It’s got to be part of the culture.”

Indeed, had the dead code been deleted, the entire episode would never have happened, Curtis says. Yet it is still common to find dead code hidden in system software, and as systems grow in complexity, such events could become more frequent. Large systems today typically use three to six computer languages and involve constant interaction between system components.

“We’re past the point where a single person can understand these large complex systems,” he says. “Even a team cannot understand the whole thing.”

As with other challenges where large data sets are beyond human comprehension, automation promises better performance than humans can muster. “Automating the deployment process would have avoided the problem Knight had – if they had configured their tools to update all eight servers,” said GDIT’s Ma. “Automated tools also can perform increasingly sophisticated code analysis to detect flaws. But they’re only as good as the people who use them. You have to spend the time and effort to set them up correctly.”
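A minimal sketch of the kind of automated check Ma describes – verifying that every server in a fleet is running the same artifact before activation – might hash the deployed code and halt the rollout on any mismatch (the hostnames and payloads here are illustrative):

```python
import hashlib

def fingerprint(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

expected = fingerprint(b"new trading code v2.0")

# What each host actually reports after the rollout; in practice this
# would be gathered from the servers themselves.
deployed = {f"server-{i}": b"new trading code v2.0" for i in range(1, 8)}
deployed["server-8"] = b"old trading code v1.0"   # the missed server

stale = [host for host, code in deployed.items()
         if fingerprint(code) != expected]
if stale:
    raise SystemExit(f"Deployment incomplete on {stale}; halting activation.")
print("All servers verified; safe to activate.")
```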

Contracts and Requirements
For acquisition professionals, such tools could be valuable in measuring quality performance. Contracts can be written to incorporate such measures, with contractors reporting on quality reviews on an ongoing basis. Indeed, the process lends itself to agile development, says Curtis, who recommends using the tools at least once every sprint. That way, risks are flagged and can be fixed immediately. “Some folks do it every week,” he says.

J. Brian Hall, principal director for Developmental Test and Evaluation in the Office of the Secretary of Defense, said at a conference in March that adding a security quality review early in the development process is still a relatively new idea. But Pentagon operational test and evaluation officials have deemed systems unsurvivable in the past – specifically because of cyber vulnerabilities discovered during operational testing. So establishing routine testing earlier in the process is essential.

The Joint Staff updated its systems survivability performance parameters earlier this year to include a cybersecurity component, Hall said in March. “This constitutes the first real cybersecurity requirements for major defense programs,” he explained. “Those requirements ultimately need to translate into contract specifications so cybersecurity can be engineered in from program inception.”

Building cyber into the requirements process is important because requirements drive funding, Hall said. If testing for cybersecurity is to be funded, it must be reflected in requirements.

The Defense Department will update its current guidance on cyber testing in the development, test and evaluation environment by year’s end, he said.

All this follows the November 2016 publication of NIST Special Publication 800-160, which is “the playbook for how to integrate security into the systems engineering process,” according to one of its principal authors, Ron Ross, a senior fellow at NIST. The publication covers all aspects of systems development, requirements and life-cycle management.
