
Wanted: Metrics for Measuring Cyber Performance and Effectiveness

Chief information security officers (CISOs) face a dizzying array of cybersecurity tools to choose from, each loaded with features and promised capabilities that are hard to measure or judge.

That leaves CISOs trying to balance unknown risks against growing costs, without a clear ability to justify the return on their cybersecurity investment. Not surprisingly, today’s high-threat environment makes it preferable to choose safe over sorry – regardless of cost. But is there a better way?

Some cyber insiders believe there is.

Acting U.S. Federal Chief Information Officer (CIO) Margie Graves acknowledges the problem.

“Defining the measure of success is hard sometimes, because it’s hard to measure things that don’t happen,” Graves said. President Trump’s Executive Order on Cybersecurity asks each agency to develop its own risk management plan, she noted. “It should be articulated on that plan how every dollar will be applied to buying down that risk.”

There is a difference, though, between a plan and an actual measure. A plan can justify an investment intended to reduce risk. But judgment, rather than hard knowledge, will determine how much risk is mitigated by any given tool.

The Defense Information Systems Agency (DISA) and the National Security Agency (NSA) have been trying to develop a methodology for measuring the actual value of a given cyber tool’s performance. Their NIPRNet/SIPRNet Cyber Security Architecture Review (NSCSAR – pronounced “NASCAR”) is a classified effort to define a framework for measuring cybersecurity performance, said DISA CIO and Risk Management Executive John Hickey.

“We just went through a drill of ‘what are those metrics that are actually going to show us the effectiveness of those tools,’ because a lot of times we make an investment, people want a return on that investment,” he told GovTechWorks in June. “Security is a poor example of what you are going after. It is really the effectiveness of the security tools or compliance capabilities.”

The NSCSAR review, conducted in partnership with NSA and the Defense Department, may point to a future means of measuring cyber defense capability. “It is a framework that actually looks at the kill chain, how the enemy will move through that kill chain and what defenses we have in place,” Hickey said, adding that NSA is working with DISA on an unclassified version of the framework that could be shared with other agencies or the private sector to measure cyber performance.

“It is a methodology,” Hickey explained. “We look at the sensors we have today and measure what functionality they perform against the threat.… We are tracking the effectiveness of the tools and capabilities to get after that threat, and then making our decisions on what priorities to fund.”
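The framework itself is classified, but the basic bookkeeping Hickey describes – mapping defenses to each stage of the kill chain and flagging gaps – can be sketched in a few lines of Python. The stages and tool names below are illustrative placeholders, not part of NSCSAR:

```python
# Purely illustrative sketch (not the classified NSCSAR framework): map each
# kill-chain stage to the sensors/defenses that cover it and flag any gaps.
kill_chain = ["reconnaissance", "delivery", "exploitation",
              "lateral_movement", "exfiltration"]

defenses = {
    "reconnaissance":   ["perimeter_firewall"],
    "delivery":         ["email_gateway", "web_proxy"],
    "exploitation":     ["endpoint_protection"],
    "lateral_movement": [],                      # uncovered stage -> funding priority
    "exfiltration":     ["dlp_monitor"],
}

for stage in kill_chain:
    tools = defenses.get(stage, [])
    status = ", ".join(tools) if tools else "GAP - no coverage"
    print(f"{stage:18} {status}")
```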

Measuring Security
NSS Labs Inc. independently tests the cybersecurity performance of firewalls and other cyber defenses, scoring products’ performance annually. The Austin, Texas, company evaluated 11 next-generation firewall (NGFW) products from 10 vendors in June 2017, comparing the effectiveness of their security performance, as well as the firewalls’ stability, reliability and total cost of ownership.

In the test, products were presumed to be able to provide basic packet filtering, stateful multi-layer inspection, network address translation, virtual private network capability, application awareness controls, user/group controls, integrated intrusion prevention, reputation services, anti-malware capabilities and SSL inspection. Among the findings:

  • Eight of 11 products tested scored “above average” in terms of both performance and cost-effectiveness; three scored below average
  • Overall security effectiveness ranged from 25.8 percent to 99.9 percent; the average was 67.3 percent
  • Four products scored below 78.5 percent
  • Total cost of ownership ranged from $5 per protected megabit/second to $105, with an average of $22
  • Nine products failed to detect at least one evasion, while only two detected all evasion attempts

NSS conducted similar tests of advanced endpoint protection tools, data center firewalls, and web application firewalls earlier this year.

But point-in-time performance tests don’t provide a reliable measure of ongoing performance. And measuring the effectiveness of a single tool does not necessarily indicate how well it performs its particular duties as part of a suite of tools, notes Robert J. Carey, vice president within the Global Solutions division at General Dynamics Information Technology (GDIT). The former U.S. Navy CIO and Defense Department principal deputy CIO says that while such tests are valuable, it remains hard to quantify and compare the performance of different products within an organization’s security stack.

The evolution and blurring of the lines between different cybersecurity tools – firewalls, intrusion detection and prevention, gateways, traffic analysis tools, threat intelligence, anomaly detection and so on – make it easy to add another tool to one’s stack. But as with any multivariate function, it is hard to isolate each tool’s individual contribution to threat protection, or to know which tools you could do without.

“We don’t know what an adequate cyber security stack looks like. What part of the threat does the firewall protect against, the intrusion detection tool, and so on?” Carey says. “We perceive that the tools are part of the solution. But it’s difficult to quantify the benefit. There’s too much marketing fluff about features and not enough facts.”

Mike Spanbauer, vice president of research strategy at NSS, says this is a common concern, especially in large, managed environments — as is the case in many government instances. One way to address it is to replicate the security stack in a test environment and experiment to see how tools perform against a range of known, current threats while under different configurations and settings.

Another solution is to add one more tool to monitor and measure performance. NSS’ Cyber Advanced Warning System (CAWS) provides continuous security validation monitoring by capturing live threats and then injecting them into a test environment mirroring customers’ actual security stacks. New threats are identified and tested non-stop. If they succeed in penetrating the stack, system owners are notified so they can update their policies to stop that threat in the future.

“We harvest the live threats and capture those in a very careful manner and preserve the complete properties,” Spanbauer said. “Then we bring those back into our virtual environment and run them across the [cyber stack] and determine whether it is detected.”
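Conceptually, that continuous validation cycle is a simple loop: capture, replay, compare, notify. The sketch below is a generic illustration of the pattern Spanbauer describes, not NSS’s actual CAWS code; the capture, replay and notify steps are placeholder callables a real system would have to supply:

```python
import time
from dataclasses import dataclass

# Illustrative sketch of a continuous-validation loop like the one described,
# not NSS's actual CAWS implementation. The capture, replay and notify steps
# are hypothetical callables supplied by the caller.
@dataclass
class ReplayResult:
    threat_id: str
    penetrated: bool

def continuous_validation(capture_threats, replay, notify, poll_seconds=300):
    seen = set()
    while True:
        for threat_id, payload in capture_threats():   # harvested live threats
            if threat_id in seen:
                continue
            seen.add(threat_id)
            result = replay(threat_id, payload)         # run against the mirrored stack
            if result.penetrated:                       # stack failed to block it
                notify(result)                          # alert the system owner
        time.sleep(poll_seconds)
```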

Adding more tools and solutions isn’t necessarily what Carey had in mind. While that monitoring may reduce risk, it also adds another expense.

And measuring value in terms of return on investment is a challenge when every new tool adds real cost and results are so difficult to define. In cybersecurity, managing risk has become the name of the game, but actually calculating risk is hard.

The National Institute of Standards and Technology (NIST) created the 800-53 security controls and the cybersecurity risk management framework that encompass today’s best practices. Carey worries that risk management delivers an illusion of security by accepting some level of vulnerability depending on the level of investment. The trouble is that it drives a compliance culture in which security departments focus on following the framework more than defending the network and securing its applications and data.

“I’m in favor of moving away from risk management,” GDIT’s Carey says. “It’s what we’ve been doing for the past 25 years. It’s produced a lot of spend, but no measurable results. We should move to effects-based cyber. Instead of 60 shades of gray, maybe we should have just five well defined capability bands.”

The ultimate goal: Bring compliance into line with security so that doing the former delivers the latter. But the evolving nature of cyber threats suggests that may never be possible.

Automated tools will only be as good as the data and intelligence built into them. True, automation improves speed and efficiency, Carey says. “But it doesn’t necessarily make me better.”

System owners should be able to look at their cyber stack and determine exactly how much better security performance would be if they added another tool or upgraded an existing one. If that were the case, they could spend most of their time focused on stopping the most dangerous threats – zero-day vulnerabilities that no tool can identify because they’ve never been seen before – rather than ensuring all processes and controls are in place to minimize risk in the event of a breach.

Point-in-time measures based on known vulnerabilities and available threats help, but may be blind to new or emerging threats of the sort that the NSA identifies and often keeps secret.

The NSCSAR tests DISA and NSA perform include that kind of advanced threat. Rather than trying to measure overall security, they’ve determined that breaking it down into the different levels of security makes sense. Says DISA’s Hickey: “You’ve got to tackle ‘what are we doing at the perimeter, what are we doing at the region and what are we doing at the endpoint.’” A single overall picture isn’t really possible, he says. Rather, one has to ask: “What is that situational awareness? What are those gaps and seams? What do we stop [doing now] in order to do something else? Those are the types of measurements we are looking at.”

How Employers Try to Retain Tech Talent

As soon as Scott Algeier hires a freshly minted IT specialist out of college, a little clock starts ticking inside his head.

It’s not that he doesn’t have plenty to offer new hires in his role as director of the non-profit Information Technology-Information Sharing and Analysis Center (IT-ISAC) in Manassas, Va., nor that IT-ISAC cannot pay a fair wage. The issue is that Algeier is in an all-out war for talent – and experience counts. Contractors, government agencies – indeed, virtually every other employer across the nation – value experience almost as much as education and certifications.

As employees gain that experience, they see their value grow. “If I can get them to stay for at least three years, I consider that a win,” says Algeier. “We have one job where it never lasts more than two years. The best I can do is hire quality people right out of college, train them and hope they stick around for three years.”

The Military Context
An October 2016 white paper from Air University’s Research Institute says churn is even more dire among those in the military, particularly in the Air Force, which is undergoing a massive expansion of its cyber operations units.

The present demand for cybersecurity specialists in both the public and private sectors could undoubtedly lead the Air Force to be significantly challenged in retaining its most developed and experienced cyber Airmen in the years ahead, writes Air Force Major William Parker IV, author of the study.

“In the current environment, shortages in all flavors of cyber experts will increase, at least in the foreseeable future. Demand for all varieties of cybersecurity-skilled experts in both the private and public sectors is only rising.”

Meanwhile, it is estimated that today there are at least 30,000 unfilled cybersecurity jobs across the federal government, writes Parker. According to the International Information System Security Certification Consortium (ISC2), demand for cyber-certified professionals will continue to increase at 11 percent per year for the foreseeable future. Some estimates place the global cyber workforce shortage at close to a million.

The military – both a primary trainer and employer in cyber — offers some interesting insight. A recent survey of Air Force cyber specialists choosing between re-enlistment or pursuit of opportunities in the civilian world indicates those who chose to reenlist were primarily influenced by job security and benefits, including health, retirement and education and training.

“For those Airmen who intended to separate, civilian job opportunities, pay and allowances, bonuses and special pays, promotion opportunities and the evaluation system contributed most heavily to their decisions [to leave the military],” Parker’s paper concluded.

Indeed, several airmen who expressed deep pride and love of serving in the Air Force stated they chose to separate because they felt their skills were not being fully utilized.

“Also, they were aware they had the ability to earn more income for their families in the private sector,” adds Parker. The re-enlistment bonuses the Air Force offered were not enough to make up the pay differences these airmen saw.

“It is also interesting that many of those who say that they will reenlist, included optimistic comments that they hope ‘someday’ they may be able to apply the cyber skills they have attained in the service of the nation.”

Tech companies present a different set of competitive stresses: competing with high pay, industrial glamor and attractive perks. Apple’s new Cupertino, Calif., headquarters epitomizes the age: an airy glass donut that looks like it just touched down from a galaxy far, far away, filled with cafés, restaurants, a wellness center, a child care facility and even an Eden-like garden inside the donut hole. Amazon’s $4 billion urban campus is anchored by the improbable “spheres”: three interlocking, multistory glass structures housing treehouse meeting rooms, offices and collaborative spaces filled with trees, rare plants, waterfalls and a river that runs through it all.

While Washington, D.C., contractors and non-profits do not have campus rivers or stock option packages, they do have other ways to compete. At the forefront are the high-end missions they and their customers perform. They also offer professional development, certifications, job flexibility and, sometimes, the ability to work from home.

“We work with the intelligence community and the DoD,” says Chris Hiltbrand, vice president of Human Resources for General Dynamics Information Technology’s Intelligence Solutions Division. “Our employees have the opportunity to apply cutting-edge technologies to interesting and important missions that truly make a difference to our nation. It’s rewarding work.”

While people sometimes leave for pay packages from Silicon Valley, he admits, pay is rarely the only issue employees consider. Work location, comfort and familiarity, quality of work, colleagues, career opportunities and the impact of working on a worthwhile mission all play a role.

“It’s not all about maximizing earning potential,” Hiltbrand says. “In terms of money, people want to be compensated fairly – relative to the market – for the work they do. We also look at other aspects of what we can offer, and that is largely around the customer missions we support and our reputation with customers and the industry.”

Especially for veterans, mission, purpose and service to the nation are real motivators. GDIT then goes a step further, supporting staff who are members of the National Guard or military reservists with extra benefits, such as paying the difference in salary when staff go on active duty.

Mission also factors into the equation at IT-ISAC, Algeier says. “Our employees get to work with some of the big hitters in the industry and that experience definitely keeps them here longer than they might otherwise. But over time, that also has an inevitable effect.

“I get them here by saying: ‘Hey, look who you get to work with,’” he says. “And then within a few years, it’s ‘Hey, look who they’re going to go work with.’”

Perks and Benefits
Though automation may seem like a way to replace people rather than entice them to stay, it can be a valuable, if unlikely, retention tool.

Automated tools spare staff from the tedious work some find demoralizing (or boring), and save hours or even days for higher-level work, Algeier says. “That means they can now go do far more interesting work instead.” More time doing interesting work leads to happier employees, which in turn makes staff more likely to stay put.

Fitness and wellness programs are two other creative ways employers invest in keeping the talent they have. Gyms, wellness centers, in-house yoga studios, exercise classes and even CrossFit boxes are among the offerings. Since exercise relieves stress, and stress can trigger employees to start looking elsewhere for work, it stands to reason that reducing stress can ease the strains of work and boost productivity. Keeping people motivated helps keep them from the negative feelings that might lead them to seek satisfaction elsewhere.

Providing certified life coaches is another popular way employers can help staff, focusing on both personal and professional development. Indeed, Microsoft deployed life coaches at its Redmond headquarters more than a decade ago. They specialize in working with adults with Attention Deficit Hyperactivity Disorder (ADHD), and can help professionals overcome weaknesses and increase performance.

Such benefits used to be the domain of Silicon Valley alone, but not anymore. Fairfax, Va.-based boutique security company MKACyber was launched by Mischel Kwon after posts as director of the Department of Homeland Security’s U.S. Computer Emergency Readiness Team (US-CERT) and as vice president of public sector security solutions for Bedford, Mass.-based RSA. Kwon built her company with what she calls “a West Coast environment.”

The company provides breakfast, lunch and snack foods, private “chill” rooms, and operates a family-first environment, according to a job posting. It also highlights the company’s strong commitment to diversity and helps employees remain “life-long learners.”

Kwon says diversity is about more than just hiring the right mix of people. How you treat them is the key to how long they stay.

“There are a lot of things that go on after the hire that we have to concern ourselves with,” she said at a recent RSA conference.

Retention is a challenging problem for everyone in IT, Kwon says, but managers can do more to think differently about how to hire and keep new talent, beginning by focusing not just on raw technical knowledge, but also on soft skills that make a real difference when working on projects and with teams.

“We’re very ready to have people take tests, have certifications, and look at the onesy-twosy things that they know,” says Kwon. “What we’re finding though, is just as important as the actual content that they know, is their actual work ethic, their personalities. Do they fit in with other people? Do they work well in groups? Are they life-long learners? These types of personal skills are as important as technical skills,” Kwon says. “We can teach the technical skills. It’s hard to teach the work ethic.”

Flexible Work Schedules
Two stereotypes define the modern tech age. One is the all-night coder working in a perk-laden office, fueled by free food, lattes and energy drinks. The other is the virtual meeting populated by individuals spread across the nation or the globe, sitting in home offices or bedrooms, working on their laptops. For many, working from home is no longer a privilege. It’s either a right or, at least, an opportunity to make work and life balance out. Have to wait for a plumber to fix the leaky sink? No problem: dial in remotely. In the District of Columbia, the government and many employers encourage regular telework as a means to reduce traffic and congestion – as well as for convenience.

Still, working from home inevitably draws questions. IBM, for years one of the staunchest supporters of telework, is now backtracking on the culture it built, telling workers they need to be in the office regularly if they want to stay employed. The policy shift follows similar moves by Yahoo!, among others.

GDIT’s Hiltbrand says because its staff works at company locations as well as on government sites, remote work is common.

“We have a large population of people who have full or part-time teleworking,” he says. “We are not backing away from that model. If anything, we’re trying to expand on that culture of being able to work from anywhere, anytime and on any device.”

Of course, that’s not possible for everyone. Staff working at military and intelligence agencies don’t typically have that flexibility. “But aside from that,” adds Hiltbrand, “we’re putting a priority on the most flexible work arrangements possible to satisfy employee needs.”

Feds Look to AI Solutions to Solve Problems from Customer Service to Cyber Defense

From Amazon’s Alexa speech recognition technology to Facebook’s uncanny ability to recognize our faces in photos and the coming wave of self-driving cars, artificial intelligence (AI) and machine learning (ML) are changing the way we look at the world – and how it looks at us.

Nascent efforts to embrace natural language processing to power AI chatbots on government websites and call centers are among the leading short-term AI applications in the government space. But AI also has potential application in virtually every government sector, from health and safety research to transportation safety, agriculture and weather prediction and cyber defense.

The ideas behind artificial intelligence are not new. Indeed, the U.S. Postal Service has used machine vision to automatically read and route hand-written envelopes for nearly 20 years. What’s different today is that the plunging price of data storage and the increasing speed and scalability of computing power using cloud services from Amazon Web Services (AWS) and Microsoft Azure, among others, are converging with new software to make AI solutions easier and less costly to execute than ever before.

Justin Herman, emerging citizen technology lead at the General Services Administration’s Technology Transformation Service, is a sort of AI evangelist for the agency. His job, he says, is to help other agencies and to prove AI is real.

That means talking to feds, lawmakers and vendors to spread an understanding of how AI and machine learning can transform at least some parts of government.

“What are agencies actually doing and thinking about?” he asked at the recent Advanced Technology Academic Research Center’s Artificial Intelligence Applications for Government Summit. “You’ve got to ignore the hype and bring it down to a level that’s actionable…. We want to talk about the use cases, the problems, where we think the data sets are. But we’re not prescribing the solutions.”

GSA set up an “Emerging Citizen Technology Atlas” this fall, essentially an online portal for AI government applications, and established an AI user group that holds its first meeting Dec. 13. An AI Assistant Pilot program, which so far lists more than two dozen instances where agencies hope to employ AI, includes a range of aspirational projects, among them:

  • Department of Health and Human Services: Develop responses for Amazon’s Alexa platform to help users quit smoking and answer questions about food safety
  • Department of Housing and Urban Development: Automate or assist with customer service using existing site content
  • National Forest Service: Provide alerts, notices and information about campgrounds, trails and recreation areas
  • Federal Student Aid: Automate responses to queries on social media about applying for and receiving aid
  • Defense Logistics Agency: Help businesses answer frequently asked questions, access requests for quotes and identify commercial and government entity (CAGE) codes

Separately, NASA used the Amazon Lex platform to train its “Rov-E” robotic ambassador to follow voice commands and answer students’ questions about Mars, a novel AI application for outreach. And chatbots – rare just two years ago – now are ubiquitous on websites, Facebook and other social media.

Many agencies and organizations now use Facebook Messenger instant messaging to communicate with citizens; in all, there are more than 100,000 chatbots on Facebook Messenger. Chatbots are common features, but customer service chatbots are the most basic of applications.

“The challenge for government, as is always the case with new technology, is finding the right applications for use and breaking down the walls of security or privacy concerns that might block the way forward,” says Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology Health and Civilian Solutions Division. “For now, figuring out how to really make AI practical for enhanced customer experience and enriched data, and with a clear return on investment, is going to take thoughtful consideration and a certain amount of trial and error.”

But as with cloud in years past, progress can be rapid. “There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal says. AI can follow the same path. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

That view is shared by Hila Mehr, a fellow at the Ash Center for Democratic Governance and Innovation at Harvard University’s Kennedy School of Government and a member of IBM’s Market Development and Insight strategy team. “AI becomes powerful with machine learning, where the computer learns from supervised training and inputs over time to improve responses,” she wrote in Artificial Intelligence for Citizen Services and Government, an Ash Center white paper published in August.

In addition to chatbots, she sees translation services and facial recognition and other kinds of image identification as perfectly suited applications where “AI can reduce administrative burdens, help resolve resource allocation problems and take on significantly complex tasks.”

Open government – the act of making government data broadly available for new and innovative uses – is another promise. As Herman notes, challenging his fellow feds: “Your agencies are collecting voluminous amounts of data that are just sitting there, collecting dust. How can we make that actionable?”

Emerging Technology
Historically, most of that data wasn’t actionable. Paper forms and digital scans lack the structure and metadata to lend themselves to big data applications. But those days are rapidly fading. Electronic health records are turning the tide with medical data; website traffic data is helping government understand what citizens want when visiting, providing insights and feedback that can be used to improve the customer experience.

And that’s just the beginning. According to Fiaz Mohamed, head of solutions enablement for Intel’s AI Products Group, data volumes are growing exponentially. “By 2020, the average internet user will generate 1.5 GB of traffic per day; each self-driving car will generate 4,000 GB/day; connected planes will generate 40,000 GB/day,” he says.

At the same time, advances in hardware will enable faster and faster processing of that data, driving down the compute-intensive costs associated with AI number crunching. Facial recognition historically required extensive human training simply to teach the system the critical factors to look for, such as the distance between the eyes and the nose. “But now neural networks can take multiple samples of a photo of [an individual], and automatically detect what features are important,” he says. “The system actually learns what the key features are. Training yields the ability to infer.”
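A toy example makes the point. In the short Python sketch below (using scikit-learn’s bundled digits dataset purely for illustration), the classifier is never told which pixel relationships matter; it learns them from labeled examples, and training yields the ability to infer on samples it has never seen:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy illustration: the network is never told which pixel relationships matter;
# it learns discriminative features from labeled examples.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)          # training...
print(clf.score(X_test, y_test))   # ...yields the ability to infer on unseen samples
```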

Intel, long known for its microprocessor technologies, is investing heavily in AI through internal development and external acquisitions. Intel bought machine-learning specialist Nervana in 2016 and programmable chip specialist Altera the year before. The combination is key to the company’s integrated AI strategy, Mohamed says. “What we are doing is building a full-stack solution for deploying AI at scale,” Mohamed says. “Building a proof-of-concept is one thing. But actually taking this technology and deploying it at the scale that a federal agency would want is a whole different thing.”

Many potential AI applications pose similar challenges.

FINRA, the Financial Industry Regulatory Authority, is among the government’s biggest users of AWS cloud services. Its market surveillance system captures and stores 75 billion financial records every day, then analyzes that data to detect fraud. “We process every day what Visa and Mastercard process in six months,” says Steve Randich, FINRA’s chief information officer, in a presentation captured on video. “We stitch all this data together and run complex sophisticated surveillance queries against that data to look for suspicious activity.” The payoff: a 400 percent increase in performance.

Other uses include predictive fleet maintenance. IBM put its Watson AI engine to work last year in a proof-of-concept test of Watson’s ability to perform predictive maintenance for 350 of the U.S. Army’s Stryker armored vehicles. In September, the Army’s Logistics Support Activity (LOGSA) signed a contract adding Watson’s cognitive services to other cloud services it gets from IBM.

“We’re moving beyond infrastructure as-a-service and embracing both platform and software as-a service,” said LOGSA Commander Col. John D. Kuenzli. He said Watson holds the potential to “truly enable LOGSA to deliver cutting-edge business intelligence and tools to give the Army unprecedented logistics support.”

AI applications share a few things in common. They use large data sets to gain an understanding of a problem and advanced computing to learn through experience. Many applications share a basic construct even if the objectives are different. Identifying military vehicles in satellite images is not unlike identifying tumors in mammograms or finding illegal contraband in x-ray images of carry-on baggage. The specifics of the challenge are different, but the fundamentals are the same. Ultimately, machines will be able to do that more accurately – and faster – than people, freeing humans to do higher-level work.

“The same type of neural network can be applied to different domains so long as the function is similar,” Mohamed says. So a system built to detect tumors for medical purposes could be adapted and trained instead to detect pedestrians in a self-driving automotive application.
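In practice, that reuse often amounts to keeping the network architecture (and usually its pretrained weights) and swapping the final layer for the new task. The hypothetical PyTorch sketch below shows the mechanics; the two-class pedestrian example is illustrative, not drawn from any specific Intel or automotive system:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical example: a ResNet-18 backbone built for one image-classification
# task, re-purposed for a different two-class task.
model = models.resnet18(weights=None)  # in practice, start from pretrained weights
                                       # (older torchvision versions use pretrained=False)

# Replace the final fully connected layer so the same architecture now predicts
# two classes (e.g., "pedestrian" / "no pedestrian").
model.fc = nn.Linear(model.fc.in_features, 2)

# A dummy forward pass with a batch of four 224x224 RGB images.
x = torch.randn(4, 3, 224, 224)
logits = model(x)
print(logits.shape)  # torch.Size([4, 2])
```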

Neural net processors will help because they are simply more efficient at this kind of computation than conventional central processing units. Initially these processors will reside in data centers or the cloud, but Intel already has plans to scale the technology to meet the low-power requirements of edge applications that might support remote, mobile users, such as in military or border patrol applications.

Evidence-Based Policy Act Could Change How Feds Use, Share Data

As government CIOs try to get their arms around how the Modernizing Government Technology (MGT) Act will affect their lives and programs, the next big IT measure to hit Congress is coming into focus: House Speaker Paul Ryan’s (R-Wis.) “Foundations for Evidence-Based Policymaking Act of 2017.”

A bipartisan measure now pending in both the House and Senate, the bill has profound implications for how federal agencies manage and organize data – the key to putting data that can inform policy decisions into the public domain. Sponsored by Ryan in the House and by Sen. Patty Murray (D-Wash.) in the Senate, the measure would:

  • Make data open by default. That means government data must be accessible for research and statistical purposes – while still protecting privacy, intellectual property, proprietary business information and the like. Exceptions are allowed for national security
  • Appoint a chief data officer responsible for ensuring data quality, governance and availability
  • Appoint a chief evaluation officer to “continually assess the coverage, quality, methods, consistency, effectiveness, independence and balance of the portfolio of evaluations, policy research, and ongoing evaluation activities of the agency”

“We’ve got to get off of this input, effort-based system [of measuring government performance], this 20th century relic, and onto clearly identifiable, evidence-based terms, conditions, data, results and outcomes,” Ryan said on the House floor Nov. 15. “It’s going to mean a real sea change in how government solves problems and how government actually works.”

Measuring program performance in government is an old challenge. The Bush and Obama administrations each struggled to implement viable performance measurement systems. But the advent of Big Data, advanced analytics and automation technologies holds promise for quickly understanding how programs perform and whether or not results match desired expectations. It also holds promise for both agencies and private sector players to devise new ways to use and share government data.

“Data is the lifeblood of the modern enterprise,” said Stan Tyliszczak, vice president of technology integration with systems integrator General Dynamics Information Technology. “Everything an organization does is captured: client data, sensor and monitoring data, regulatory data, even internal IT operating data. That data can be analyzed and processed by increasingly sophisticated tools to find previously unseen correlations and conclusions. And with open and accessible data, we’re no longer restricted to a small community of insiders.

“The real challenge, though, will be in the execution,” Tyliszczak says. “You have to make the data accessible. You have to establish policies for both open sharing and security. And you have to organize for change – because new methods and techniques are sure to emerge once people start looking at that data.”

Indeed, the bill would direct the Office of Management and Budget, Office of Government Information Services and General Services Administration to develop and maintain an online repository of tools, best practices and schema standards to facilitate open data practices across the Federal Government. Individual agency Chief Data Officers (CDOs) would be responsible for applying those standards to ensure data assets are properly inventoried, tagged and cataloged, complete with metadata descriptions that enable users to consume and use the data for any number of purposes.
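What such an inventory entry might look like is easy to imagine. The Python dictionary below is a hypothetical example, loosely modeled on common open-data metadata practice rather than any schema the bill would mandate, with a minimal completeness check of the kind a CDO might enforce:

```python
# Hypothetical catalog entry for a data asset; field names are illustrative,
# loosely modeled on common open-data metadata practice, not a mandated schema.
dataset_entry = {
    "title": "Quarterly Benefits Claims",
    "description": "Counts of claims received and processed, by state and quarter.",
    "keywords": ["benefits", "claims", "state"],
    "publisher": "Example Agency, Office of Program Analysis",
    "accessLevel": "public",
    "distribution": [
        {"format": "JSON", "accessURL": "https://api.example.gov/claims"},
    ],
    "modified": "2017-10-01",
}

# Minimal completeness check before the entry is accepted into the inventory.
required = {"title", "description", "publisher", "accessLevel", "modified"}
missing = required - dataset_entry.keys()
print("complete" if not missing else f"missing fields: {sorted(missing)}")
```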

Making data usable by outsiders is key. “Look at what happened when weather data was opened up,” says Tyliszczak. “A whole new class of web-based apps for weather forecasting emerged. Now, anyone with a smartphone can access up-to-the-minute weather forecasts from anywhere on the planet. That’s the power of open data.”

Choosing Standards
Most of the government’s data today is not open. Some is locked up in legacy systems that were never intended to be shared with the public. Some lacks the metadata and organization that would make it truly useful by helping users understand what individual fields represent. And most is pre-digested – that is, the information is bound up in PDF reports and documents rather than in a form consumable by analytics tools.

Overcoming all that will require discipline in technology, organization and execution.

“Simply publishing data in a well-known format is open, but it is not empowering,” says Mike Pizzo, co-chair of the Oasis Open Data Protocol (OData) Technical Committee and a software architect at Microsoft. “Data published as files is hard to find, hard to understand and tends to get stale as soon as it’s published.… To be useful, data must be accurate, consumable and interoperable.”

Some federal agencies are already embracing OData for externally facing APIs. The Department of Labor, for example, built a public-facing API portal providing access to 175 data tables within 32 datasets, with more planned in the future. Pizzo says other agencies, both inside and outside the U.S., have used the standard to share, or “expose,” labor, city, health, revenue, planning and election information.

Some agencies are already driving in this direction by creating a data ecosystem built around data and application programming interfaces (APIs). The Department of Veterans Affairs disclosed in October it is developing plans to build a management platform called Lighthouse, intended to “create APIs that are managed as products to be consumed by developers within and outside of VA.”

The VA described the project as “a deliberate shift” to becoming an “API-driven digital enterprise,” according to a request for information published on FedBizOpps.gov. Throughout VA, APIs will be the vehicles through which different VA systems communicate and share data, underpinning both research and the delivery of benefits to veterans and allowing a more rapid migration from legacy systems to commercial off-the-shelf and Software-as-a-Service (SaaS) solutions. “It will enable creation of new, high-value experiences for our Veterans [and] VA’s provider partners, and allow VA’s employees to provide better service to Veterans,” the RFI states.

Standardizing the approach to building those APIs will be critical.

Modern APIs are based on REST (Representational State Transfer), a “style” for interacting with web resources, and on JSON (JavaScript Object Notation), a popular format for data interchange that is more efficient than XML (eXtensible Markup Language). These standards by themselves do not solve the interoperability problem, however, because they offer no standard way of identifying metadata, Pizzo says. This is what OData provides: a metadata description language intended to establish common conventions and best practices for metadata that can be applied on top of REST and JSON. Once applied, OData provides interoperable open data access, where records are searchable, recognizable, accessible and protectable – all, largely, because of the metadata.

OData is an OASIS and ISO standard and is widely supported across the industry, including by Microsoft, IBM, Oracle and Salesforce, among many others.

“There are something like 5,000 or 6,000 APIs published on the programmable web, but you couldn’t write a single application that would interact with two of them,” he says. “What we did with OData was to look across those APIs, take the best practices and define those as common conventions.” In effect, OData sets a standard for implementing REST with a JSON payload. Adopting the standard means providing a shortcut to choose the best way to implement request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats and query options, so the more important work using the data can be the focus of development activity.
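In practice, those conventions show up directly in the request. The Python sketch below queries a hypothetical OData endpoint (the URL, entity set and field names are invented for illustration) using OData’s standard $filter, $select and $top query options:

```python
import requests

# Hypothetical OData endpoint (illustrative URL and entity set, not a real agency service).
BASE = "https://api.example.gov/odata/v1"

# OData's URL conventions standardize filtering, projection and paging,
# so any OData-aware client can form this query without custom documentation.
params = {
    "$filter": "Year eq 2017 and State eq 'VA'",
    "$select": "Series,Year,Value",
    "$top": "10",
}
resp = requests.get(f"{BASE}/LaborStatistics", params=params, timeout=30)
resp.raise_for_status()

for record in resp.json().get("value", []):  # OData wraps results in a "value" array
    print(record["Series"], record["Year"], record["Value"])
```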

This has value whether or not a data owner plans to share data openly.

Whether APIs will be accessed by an agency’s own systems – as with one VA system tapping into the database of another agency’s system – or by consumers – as in the case of a veteran accessing a user portal – doesn’t matter. In the Pentagon, one question never goes away: “How do we improve data interoperability from a joint force perspective?” said Robert Vietmeyer, associate director for Cloud Computing and Agile Development, Enterprise Services and Integration Directorate in the Office of the Defense Department CIO at a recent Defense Systems forum.

“I talk to people all the time, and they say, ‘I’m just going to put this into this cloud, or that cloud, and they have a data integration engine or big data capability, or machine learning, and once it’s all there all my problems will be solved.’ No, it’s not. So when the next person comes in and says, ‘You have data I really need, open up your API, expose your data so I can access it, and support that function over time,’ they’re not prepared. The program manager says: ‘I don’t have money to support that.’”

Vietmeyer acknowledges the Pentagon is behind in establishing best practices and standards. “The standards program has been in flux,” he said. “We haven’t set a lot, but it’s one of those areas we’re trying to fix right now. I’m looking for all ideas to see what we can do.” Regardless, he sees a silver lining in the growing openness to cloud solutions. “The cloud makes it much easier to look at new models which can enable that data to become consumable by others,” he said.

Standards – whether by happenstance or by design – are particularly valuable in fulfilling unforeseen needs, Pizzo says. “Even if your service never ends up needing to be interoperable, it still has those good bones so that you know it can grow, it can evolve, and when you start scratching your head about a problem there are standards in place for how to answer that need,” he says.

By using established discipline at the start, data owners are better prepared to manage changing needs and requirements later, and to capitalize on new and innovative ways to use their data in the future.

“Ultimately, we want to automate as much of this activity as possible, and standards help make that possible,” says GDIT’s Tyliszczak. “Automation and machine learning will open up entirely new areas of exploration, insight, modernization and efficiency. We’ll never be able to achieve really large-scale integration if we rely on human-centered analysis.

“It makes more sense – and opens up a whole world of new opportunities – to leverage commercial standards and best practices to link back-end operations with front-end cloud solutions,” he adds. “That’s when we can start to truly take advantage of the power of the Cloud.”

Tobias Naegele has covered defense, military, and technology issues as an editor and reporter for more than 25 years, most of that time as editor-in-chief at Defense News and Military Times.

Calculating Technical Debt Can Focus Modernization Efforts

Your agency’s legacy computer systems can be a lot like your family minivan: Keep up the required maintenance and it can keep driving for years. Skimp on oil changes and ignore warning lights, however, and you’re living on borrowed time.

For information technology systems, unfunded maintenance – what developers call technical debt – accumulates rapidly. Each line of code builds on the rest, and when some of that code isn’t up to snuff it has implications for everything that follows. Ignoring a problem might save money now, but could cost a bundle later – especially if it leads to a system failure or breach.

The concept of technical debt has been around since computer scientist Ward Cunningham coined the term in 1992 as a means of explaining the future costs of fixing known software problems. More recently, it’s become popular among agile programmers, whose rapid cycle times demand short-term tradeoffs in order to meet near-term deadlines. Yet until recently it’s been seen more as metaphor than measure.

Now that’s changing.

“The industrial view is that anything I’ve got to spend money to fix – that constitutes corrective maintenance – really is a technical debt,” explains Bill Curtis, executive director of the Consortium for IT Software Quality (CISQ), a non-profit organization dedicated to improving software quality while reducing cost and risk. “If I’ve got a suboptimal design and it’s taking more cycles to process, then I’ve got performance issues that I’m paying for – and if that’s in the cloud, I may be paying a lot. And if my programmers are slow in making changes because the original code is kind of messy, well, that’s interest too. It’s costing us extra money and taking extra time to do maintenance and enhancement work because of things in the code we haven’t fixed.”

CISQ has proposed an industry standard for defining and measuring technical debt by analyzing software code and identifying potential defects. The number of hours needed to fix those defects, multiplied by developers’ fully loaded hourly rate, equals the principal portion of a firm’s technical debt. The interest portion of the debt is more complicated, encompassing a number of additional factors.
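The principal calculation itself is simple arithmetic. The short Python example below uses made-up defects and an assumed hourly rate purely to illustrate the formula; the interest portion, as noted, involves additional factors not modeled here:

```python
# Illustrative only: the defects and rate are invented, and the CISQ standard's
# interest component involves additional factors not modeled here.
defects = [
    {"id": "SQL-injection-risk", "hours_to_fix": 16},
    {"id": "unbounded-cache",    "hours_to_fix": 24},
    {"id": "dead-code-module",   "hours_to_fix": 6},
]
fully_loaded_hourly_rate = 120  # dollars per developer hour (assumed)

principal = sum(d["hours_to_fix"] for d in defects) * fully_loaded_hourly_rate
print(f"Technical debt principal: ${principal:,}")  # $5,520
```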

The standard is now under review at the standards-setting Object Management Group, and Curtis expects approval in December. Once adopted, the standard can be incorporated into analysis and other tools, providing a common, uniform means of calculating technical debt.

Defining a uniform measure has been a dream for years. “People began to realize they were making quick, suboptimal decisions to get software built and delivered in short cycles, and they knew they’d have to go back and fix it,” Curtis says.

If they could instead figure out the economic impact of these decisions before they were implemented, it would have huge implications on long-term quality as well as the bottom line.

Ipek Ozkaya, principal researcher and deputy manager in the software architecture practice at Carnegie Mellon University’s Software Engineering Institute – a federally funded research and development center – says the concept may not be as well understood in government, but the issues are just as pressing, if not more so.

“Government cannot move as quickly as industry,” she says. “So they have to live with the consequences much longer, especially in terms of cost and resources spent.”

The Elements of Technical Debt
Technical debt may be best viewed by breaking it down into several components, Curtis says:

  • Principal – future cost of fixing known structural weaknesses, flaws or inefficiencies
  • Interest – continuing costs directly attributable to the principal debt, such as: excess programming time, poor performance or excessive server costs due to inefficient code
  • Risk and liability – potential costs that could stem from issues waiting to be fixed, including system outages and security breaches
  • Opportunity costs – missed or delayed opportunities because of time and resources focused on working around or paying down technical debt

Human factors, such as the lost institutional memory resulting from excessive staff turnover or the lack of good documentation, can also contribute to the debt. If it takes more time to do the work, the debt grows.

For program managers, system owners, chief information officers or even agency heads, recognizing and tracking each of these components helps translate developers’ technical challenges into strategic factors that can be managed, balanced and prioritized.

Detecting these problems is getting simpler. Static analysis software tools can scan and identify most flaws automatically. But taking those reports and calculating a technical debt figure is another matter. Several software analysis tools are now on the market, such as those from Cast Software or SonarQube, which include technical debt calculators among their system features. But without standards to build on, those estimates can be all over the place.

The CISQ effort, built on surveys of technical managers from both the customer and supplier side of the development equation, aims to establish a baseline for the time factors involved with fixing a range of known defects that can affect security, maintainability and adherence to architectural design standards.

“Code quality is important, process quality is important,” Ozkaya says. “But … it’s really about trying to figure out those architectural aspects of the systems that require significant refactoring, re-architecting, sometimes even shutting down the system [and replacing it], as may happen in a lot of modernization challenges.” This, she says, is where technical debt is most valuable and most critical, providing a detailed understanding not only of what a system owner has now, but of what it will take to get it to a sustainable state later.

“It comes down to ensuring agile delivery teams understand the vision of the product they’re building,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s Health Solutions. “The ability to decompose big projects into a roadmap of smaller components that can be delivered in an incremental manner requires skill in both software architecture and business acumen. Building a technically great solution that no one uses doesn’t benefit anyone. Likewise, if an agile team delivers needed functionality in a rapid fashion but without a strong design, the product will suffer in the long run. Incremental design and incremental delivery require a high amount of discipline.”

Still, it’s one thing to understand the concept of technical debt; it’s another to measure it. “If you can’t quantify it,” Ozkaya says, “what’s the point?”

Curtis agrees: “Management wants an understanding of what their future maintenance costs will be and which of their applications have the most technical debt, because they will need to allocate more resources there. And [they want to know] how much technical debt I will need to remove before I’m at a sustainable level.”

These challenges hit customers in every sector, from banks to internet giants to government agencies. Those relying solely on in-house developers can rally around specific tools and approaches to their use, but for government customers – where outside developers are the norm – that’s not the case.

One of the challenges in government is the number of players, notes Marc Jones, North American vice president for public sector at Cast Software. “Vendor A writes the software, so he’s concerned with functionality, then the sustainment contractors come on not knowing what technical debt is already accumulated,” he says. “And government is not in the position to tell them.”

Worse, if the vendors and the government customer all use different metrics to calculate that debt, no one will be able to agree on the scale of the challenge, let alone how to manage it. “The definitions need to be something both sides of the buy-sell equation can agree on,” Jones says.

Once a standard is set, technical debt can become a powerful management tool. Consider an agency with multiple case management solutions. Maintaining multiple systems is costly and narrowing to a single solution makes sense. Each system has its champions and each likely has built up a certain amount of technical debt over the years. Choosing which one to keep and which to jettison might typically involve internal debate and emotions built up around personal preferences. By analyzing each system’s code and calculating technical debt, however, managers can turn an emotional debate into an economic choice.

Establishing technical debt as a performance metric in IT contracts is also beneficial. Contracting officers can require technical debt be monitored and reported, giving program managers insights into the quality of the software under development, and also the impact of decisions – whether on the part of either party – on long-term maintainability, sustainability, security and cost. That’s valuable to both sides and helps everyone understand how design decisions, modifications and requirements can impact a program over the long haul, as well as at a given point in time.

“To get that into a contract is not the status quo,” Jones says. “Quality is hard to put in. This really is a call to leadership.” By focusing on the issue at the contract level, he says, “Agencies can communicate to developers that technically acceptable now includes a minimum of quality and security. Today, security is seen as a must, while quality is perceived as nice to have. But the reality is that you can’t secure bad code. Security is an element of quality, not the other way around.”

Adopting a technical debt metric with periodic reporting ensures that everyone – developers and managers, contractors and customers – share a window on progress. In an agile development process, that means every third or fourth sprint can be focused on fixing problems and retiring technical debt in order to ensure that the debt never reaches an unmanageable level. Alternatively, GDIT’s Zach says developers may also aim to retire a certain amount of technical debt on each successive sprint. “If technical debt can take up between 10 and 20 percent of every sprint scope,” he says, “that slow trickle of ‘debt payments’ will help to avoid having to invest large spikes of work later just to pay down principal.”
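The arithmetic behind that “slow trickle” is easy to model. The toy Python loop below assumes a notional team capacity, debt share and rate of new debt, simply to show how a steady per-sprint payment keeps the balance from ballooning:

```python
# Toy calculation of the "slow trickle of debt payments" idea: reserving a
# fixed share of each sprint for debt reduction. All numbers are assumed.
sprint_capacity_hours = 400      # team capacity per sprint (assumed)
debt_share = 0.15                # 15 percent of each sprint reserved for debt
new_debt_per_sprint = 50         # hours of new debt a sprint tends to introduce (assumed)

debt = 600                       # starting backlog of debt, in hours (assumed)
for sprint in range(1, 9):
    debt += new_debt_per_sprint
    debt -= min(debt, sprint_capacity_hours * debt_share)   # pay down up to 60 hours
    print(f"Sprint {sprint}: {debt:.0f} hours of technical debt remaining")
```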

For legacy systems, establishing a baseline and then working to reduce known technical debt is also valuable, especially in trying to decide whether it makes sense to keep that system, adapt it to cloud or abandon it in favor of another option.

“Although we modernize line by line, we don’t necessarily make decisions line by line,” says Ozkaya. By aggregating the effect of those line-by-line changes, managers gain a clearer view of the impact each individual decision has on the long-term health of a system. It’s not that going into debt for the right reasons doesn’t make sense, because it can. “Borrowing money to buy a house is a good thing,” Ozkaya says. “But borrowing too much can get you in trouble.”

It’s the same way with technical debt. Accepting imperfect code is reasonable as long as you have a plan to go back and fix it quickly. Choosing not to do so, though, is like paying just the minimum on your credit card bill. The debt keeps growing and can quickly get out of control.

“The impacts of technical debt are unavoidable,” Zach says.  “But what you do about it is a matter of choice. Managed properly, it helps you prioritize decisions and extend the longevity of your product or system. Quantifying the quality of a given code base is a powerful way to improve that prioritization. From there, real business decisions can be made.”

What the White House’s Final IT Modernization Report Could Look Like

Modernization is the hot topic in Washington tech circles these days. There are breakfasts, summits and roundtables almost weekly. Anticipation is building as the White House and its American Technology Council ready the final version of the Report to the President on IT Modernization, and as the next steps in the national cyber response plan near public release.

At the same time, flexible funding for IT modernization is also coming into view: the Modernizing Government Technology (MGT) bill unanimously passed the House in the spring and passed the Senate as part of the 2018 National Defense Authorization Act (NDAA). Barring any surprises, the measure will become law later this fall when the NDAA conference is complete, providing federal agencies with a revolving fund for modernization initiatives and a centralized mechanism for prioritizing projects across government.

The strategy and underlying policy for moving forward will flow from the final Report on IT Modernization. Released in draft form on Aug. 30, it generated 93 formal responses from industry groups, vendors and individuals. Most praised its focus on consolidated networks and common, cloud-based services, but also raised concerns about elements of the council’s approach. Among the themes to emerge from the formal responses:

  • The report’s aggressive schedule of data collection and reporting deadlines drew praise, but its emphasis on reporting – while necessary for transparency and accountability – was seen by some as emphasizing bureaucratic process ahead of results. “The implementation plan should do more than generate additional plans and reports,” wrote the Arlington, Va.-based Professional Services Council (PSC) in its comments. Added the American Council for Technology–Industry Advisory Council (ACT-IAC), of Fairfax, Va.: “Action-oriented recommendations could help set the stage for meaningful change.” For example, ACT-IAC recommended requiring agencies to implement Software Asset Management within six or nine months.
  • While the draft report suggests agencies “consider immediately pausing or halting upcoming procurement actions that further develop or enhance legacy IT systems,” commenters warned against that approach. “Given the difficulty in allocating resources and the length of the federal acquisition lifecycle, pausing procurements or reallocating resources to other procurements may be difficult to execute and could adversely impact agency operations,” warned PSC. “Delaying the work on such contracts could increase security exposure of the systems being modernized and negatively impact the continuity of services.”
  • The initial draft names Google, Salesforce, Amazon and Microsoft as potential partners in a pilot program to test a new way of acquiring software licenses across the federal sector, and specifies the General Services Administration’s (GSA) new Enterprise Infrastructure Solutions (EIS) contract as a preferred vehicle not just for networking, but also for shared services. Commenters emphasized that the White House should focus on setting desired objectives at this stage rather than prescribing solutions. “The report should be vendor and product agnostic,” wrote Kenneth Allen, ACT-IAC executive director. “Not being so could result in contracting issues later, as well as possibly skew pilot outcomes.”
  • Responders generally praised the notion of consolidating agencies under a single IT network, but raised concerns about the risks of focusing too much on a notional perimeter rather than on end-to-end solutions for securing data, devices and identity management across that network. “Instead of beginning detection mitigation at the network perimeter a cloud security provider is able to begin mitigation closer to where threats begin” and often is better situated and equipped to respond, noted Akamai Technologies, of Cambridge, Mass. PSC added that the layered security approach recommended in the draft report should be extended to include security already built into cloud computing services.

Few would argue with the report’s assertion that “The current model of IT acquisition has contributed to a fractured IT landscape,” or with its advocacy for category management as a means to better manage the purchase and implementation of commodity IT products and services. But concerns did arise over the report’s concept to leverage the government’s EIS contract as a single, go-to source for a host of network cybersecurity products and services.

“The report does not provide guidance regarding other contract vehicles with scope similar to EIS,” says the IT Alliance for Public Sector (ITAPS), a division of the Information Technology Industry Council (ITIC), a trade group; Alliant, NITAAC CIO-CS and CIO-SP3 may offer agencies more options than EIS. PSC agreed: “While EIS provides a significant opportunity for all agencies, it is only one potential solution. The final report should encourage small agencies to evaluate resources available from not only GSA, but also other federal agencies. Rather than presuming that consolidation will lead to the desired outcomes, agencies should make an economic and business analysis to validate that presumption.”

The challenge is how to make modernization work effectively in an environment where different agencies have vastly different capabilities. The problem today, says Grant Schneider, acting federal chief information security officer, is that “we expect the smallest agencies to have the same capabilities as the Department of Defense or the Department of Homeland Security, and that’s not realistic.”

The American Technology Council Report attempts to address IT modernization at several levels, in terms of both architecture and acquisition. The challenge is clear, says Schneider: “We have a lot of very old stuff. So, as we’re looking at our IT modernization, we have to modernize in such a way that we don’t build the next decade’s legacy systems tomorrow. We are focused on how we change the way we deliver services, moving toward cloud as well as shared services.”

Standardizing and simplifying those services will be key, says Stan Tyliszczak, chief engineer with systems integrator General Dynamics Information Technology. “If you look at this from an enterprise perspective, it makes sense to standardize instead of continuing with a ‘to-each-his-own’ approach,” Tyliszczak says. “Standardization enables consolidation, simplification and automation, which in turn will increase security, improve performance and reduce costs. Those are the end goals everybody wants.”
