Building a Cutting-Edge Supply Chain

In a nondescript office park in northern Virginia, several unmarked warehouses are filled with prepositioned wooden crates waiting to be dispatched to destinations all over the world. Office personnel are busy processing orders, while warehouse technicians hustle to fill them. Their work, invisible by design, ensures the secure and reliable door-to-door delivery of mission-critical supplies to locations everywhere, often in unfriendly environments. The supply chain process is similar to commercial processes but more specialized due to unique government requirements.

Supply chain management is defined by what happens behind the scenes, where strict processes and procedures are implemented to fulfill order requirements in a timely manner. At the Virginia warehouse, as in commercial operations, advanced software makes the system more efficient. Handheld computers speed the processing of items, but it takes skilled personnel to use them effectively.

The difference between commercial supply chain management and government supply chain management is the need to meet stringent government compliance mandates in addition to customers’ requirements. Commercial vendors focus on low prices and fast delivery. Those who have the government as a customer fulfill requirements that ordinary consumers would never think of, such as opening their books to government auditors, complying with an array of Federal Acquisition Regulations that restrict how, when and where orders can be shipped, and delivering specialty items rarely available to mainstream consumers.

“While the commercial business model is not what we strive to replicate, we can leverage many of the same technologies and approaches they use and apply those to our operations,” says Phil Jones, vice president at General Dynamics Information Technology.

Warehouse shelves are labeled with barcodes defining their locations and products are shelved in bins the warehouse management software determines are most sensible. Handheld scanners instantly update inventory as items are placed on shelves or removed for order fulfillment. The same software calculates the best routes for warehouse technicians moving through aisles to fulfill an order. These are the tools for building a world-class supply chain operation.
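The route calculation these systems perform is, at its core, a shortest-path problem over bin locations. The sketch below is a minimal illustration of the idea using a greedy nearest-neighbor heuristic; the bin labels and coordinates are hypothetical, and production warehouse management software uses far more sophisticated optimization.

```python
# Minimal sketch of how a warehouse management system might sequence picks.
# Bin labels and floor coordinates are hypothetical; real systems also account
# for congestion, cart capacity and zone restrictions.
from math import dist

# Each bin location is identified by a barcode and mapped to (x, y) floor coordinates.
BIN_COORDS = {
    "A-01-03": (1, 3),
    "A-04-11": (4, 11),
    "B-02-07": (12, 7),
    "C-09-02": (20, 2),
}

def pick_route(order_bins, start=(0, 0)):
    """Order the bins for one picker using a greedy nearest-neighbor heuristic."""
    remaining = set(order_bins)
    route, here = [], start
    while remaining:
        nearest = min(remaining, key=lambda b: dist(here, BIN_COORDS[b]))
        route.append(nearest)
        here = BIN_COORDS[nearest]
        remaining.remove(nearest)
    return route

if __name__ == "__main__":
    print(pick_route(["C-09-02", "A-01-03", "B-02-07", "A-04-11"]))
    # -> ['A-01-03', 'A-04-11', 'B-02-07', 'C-09-02']
```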

But they’re only tools, notes Steve Tracey, executive director of Penn State University’s Smeal College of Business’ Center for Supply Chain Research™. He dismisses the “world class” label as simplistic.

“‘World class logistics’ is a misnomer, in my professional opinion,” Tracey told GovTechWorks. “There are common best practices in supply chains. There are firms that use those practices, but there will never be any one firm that will be best at all of them.”

Expertise is what makes any organization excel or fail. As with information technology or virtually any other service management business, supply chain management boils down to three essential components: people, process and technology.

“Anybody can go out and buy SAP or Oracle software,” Tracey continues. “But buying it and using it well are not the same thing. If you don’t have the people and processes to use that software, if you don’t understand the specific areas of expertise in your operation, then it’s like giving a five-year-old a Ferrari: He can sit in it, but he can’t drive it.”

As soon as they were introduced, computers revolutionized the supply chain process. First, manufacturing operations used statistical process controls to monitor quality and increase yields in the 1970s. Then, they introduced just-in-time delivery in the 1980s to squeeze out warehouses and middlemen, forging closer relationships with a small number of suppliers who repaid the favor with better service and more precise delivery.

“The same factors are in play today as warehouse managers seek efficiency by reducing the time any product sits on shelves – even to the point of having suppliers direct-ship products by bypassing the warehouse altogether,” says Jeff Waller, president of Atlanta-based Waller & Associates LLC, a supply chain consultancy. For some supply chains, especially those directly supporting the federal government, that may not be an option, but the concept holds: The closer the logisticians are with their suppliers, the more efficient their operations and cash flows will be. Similarly, analytics can be used to better understand and anticipate customer needs.

Government is Different
To Penn State’s Tracey, differences between government and non-government supply chains start with the mission and extend all the way to foiling those who might seek to disrupt it.

“For both military and non-military government agencies operating overseas, the risks posed to supply lines by nation-state and terrorist actors are real and persistent,” he says.

Because enemies may have interests in penetrating and disrupting government supply chains, both physical and digital security is essential.

Other differences are no less challenging, Tracey says. Disruptions caused by the political process – from government shutdowns in extreme cases to the long-term effects of spending caps through sequestration – are unique to government.

“Financially, business has a continuity to it,” says Tracey. “We close the books periodically, maybe at year’s end, but it doesn’t typically affect the operations of the business. That’s not true for government agencies, which operate under significant constraints. Even though the fiscal year starts Oct. 1, some entities might not know how much they have to spend until spring.”

A second problem is sequestration, Tracey says, because “it arbitrarily limits what can be spent in certain categories and so, misaligns resources, overspending in some areas and underspending in others.”

“Those differences limit the ability to mimic what private industry does, because they make you make different choices,” he says.

Size and Scale
Though government agencies can be big customers, “big” is a relative term.

Commercial vendors process billions of orders a year, and annual online orders surpass hundreds of billions of dollars. Compared to that, a typical government supply chain contract averages about 10,000 to 20,000 orders per year, a tiny fraction of commercial order volumes.

Scale matters because in business, scale translates to influence. For many commercial firms, the challenges of meeting unique government requirements simply aren’t worth the cost.

Other differences: commercial prices are dynamic, changing constantly throughout the day in response to market movements. Government institutions are somewhat static, preferring firm price schedules that support advance planning. Commercial vendors offer their own wares and those of others in their online marketplaces. If a customer can’t find what they’re looking for in one vendor’s marketplace, they are free to shop for it someplace else. By contrast, government supply chain contracts require contractors to locate and deliver, as quickly as possible and at the lowest possible price, anything the customer might ask for.

Best of Both Worlds
While there are clear differences between a commercial and government supply chain, government agencies can gain some of the advantages pioneered in the commercial sector.

Waller believes government-focused operations can leverage the same technologies that the big guys use. “It’s a hybrid model,” Waller says. “Take the best of commercial practices – cost effectiveness and speed – and bring it into the government context.”

To do that, he says, organizations should not look at the entire supply chain and try to change everything at once. “You have to take the individual chunks of the supply chain and look at each piece individually,” he says. How do you manage inventory? Process orders? Pack and ship? Track performance?

“Start small,” he says. “Tackle your upstream logistics. Then tackle your procurement. Take it one bite at a time. Don’t wait for perfection to implement something. Make the transformation and then tweak it to get where you want to be.”

By breaking it down into pieces, you can see where problems crop up and look for ways to eliminate them – whether adding or upgrading technology, changing business processes or adding, training or replacing people.

“You want to make proactive use of big data,” Waller said. “Predict what you need, where you need it. That’s rapid fulfillment.”
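As a minimal illustration of that idea, the sketch below forecasts next week’s demand per item and location from a short, hypothetical order history using a trailing moving average; real fulfillment programs rely on richer models and live data.

```python
# Minimal sketch of "predict what you need, where you need it": a trailing
# moving-average forecast per item and location. The demand history below is
# hypothetical.

# (item, location) -> weekly demand history, most recent last
demand_history = {
    ("batteries", "warehouse-east"): [120, 135, 128, 150],
    ("batteries", "warehouse-west"): [60, 58, 72, 65],
    ("medical-kits", "warehouse-east"): [14, 9, 11, 16],
}

def forecast_next_week(history, window=3):
    """Forecast next week's demand as the mean of the last `window` weeks."""
    recent = history[-window:]
    return sum(recent) / len(recent)

reorder_plan = {
    key: round(forecast_next_week(hist))
    for key, hist in demand_history.items()
}
print(reorder_plan)
```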

Starting small doesn’t mean thinking small. Waller urges supply chain managers to look beyond their own warehouses to their suppliers. How does the material arrive? Where does it come from? Can savings in time or cost be achieved by changing any of that? The bottom line: challenge everything, and opportunities are sure to emerge.

Feds Look to AI Solutions to Solve Problems from Customer Service to Cyber Defense

From Amazon’s Alexa speech recognition technology to Facebook’s uncanny ability to recognize our faces in photos and the coming wave of self-driving cars, artificial intelligence (AI) and machine learning (ML) are changing the way we look at the world – and how it looks at us.

Nascent efforts to embrace natural language processing to power AI chatbots on government websites and call centers are among the leading short-term AI applications in the government space. But AI also has potential application in virtually every government sector, from health and safety research to transportation safety, agriculture and weather prediction and cyber defense.

The ideas behind artificial intelligence are not new. Indeed, the U.S. Postal Service has used machine vision to automatically read and route hand-written envelopes for nearly 20 years. What’s different today is that the plunging price of data storage and the increasing speed and scalability of computing power using cloud services from Amazon Web Services (AWS) and Microsoft Azure, among others, are converging with new software to make AI solutions easier and less costly to execute than ever before.

Justin Herman, emerging citizen technology lead at the General Services Administration’s Technology Transformation Service, is a sort of AI evangelist for the agency. His job, he says, is to help other agencies “prove AI is real.”

That means talking to feds, lawmakers and vendors to spread an understanding of how AI and machine learning can transform at least some parts of government.

“What are agencies actually doing and thinking about?” he asked at the recent Advanced Technology Academic Research Center’s Artificial Intelligence Applications for Government Summit. “You’ve got to ignore the hype and bring it down to a level that’s actionable…. We want to talk about the use cases, the problems, where we think the data sets are. But we’re not prescribing the solutions.”

GSA set up an “Emerging Citizen Technology Atlas” this fall, essentially an online portal for AI government applications, and established an AI user group that holds its first meeting Dec. 13. An AI Assistant Pilot program, which so far lists more than two dozen instances where agencies hope to employ AI, includes a range of aspirational projects, such as:

  • Department of Health and Human Services: Develop responses for Amazon’s Alexa platform to help users quit smoking and answer questions about food safety
  • Department of Housing and Urban Development: Automate or assist with customer service using existing site content
  • National Forest Service: Provide alerts, notices and information about campgrounds, trails and recreation areas
  • Federal Student Aid: Automate responses to queries on social media about applying for and receiving aid
  • Defense Logistics Agency: Help businesses answer frequently asked questions, access requests for quotes and identify commercial and government entity (CAGE) codes

Separately, NASA used the Amazon Lex platform to train its “Rov-E” robotic ambassador to follow voice commands and answer students’ questions about Mars, a novel AI application for outreach. And chatbots – rare just two years ago – now are ubiquitous on websites, Facebook and other social media.

In all, there are now more than 100,000 chatbots on Facebook Messenger. Chatbots have become a common feature, but customer service chatbots remain the most basic of AI applications.
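For illustration, the sketch below shows how basic such a customer-service bot can be: a keyword match against a handful of canned intents. The intents and answers are hypothetical; production chatbots are built on natural language platforms rather than simple keyword overlap.

```python
# A toy FAQ chatbot: match a user question to a canned intent by keyword
# overlap. Intents and answers are hypothetical.
INTENTS = {
    "apply_for_aid": {
        "keywords": {"apply", "application", "fafsa", "aid"},
        "answer": "You can apply for federal student aid online; applications open Oct. 1.",
    },
    "food_safety": {
        "keywords": {"food", "safety", "recall", "expired"},
        "answer": "For food safety questions, check the current recall list and storage guidance.",
    },
}

def answer(question: str) -> str:
    words = set(question.lower().split())
    # Pick the intent with the largest keyword overlap.
    best = max(INTENTS.values(), key=lambda i: len(words & i["keywords"]))
    if not words & best["keywords"]:
        return "Sorry, I don't know that one yet. A human agent will follow up."
    return best["answer"]

print(answer("How do I apply for student aid?"))
```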

“The challenge for government, as is always the case with new technology, is finding the right applications for use and breaking down the walls of security or privacy concerns that might block the way forward,” says Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology Health and Civilian Solutions Division. “For now, figuring out how to really make AI practical for enhanced customer experience and enriched data, and with a clear return on investment, is going to take thoughtful consideration and a certain amount of trial and error.”

But as with cloud in years past, progress can be rapid. “There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal says. AI can follow the same path. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

That view is shared by Hila Mehr, a fellow at the Ash Center for Democratic Governance and Innovation at Harvard University’s Kennedy School of Government and a member of IBM’s Market Development and Insight strategy team. “AI becomes powerful with machine learning, where the computer learns from supervised training and inputs over time to improve responses,” she wrote in Artificial Intelligence for Citizen Services and Government, an Ash Center white paper published in August.

In addition to chatbots, she sees translation services and facial recognition and other kinds of image identification as perfectly suited applications where “AI can reduce administrative burdens, help resolve resource allocation problems and take on significantly complex tasks.”

Open government – the act of making government data broadly available for new and innovative uses – is another promise. As Herman notes, challenging his fellow feds: “Your agencies are collecting voluminous amounts of data that are just sitting there, collecting dust. How can we make that actionable?”

Emerging Technology
Historically, most of that data wasn’t actionable. Paper forms and digital scans lack the structure and metadata to lend themselves to big data applications. But those days are rapidly fading. Electronic health records are turning the tide with medical data; website traffic data is helping government understand what citizens want when visiting, providing insights and feedback that can be used to improve the customer experience.

And that’s just the beginning. According to Fiaz Mohamed, head of solutions enablement for Intel’s AI Products Group, data volumes are growing exponentially. “By 2020, the average internet user will generate 1.5 GB of traffic per day; each self-driving car will generate 4,000 GB/day; connected planes will generate 40,000 GB/day,” he says.

At the same time, advances in hardware will enable faster and faster processing of that data, driving down the compute-intensive costs associated with AI number crunching. Facial recognition historically required extensive human training simply to teach the system the critical factors to look for, such as the distance between the eyes and the nose. “But now neural networks can take multiple samples of a photo of [an individual], and automatically detect what features are important,” he says. “The system actually learns what the key features are. Training yields the ability to infer.”

Intel, long known for its microprocessor technologies, is investing heavily in AI through internal development and external acquisitions. Intel bought machine-learning specialist Nervana in 2016 and programmable chip specialist Altera the year before. The combination is key to the company’s integrated AI strategy, Mohamed says. “What we are doing is building a full-stack solution for deploying AI at scale,” Mohamed says. “Building a proof-of-concept is one thing. But actually taking this technology and deploying it at the scale that a federal agency would want is a whole different thing.”

Many potential AI applications pose similar challenges.

FINRA, the Financial Industry Regulatory Authority, is among the government’s biggest users of AWS cloud services. Its market surveillance system captures and stores 75 billion financial records every day, then analyzes that data to detect fraud. “We process every day what Visa and Mastercard process in six months,” says Steve Randich, FINRA’s chief information officer, in a presentation captured on video. “We stitch all this data together and run complex sophisticated surveillance queries against that data to look for suspicious activity.” The payoff: a 400 percent increase in performance.
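The sketch below is an illustrative toy version of that kind of surveillance query, not FINRA’s actual logic: it flags accounts whose daily volume spikes far above their own recent average. The data, fields and threshold are all hypothetical.

```python
# Illustrative only: flag accounts whose daily trade volume jumps well above
# their own recent baseline. Data and threshold are hypothetical.
import pandas as pd

trades = pd.DataFrame({
    "account": ["A1", "A1", "A1", "A2", "A2", "A2"],
    "date": pd.to_datetime(["2017-11-01", "2017-11-02", "2017-11-03"] * 2),
    "shares": [1_000, 1_100, 9_500, 500, 480, 520],
})

# Aggregate to daily volume per account.
daily = trades.groupby(["account", "date"])["shares"].sum().reset_index()

# Baseline = mean of each account's prior two days of volume.
daily["baseline"] = daily.groupby("account")["shares"].transform(
    lambda s: s.shift(1).rolling(2, min_periods=1).mean()
)

# Flag days where volume exceeds five times the account's own baseline.
suspicious = daily[daily["shares"] > 5 * daily["baseline"]]
print(suspicious)
```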

Other uses include predictive fleet maintenance. IBM put its Watson AI engine to work last year in a proof-of-concept test of Watson’s ability to perform predictive maintenance for the U.S. Army’s 350 Stryker armored vehicles. In September, the Army’s Logistics Support Activity (LOGSA) signed a contract adding Watson’s cognitive services to other cloud services it gets from IBM.

“We’re moving beyond infrastructure as-a-service and embracing both platform and software as-a service,” said LOGSA Commander Col. John D. Kuenzli. He said Watson holds the potential to “truly enable LOGSA to deliver cutting-edge business intelligence and tools to give the Army unprecedented logistics support.”

AI applications share a few things in common. They use large data sets to gain an understanding of a problem and advanced computing to learn through experience. Many applications share a basic construct even if the objectives are different. Identifying military vehicles in satellite images is not unlike identifying tumors in mammograms or finding illegal contraband in x-ray images of carry-on baggage. The specifics of the challenge are different, but the fundamentals are the same. Ultimately, machines will be able to do that more accurately – and faster – than people, freeing humans to do higher-level work.

“The same type of neural network can be applied to different domains so long as the function is similar,” Mohamed says. So a system built to detect tumors for medical purposes could be adapted and trained instead to detect pedestrians in a self-driving automotive application.
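A minimal sketch of that adaptation, assuming PyTorch and torchvision are available, looks like the following: reuse the pretrained feature layers and retrain only a new classification head for the new domain. Dataset loading and the surrounding training loop are omitted.

```python
# Transfer-learning sketch: reuse a network trained on one image domain and
# retrain only its final layer for a new one.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a large, generic image corpus.
model = models.resnet18(pretrained=True)

# Freeze the feature-extraction layers learned in the original domain.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new task, e.g. pedestrian / no pedestrian.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is trained; the reused layers stay fixed.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```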

Neural net processors will help because they are simply more efficient at this kind of computation than conventional central processing units. Initially these processors will reside in data centers or the cloud, but Intel already has plans to scale the technology to meet the low-power requirements of edge applications that might support remote, mobile users, such as in military or border patrol applications.

Evidence-Based Policy Act Could Change How Feds Use, Share Data

As government CIOs try to get their arms around how the Modernizing Government Technology (MGT) Act will affect their lives and programs, the next big IT measure to hit Congress is coming into focus: House Speaker Paul Ryan’s (R-Wis.) “Foundations for Evidence-Based Policymaking Act of 2017.”

A bipartisan measure now pending in both the House and Senate, the bill has profound implications for how federal agencies manage and organize data – the keys to being able to put data for informed policy decisions into the public domain in the future. Sponsored by Ryan in the House and by Sen. Patty Murray (D-Wash.) in the Senate, the measure would:

  • Make data open by default. That means government data must be accessible for research and statistical purposes – while still protecting privacy, intellectual property, proprietary business information and the like. Exceptions are allowed for national security
  • Appoint a chief data officer responsible for ensuring data quality, governance and availability
  • Appoint a chief evaluation officer to “continually assess the coverage, quality, methods, consistency, effectiveness, independence and balance of the portfolio of evaluations, policy research, and ongoing evaluation activities of the agency”

“We’ve got to get off of this input, effort-based system [of measuring government performance], this 20th century relic, and onto clearly identifiable, evidence-based terms, conditions, data, results and outcomes,” Ryan said on the House floor Nov. 15. “It’s going to mean a real sea change in how government solves problems and how government actually works.”

Measuring program performance in government is an old challenge. The Bush and Obama administrations each struggled to implement viable performance measurement systems. But the advent of Big Data, advanced analytics and automation technologies holds promise for quickly understanding how programs perform and whether or not results match desired expectations. It also holds promise for both agencies and private sector players to devise new ways to use and share government data.

“Data is the lifeblood of the modern enterprise,” said Stan Tyliszczak, vice president of technology integration with systems integrator General Dynamics Information Technology. “Everything an organization does is captured:  client data; sensor and monitoring data; regulatory data, or even internal IT operating data. That data can be analyzed and processed by increasingly sophisticated tools to find previously unseen correlations and conclusions. And with open and accessible data, we’re no longer restricted to a small community of insiders.

“The real challenge, though, will be in the execution,” Tyliszczak says. “You have to make the data accessible. You have to establish policies for both open sharing and security. And you have to organize for change – because new methods and techniques are sure to emerge once people start looking at that data.”

Indeed, the bill would direct the Office of Management and Budget, Office of Government Information Services and General Services Administration to develop and maintain an online repository of tools, best practices and schema standards to facilitate open data practices across the Federal Government. Individual agency Chief Data Officers (CDOs) would be responsible for applying those standards to ensure data assets are properly inventoried, tagged and cataloged, complete with metadata descriptions that enable users to consume and use the data for any number of purposes.
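As an illustration of the kind of record such an inventory might hold, the sketch below shows one hypothetical catalog entry; the field names loosely follow common federal open-data metadata conventions, and the bill does not mandate any particular schema.

```python
# Illustrative only: one hypothetical entry in an agency data inventory.
import json

catalog_entry = {
    "identifier": "example-agency-2017-001",
    "title": "Example Program Outcomes, FY2015-FY2017",
    "description": "Annual program outcome measures reported by grantees.",
    "keyword": ["program evaluation", "grants", "outcomes"],
    "publisher": "Example Federal Agency",
    "contactPoint": "datasteward@example.gov",
    "accessLevel": "public",  # or "restricted public" / "non-public"
    "modified": "2017-09-30",
    "distribution": [
        {"format": "CSV", "downloadURL": "https://example.gov/data/outcomes.csv"}
    ],
}

print(json.dumps(catalog_entry, indent=2))
```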

Making data usable by outsiders is key. “Look at what happened when weather data was opened up,” says Tyliszczak. “A whole new class of web-based apps for weather forecasting emerged. Now, anyone with a smartphone can access up-to-the-minute weather forecasts from anywhere on the planet. That’s the power of open data.”

Choosing Standards
Most of the government’s data today is not open. Some is locked up in legacy systems that were never intended to be shared with the public. Some lacks the metadata and organization that would make it truly useful by helping users understand what individual fields represent. And most is pre-digested – that is, the information is bound up in PDF reports and documents rather than in formats consumable by analytics tools.

Overcoming all that will require discipline in technology, organization and execution.

“Simply publishing data in a well-known format is open, but it is not empowering,” says Mike Pizzo, co-chair of the Oasis Open Data Protocol (OData) Technical Committee and a software architect at Microsoft. “Data published as files is hard to find, hard to understand and tends to get stale as soon as it’s published.… To be useful, data must be accurate, consumable and interoperable.”

Some federal agencies are already embracing OData for externally facing APIs. The Department of Labor, for example, built a public-facing API portal providing access to 175 data tables within 32 datasets, with more planned in the future. Pizzo says other agencies both inside and outside the U.S. have used the standard to share, or “expose,” labor, city, health, revenue, planning and election information.

Some agencies are already driving in this direction by creating a data ecosystem built around data and application programming interfaces (APIs). The Department of Veterans Affairs disclosed in October it is developing plans to build a management platform called Lighthouse, intended to “create APIs that are managed as products to be consumed by developers within and outside of VA.”

The VA described the project as “a deliberate shift” to becoming an “API-driven digital enterprise,” according to a request for information published on FedBizOpps.gov. Throughout VA, APIs will be the vehicles through which different VA systems communicate and share data, underpinning both research and the delivery of benefits to veterans and allowing a more rapid migration from legacy systems to commercial off-the-shelf and Software-as-a-Service (SaaS) solutions. “It will enable creation of new, high-value experiences for our Veterans [and] VA’s provider partners, and allow VA’s employees to provide better service to Veterans,” the RFI states.

Standardizing the approach to building those APIs will be critical.

Modern APIs are based on REST (Representational State Transfer), a “style” for interacting with web resources, and on JSON (JavaScript Object Notation), a popular format for data interchange that is more efficient than XML (eXtensible Markup Language). These standards by themselves do not solve the interoperability problem, however, because they offer no standard way of identifying metadata, Pizzo says. This is what OData provides: a metadata description language intended to establish common conventions and best practices for metadata that can be applied on top of REST and JSON. Once applied, OData provides interoperable open data access, where records are searchable, recognizable, accessible and protectable – all largely because of the metadata.

OData is an OASIS and ISO standard and is widely supported across the industry, including by Microsoft, IBM, Oracle and Salesforce, among many others.

“There are something like 5,000 or 6,000 APIs published on the programmable web, but you couldn’t write a single application that would interact with two of them,” he says. “What we did with OData was to look across those APIs, take the best practices and define those as common conventions.” In effect, OData sets a standard for implementing REST with a JSON payload. Adopting the standard provides a shortcut for choosing the best way to implement request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats and query options, so that development effort can focus on the more important work of using the data.
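A minimal sketch of what that looks like from a client’s perspective appears below. The endpoint and entity set are hypothetical, but the query options ($filter, $select, $orderby, $top) and the JSON response envelope are standard OData conventions.

```python
# Querying a hypothetical OData service with standard query options.
import requests

BASE_URL = "https://api.example.gov/odata/UnemploymentRates"  # hypothetical service

params = {
    "$filter": "State eq 'VA' and Year eq 2016",  # server-side filtering
    "$select": "State,Year,Month,Rate",           # only the fields we need
    "$orderby": "Month asc",
    "$top": "12",
}

resp = requests.get(BASE_URL, params=params, headers={"Accept": "application/json"})
resp.raise_for_status()

payload = resp.json()
# OData responses wrap results in a "value" array and include metadata such as
# "@odata.context" so clients can discover the shape of the data.
for row in payload.get("value", []):
    print(row["Year"], row["Month"], row["Rate"])
```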

This has value whether or not a data owner plans to share data openly.

Whether APIs will be accessed by an agency’s own systems – as with one VA system tapping into the database of another agency’s system – or by consumers – as when a veteran accesses a user portal – doesn’t matter. In the Pentagon, one question never goes away: “How do we improve data interoperability from a joint force perspective?” said Robert Vietmeyer, associate director for Cloud Computing and Agile Development, Enterprise Services and Integration Directorate in the Office of the Defense Department CIO, at a recent Defense Systems forum.

“I talk to people all the time, and they say, ‘I’m just going to put this into this cloud, or that cloud, and they have a data integration engine or big data capability, or machine learning, and once it’s all there all my problems will be solved.’ No, it’s not. So when the next person comes in and says, ‘You have data I really need, open up your API, expose your data so I can access it, and support that function over time,’ they’re not prepared. The program manager says: ‘I don’t have money to support that.’”

Vietmeyer acknowledges the Pentagon is behind in establishing best practices and standards. “The standards program has been in flux,” he said. “We haven’t set a lot, but it’s one of those areas we’re trying to fix right now. I’m looking for all ideas to see what we can do.” Regardless, he sees a silver lining in the growing openness to cloud solutions. “The cloud makes it much easier to look at new models which can enable that data to become consumable by others,” he said.

Standards – whether by happenstance or by design – are particularly valuable in fulfilling unforeseen needs, Pizzo says. “Even if your service never ends up needing to be interoperable, it still has those good bones so that you know it can grow, it can evolve, and when you start scratching your head about a problem there are standards in place for how to answer that need,” he says.

By using established discipline at the start, data owners are better prepared to manage changing needs and requirements later, and to capitalize on new and innovative ways to use their data in the future.

“Ultimately, we want to automate as much of this activity as possible, and standards help make that possible,” says GDIT’s Tyliszczak. “Automation and machine learning will open up entirely new areas of exploration, insight, modernization and efficiency. We’ll never be able to achieve really large-scale integration if we rely on human-centered analysis.

“It makes more sense – and opens up a whole world of new opportunities – to leverage commercial standards and best practices to link back-end operations with front-end cloud solutions,” he adds. “That’s when we can start to truly take advantage of the power of the Cloud.”

Tobias Naegele has covered defense, military, and technology issues as an editor and reporter for more than 25 years, most of that time as editor-in-chief at Defense News and Military Times.

Calculating Technical Debt Can Focus Modernization Efforts

Your agency’s legacy computer systems can be a lot like your family minivan: Keep up the required maintenance and it can keep driving for years. Skimp on oil changes and ignore warning lights, however, and you’re living on borrowed time.

For information technology systems, unfunded maintenance – what developers call technical debt – accumulates rapidly. Each line of code builds on the rest, and when some of that code isn’t up to snuff it has implications for everything that follows. Ignoring a problem might save money now, but could cost a bundle later – especially if it leads to a system failure or breach.

The concept of technical debt has been around since computer scientist Ward Cunningham coined the term in 1992 as a means of explaining the future costs of fixing known software problems. More recently, it’s become popular among agile programmers, whose rapid cycle times demand short-term tradeoffs in order to meet near-term deadlines. Yet until recently it’s been seen more as metaphor than measure.

Now that’s changing.

“The industrial view is that anything I’ve got to spend money to fix – that constitutes corrective maintenance – really is a technical debt,” explains Bill Curtis, executive director of the Consortium for IT Software Quality (CISQ), a non-profit organization dedicated to improving software quality while reducing cost and risk. “If I’ve got a suboptimal design and it’s taking more cycles to process, then I’ve got performance issues that I’m paying for – and if that’s in the cloud, I may be paying a lot. And if my programmers are slow in making changes because the original code is kind of messy, well, that’s interest too. It’s costing us extra money and taking extra time to do maintenance and enhancement work because of things in the code we haven’t fixed.”

CISQ has proposed an industry standard for defining and measuring technical debt by analyzing software code and identifying potential defects. The number of hours needed to fix those defects, multiplied by developers’ fully loaded hourly rate, equals the principal portion of a firm’s technical debt. The interest portion of the debt is more complicated, encompassing a number of additional factors.
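A simple worked example of that principal calculation, with hypothetical defect counts, fix times and labor rate, might look like this:

```python
# Worked example of the principal calculation: hours to fix each detected
# defect times a fully loaded hourly rate. All figures are hypothetical.
defects = {
    # defect type: (count found by static analysis, estimated hours to fix each)
    "sql_injection_risk": (12, 4.0),
    "unhandled_exception": (40, 1.5),
    "inefficient_loop": (25, 2.0),
}
LOADED_HOURLY_RATE = 120.00  # fully loaded developer cost, dollars per hour

hours = sum(count * fix_hours for count, fix_hours in defects.values())
principal = hours * LOADED_HOURLY_RATE
print(f"{hours:.0f} hours of remediation -> ${principal:,.0f} of technical debt principal")
# -> 158 hours of remediation -> $18,960 of technical debt principal
```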

The standard is now under review at the standards-setting Object Management Group, and Curtis expects approval in December. Once adopted, the standard can be incorporated into analysis and other tools, providing a common, uniform means of calculating technical debt.

Defining a uniform measure has been a dream for years. “People began to realize they were making quick, suboptimal decisions to get software built and delivered in short cycles, and they knew they’d have to go back and fix it,” Curtis says.

If they could instead figure out the economic impact of these decisions before they were implemented, it would have huge implications on long-term quality as well as the bottom line.

Ipek Ozkaya, principal researcher and deputy manager in the software architecture practice at Carnegie Mellon University’s Software Engineering Institute – a federally funded research and development center – says the concept may not be as well understood in government, but the issues are just as pressing, if not more so.

“Government cannot move as quickly as industry,” she says. “So they have to live with the consequences much longer, especially in terms of cost and resources spent.”

The Elements of Technical Debt
Technical debt may be best viewed by breaking it down into several components, Curtis says:

  • Principal – future cost of fixing known structural weaknesses, flaws or inefficiencies
  • Interest – continuing costs directly attributable to the principal debt, such as: excess programming time, poor performance or excessive server costs due to inefficient code
  • Risk and liability – potential costs that could stem from issues waiting to be fixed, including system outages and security breaches
  • Opportunity costs – missed or delayed opportunities because of time and resources focused on working around or paying down technical debt

Human factors, such as the lost institutional memory resulting from excessive staff turnover or the lack of good documentation, can also contribute to the debt. If it takes more time to do the work, the debt grows.

For program managers, system owners, chief information officers or even agency heads, recognizing and tracking each of these components helps translate developers’ technical challenges into strategic factors that can be managed, balanced and prioritized.

Detecting these problems is getting simpler. Static analysis software tools can scan and identify most flaws automatically. But taking those reports and calculating a technical debt figure is another matter. Several software analysis tools are now on the market, such as those from Cast Software or SonarQube, which include technical debt calculators among their system features. But without standards to build on, those estimates can be all over the place.

The CISQ effort, built on surveys of technical managers from both the customer and supplier side of the development equation, aims to establish a baseline for the time factors involved with fixing a range of known defects that can affect security, maintainability and adherence to architectural design standards.

“Code quality is important, process quality is important,” Ozkaya says. “But … it’s really about trying to figure out those architectural aspects of the systems that require significant refactoring, re-architecting, sometimes even shutting down the system [and replacing it], as may happen in a lot of modernization challenges.” This, she says, is where measuring technical debt is most critical, providing a detailed understanding not only of what a system owner has now, but of what it will take to get it to a sustainable state later.

“It comes down to ensuring agile delivery teams understand the vision of the product they’re building,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s Health Solutions. “The ability to decompose big projects into a roadmap of smaller components that can be delivered in an incremental manner requires skill in both software architecture and business acumen. Building a technically great solution that no one uses doesn’t benefit anyone. Likewise, if an agile team delivers needed functionality in a rapid fashion but without a strong design, the product will suffer in the long run. Incremental design and incremental delivery require a high amount of discipline.”

Still, it’s one thing to understand the concept of technical debt; it’s another to measure it. “If you can’t quantify it,” Ozkaya says, “what’s the point?”

Curtis agrees: “Management wants an understanding of what their future maintenance costs will be and which of their applications have the most technical debt, because that is where they will need to allocate more resources. And [they want to know] how much technical debt I will need to remove before I’m at a sustainable level.”

These challenges hit customers in every sector, from banks to internet giants to government agencies. Those relying solely on in-house developers can rally around specific tools and approaches to their use, but for government customers – where outside developers are the norm – that’s not the case.

One of the challenges in government is the number of players, notes Marc Jones, North American vice president for public sector at Cast Software. “Vendor A writes the software, so he’s concerned with functionality, then the sustainment contractors come on not knowing what technical debt is already accumulated,” he says. “And government is not in the position to tell them.”

Worse, if the vendors and the government customer all use different metrics to calculate that debt, no one will be able to agree on the scale of the challenge, let alone how to manage it. “The definitions need to be something both sides of the buy-sell equation can agree on,” Jones says.

Once a standard is set, technical debt can become a powerful management tool. Consider an agency with multiple case management solutions. Maintaining multiple systems is costly and narrowing to a single solution makes sense. Each system has its champions and each likely has built up a certain amount of technical debt over the years. Choosing which one to keep and which to jettison might typically involve internal debate and emotions built up around personal preferences. By analyzing each system’s code and calculating technical debt, however, managers can turn an emotional debate into an economic choice.

Establishing technical debt as a performance metric in IT contracts is also beneficial. Contracting officers can require technical debt be monitored and reported, giving program managers insights into the quality of the software under development, and also the impact of decisions – whether on the part of either party – on long-term maintainability, sustainability, security and cost. That’s valuable to both sides and helps everyone understand how design decisions, modifications and requirements can impact a program over the long haul, as well as at a given point in time.

“To get that into a contract is not the status quo,” Jones says. “Quality is hard to put in. This really is a call to leadership.” By focusing on the issue at the contract level, he says, “Agencies can communicate to developers that technically acceptable now includes a minimum of quality and security. Today, security is seen as a must, while quality is perceived as nice to have. But the reality is that you can’t secure bad code. Security is an element of quality, not the other way around.”

Adopting a technical debt metric with periodic reporting ensures that everyone – developers and managers, contractors and customers – share a window on progress. In an agile development process, that means every third or fourth sprint can be focused on fixing problems and retiring technical debt in order to ensure that the debt never reaches an unmanageable level. Alternatively, GDIT’s Zach says developers may also aim to retire a certain amount of technical debt on each successive sprint. “If technical debt can take up between 10 and 20 percent of every sprint scope,” he says, “that slow trickle of ‘debt payments’ will help to avoid having to invest large spikes of work later just to pay down principal.”

For legacy systems, establishing a baseline and then working to reduce known technical debt is also valuable, especially in trying to decide whether it makes sense to keep that system, adapt it to cloud or abandon it in favor of another option.

“Although we modernize line by line, we don’t necessarily make decisions line by line,” says Ozkaya. By aggregating the effect of those line-by-line changes, managers gain a clearer view of the impact each individual decision has on the long-term health of a system. It’s not that going into debt for the right reasons doesn’t make sense, because it can. “Borrowing money to buy a house is a good thing,” Ozkaya says. “But borrowing too much can get you in trouble.”

It’s the same way with technical debt. Accepting imperfect code is reasonable as long as you have a plan to go back and fix it quickly. Choosing not to do so, though, is like paying just the minimum on your credit card bill. The debt keeps growing and can quickly get out of control.

“The impacts of technical debt are unavoidable,” Zach says.  “But what you do about it is a matter of choice. Managed properly, it helps you prioritize decisions and extend the longevity of your product or system. Quantifying the quality of a given code base is a powerful way to improve that prioritization. From there, real business decisions can be made.”

What the White House’s Final IT Modernization Report Could Look Like

Modernization is the hot topic in Washington tech circles these days. There are breakfasts, summits and roundtables almost weekly. Anticipation is building as the White House and its American Technology Council ready the final version of the Report to the President on IT Modernization, and as the next steps in the national cyber response plan near public release.

At the same time, flexible funding for IT modernization is coming into view with the Modernizing Government Technology (MGT) bill, which unanimously passed the House in the spring and was passed by the Senate as part of the 2018 National Defense Authorization Act (NDAA). Barring any surprises, the measure will become law later this fall when the NDAA conference is complete, providing federal agencies with a revolving fund for modernization initiatives and a centralized mechanism for prioritizing projects across government.

The strategy and underlying policy for moving forward will flow from the final Report on IT Modernization. Released in draft form on Aug. 30, it generated 93 formal responses from industry groups, vendors and individuals. Most praised its focus on consolidated networks and common, cloud-based services, but also raised concerns about elements of the council’s approach. Among the themes to emerge from the formal responses:

  • The report’s aggressive schedule of data collection and reporting deadlines drew praise, but its emphasis on reporting – while necessary for transparency and accountability – was seen by some as emphasizing bureaucratic process ahead of results. “The implementation plan should do more than generate additional plans and reports,” wrote the Arlington, Va.-based Professional Services Council (PSC) in its comments. Added the American Council for Technology–Industry Advisory Council (ACT-IAC), of Fairfax, Va.: “Action-oriented recommendations could help set the stage for meaningful change.” For example, ACT-IAC recommended requiring agencies to implement Software Asset Management within six or nine months.
  • While the draft report suggests agencies “consider immediately pausing or halting upcoming procurement actions that further develop or enhance legacy IT systems,” commenters warned against that approach. “Given the difficulty in allocating resources and the length of the federal acquisition lifecycle, pausing procurements or reallocating resources to other procurements may be difficult to execute and could adversely impact agency operations,” warned PSC. “Delaying the work on such contracts could increase security exposure of the systems being modernized and negatively impact the continuity of services.”
  • The initial draft names Google, Salesforce, Amazon and Microsoft as potential partners in a pilot program to test a new way of acquiring software licenses across the federal sector, and specifies the General Services Administration’s (GSA) new Enterprise Infrastructure Solutions (EIS) contract as a preferred contract vehicle not just for networking, but also for shared services. Commenters emphasized that the White House should focus on setting desired objectives at this stage rather than prescribing solutions. “The report should be vendor and product agnostic,” wrote Kenneth Allen, ACT-IAC executive director. “Not being so could result in contracting issues later, as well as possibly skew pilot outcomes.”
  • Respondents generally praised the notion of consolidating agencies under a single IT network, but raised concerns about the risks of focusing too much on a notional perimeter rather than on end-to-end solutions for securing data, devices and identity management across that network. “Instead of beginning detection mitigation at the network perimeter, a cloud security provider is able to begin mitigation closer to where threats begin” and often is better situated and equipped to respond, noted Akamai Technologies, of Cambridge, Mass. PSC added that the layered security approach recommended in the draft report should be extended to include security already built into cloud computing services.

Few would argue with the report’s assertion that “The current model of IT acquisition has contributed to a fractured IT landscape,” or with its advocacy for category management as a means to better manage the purchase and implementation of commodity IT products and services. But concerns did arise over the report’s concept to leverage the government’s EIS contract as a single, go-to source for a host of network cybersecurity products and services.

“The report does not provide guidance regarding other contract vehicles with scope similar to EIS,” says the IT Alliance for Public Sector (ITAPS), a division of the Information Technology Industry Council (ITIC), a trade group, which notes that vehicles such as Alliant, NITAAC CIO-CS and CIO-SP3 may offer agencies more options than EIS. PSC agreed: “While EIS provides a significant opportunity for all agencies, it is only one potential solution. The final report should encourage small agencies to evaluate resources available from not only GSA, but also other federal agencies. Rather than presuming that consolidation will lead to the desired outcomes, agencies should make an economic and business analysis to validate that presumption.”

The challenge is how to make modernization work effectively in an environment where different agencies have vastly different capabilities. The problem today, says Grant Schneider, acting federal chief information security officer, is that “we expect the smallest agencies to have the same capabilities as the Department of Defense or the Department of Homeland Security, and that’s not realistic.”

The American Technology Council Report attempts to address IT modernization at several levels, in terms of both architecture and acquisition. The challenge is clear, says Schneider: “We have a lot of very old stuff. So, as we’re looking at our IT modernization, we have to modernize in such a way that we don’t build the next decade’s legacy systems tomorrow. We are focused on how we change the way we deliver services, moving toward cloud as well as shared services.”

Standardizing and simplifying those services will be key, says Stan Tyliszczak, chief engineer with systems integrator General Dynamics Information Technology. “If you look at this from an enterprise perspective, it makes sense to standardize instead of continuing with a ‘to-each-his-own’ approach,” Tyliszczak says. “Standardization enables consolidation, simplification and automation, which in turn will increase security, improve performance and reduce costs. Those are the end goals everybody wants.”

One Big Risk With Big Data: Format Lock-In

Insider threat programs and other long-term Big Data projects demand users take a longer view than is necessary with most technologies.

If the rapid development of new technologies over the past three decades has taught us anything, it’s that each successive new technology will undoubtedly be replaced by another. Vinyl records gave way to cassettes and then compact discs and MP3 files; VHS tapes gave way to DVD and video streaming.

Saving and using large databases present similar challenges. As agencies retain security data to track behavior patterns over years and even decades, ensuring the information remains accessible for future audit and forensic investigations is critical. Today, agency requirements call for saving system logs for a minimum of five years. But there’s no magic to that timeframe, which is arguably not long enough.

The records of many notorious trusted insiders who later went rogue – from Aldrich Ames at the CIA to Robert Hanssen at the FBI to Harold T. Martin III at the NSA – suggest the first indications of trouble began a decade or longer before they were caught. It stands to reason, then, that longer-term tracking should make it harder for moles to remain undetected.

But how can agencies ensure data saved today will still be readable in 20 or 30 years? The answer is in normalizing data and standardizing the way data is saved.

“This is actually going on now where you have to convert your ArcSight databases into Elastic,” says David Sarmanian, an enterprise architect with General Dynamics Information Technology (GDIT). The company helps manage a variety of programs involving large, longitudinal databases for government customers. “There is a process here of taking all the old data – where we have data that is seven years old – and converting that into a new format for Elastic.”

JavaScript Object Notation (JSON) is an open standard for data interchange favored by many integrators and vendors. As a lightweight data-interchange format, it is easy for humans to read and write and easy for machines to parse and generate. Non-proprietary and widely used, it is common in web application development and Java programming, and it is the document format used by the popular Elasticsearch search engine.

To convert data to JSON for one customer, GDIT’s Sarmanian says, “We had to write a special script that did that conversion.”  Converting to a common, widely used standard helps ensure data will be accessible in the future, but history suggests that any format used today is likely to change in the future – as will file storage. Whether in an on-premises data center or in the cloud, agencies need to be concerned about how best to ensure long-term access to the data years or decades from now.
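The actual conversion script isn’t public, but a hedged sketch of that kind of one-off conversion, with hypothetical field names, might read a legacy CSV export and emit newline-delimited JSON ready for Elasticsearch’s bulk API:

```python
# Sketch of a legacy-to-JSON conversion: read a CSV export of security events
# and write newline-delimited JSON in Elasticsearch bulk format. Field names
# are hypothetical.
import csv
import json

INDEX_NAME = "security-events-archive"

def csv_to_bulk_ndjson(csv_path, out_path):
    with open(csv_path, newline="") as src, open(out_path, "w") as dst:
        for row in csv.DictReader(src):
            # Normalize the legacy record into the target JSON document.
            doc = {
                "timestamp": row["event_time"],
                "source_ip": row["src_ip"],
                "user": row["username"],
                "action": row["action"],
            }
            # Each document is preceded by a bulk-API action line.
            dst.write(json.dumps({"index": {"_index": INDEX_NAME}}) + "\n")
            dst.write(json.dumps(doc) + "\n")

# Usage: csv_to_bulk_ndjson("legacy_export.csv", "events.ndjson")
# The resulting file can be POSTed to the Elasticsearch /_bulk endpoint.
```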

“If you put it in the cloud now, what happens in the future if you want to change? How do you get it out if you want to go from Amazon to Microsoft Azure – or the other way – 15 years from now? There could be something better than Hadoop or Google, but the cost could be prohibitive,” says Sarmanian.

JSON has emerged as a favored standard, supported by a diverse range of vendors from Amazon Web Services and Elastic to IBM and Oracle. In a world where technologies and businesses can come and go rapidly, its wide use is reassuring to government customers with a long-range outlook.

“Elasticsearch is open source,” says Michael Paquette, director of products for security markets with Elastic, developer of the Elasticsearch distributed search and analytical engine. “Therefore, you can have it forever. If Elasticsearch ever stopped being used, you can keep an old copy of it and access data forever. If you choose to use encryption, then you take on the obligation of managing the keys that go with that encryption and decryption.”

In time, some new format may be favored, necessitating a conversion similar to the one Sarmanian’s team is performing today to move its customer’s data to JSON. Conversion itself will have a cost, of course. But by using an open standard today, it’s far less likely that you’ll need custom code to make that conversion tomorrow.
