Feds Look to AI Solutions to Solve Problems from Customer Service to Cyber Defense

From Amazon’s Alexa speech recognition technology to Facebook’s uncanny ability to recognize our faces in photos and the coming wave of self-driving cars, artificial intelligence (AI) and machine learning (ML) are changing the way we look at the world – and how it looks at us.

Nascent efforts to embrace natural language processing to power AI chatbots on government websites and call centers are among the leading short-term AI applications in the government space. But AI also has potential applications in virtually every government sector, from health and safety research to transportation safety, agriculture, weather prediction and cyber defense.

The ideas behind artificial intelligence are not new. Indeed, the U.S. Postal Service has used machine vision to automatically read and route hand-written envelopes for nearly 20 years. What’s different today is that the plunging price of data storage and the increasing speed and scalability of computing power using cloud services from Amazon Web Services (AWS) and Microsoft Azure, among others, are converging with new software to make AI solutions easier and less costly to execute than ever before.

Justin Herman, emerging citizen technology lead at the General Services Administration’s Technology Transformation Service, is a sort of AI evangelist for the agency. His job, he says, is to help other agencies understand AI and “prove AI is real.”

That means talking to feds, lawmakers and vendors to spread an understanding of how AI and machine learning can transform at least some parts of government.

“What are agencies actually doing and thinking about?” he asked at the recent Advanced Technology Academic Research Center’s Artificial Intelligence Applications for Government Summit. “You’ve got to ignore the hype and bring it down to a level that’s actionable…. We want to talk about the use cases, the problems, where we think the data sets are. But we’re not prescribing the solutions.”

GSA set up an “Emerging Citizen Technology Atlas” this fall, essentially an online portal for AI government applications, and established an AI user group that holds its first meeting Dec. 13. Its AI Assistant Pilot program, which so far lists more than two dozen instances where agencies hope to employ AI, includes aspirational projects such as:

  • Department of Health and Human Services: Develop responses for Amazon’s Alexa platform to help users quit smoking and answer questions about food safety
  • Department of Housing and Urban Development: Automate or assist with customer service using existing site content
  • National Forest Service: Provide alerts, notices and information about campgrounds, trails and recreation areas
  • Federal Student Aid: Automate responses to queries on social media about applying for and receiving aid
  • Defense Logistics Agency: Help businesses answer frequently asked questions, access requests for quotes and identify commercial and government entity (CAGE) codes

Separately, NASA used the Amazon Lex platform to train its “Rov-E” robotic ambassador to follow voice commands and answer students’ questions about Mars, a novel AI application for outreach. And chatbots – rare just two years ago – now are ubiquitous on websites, Facebook and other social media.
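
For readers curious what such a voice interaction looks like under the hood, here is a minimal sketch of a fulfillment handler in the style used with Amazon Lex; the intent name, slot and answers are hypothetical and are not NASA’s actual Rov-E implementation.

```python
# Hypothetical AWS Lambda fulfillment handler for a Lex V1-style bot.
# Intent name, slot name and answers are invented for illustration.

MARS_FACTS = {
    "distance": "Mars is, on average, about 140 million miles from the Sun.",
    "weather": "A summer afternoon near the Martian equator can reach about 70 degrees Fahrenheit.",
}

def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"].get("slots") or {}
    topic = (slots.get("MarsTopic") or "").lower()

    if intent == "AskAboutMars" and topic in MARS_FACTS:
        answer = MARS_FACTS[topic]
    else:
        answer = "I don't know that one yet, but Mars is full of surprises."

    # Lex-style bots expect a dialogAction block that closes the turn
    # with a plain-text message for the voice platform to speak.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": answer},
        }
    }

if __name__ == "__main__":
    fake_event = {"currentIntent": {"name": "AskAboutMars", "slots": {"MarsTopic": "weather"}}}
    print(lambda_handler(fake_event, None)["dialogAction"]["message"]["content"])
```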

Agencies also use Facebook Messenger instant messaging to communicate with citizens; in all, there are now more than 100,000 chatbots on Facebook Messenger. Customer service chatbots are now a common feature, but they remain the most basic of AI applications.

“The challenge for government, as is always the case with new technology, is finding the right applications for use and breaking down the walls of security or privacy concerns that might block the way forward,” says Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology Health and Civilian Solutions Division. “For now, figuring out how to really make AI practical for enhanced customer experience and enriched data, and with a clear return on investment, is going to take thoughtful consideration and a certain amount of trial and error.”

But as with cloud in years past, progress can be rapid. “There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal says. AI can follow the same path. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

That view is shared by Hila Mehr, a fellow at the Ash Center for Democratic Governance and Innovation at Harvard University’s Kennedy School of Government and a member of IBM’s Market Development and Insight strategy team. “AI becomes powerful with machine learning, where the computer learns from supervised training and inputs over time to improve responses,” she wrote in Artificial Intelligence for Citizen Services and Government, an Ash Center white paper published in August.

In addition to chatbots, she sees translation services and facial recognition and other kinds of image identification as perfectly suited applications where “AI can reduce administrative burdens, help resolve resource allocation problems and take on significantly complex tasks.”

Open government – the act of making government data broadly available for new and innovative uses – is another promise. As Herman notes, challenging his fellow feds: “Your agencies are collecting voluminous amounts of data that are just sitting there, collecting dust. How can we make that actionable?”

Emerging Technology
Historically, most of that data wasn’t actionable. Paper forms and digital scans lack the structure and metadata to lend themselves to big data applications. But those days are rapidly fading. Electronic health records are turning the tide with medical data; website traffic data is helping government understand what citizens want when visiting, providing insights and feedback that can be used to improve the customer experience.

And that’s just the beginning. According to Fiaz Mohamed, head of solutions enablement for Intel’s AI Products Group, data volumes are growing exponentially. “By 2020, the average internet user will generate 1.5 GB of traffic per day; each self-driving car will generate 4,000 GB/day; connected planes will generate 40,000 GB/day,” he says.

At the same time, advances in hardware will enable faster and faster processing of that data, driving down the cost of compute-intensive AI number crunching. Facial recognition historically required extensive human training simply to teach the system the critical factors to look for, such as the distance between the eyes and the nose. “But now neural networks can take multiple samples of a photo of [an individual], and automatically detect what features are important,” he says. “The system actually learns what the key features are. Training yields the ability to infer.”
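
The shift Mohamed describes, from hand-coded measurements like eye-to-nose distance to features the network discovers on its own, is what a convolutional neural network does in code. A minimal PyTorch sketch, with layer sizes, image dimensions and class counts chosen purely for illustration:

```python
# Minimal convolutional network sketch: no hand-coded facial measurements;
# the convolutional layers learn which visual patterns matter during training.
# All sizes here are illustrative only.
import torch
import torch.nn as nn

class FaceFeatureNet(nn.Module):
    def __init__(self, num_identities: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_identities)

    def forward(self, x):                  # x: (batch, 3, 64, 64) face images
        x = self.features(x)               # learned feature maps, not hand-built rules
        return self.classifier(x.flatten(1))

model = FaceFeatureNet(num_identities=10)
logits = model(torch.randn(4, 3, 64, 64))  # four random 64x64 RGB stand-in "photos"
print(logits.shape)                         # torch.Size([4, 10])
```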

Intel, long known for its microprocessor technologies, is investing heavily in AI through internal development and external acquisitions. Intel bought machine-learning specialist Nervana in 2016 and programmable chip specialist Altera the year before. The combination is key to the company’s integrated AI strategy, Mohamed says. “What we are doing is building a full-stack solution for deploying AI at scale,” Mohamed says. “Building a proof-of-concept is one thing. But actually taking this technology and deploying it at the scale that a federal agency would want is a whole different thing.”

Many potential AI applications pose similar challenges.

FINRA, the Financial Industry Regulatory Authority, is among the government’s biggest users of AWS cloud services. Its market surveillance system captures and stores 75 billion financial records every day, then analyzes that data to detect fraud. “We process every day what Visa and Mastercard process in six months,” says Steve Randich, FINRA’s chief information officer, in a presentation captured on video. “We stitch all this data together and run complex sophisticated surveillance queries against that data to look for suspicious activity.” The payoff: a 400 percent increase in performance.
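
FINRA’s actual pipeline is proprietary and runs at a vastly larger scale, but a toy example conveys what a rule-based surveillance query over trade records looks like; the schema, sample data and wash-trade-style rule below are invented for illustration.

```python
# Toy surveillance-style query over trade records. The schema, sample data
# and rule are invented; real market surveillance runs against billions of
# records with far more sophisticated detection logic.
import pandas as pd

trades = pd.DataFrame({
    "account": ["A1", "A1", "B2", "B2", "B2", "C3"],
    "symbol":  ["XYZ", "XYZ", "XYZ", "XYZ", "XYZ", "QQQ"],
    "side":    ["BUY", "SELL", "BUY", "SELL", "BUY", "BUY"],
    "shares":  [500, 500, 10_000, 10_000, 10_000, 100],
    "minute":  [1, 2, 5, 5, 6, 9],
})

# Flag accounts that both buy and sell the same symbol in the same minute,
# a crude stand-in for one class of wash-trade surveillance rule.
suspicious = (
    trades.groupby(["account", "symbol", "minute"])["side"]
          .nunique()
          .reset_index(name="sides")
          .query("sides > 1")
)
print(suspicious)
```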

Other uses include predictive fleet maintenance. IBM put its Watson AI engine to work last year in a proof-of-concept test of Watson’s ability to perform predictive maintenance for a fleet of 350 U.S. Army Stryker armored vehicles. In September, the Army’s Logistics Support Activity (LOGSA) signed a contract adding Watson’s cognitive services to other cloud services it gets from IBM.

“We’re moving beyond infrastructure as-a-service and embracing both platform and software as-a service,” said LOGSA Commander Col. John D. Kuenzli. He said Watson holds the potential to “truly enable LOGSA to deliver cutting-edge business intelligence and tools to give the Army unprecedented logistics support.”

AI applications have a few things in common. They use large data sets to gain an understanding of a problem and advanced computing to learn through experience. Many applications share a basic construct even if the objectives are different. Identifying military vehicles in satellite images is not unlike identifying tumors in mammograms or finding illegal contraband in x-ray images of carry-on baggage. The specifics of the challenge are different, but the fundamentals are the same. Ultimately, machines will be able to do that more accurately – and faster – than people, freeing humans to do higher-level work.

“The same type of neural network can be applied to different domains so long as the function is similar,” Mohamed says. So a system built to detect tumors for medical purposes could be adapted and trained instead to detect pedestrians in a self-driving automotive application.
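
In practice, reusing a network across domains this way is done through transfer learning: keep the trained feature-extraction layers and retrain only the final classifier for the new task. A minimal sketch under that assumption, using a standard pretrained torchvision model and an invented two-class pedestrian task:

```python
# Transfer-learning sketch: reuse a network trained on one image domain and
# retrain only its final layer for another (here, an invented pedestrian /
# no-pedestrian task). Model choice and class count are illustrative.
import torch.nn as nn
from torchvision import models

# Pretrained weights stand in for the network trained on the original domain.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the feature-extraction layers learned on the original domain.
for param in model.parameters():
    param.requires_grad = False

# Swap in a new classification head for the new domain.
model.fc = nn.Linear(model.fc.in_features, 2)  # pedestrian vs. no pedestrian

# Only the new head's parameters would be trained on the new domain's images.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"{trainable:,} trainable parameters out of {total:,}")
```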

Neural net processors will help because they are simply more efficient at this kind of computation than conventional central processing units. Initially these processors will reside in data centers or the cloud, but Intel already has plans to scale the technology to meet the low-power requirements of edge applications that might support remote, mobile users, such as in military or border patrol applications.

Evidence-Based Policy Act Could Change How Feds Use, Share Data

As government CIOs try to get their arms around how the Modernizing Government Technology (MGT) Act will affect their lives and programs, the next big IT measure to hit Congress is coming into focus: House Speaker Paul Ryan’s (R-Wis.) “Foundations for Evidence-Based Policymaking Act of 2017.”

A bipartisan measure now pending in both the House and Senate, the bill has profound implications for how federal agencies manage and organize data – the keys to being able to put data for informed policy decisions into the public domain in the future. Sponsored by Ryan in the House and by Sen. Patty Murray (D-Wash.) in the Senate, the measure would:

  • Make data open by default. That means government data must be accessible for research and statistical purposes – while still protecting privacy, intellectual property, proprietary business information and the like. Exceptions are allowed for national security
  • Appoint a chief data officer responsible for ensuring data quality, governance and availability
  • Appoint a chief evaluation officer to “continually assess the coverage, quality, methods, consistency, effectiveness, independence and balance of the portfolio of evaluations, policy research, and ongoing evaluation activities of the agency”

“We’ve got to get off of this input, effort-based system [of measuring government performance], this 20th century relic, and onto clearly identifiable, evidence-based terms, conditions, data, results and outcomes,” Ryan said on the House floor Nov. 15. “It’s going to mean a real sea change in how government solves problems and how government actually works.”

Measuring program performance in government is an old challenge. The Bush and Obama administrations each struggled to implement viable performance measurement systems. But the advent of Big Data, advanced analytics and automation technologies holds promise for quickly understanding how programs perform and whether or not results match desired expectations. It also holds promise for both agencies and private sector players to devise new ways to use and share government data.

“Data is the lifeblood of the modern enterprise,” said Stan Tyliszczak, vice president of technology integration with systems integrator General Dynamics Information Technology. “Everything an organization does is captured: client data, sensor and monitoring data, regulatory data, even internal IT operating data. That data can be analyzed and processed by increasingly sophisticated tools to find previously unseen correlations and conclusions. And with open and accessible data, we’re no longer restricted to a small community of insiders.

“The real challenge, though, will be in the execution,” Tyliszczak says. “You have to make the data accessible. You have to establish policies for both open sharing and security. And you have to organize for change – because new methods and techniques are sure to emerge once people start looking at that data.”

Indeed, the bill would direct the Office of Management and Budget, Office of Government Information Services and General Services Administration to develop and maintain an online repository of tools, best practices and schema standards to facilitate open data practices across the Federal Government. Individual agency Chief Data Officers (CDO) would be responsible for applying those standards to ensure data assets are properly inventoried, tagged and cataloged, complete with metadata descriptions that enable users to consume and use the data for any number of purposes.
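
In practice, “inventoried, tagged and cataloged” means each data asset carries a machine-readable metadata record. The sketch below is loosely patterned on the data.json metadata schema already used for federal open-data inventories; every value is an invented placeholder.

```python
# Minimal catalog entry for a single data asset, loosely patterned on the
# data.json metadata schema used for federal open-data inventories.
# All values are invented placeholders.
import json

dataset_entry = {
    "title": "Example Facility Inspection Results",
    "description": "Facility inspection outcomes, updated monthly.",
    "keyword": ["inspections", "public safety"],
    "modified": "2017-11-01",
    "publisher": {"name": "Example Agency"},
    "accessLevel": "public",
    "identifier": "example-agency-inspections-001",
    "distribution": [
        {
            "mediaType": "application/json",
            "accessURL": "https://api.example.gov/v1/inspections",  # placeholder URL
        }
    ],
}

print(json.dumps(dataset_entry, indent=2))
```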

Making data usable by outsiders is key. “Look at what happened when weather data was opened up,” says Tyliszczak. “A whole new class of web-based apps for weather forecasting emerged. Now, anyone with a smartphone can access up-to-the-minute weather forecasts from anywhere on the planet. That’s the power of open data.”

Choosing Standards
Most of the government’s data today is not open. Some is locked up in legacy systems that were never intended to be shared with the public. Some lacks the metadata and organization that would make it truly useful by helping users understand what individual fields represent. And most is pre-digested – that is, the information is bound up in PDF reports and documents rather than in a form consumable by analytics tools.

Overcoming all that will require discipline in technology, organization and execution.

“Simply publishing data in a well-known format is open, but it is not empowering,” says Mike Pizzo, co-chair of the Oasis Open Data Protocol (OData) Technical Committee and a software architect at Microsoft. “Data published as files is hard to find, hard to understand and tends to get stale as soon as it’s published.… To be useful, data must be accurate, consumable and interoperable.”

Some federal agencies are already embracing OData for externally-facing APIs. The Department of Labor, for example, built a public-facing API portal providing access to 175 data tables within 32 datasets, with more planned in the future. Pizzo says other agencies both inside and outside the U.S. have used the standard to share, or “expose,” labor, city, health, revenue, planning and election information.

Some agencies are already driving in this direction by creating a data ecosystem built around data and application programming interfaces (APIs). The Department of Veterans Affairs disclosed in October it is developing plans to build a management platform called Lighthouse, intended to “create APIs that are managed as products to be consumed by developers within and outside of VA.”

The VA described the project as “a deliberate shift” to becoming an “API-driven digital enterprise,” according to a request for information published on FedBizOps.gov. Throughout VA, APIs will be the vehicles through which different VA systems communicate and share data, underpinning both research and the delivery of benefits to veterans and allowing a more rapid migration from legacy systems to commercial off-the-shelf and Software-as-a-Service (SaaS) solutions. “It will enable creation of new, high-value experiences for our Veterans [and] VA’s provider partners, and allow VA’s employees to provide better service to Veterans,” the RFI states.

Standardizing the approach to building those APIs will be critical.

Modern APIs are based on REST (Representational State Transfer), a “style” for interacting with web resources, and on JSON (JavaScript Object Notation), a popular format for data interchange that is more efficient than XML (eXtensible Markup Language). These standards by themselves do not solve the interoperability problem, however, because they offer no standard way of identifying metadata, Pizzo says. This is what OData provides: a metadata description language intended to establish common conventions and best practices for metadata that can be applied on top of REST and JSON. Once applied, OData provides interoperable open data access, where records are searchable, recognizable, accessible and protectable – all, largely, because of the metadata.

OData is an OASIS and ISO standard and is widely supported across the industry, including by Microsoft, IBM, Oracle and Salesforce, among many others.

“There are something like 5,000 or 6,000 APIs published on the programmable web, but you couldn’t write a single application that would interact with two of them,” he says. “What we did with OData was to look across those APIs, take the best practices and define those as common conventions.” In effect, OData sets a standard for implementing REST with a JSON payload. Adopting the standard provides a shortcut for choosing the best way to implement request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats and query options, so development can focus on the more important work of using the data.
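
To make those conventions concrete, here is a minimal sketch of an OData-style query using standard query options such as $select, $filter and $top; the endpoint and field names are hypothetical rather than any agency’s actual API.

```python
# Minimal OData-style query sketch. The endpoint and fields are placeholders;
# the $select/$filter/$orderby/$top options and the JSON "value" envelope
# follow standard OData conventions.
import requests

BASE_URL = "https://api.example.gov/odata/UnemploymentRates"  # hypothetical endpoint

params = {
    "$select": "state,year,rate",
    "$filter": "year eq 2016 and rate gt 5.0",
    "$orderby": "rate desc",
    "$top": 10,
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()

# OData JSON responses wrap result rows in a "value" array.
for row in response.json().get("value", []):
    print(row["state"], row["year"], row["rate"])
```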

This has value whether or not a data owner plans to share data openly.

Whether APIs will be accessed by an agency’s own systems – as with one VA system tapping into the database of another agency’s system – or by consumers – as in the case of a veteran accessing a user portal – doesn’t matter. In the Pentagon, one question never goes away: “How do we improve data interoperability from a joint force perspective?” said Robert Vietmeyer, associate director for Cloud Computing and Agile Development, Enterprise Services and Integration Directorate in the Office of the Defense Department CIO, at a recent Defense Systems forum.

“I talk to people all the time, and they say, ‘I’m just going to put this into this cloud, or that cloud, and they have a data integration engine or big data capability, or machine learning, and once it’s all there all my problems will be solved.’ No, it’s not. So when the next person comes in and says, ‘You have data I really need, open up your API, expose your data so I can access it, and support that function over time,’ they’re not prepared. The program manager says: ‘I don’t have money to support that.’”

Vietmeyer acknowledges the Pentagon is behind in establishing best practices and standards. “The standards program has been in flux,” he said. “We haven’t set a lot, but it’s one of those areas we’re trying to fix right now. I’m looking for all ideas to see what we can do.” Still, he sees a silver lining in the growing openness to cloud solutions. “The cloud makes it much easier to look at new models which can enable that data to become consumable by others,” he said.

Standards – whether by happenstance or by design – are particularly valuable in fulfilling unforeseen needs, Pizzo says. “Even if your service never ends up needing to be interoperable, it still has those good bones so that you know it can grow, it can evolve, and when you start scratching your head about a problem there are standards in place for how to answer that need,” he says.

By using established discipline at the start, data owners are better prepared to manage changing needs and requirements later, and to capitalize on new and innovative ways to use their data in the future.

“Ultimately, we want to automate as much of this activity as possible, and standards help make that possible,” says GDIT’s Tyliszczak. “Automation and machine learning will open up entirely new areas of exploration, insight, modernization and efficiency. We’ll never be able to achieve really large-scale integration if we rely on human-centered analysis.

“It makes more sense – and opens up a whole world of new opportunities – to leverage commercial standards and best practices to link back-end operations with front-end cloud solutions,” he adds. “That’s when we can start to truly take advantage of the power of the Cloud.”

Tobias Naegele has covered defense, military, and technology issues as an editor and reporter for more than 25 years, most of that time as editor-in-chief at Defense News and Military Times.

Calculating Technical Debt Can Focus Modernization Efforts

Your agency’s legacy computer systems can be a lot like your family minivan: Keep up the required maintenance and it can keep driving for years. Skimp on oil changes and ignore warning lights, however, and you’re living on borrowed time.

For information technology systems, unfunded maintenance – what developers call technical debt – accumulates rapidly. Each line of code builds on the rest, and when some of that code isn’t up to snuff it has implications for everything that follows. Ignoring a problem might save money now, but could cost a bundle later – especially if it leads to a system failure or breach.

The concept of technical debt has been around since computer scientist Ward Cunningham coined the term in 1992 as a means of explaining the future costs of fixing known software problems. More recently, it’s become popular among agile programmers, whose rapid cycle times demand short-term tradeoffs in order to meet near-term deadlines. Yet until recently it’s been seen more as metaphor than measure.

Now that’s changing.

“The industrial view is that anything I’ve got to spend money to fix – that constitutes corrective maintenance – really is a technical debt,” explains Bill Curtis, executive director of the Consortium for IT Software Quality (CISQ), a non-profit organization dedicated to improving software quality while reducing cost and risk. “If I’ve got a suboptimal design and it’s taking more cycles to process, then I’ve got performance issues that I’m paying for – and if that’s in the cloud, I may be paying a lot. And if my programmers are slow in making changes because the original code is kind of messy, well, that’s interest too. It’s costing us extra money and taking extra time to do maintenance and enhancement work because of things in the code we haven’t fixed.”

CISQ has proposed an industry standard for defining and measuring technical debt by analyzing software code and identifying potential defects. The number of hours needed to fix those defects, multiplied by developers’ fully loaded hourly rate, equals the principal portion of a firm’s technical debt. The interest portion of the debt is more complicated, encompassing a number of additional factors.
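
A worked example of that principal calculation, with defect counts, fix times and the labor rate all invented for illustration:

```python
# Worked example of the technical-debt principal described above:
# hours needed to fix known defects multiplied by a fully loaded hourly rate.
# Defect counts, per-fix hours and the rate are invented for illustration.
defects = {
    # defect category: (count, estimated hours to fix each)
    "security":        (12, 6.0),
    "reliability":     (30, 3.5),
    "maintainability": (150, 1.5),
}
hourly_rate = 110.00  # assumed fully loaded developer cost per hour

hours = sum(count * per_fix for count, per_fix in defects.values())
principal = hours * hourly_rate
print(f"{hours:.0f} remediation hours -> ${principal:,.0f} of technical-debt principal")
```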

The standard is now under review at the standards-setting Object Management Group, and Curtis expects approval in December. Once adopted, the standard can be incorporated into analysis and other tools, providing a common, uniform means of calculating technical debt.

Defining a uniform measure has been a dream for years. “People began to realize they were making quick, suboptimal decisions to get software built and delivered in short cycles, and they knew they’d have to go back and fix it,” Curtis says.

If they could instead figure out the economic impact of these decisions before they were implemented, it would have huge implications on long-term quality as well as the bottom line.

Ipek Ozkaya, principal researcher and deputy manager in the software architecture practice at Carnegie Mellon University’s Software Engineering Institute – a federally funded research and development center – says the concept may not be as well understood in government, but the issues are just as pressing, if not more so.

“Government cannot move as quickly as industry,” she says. “So they have to live with the consequences much longer, especially in terms of cost and resources spent.”

The Elements of Technical Debt
Technical debt may be best viewed by breaking it down into several components, Curtis says:

  • Principal – future cost of fixing known structural weaknesses, flaws or inefficiencies
  • Interest – continuing costs directly attributable to the principal debt, such as excess programming time, poor performance or excessive server costs due to inefficient code
  • Risk and liability – potential costs that could stem from issues waiting to be fixed, including system outages and security breaches
  • Opportunity costs – missed or delayed opportunities because of time and resources focused on working around or paying down technical debt

Human factors, such as the lost institutional memory resulting from excessive staff turnover or the lack of good documentation, can also contribute to the debt. If it takes more time to do the work, the debt grows.

For program managers, system owners, chief information officers or even agency heads, recognizing and tracking each of these components helps translate developers’ technical challenges into strategic factors that can be managed, balanced and prioritized.

Detecting these problems is getting simpler. Static analysis software tools can scan and identify most flaws automatically. But taking those reports and calculating a technical debt figure is another matter. Several software analysis tools are now on the market, such as those from Cast Software or SonarQube, which include technical debt calculators among their system features. But without standards to build on, those estimates can be all over the place.

The CISQ effort, built on surveys of technical managers from both the customer and supplier side of the development equation, aims to establish a baseline for the time factors involved with fixing a range of known defects that can affect security, maintainability and adherence to architectural design standards.

“Code quality is important, process quality is important,” Ozkaya says. “But … it’s really about trying to figure out those architectural aspects of the systems that require significant refactoring, re-architecting, sometimes even shutting down the system [and replacing it], as may happen in a lot of modernization challenges.” This, she says, is where measuring technical debt is most valuable and most critical, providing a detailed understanding not only of what a system owner has now, but what it will take to get it to a sustainable state later.

“It comes down to ensuring agile delivery teams understand the vision of the product they’re building,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s Health Solutions. “The ability to decompose big projects into a roadmap of smaller components that can be delivered in an incremental manner requires skill in both software architecture and business acumen. Building a technically great solution that no one uses doesn’t benefit anyone. Likewise, if an agile team delivers needed functionality in a rapid fashion but without a strong design, the product will suffer in the long run. Incremental design and incremental delivery require a high amount of discipline.”

Still, it’s one thing to understand the concept of technical debt; it’s another to measure it. “If you can’t quantify it,” Ozkaya says, “what’s the point?”

Curtis agrees: “Management wants an understanding of what their future maintenance costs will be and which of their applications have the most technical debt, because they will need to allocate more resources there. And [they want to know] how much technical debt I will need to remove before I’m at a sustainable level.”

These challenges hit customers in every sector, from banks to internet giants to government agencies. Those relying solely on in-house developers can rally around specific tools and approaches to their use, but for government customers – where outside developers are the norm – that’s not the case.

One of the challenges in government is the number of players, notes Marc Jones, North American vice president for public sector at Cast Software. “Vendor A writes the software, so he’s concerned with functionality, then the sustainment contractors come on not knowing what technical debt is already accumulated,” he says. “And government is not in the position to tell them.”

Worse, if the vendors and the government customer all use different metrics to calculate that debt, no one will be able to agree on the scale of the challenge, let alone how to manage it. “The definitions need to be something both sides of the buy-sell equation can agree on,” Jones says.

Once a standard is set, technical debt can become a powerful management tool. Consider an agency with multiple case management solutions. Maintaining multiple systems is costly and narrowing to a single solution makes sense. Each system has its champions and each likely has built up a certain amount of technical debt over the years. Choosing which one to keep and which to jettison might typically involve internal debate and emotions built up around personal preferences. By analyzing each system’s code and calculating technical debt, however, managers can turn an emotional debate into an economic choice.

Establishing technical debt as a performance metric in IT contracts is also beneficial. Contracting officers can require technical debt be monitored and reported, giving program managers insights into the quality of the software under development and into the impact of decisions – by either party – on long-term maintainability, sustainability, security and cost. That’s valuable to both sides and helps everyone understand how design decisions, modifications and requirements can impact a program over the long haul, as well as at a given point in time.

“To get that into a contract is not the status quo,” Jones says. “Quality is hard to put in. This really is a call to leadership.” By focusing on the issue at the contract level, he says, “Agencies can communicate to developers that technically acceptable now includes a minimum of quality and security. Today, security is seen as a must, while quality is perceived as nice to have. But the reality is that you can’t secure bad code. Security is an element of quality, not the other way around.”

Adopting a technical debt metric with periodic reporting ensures that everyone – developers and managers, contractors and customers – share a window on progress. In an agile development process, that means every third or fourth sprint can be focused on fixing problems and retiring technical debt in order to ensure that the debt never reaches an unmanageable level. Alternatively, GDIT’s Zach says developers may also aim to retire a certain amount of technical debt on each successive sprint. “If technical debt can take up between 10 and 20 percent of every sprint scope,” he says, “that slow trickle of ‘debt payments’ will help to avoid having to invest large spikes of work later just to pay down principal.”
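
Simple arithmetic illustrates how that steady trickle of payments keeps the balance manageable; the starting debt, sprint capacity and new-debt figures below are invented.

```python
# Illustration of retiring technical debt a little each sprint, per the
# 10-20 percent guideline quoted above. All figures are invented.
debt_hours = 400.0         # starting principal, expressed in remediation hours
sprint_capacity = 120.0    # team hours available per sprint
paydown_share = 0.15       # 15 percent of each sprint spent paying down debt
new_debt_per_sprint = 6.0  # new shortcuts still accrue a little debt each sprint

for sprint in range(1, 11):
    debt_hours = max(0.0, debt_hours - sprint_capacity * paydown_share + new_debt_per_sprint)
    print(f"Sprint {sprint:2d}: {debt_hours:6.1f} hours of debt remaining")
```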

For legacy systems, establishing a baseline and then working to reduce known technical debt is also valuable, especially in trying to decide whether it makes sense to keep that system, adapt it to cloud or abandon it in favor of another option.

“Although we modernize line by line, we don’t necessarily make decisions line by line,” says Ozkaya. By aggregating the effect of those line-by-line changes, managers gain a clearer view of the impact each individual decision has on the long-term health of a system. It’s not that going into debt for the right reasons doesn’t make sense, because it can. “Borrowing money to buy a house is a good thing,” Ozkaya says. “But borrowing too much can get you in trouble.”

It’s the same way with technical debt. Accepting imperfect code is reasonable as long as you have a plan to go back and fix it quickly. Choosing not to do so, though, is like paying just the minimum on your credit card bill. The debt keeps growing and can quickly get out of control.

“The impacts of technical debt are unavoidable,” Zach says.  “But what you do about it is a matter of choice. Managed properly, it helps you prioritize decisions and extend the longevity of your product or system. Quantifying the quality of a given code base is a powerful way to improve that prioritization. From there, real business decisions can be made.”

What the White House’s Final IT Modernization Report Could Look Like

Modernization is the hot topic in Washington tech circles these days. There are breakfasts, summits and roundtables almost weekly. Anticipation is building as the White House and its American Technology Council ready the final version of the Report to the President on IT Modernization, and as the next steps in the national cyber response plan near public release.

At the same time, flexible funding for IT modernization is coming into view: the Modernizing Government Technology (MGT) bill unanimously passed the House in the spring and passed the Senate as part of the 2018 National Defense Authorization Act (NDAA). Barring any surprises, the measure will become law later this fall when the NDAA conference is complete, providing federal agencies with a revolving fund for modernization initiatives and a centralized mechanism for prioritizing projects across government.

The strategy and underlying policy for moving forward will flow from the final Report on IT Modernization. Released in draft form on Aug. 30, it generated 93 formal responses from industry groups, vendors and individuals. Most praised its focus on consolidated networks and common, cloud-based services, but also raised concerns about elements of the council’s approach. Among the themes to emerge from the formal responses:

  • The report’s aggressive schedule of data collection and reporting deadlines drew praise, but its emphasis on reporting – while necessary for transparency and accountability – was seen by some as emphasizing bureaucratic process ahead of results. “The implementation plan should do more than generate additional plans and reports,” wrote the Arlington, Va.-based Professional Services Council (PSC) in its comments. Added the American Council for Technology–Industry Advisory Council (ACT-IAC), of Fairfax, Va.: “Action-oriented recommendations could help set the stage for meaningful change.” For example, ACT-IAC recommended requiring agencies to implement Software Asset Management within six or nine months.
  • While the draft report suggests agencies “consider immediately pausing or halting upcoming procurement actions that further develop or enhance legacy IT systems,” commenters warned against that approach. “Given the difficulty in allocating resources and the length of the federal acquisition lifecycle, pausing procurements or reallocating resources to other procurements may be difficult to execute and could adversely impact agency operations,” warned PSC. “Delaying the work on such contracts could increase security exposure of the systems being modernized and negatively impact the continuity of services.”
  • The initial draft names Google, Salesforce, Amazon and Microsoft as potential partners in a pilot program to test a new way of acquiring software licenses across the federal sector, and specifies the General Services Administration’s (GSA) new Enterprise Infrastructure Solutions (EIS) contract as a preferred contract vehicle not just for networking, but also for shared services. Commenters emphasized that the White House should focus on setting desired objectives at this stage rather than prescribing solutions. “The report should be vendor and product agnostic,” wrote Kenneth Allen, ACT-IAC executive director. “Not being so could result in contracting issues later, as well as possibly skew pilot outcomes.”
  • Respondents generally praised the notion of consolidating agencies under a single IT network, but raised concerns about the risks of focusing too much on a notional perimeter rather than on end-to-end solutions for securing data, devices and identity management across that network. “Instead of beginning detection mitigation at the network perimeter, a cloud security provider is able to begin mitigation closer to where threats begin” and often is better situated and equipped to respond, noted Akamai Technologies of Cambridge, Mass. PSC added that the layered security approach recommended in the draft report should be extended to include security already built into cloud computing services.

Few would argue with the report’s assertion that “The current model of IT acquisition has contributed to a fractured IT landscape,” or with its advocacy for category management as a means to better manage the purchase and implementation of commodity IT products and services. But concerns did arise over the report’s concept to leverage the government’s EIS contract as a single, go-to source for a host of network cybersecurity products and services.

“The report does not provide guidance regarding other contract vehicles with scope similar to EIS,” says the IT Alliance for Public Sector (ITAPS), a division of the Information Technology Industry Council (ITIC), a trade group, noting that Alliant, NITAAC CIO-CS and CIO-SP3 may offer agencies more options than EIS. PSC agreed: “While EIS provides a significant opportunity for all agencies, it is only one potential solution. The final report should encourage small agencies to evaluate resources available from not only GSA, but also other federal agencies. Rather than presuming that consolidation will lead to the desired outcomes, agencies should make an economic and business analysis to validate that presumption.”

The challenge is how to make modernization work effectively in an environment where different agencies have vastly different capabilities. The problem today, says Grant Schneider, acting federal chief information security officer, is that “we expect the smallest agencies to have the same capabilities as the Department of Defense or the Department of Homeland Security, and that’s not realistic.”

The American Technology Council Report attempts to address IT modernization at several levels, in terms of both architecture and acquisition. The challenge is clear, says Schneider: “We have a lot of very old stuff. So, as we’re looking at our IT modernization, we have to modernize in such a way that we don’t build the next decade’s legacy systems tomorrow. We are focused on how we change the way we deliver services, moving toward cloud as well as shared services.”

Standardizing and simplifying those services will be key, says Stan Tyliszczak, chief engineer with systems integrator General Dynamics Information Technology. “If you look at this from an enterprise perspective, it makes sense to standardize instead of continuing with a ‘to-each-his-own’ approach,” Tyliszczak says. “Standardization enables consolidation, simplification and automation, which in turn will increase security, improve performance and reduce costs. Those are the end goals everybody wants.”

Employees Wanting Mobile Access May Get it —As 5G Services Come Into Play

Just about every federal employee has a mobile device: Many carry two – one for work and one for personal use. Yet by official policy, most federal workers cannot access work email or files from a personal phone or tablet. Those with government-owned devices usually are limited to using them for email, calendar or Internet searches.

Meanwhile, many professionals use a work or personal phone to do a myriad of tasks. In a world where more than 70 percent of Internet traffic includes a mobile device, government workers are frequently taking matters into their own hands.

According to a recent FedScoop study of 168 federal employees and others in the federal sector, only 35 percent said their managers supported the use of personal mobile devices for official business. Yet 74 percent said they regularly use personally-owned tablets to get their work done. Another 49 percent said they regularly used personal smartphones.

In other words, employees routinely flout the rules – either knowingly or otherwise – to make themselves more productive.

“They’re used to having all this power in their hand, being able to upgrade and download apps, do all kinds of things instantaneously, no matter where they are,” says Michael Wilkerson, senior director for end-user computing and mobility at VMWare Federal, the underwriter for the research study conducted by FedScoop. “The workforce is getting younger and employees are coming in with certain expectations.”

Those expectations include mobile. At the General Services Administration (GSA), where more than 90 percent of jobs are approved for telework and where most staff do not have permanent desks or offices, each employee is issued a mobile device and a laptop. “There’s a philosophy of anytime, anywhere, any device,” says Rick Jones, Federal Mobility 2.0 Manager at GSA. Employees can log into virtual desktop infrastructure to access any of their work files from any device. “Telework is actually a requirement at GSA. You are expected to work remotely one or two days a week,” he says, so the agency is really serious about making employees entirely independent of conventional infrastructure. “We don’t even have desks,” he says. “You need to register for a cube in advance.”

That kind of mobility is likely to increase in the future, especially as fifth-generation (5G) mobile services come into play. With more wireless connections installed more densely, 5G promises data speeds that could replace conventional wired infrastructure, save wiring costs and increase network flexibility – all while significantly increasing the number of mobile-enabled workers.

Shadow IT
When Information Technology (IT) departments don’t give employees the tools and applications they need or want to get work done, they’re likely to go out and find their own, using cloud-based apps they can download to their phones, tablets and laptops.

Rajiv Gupta, president of Skyhigh Networks of Campbell, Calif., which provides a cloud-access security service, says his company found that users in any typical organization – federal, military or commercial – access more than 1,400 cloud-based services, often invisibly to IT managers. Such uses may be business or personal, but either can have an impact on security if devices are being used for both. Staff may be posting on Facebook, Twitter and LinkedIn, any of which could be personal but could also be official or in support of professional aims. Collaboration tools like Basecamp, Box, DropBox or Slack are often easy means of setting up unofficial work groups to share files when solutions like SharePoint come up short. Because such uses are typically invisible to the organization, he says, they create a “more insidious situation” – the potential for accidental information leaks or purposeful data ex-filtrations by bad actors inside the organization.

“If you’re using a collaboration service like Microsoft 365 or Box, and I share a file with you, what I’m doing is sharing a link – there’s nothing on the network that I can observe to see the files moving,” he says. “More than 50 percent of all data security leaks in a service like 365 is through these side doors.”

The organization may offer users the ability to use OneDrive or Slack, but if users perceive those as difficult or the access controls as unwieldy (user authentication is among mobile government users’ biggest frustrations, according to the VMWare/FedScoop study), they will opt for their own solutions, using email to move data out of the network and then collaborating beyond the reach of the IT and security staff.

While some such instances may be nefarious – as in the case of a disgruntled employee for example – most are simply manifestations of well-meaning employees trying to get their work done as efficiently as possible.

“So employees are using services that you and I have never even heard of,” Gupta says, services like Zippyshare, Footlocker and Findspace. Since most of these are simply classified as “Internet services,” standard controls may not be effective in blocking them, because shutting down the whole category is not an option, Gupta says. “If you did you would have mutiny on your hands.” So network access controls need to be narrowly defined and operationalized through whitelisting or blacklisting of sites and apps.
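
A minimal sketch of the kind of narrow, per-service control Gupta describes, checking individual domains against an allowlist and blocklist instead of shutting off a whole category; the domains and the fall-through policy are invented placeholders.

```python
# Minimal allowlist/blocklist check for outbound cloud services.
# Domains and the fall-through policy are invented placeholders.
from urllib.parse import urlparse

ALLOWED = {"sharepoint.example.gov", "slack.example.com"}
BLOCKED = {"zippyshare.example", "findspace.example"}

def access_decision(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in BLOCKED:
        return "block"
    if host in ALLOWED:
        return "allow"
    return "review"  # unknown service: flag for security review rather than auto-block

for url in ("https://slack.example.com/files", "https://zippyshare.example/upload"):
    print(url, "->", access_decision(url))
```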

Free services are a particular problem because employees don’t see the risk, says Sean Kelley, chief information security officer at the Environmental Protection Agency (EPA). At an Institute for Critical Infrastructure conference in May, he said the problem traces back to the notion that free or subscription services aren’t the same as information technology. “A lot of folks said, well, it’s cloud, so it’s not IT,” he said. “But as we move from network-based security to data security, we need to know where our data is going.”

The Federal Information Technology Acquisition Reform Act was supposed to empower chief information officers (CIOs) by giving them more control over such purchases. But regulating free services and understanding the extent to which users may be using them is extremely difficult, whether in government or the private sector. David Summitt, chief information security officer (CISO) at the Moffit Cancer Center in Tampa, Fla., described an email he received from a salesman representing a cloud service provider. The email contained a list of more than 100 Moffit researchers who were using his company’s technology – all unbeknownst to the CISO. His immediate reply: “I said thank you very much – they won’t be using your service tomorrow.” Then he shut down access to that domain.

Controlling Mobile Use
Jon Johnson, program manager for enterprise mobility at GSA, acknowledges that even providing access to email opens the door to much wider use of mobile technology. “I too download and open documents to read on the Metro,” he said. “The mobile devices themselves do make it more efficient to run a business. The question is, how can a CIO create tools and structures so their employees are more empowered to execute their mission effectively, and in a way that takes advantage not only of the mobile devices themselves, but also helps achieve a more efficient way of operating the business?”

Whether agencies choose to whitelist approved apps or blacklist high-risk ones, Johnson said, every agency needs to nail down the solution that best applies to its needs. “Whether they have the tools that can continually monitor those applications on the end point, whether they use vetting tools,” he said, each agency must make its own case. “Many agencies, depending on their security posture, are going to have those applications vetted before they even deploy their Enterprise Mobility Management (EMM) onto that device. There is no standard for this because the security posture for the Defense Information Systems Agency (DISA) and the FBI are different from GSA and the Department of Education.

“There’s always going to be a tradeoff between the risk of allowing your users to use something in a way that you may not necessarily predict versus locking everything down,” says Johnson.

Johnson and GSA have worked with a cross-agency mobile technology tiger team for years to try to nail down standards and policies that can make rolling out a broader mobile strategy easier on agency leaders. “Mobility is more than carrier services and devices,” he says. “We’ve looked at application vetting, endpoint protection, telecommunication expense management and emerging tools like virtual mobile interfaces.” He adds they’ve also examined the evolution of mobile device management solutions to more modern enterprise mobility management systems that take a wider view of the mobile world.

Today, agencies are trying to catch up to the private sector and overcome the government’s traditionally limited approach to mobility. At the United States Agency for International Development (USAID), half the agency’s staff are in far-flung remote locations, many of them austere, says Lon Gowan, chief technologist and special advisor to the CIO. “We generally treat everyone as a mobile worker,” Gowan says.

Federal agencies remain leery of adopting bring-your-own-device policies, just as many federal employees are leery of giving their agencies access to their personal information. While older mobile device management software gave organizations the ability to monitor activity and wipe entire devices, today’s enterprise management solutions enable devices to effectively be split, containing both personal and business data. And never the twain shall meet.

“We can either allow a fully managed device or one that’s self-owned, where IT manages a certain portion of it,” says VMWare’s Wilkerson. “You can have a folder that has a secure browser, secure mail, secure apps and all of that only works within that container. You can set up secure tunneling so each app can do its own VPN tunnel back to the corporate enterprise. Then, if the tunnel gets shut down or compromised, it shuts off the application, browser — or whatever — is leveraging that tunnel.”

Another option is to use mobile-enabled virtual desktops where applications and data reside in a protected cloud environment, according to Chris Barnett, chief technology officer for GDIT’s Intelligence Solutions Division. “With virtual desktops, only a screen image needs to be encrypted and communicated to the mobile device. All the sensitive information remains back in the highly-secure portion of the Enterprise. That maintains the necessary levels of protection while at the same time enabling user access anywhere, anytime.”

When it comes to classified systems, of course, the bar moves higher as risks associated with a compromise increase. Neil Mazuranic, DISA’s Mobility Capabilities branch chief in the DoD Mobility Portfolio Management Office, says his team can hardly keep up with demand. “Our biggest problem at DISA, at the secret level and top secret level, is that we don’t have enough devices to go around,” he says. “Demand is much greater than the supply. We’re taking actions to push more phones and tablets out there.” But capacity will likely be a problem for a while.

The value is huge, however, because the devices allow senior leaders “to make critical, real-world, real-time decisions without having to be tied to a specific place,” he says. “We want to stop tying people to their desks and allow them to work wherever they need to work, whether it’s classified work or unclassified.”

DISA is working on increasing the number of classified phones, using Windows devices that provide a greater ability to lock down security than is possible with iOS or Android devices. By using products not in the mainstream, the software can be better controlled. In the unclassified realm, DISA secures both iOS and Android devices using managed solutions allowing dual office and personal use. For iOS, a managed device solution establishes a virtual wall in which some apps and data are managed and controlled by DISA, while others are not.

“All applications that go on the managed side of the devices, we evaluate and make sure they’re approved to use,” DISA’s Mazuranic told GovTechWorks. “There’s a certain segment that passes with flying colors and that we approve, and then there are some questionable ones that we send to the authorizing official to accept the risk. And there are others that we just reject outright. They’re just crazy ones.”

Segmenting the devices, however, gives users freedom to download apps for their personal use with a high level of assurance that those apps cannot access the controlled side of the device. “On the iOS device, all of the ‘for official use only’ (FOUO) data is on the managed side of the device,” he said. “All your contacts, your email, your downloaded documents, they’re all on the managed side. So when you go to the Apple App Store and download an app, that’s on the unmanaged side. There’s a wall between the two. So if something is trying to get at your contacts or your data, it can’t, because of that wall. On the Android device, it’s similar: There’s a container on the device, and all the FOUO data on the device is in that container.”
