Close the Data Center, Skip the TIC – How One Agency Bought Big Into Cloud

It’s no longer a question of whether the federal government is going to fully embrace cloud computing. It’s how fast.

With the White House pushing for cloud services as part of its broader cybersecurity strategy and budgets getting squeezed by the administration and Congress, chief information officers (CIOs) are coming around to the idea that the faster they can modernize their systems, the faster they’ll be able to meet security requirements. And that once in the cloud, market dynamics will help them drive down costs.

“The reality is, our data-center-centric model of computing in the federal government no longer works,” says Chad Sheridan, CIO at the Department of Agriculture’s Risk Management Agency. “The evidence is that there is no way we can run a federal data center at the moderate level or below better than what industry can do for us. We don’t have the resources, we don’t have the energy and we are going to be mired with this millstone around our neck of modernization for ever and ever.”

Budget pressure, demand for modernization and concern about security all combine as a forcing function that should be pushing most agencies rapidly toward broad cloud adoption.

Joe Paiva, CIO at the International Trade Administration (ITA), agrees. He used an expiring lease as leverage to force his agency into the cloud soon after he joined ITA three years ago. Time and again the lease was presented to him for a signature and time and again, he says, he tore it up and threw it away.

Finally, with the clock ticking on his data center, Paiva’s staff had to perform a massive “lift and shift” operation to keep services running. Systems were moved to the Amazon cloud. Not a pretty transition, he admits, but good enough to make the move without incident.

“Sometimes lift and shift actually makes sense,” Paiva told federal IT specialists at the Advanced Technology Academic Research Center’s (ATARC) Cloud and Data Center Summit. “Lift and shift actually gets you there, and for me that was the key – we had to get there.”

At first, he said, “we were no worse off or no better off.” With systems and processes that hadn’t been designed for cloud, however, costs were high. “But then we started doing the rationalization and we dropped our bill 40 percent. We were able to rationalize the way we used the service, we were able to start using more reserve things instead of ad hoc.”

That rationalization included cutting out software and services licenses that duplicated other enterprise solutions. Microsoft Office 365, for example, provided every user with a OneDrive account in the cloud. Getting users to save their work there meant his team no longer had to support local storage and backup, and the move to shared virtual drives instead of local ones improved worker productivity.

With 226 offices around the world, offloading all that backup was significant. To date, all but a few remote locations have made the switch. Among the surprise benefits: happier users. Once they saw how much easier things were with shared drives that were accessible from anywhere, he says, “they didn’t even care how much money we were saving or how much more secure they were – they cared about how much more functional they suddenly became.”

Likewise, Office 365 provided Skype for Business – meaning the agency could eliminate expensive stand-alone conferencing services – yet another source of savings.

Cost savings matter. Operating in the cloud, ITA’s annual IT costs per user are about $15,000 – less than half the average for the Commerce Department as a whole ($38,000/user/year), or the federal government writ large ($39,000/user/year), he said.

“Those are crazy high numbers,” Paiva says. “That is why I believe we all have to go to the cloud.”

In addition to Office 365, ITA uses Amazon Web Services (AWS) for infrastructure and Salesforce to manage the businesses it supports, along with several other cloud services.

“Government IT spending is out of freaking control,” Paiva says, noting that budget cuts provide incentive for driving change that might not come otherwise. “No one will make the big decisions if they’re not forced to make them.”

Architecture and Planning
If getting to the cloud is now a common objective, figuring out how best to make the move is unique to every user.

“When most organizations consider a move to the cloud, they focus on the ‘front-end’ of the cloud experience – whether or not they should move to the cloud, and if so, how will they get there,” says Srini Singaraju, chief cloud architect at General Dynamics Information Technology, a systems integrator. “However, organizations commonly don’t give as much thought to the ‘back-end’ of their cloud journey: the new operational dynamics that need to be considered in a cloud environment or how operations can be optimized for the cloud, or what cloud capabilities they can leverage once they are there.”

Rather than lift and shift and then start looking for savings, Singaraju advocates planning carefully what to move and what to leave behind. Designing systems and processes to take advantage of the cloud’s speed and to avoid its potential pitfalls not only makes the transition go more smoothly, it saves money over time.

“Sometimes it just makes more sense to retire and replace an application instead of trying to lift and shift,” Singaraju says. “How long can government maintain and support legacy applications that can pose security and functionality related challenges?”
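
That kind of triage can be expressed simply. Below is a minimal, illustrative sketch in Python of sorting an application inventory into lift-and-shift, retire-and-replace and rebuild buckets; the application names, fields and thresholds are hypothetical, not drawn from ITA’s or any agency’s actual portfolio.

```python
# Illustrative sketch only: a coarse triage of an application inventory into
# "rehost" (lift and shift), "replace" (retire and adopt an existing enterprise
# or SaaS offering), or "retain and plan a rebuild" buckets.
# Field names, app names and thresholds are hypothetical.

APPS = [
    {"name": "grants-portal",      "os_supported": True,  "annual_om_cost": 1_200_000, "saas_equivalent": True},
    {"name": "field-reporting",    "os_supported": True,  "annual_om_cost": 300_000,   "saas_equivalent": False},
    {"name": "legacy-cobol-batch", "os_supported": False, "annual_om_cost": 2_500_000, "saas_equivalent": False},
]

def triage(app: dict) -> str:
    """Return a coarse disposition for one application."""
    if app["saas_equivalent"]:
        return "replace"          # retire and move to an existing SaaS/enterprise solution
    if not app["os_supported"]:
        return "retain-and-plan"  # too risky to lift and shift as-is; plan a rebuild
    return "rehost"               # candidate for lift and shift, then rationalize later

for app in APPS:
    print(f'{app["name"]:>20}: {triage(app)}')
```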

The challenge is getting there. The number of cloud providers that have won provisional authority to operate under the 5-year-old Federal Risk and Authorization Management Program (FedRAMP) is still relatively small: just 86 with another 75 still in the pipeline. FedRAMP’s efforts to speed up the process are supposed to cut the time it takes to earn a provisional authority to operate (P-ATO) from as much as two years to as little as four months. But so far only three cloud providers have managed to get a product through FedRAMP Accelerated – the new, faster process, according to FedRAMP Director Matt Goodrich. Three more are in the pipeline with a few others lined up behind those, he said.

Once an agency or the FedRAMP Joint Authorization Board has authorized a cloud solution, other agencies can leverage their work with relatively little effort. But even then, moving an application from its current environment is an engineering challenge. Determining how to manage workflow and the infrastructure needed to make a massive move to the cloud work is complicated.

At ITA, for example, Paiva determined that cloud providers like AWS, Microsoft Office 365 and Salesforce had sufficient security controls in place that they could be treated as a part of his internal network. That meant user traffic could be routed directly to them, rather than through his agency’s Trusted Internet Connection (TIC). That provided a huge infrastructure savings because he didn’t have to widen that TIC gateway to accommodate all that routine work traffic, all of which in the past would have stayed inside his agency’s network.

Rather than a conventional “castle-and-moat” architecture, Paiva said he had to interpret the mandate to use the TIC “in a way that made sense for a borderless network.”

“I am not violating the mandate,” he said. “All my traffic that goes to the wild goes through the TIC. I want to be very clear about that. If you want to go to www-dot-name-my-whatever-dot-com, you’re going through the TIC. Office 365? Salesforce? Service Now? Those FedRAMP-approved, fully ATO’d applications that I run in my environment? They’re not external. My Amazon cloud is not external. It is my data center. It is my network. I am fulfilling the intent and letter of the mandate – it’s just that the definition of what is my network has changed.”
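
The policy Paiva describes boils down to a routing decision: traffic bound for FedRAMP-authorized, fully ATO’d services the agency treats as part of its own network goes direct, while anything bound for “the wild” still traverses the TIC. A minimal sketch of that logic is below, using illustrative domain suffixes rather than ITA’s actual configuration.

```python
# Minimal sketch of the routing policy described above, not an actual network
# configuration: FedRAMP-authorized, ATO'd services are treated as part of the
# agency network and routed directly; everything else ("the wild") goes
# through the TIC gateway. The domain suffixes are illustrative assumptions.

DIRECT_ROUTE_SUFFIXES = (
    ".sharepoint.com",      # Office 365 workloads (example)
    ".salesforce.com",
    ".service-now.com",
    ".amazonaws.com",       # agency VPC endpoints, assumed ATO'd
)

def next_hop(destination_host: str) -> str:
    host = destination_host.lower()
    if host.endswith(DIRECT_ROUTE_SUFFIXES):
        return "direct"       # inside the (redefined) agency network boundary
    return "tic-gateway"      # external traffic still traverses the TIC

assert next_hop("agency.sharepoint.com") == "direct"
assert next_hop("www.example.com") == "tic-gateway"
```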

Todd Gagorik, senior manager for federal services at AWS, said this approach is starting to take root across the federal government. “People are beginning to understand this clear reality: If FedRAMP has any teeth, if any of this has any meaning, then let’s embrace it and actually use it as it’s intended to be used most efficiently and most securely. If you extend your data center into AWS or Azure, those cloud environments already have these certifications. They’re no different than your data center in terms of the certifications that they run under. What’s important is to separate that traffic from the wild.”

ATARC has organized a working group of government technology leaders to study the network boundary issue and recommend possible changes to the policy, said Tom Suder, ATARC president. “When we started the TIC, that was really kind of pre-cloud, or at least the early stages of cloud,” he said. “It was before FedRAMP. So like any policy, we need to look at that again.” Acting Federal CIO Margie Graves is a reasonable player, he said, and will be open to changes that make sense, given how much has changed since then.

Indeed, the whole concept of a network’s perimeter has been changed by the introduction of cloud services, Office of Management and Budget’s Grant Schneider, the acting federal chief information security officer (CISO), told GovTechWorks earlier this year.

Limiting what needs to go through the TIC and what does not could have significant implications for cost savings, Paiva said. “It’s not chump change,” he said. “That little architectural detail right there could be billions across the government that could be avoided.”

But changing the network perimeter isn’t trivial. “Agency CIOs and CISOs must take into account the risks and sensitivities of their particular environment and then ensure their security architecture addresses all of those risks,” says GDIT’s Singaraju. “A FedRAMP-certified cloud is a part of the solution, but it’s only that – a part of the solution. You still need to have a complete security architecture built around it. You can’t just go to a cloud service provider without thinking all that through first.”

Sheridan and others involved in the nascent Cloud Center of Excellence see the continued drive to the cloud as inevitable. “The world has changed,” he says. “It’s been 11 years since these things first appeared on the landscape. We are in exponential growth of technology, and if we hang on to our old ideas we will not continue. We will fail.”

His ad-hoc, unfunded group includes some 130 federal employees from 48 agencies and sub-agencies that operate independent of vendors, think tanks, lobbyists or others with a political or financial interest in the group’s output. “We are a group of people who are struggling to drive our mission forward and coming together to share ideas and experience to solve our common problems and help others to adopt the cloud,” Sheridan says. “It’s about changing the culture.”

Employees Wanting Mobile Access May Get it — As 5G Services Come Into Play

Just about every federal employee has a mobile device: Many carry two – one for work and one for personal use. Yet by official policy, most federal workers cannot access work email or files from a personal phone or tablet. Those with government-owned devices usually are limited to using them for email, calendar or Internet searches.

Meanwhile, many professionals use a work or personal phone to do a myriad of tasks. In a world where more than 70 percent of Internet traffic includes a mobile device, government workers are frequently taking matters into their own hands.

According to a recent FedScoop study of 168 federal employees and others in the federal sector, only 35 percent said their managers supported the use of personal mobile devices for official business. Yet 74 percent said they regularly use personally-owned tablets to get their work done. Another 49 percent said they regularly used personal smartphones.

In other words, employees routinely flout the rules – either knowingly or otherwise – to make themselves more productive.

“They’re used to having all this power in their hand, being able to upgrade and download apps, do all kinds of things instantaneously, no matter where they are,” says Michael Wilkerson, senior director for end-user computing and mobility at VMWare Federal, the underwriter for the research study conducted by FedScoop. “The workforce is getting younger and employees are coming in with certain expectations.”

Those expectations include mobile. At the General Services Administration (GSA), where more than 90 percent of jobs are approved for telework and where most staff do not have permanent desks or offices, each employee is issued a mobile device and a laptop. “There’s a philosophy of anytime, anywhere, any device,” says Rick Jones, Federal Mobility 2.0 Manager at GSA. Employees can log into virtual desktop infrastructure to access any of their work files from any device. “Telework is actually a requirement at GSA. You are expected to work remotely one or two days a week,” he says, so the agency is really serious about making employees entirely independent of conventional infrastructure. “We don’t even have desks,” he says. “You need to register for a cube in advance.”

That kind of mobility is likely to increase in the future, especially as fifth-generation (5G) mobile services come into play. With more wireless connections installed more densely, 5G promises data speeds that could replace conventional wired infrastructure, save wiring costs and increase network flexibility – all while significantly increasing the number of mobile-enabled workers.

Shadow IT
When Information Technology (IT) departments don’t give employees the tools and applications they need or want to get work done, they’re likely to go out and find their own, using cloud-based apps they can download to their phones, tablets and laptops.

Rajiv Gupta, president of Skyhigh Networks of Campbell, Calif., which provides a cloud-access security service, says his company found that users in any typical organization – federal, military or commercial – access more than 1,400 cloud-based services, often invisibly to IT managers. Such uses may be business or personal, but either can have an impact on security if devices are being used for both. Staff may be posting on Facebook, Twitter and LinkedIn, any of which could be personal but could also be official or in support of professional aims. Collaboration tools like Basecamp, Box, DropBox or Slack are often easy means of setting up unofficial work groups to share files when solutions like SharePoint come up short. Because such uses are typically invisible to the organization, he says, they create a “more insidious situation” – the potential for accidental information leaks or purposeful data exfiltrations by bad actors inside the organization.

“If you’re using a collaboration service like Microsoft 365 or Box, and I share a file with you, what I’m doing is sharing a link – there’s nothing on the network that I can observe to see the files moving,” he says. “More than 50 percent of all data security leaks in a service like 365 is through these side doors.”

The organization may offer users the ability to use OneDrive or Slack, but if users perceive those as difficult or the access controls as unwieldy (user authentication is among mobile government users’ biggest frustrations, according to the VMWare/FedScoop study), they will opt for their own solutions, using email to move data out of the network and then collaborating beyond the reach of the IT and security staff.

While some such instances may be nefarious – as in the case of a disgruntled employee for example – most are simply manifestations of well-meaning employees trying to get their work done as efficiently as possible.

“So employees are using services that you and I have never even heard of,” Gupta says, services like Zippyshare, Footlocker and Findspace. Since most of these are simply classified as “Internet services,” standard controls may not be effective in blocking them, because shutting down the whole category is not an option, Gupta says. “If you did you would have mutiny on your hands.” So network access controls need to be narrowly defined and operationalized through whitelisting or blacklisting of sites and apps.
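
Operationally, that narrow control often reduces to an allow/deny/alert decision per service rather than a blanket category block. The sketch below is illustrative only; the service names and the “alert on unknown” default are assumptions, and a real deployment would drive these lists from a CASB or web proxy policy feed rather than hard-coded sets.

```python
# Illustrative allow/deny check for outbound cloud-service requests.
# Service names and categories are hypothetical assumptions.

ALLOWED = {"onedrive", "slack", "sharepoint"}     # sanctioned collaboration tools
BLOCKED = {"zippyshare", "anonymous-paste"}       # known high-risk file-sharing sites

def decide(service: str) -> str:
    s = service.lower()
    if s in ALLOWED:
        return "allow"
    if s in BLOCKED:
        return "block"
    return "alert"   # unknown service: log and review rather than blanket-block the category

for svc in ("OneDrive", "Zippyshare", "newfileshare"):
    print(svc, "->", decide(svc))
```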

Free services are a particular problem because employees don’t see the risk, says Sean Kelley, chief information security officer at the Environmental Protection Agency (EPA). At an Institute for Critical Infrastructure conference in May, he said the problem traces back to the notion that free or subscription services aren’t the same as information technology. “A lot of folks said, well, it’s cloud, so it’s not IT,” he said. “But as we move from network-based security to data security, we need to know where our data is going.”

The Federal Information Technology Acquisition Reform Act was supposed to empower chief information officers (CIOs) by giving them more control over such purchases. But regulating free services and understanding the extent to which users may be using them is extremely difficult, whether in government or the private sector. David Summitt, chief information security officer (CISO) at the Moffitt Cancer Center in Tampa, Fla., described an email he received from a salesman representing a cloud service provider. The email contained a list of more than 100 Moffitt researchers who were using his company’s technology – all unbeknownst to the CISO. His immediate reply: “I said thank you very much – they won’t be using your service tomorrow.” Then he shut down access to that domain.

Controlling Mobile Use
Jon Johnson, program manager for enterprise mobility at GSA acknowledges that even providing access to email opens the door to much wider use of mobile technology. “I too download and open documents to read on the Metro,” he said. “The mobile devices themselves do make it more efficient to run a business. The question is, how can a CIO create tools and structures so their employees are more empowered to execute their mission effectively, and in a way that takes advantage not only of the mobile devices themselves, but also helps achieve a more efficient way of operating the business?”

Whether agencies choose to whitelist approved apps or blacklist high-risk ones, Johnson said, every agency needs to nail down the solution that best applies to its needs. “Whether they have the tools that can continually monitor those applications on the end point, whether they use vetting tools,” he said, each agency must make its own case. “Many agencies, depending on their security posture, are going to have those applications vetted before they even deploy their Enterprise Mobility Management (EMM) onto that device. There is no standard for this because the security posture for the Defense Information Systems Agency (DISA) and the FBI are different from GSA and the Department of Education.

“There’s always going to be a tradeoff between the risk of allowing your users to use something in a way that you may not necessarily predict versus locking everything down,” says Johnson.

Johnson and GSA have worked with a cross-agency mobile technology tiger team for years to try to nail down standards and policies that can make rolling out a broader mobile strategy easier on agency leaders. “Mobility is more than carrier services and devices,” he says. “We’ve looked at application vetting, endpoint protection, telecommunication expense management and emerging tools like virtual mobile interfaces.” He adds they’ve also examined the evolution of mobile device management solutions to more modern enterprise mobility management systems that take a wider view of the mobile world.

Today, agencies are trying to catch up to the private sector and overcome the government’s traditionally limited approach to mobility. At the United States Agency for International Development (USAID), Lon Gowan, chief technologist and special advisor to the CIO, notes that half the agency’s staff work in far-flung remote locations, many of them austere. “We generally treat everyone as a mobile worker,” Gowan says.

Federal agencies remain leery of adopting bring-your-own-device policies, just as many federal employees are leery of giving their agencies access to their personal information. While older mobile device management software gave organizations the ability to monitor activity and wipe entire devices, today’s enterprise management solutions enable devices to effectively be split, containing both personal and business data. And never the twain shall meet.

“We can either allow a fully managed device or one that’s self-owned, where IT manages a certain portion of it,” says VMWare’s Wilkerson. “You can have a folder that has a secure browser, secure mail, secure apps and all of that only works within that container. You can set up secure tunneling so each app can do its own VPN tunnel back to the corporate enterprise. Then, if the tunnel gets shut down or compromised, it shuts off the application, browser — or whatever — is leveraging that tunnel.”
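
Expressed as data, such a container policy might look something like the sketch below. The keys and values are hypothetical, not VMware’s or any EMM vendor’s actual schema; real products define their own payload formats.

```python
# Hedged sketch of a work-container / per-app VPN policy expressed as data.
# All keys, app names and hostnames are illustrative assumptions.

WORK_CONTAINER_POLICY = {
    "container_apps": ["secure-mail", "secure-browser", "docs"],
    "per_app_vpn": {
        "secure-mail":    {"gateway": "vpn.agency.example", "kill_on_compromise": True},
        "secure-browser": {"gateway": "vpn.agency.example", "kill_on_compromise": True},
    },
    "personal_side_access_to_container": False,   # the "wall" between work and personal data
}

def tunnel_for(app_name: str):
    """Return the VPN gateway an app should use, or None if it is personal/unmanaged."""
    return WORK_CONTAINER_POLICY["per_app_vpn"].get(app_name, {}).get("gateway")

print(tunnel_for("secure-mail"))   # vpn.agency.example
print(tunnel_for("candy-game"))    # None: personal apps get no tunnel into the enterprise
```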

Another option is to use mobile-enabled virtual desktops where applications and data reside in a protected cloud environment, according to Chris Barnett, chief technology officer for GDIT’s Intelligence Solutions Division. “With virtual desktops, only a screen image needs to be encrypted and communicated to the mobile device. All the sensitive information remains back in the highly-secure portion of the Enterprise. That maintains the necessary levels of protection while at the same time enabling user access anywhere, anytime.”

When it comes to classified systems, of course, the bar moves higher as the risks associated with a compromise increase. Neil Mazuranic, DISA’s Mobility Capabilities branch chief in the DoD Mobility Portfolio Management Office, says his team can hardly keep up with demand. “Our biggest problem at DISA, at the secret level and top secret level, is that we don’t have enough devices to go around,” he says. “Demand is much greater than the supply. We’re taking actions to push more phones and tablets out there.” But capacity will likely be a problem for a while.

The value is huge however, because the devices allow senior leaders “to make critical, real-world, real-time decisions without having to be tied to a specific place,” he says. “We want to stop tying people to their desks and allow them to work wherever they need to work, whether it’s classified work or unclassified.”

DISA is working on increasing the numbers of classified phones using Windows devices that provide greater ability to lock down security than possible with iOS or Android devices. By using products not in the mainstream, the software can be better controlled. In the unclassified realm, DISA secures both iOS and Android devices using managed solutions allowing dual office and personal use. For iOS, a managed device solution establishes a virtual wall in which some apps and data are managed and controlled by DISA, while others are not.

“All applications that go on the managed side of the devices, we evaluate and make sure they’re approved to use,” DISA’s Mazuranic told GovTechWorks. “There’s a certain segment that passes with flying colors and that we approve, and then there are some questionable ones that we send to the authorizing official to accept the risk. And there are others that we just reject outright. They’re just crazy ones.”

Segmenting the devices, however, gives users freedom to download apps for their personal use with a high level of assurance that those apps cannot access the controlled side of the device. “On the iOS device, all of the ‘for official use only’ (FOUO) data is on the managed side of the device,” he said. “All your contacts, your email, your downloaded documents, they’re all on the managed side. So when you go to the Apple App Store and download an app, that’s on the unmanaged side. There’s a wall between the two. So if something is trying to get at your contacts or your data, it can’t, because of that wall. On the Android device, it’s similar: There’s a container on the device, and all the FOUO data on the device is in that container.”

Feds Look to AI Solutions to Solve Problems from Customer Service to Cyber Defense

From Amazon’s Alexa speech recognition technology to Facebook’s uncanny ability to recognize our faces in photos and the coming wave of self-driving cars, artificial intelligence (AI) and machine learning (ML) are changing the way we look at the world – and how it looks at us.

Nascent efforts to embrace natural language processing to power AI chatbots on government websites and call centers are among the leading short-term AI applications in the government space. But AI also has potential application in virtually every government sector, from health and safety research to transportation safety, agriculture and weather prediction and cyber defense.

The ideas behind artificial intelligence are not new. Indeed, the U.S. Postal Service has used machine vision to automatically read and route hand-written envelopes for nearly 20 years. What’s different today is that the plunging price of data storage and the increasing speed and scalability of computing power using cloud services from Amazon Web Services (AWS) and Microsoft Azure, among others, are converging with new software to make AI solutions easier and less costly to execute than ever before.

Justin Herman, emerging citizen technology lead at the General Services Administration’s Technology Transformation Service, is a sort of AI evangelist for the agency. His job, he says, is to help other agencies understand the technology and “prove AI is real.”

That means talking to feds, lawmakers and vendors to spread an understanding of how AI and machine learning can transform at least some parts of government.

“What are agencies actually doing and thinking about?” he asked at the recent Advanced Technology Academic Research Center’s Artificial Intelligence Applications for Government Summit. “You’ve got to ignore the hype and bring it down to a level that’s actionable…. We want to talk about the use cases, the problems, where we think the data sets are. But we’re not prescribing the solutions.”

GSA set up an “Emerging Citizen Technology Atlas” this fall, essentially an online portal for AI government applications, and established an AI user group that holds its first meeting Dec. 13. An AI Assistant Pilot program, which so far lists more than two dozen instances where agencies hope to employ AI, spans a range of aspirational projects, including:

  • Department of Health and Human Services: Develop responses for Amazon’s Alexa platform to help users quit smoking and answer questions about food safety
  • Department of Housing and Urban Development: Automate or assist with customer service using existing site content
  • National Forest Service: Provide alerts, notices and information about campgrounds, trails and recreation areas
  • Federal Student Aid: Automate responses to queries on social media about applying for and receiving aid
  • Defense Logistics Agency: Help businesses answer frequently asked questions, access requests for quotes and identify commercial and government entity (CAGE) codes

Separately, NASA used the Amazon Lex platform to train its “Rov-E” robotic ambassador to follow voice commands and answer students’ questions about Mars, a novel AI application for outreach. And chatbots – rare just two years ago – now are ubiquitous on websites, Facebook and other social media.
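
Under the hood, most of these assistants map a recognized intent to a canned or data-driven answer. The sketch below shows that pattern in a deliberately simplified form; the event shape, intent names and answers are illustrative assumptions, not the exact payload of Amazon Lex or any other platform.

```python
# Minimal sketch of a chatbot fulfillment handler of the kind a platform such
# as Amazon Lex invokes once it has recognized an intent. The event shape,
# intent names and answers below are simplified illustrations, not any
# platform's actual schema.

FAQ_ANSWERS = {
    "QuitSmokingTips":  "Free help is available at 1-800-QUIT-NOW and smokefree.gov.",
    "CampgroundAlerts": "Check current alerts for your forest on the recreation alerts page.",
}

def handle_intent(event: dict) -> dict:
    """Look up a recognized intent and return a response the bot can speak or display."""
    intent = event.get("intent", "")
    answer = FAQ_ANSWERS.get(intent, "Sorry, I don't have an answer for that yet.")
    return {"fulfillment": "Close", "message": answer}

print(handle_intent({"intent": "QuitSmokingTips"}))
```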

In all, there are now more than 100,000 chatbots on Facebook Messenger. But while customer service chatbots have become common features, they remain the most basic of AI applications.

“The challenge for government, as is always the case with new technology, is finding the right applications for use and breaking down the walls of security or privacy concerns that might block the way forward,” says Michael G. Rozendaal, vice president for health analytics at General Dynamics Information Technology Health and Civilian Solutions Division. “For now, figuring out how to really make AI practical for enhanced customer experience and enriched data, and with a clear return on investment, is going to take thoughtful consideration and a certain amount of trial and error.”

But as with cloud in years past, progress can be rapid. “There comes a tipping point where challenges and concerns fade and the floodgates open to take advantage of a new technology,” Rozendaal says. AI can follow the same path. “Over the coming year, the speed of those successes and lessons learned will push AI to that tipping point.”

That view is shared by Hila Mehr, a fellow at the Ash Center for Democratic Governance and Innovation at Harvard University’s Kennedy School of Government and a member of IBM’s Market Development and Insight strategy team. “AI becomes powerful with machine learning, where the computer learns from supervised training and inputs over time to improve responses,” she wrote in Artificial Intelligence for Citizen Services and Government, an Ash Center white paper published in August.

In addition to chatbots, she sees translation services and facial recognition and other kinds of image identification as perfectly suited applications where “AI can reduce administrative burdens, help resolve resource allocation problems and take on significantly complex tasks.”

Open government – the act of making government data broadly available for new and innovative uses – is another promise. As Herman notes, challenging his fellow feds: “Your agencies are collecting voluminous amounts of data that are just sitting there, collecting dust. How can we make that actionable?”

Emerging Technology
Historically, most of that data wasn’t actionable. Paper forms and digital scans lack the structure and metadata to lend themselves to big data applications. But those days are rapidly fading. Electronic health records are turning the tide with medical data; website traffic data is helping government understand what citizens want when visiting, providing insights and feedback that can be used to improve the customer experience.

And that’s just the beginning. According to Fiaz Mohamed, head of solutions enablement for Intel’s AI Products Group, data volumes are growing exponentially. “By 2020, the average internet user will generate 1.5 GB of traffic per day; each self-driving car will generate 4,000 GB/day; connected planes will generate 40,000 GB/day,” he says.

At the same time, advances in hardware will enable faster and faster processing of that data, driving down the compute-intensive costs associated with AI number crunching. Facial recognition historically required extensive human training simply to teach the system the critical factors to look for, such as the distance between the eyes and the nose. “But now neural networks can take multiple samples of a photo of [an individual], and automatically detect what features are important,” he says. “The system actually learns what the key features are. Training yields the ability to infer.”
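
A toy sketch of that difference, assuming TensorFlow is installed: the network below is given only raw pixels and labels, and learns its own features during training instead of being hand-coded with rules like “measure the distance between the eyes.” The layer sizes and data are arbitrary stand-ins.

```python
# Toy sketch: a small convolutional network learns which visual features matter
# from labeled examples, rather than being hand-programmed with feature rules.
# Layer sizes are arbitrary; the training data here is random stand-in data.

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g., "same person / not"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in images and labels; a real system would train on labeled photos.
x = np.random.rand(32, 64, 64, 1)
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)
```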

Intel, long known for its microprocessor technologies, is investing heavily in AI through internal development and external acquisitions. Intel bought machine-learning specialist Nervana in 2016 and programmable chip specialist Altera the year before. The combination is key to the company’s integrated AI strategy, Mohamed says. “What we are doing is building a full-stack solution for deploying AI at scale,” Mohamed says. “Building a proof-of-concept is one thing. But actually taking this technology and deploying it at the scale that a federal agency would want is a whole different thing.”

Many potential AI applications pose similar challenges.

FINRA, the Financial Industry Regulatory Authority, is among the government’s biggest users of AWS cloud services. Its market surveillance system captures and stores 75 billion financial records every day, then analyzes that data to detect fraud. “We process every day what Visa and Mastercard process in six months,” says Steve Randich, FINRA’s chief information officer, in a presentation captured on video. “We stitch all this data together and run complex sophisticated surveillance queries against that data to look for suspicious activity.” The payoff: a 400 percent increase in performance.
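
A surveillance rule of that general flavor can be illustrated in a few lines, though the real queries are far more sophisticated. The sketch below flags any account whose daily volume jumps well above its own trailing average; the column names, threshold and data are assumptions, not FINRA’s actual logic.

```python
# Illustrative surveillance-style rule, not FINRA's actual queries: flag any
# account whose daily traded volume exceeds five times its own trailing
# 30-day average. Column names, threshold and data are assumptions.

import pandas as pd

trades = pd.DataFrame({
    "account": ["A"] * 40,
    "date": pd.date_range("2024-01-01", periods=40, freq="D"),
    "volume": [1_000] * 39 + [9_000],     # sudden spike on the last day
})

# Trailing average excludes the current day (shift by one).
trades["avg_30d"] = (
    trades.groupby("account")["volume"]
          .transform(lambda s: s.rolling(30, min_periods=10).mean().shift(1))
)
suspicious = trades[trades["volume"] > 5 * trades["avg_30d"]]
print(suspicious[["account", "date", "volume", "avg_30d"]])
```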

Other uses include predictive fleet maintenance. IBM put its Watson AI engine to work last year in a proof-of-concept test of Watson’s ability to perform predictive maintenance for the U.S. Army’s 350 Stryker armored vehicles. In September, the Army’s Logistics Support Activity (LOGSA) signed a contract adding Watson’s cognitive services to other cloud services it gets from IBM.

“We’re moving beyond infrastructure as-a-service and embracing both platform and software as-a service,” said LOGSA Commander Col. John D. Kuenzli. He said Watson holds the potential to “truly enable LOGSA to deliver cutting-edge business intelligence and tools to give the Army unprecedented logistics support.”

AI applications share a few things in common. They use large data sets to gain an understanding of a problem and advanced computing to learn through experience. Many applications share a basic construct even if the objectives are different. Identifying military vehicles in satellite images is not unlike identifying tumors in mammograms or finding illegal contraband in x-ray images of carry-on baggage. The specifics of the challenge are different, but the fundamentals are the same. Ultimately, machines will be able to do that more accurately – and faster – than people, freeing humans to do higher-level work.

“The same type of neural network can be applied to different domains so long as the function is similar,” Mohamed says. So a system built to detect tumors for medical purposes could be adapted and trained instead to detect pedestrians in a self-driving automotive application.
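
That reuse is what practitioners call transfer learning: keep the trained feature-extraction layers and retrain only a new task head. Below is a hedged sketch, assuming TensorFlow and downloadable pretrained weights are available; the input size and the new task are illustrative.

```python
# Sketch of adapting one trained network to a new domain (transfer learning):
# keep the pretrained feature-extraction layers, replace and retrain only the
# task-specific head. Input size and the new task are illustrative assumptions.

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights="imagenet")
base.trainable = False                      # keep the learned visual features

new_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # new task: pedestrian / no pedestrian
])
new_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# new_model.fit(pedestrian_images, labels, ...)  # train only the new head on the new domain
```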

Neural net processors will help because they are simply more efficient at this kind of computation than conventional central processing units. Initially these processors will reside in data centers or the cloud, but Intel already has plans to scale the technology to meet the low-power requirements of edge applications that might support remote, mobile users, such as in military or border patrol applications.

Calculating Technical Debt Can Focus Modernization Efforts

Your agency’s legacy computer systems can be a lot like your family minivan: Keep up the required maintenance and it can keep driving for years. Skimp on oil changes and ignore warning lights, however, and you’re living on borrowed time.

For information technology systems, unfunded maintenance – what developers call technical debt – accumulates rapidly. Each line of code builds on the rest, and when some of that code isn’t up to snuff it has implications for everything that follows. Ignoring a problem might save money now, but could cost a bundle later – especially if it leads to a system failure or breach.

The concept of technical debt has been around since computer scientist Ward Cunningham coined the term in 1992 as a means of explaining the future costs of fixing known software problems. More recently, it’s become popular among agile programmers, whose rapid cycle times demand short-term tradeoffs in order to meet near-term deadlines. Yet until recently it’s been seen more as metaphor than measure.

Now that’s changing.

“The industrial view is that anything I’ve got to spend money to fix – that constitutes corrective maintenance – really is a technical debt,” explains Bill Curtis, executive director of the Consortium for IT Software Quality (CISQ), a non-profit organization dedicated to improving software quality while reducing cost and risk. “If I’ve got a suboptimal design and it’s taking more cycles to process, then I’ve got performance issues that I’m paying for – and if that’s in the cloud, I may be paying a lot. And if my programmers are slow in making changes because the original code is kind of messy, well, that’s interest too. It’s costing us extra money and taking extra time to do maintenance and enhancement work because of things in the code we haven’t fixed.”

CISQ has proposed an industry standard for defining and measuring technical debt by analyzing software code and identifying potential defects. The number of hours needed to fix those defects, multiplied by developers’ fully loaded hourly rate, equals the principal portion of a firm’s technical debt. The interest portion of the debt is more complicated, encompassing a number of additional factors.
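
As a worked example, the principal calculation is straightforward arithmetic. The defect counts, hours-per-fix and hourly rate below are made-up numbers used only to show the mechanics of the proposed measure.

```python
# Worked sketch of the CISQ-style calculation described above: principal is the
# estimated hours to fix known defects times the fully loaded hourly rate.
# Defect counts, hours-per-fix and the rate are illustrative assumptions.

DEFECTS = {                    # defect type -> (count, estimated hours to fix one)
    "security":        (12, 8.0),
    "reliability":     (40, 3.5),
    "maintainability": (150, 1.0),
}
HOURLY_RATE = 120.00           # fully loaded developer cost, USD

hours_to_fix = sum(count * hours for count, hours in DEFECTS.values())   # 386 hours
principal = hours_to_fix * HOURLY_RATE
print(f"Technical-debt principal: ${principal:,.0f}")   # -> $46,320
```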

The standard is now under review at the standards-setting Object Management Group, and Curtis expects approval in December. Once adopted, the standard can be incorporated into analysis and other tools, providing a common, uniform means of calculating technical debt.

Defining a uniform measure has been a dream for years. “People began to realize they were making quick, suboptimal decisions to get software built and delivered in short cycles, and they knew they’d have to go back and fix it,” Curtis says.

If they could instead figure out the economic impact of these decisions before they were implemented, it would have huge implications on long-term quality as well as the bottom line.

Ipek Ozkaya, principal researcher and deputy manager in the software architecture practice at Carnegie Mellon University’s Software Engineering Institute – a federally funded research and development center – says the concept may not be as well understood in government, but the issues are just as pressing, if not more so.

“Government cannot move as quickly as industry,” she says. “So they have to live with the consequences much longer, especially in terms of cost and resources spent.”

The Elements of Technical Debt
Technical debt may be best viewed by breaking it down into several components, Curtis says:

  • Principal – future cost of fixing known structural weaknesses, flaws or inefficiencies
  • Interest – continuing costs directly attributable to the principal debt, such as: excess programming time, poor performance or excessive server costs due to inefficient code
  • Risk and liability – potential costs that could stem from issues waiting to be fixed, including system outages and security breaches
  • Opportunity costs – missed or delayed opportunities because of time and resources focused on working around or paying down technical debt

Human factors, such as the lost institutional memory resulting from excessive staff turnover or the lack of good documentation, can also contribute to the debt. If it takes more time to do the work, the debt grows.

For program managers, system owners, chief information officers or even agency heads, recognizing and tracking each of these components helps translate developers’ technical challenges into strategic factors that can be managed, balanced and prioritized.

Detecting these problems is getting simpler. Static analysis software tools can scan and identify most flaws automatically. But taking those reports and calculating a technical debt figure is another matter. Several software analysis tools are now on the market, such as those from Cast Software or SonarQube, which include technical debt calculators among their system features. But without standards to build on, those estimates can be all over the place.

The CISQ effort, built on surveys of technical managers from both the customer and supplier side of the development equation, aims to establish a baseline for the time factors involved with fixing a range of known defects that can affect security, maintainability and adherence to architectural design standards.

“Code quality is important, process quality is important,” Ozkaya says. “But … it’s really about trying to figure out those architectural aspects of the systems that require significant refactoring, re-architecting, sometimes even shutting down the system [and replacing it], as may happen in a lot of modernization challenges.” This, she says, is where measuring technical debt is most critical, providing a detailed understanding not only of what a system owner has now, but what it will take to get it to a sustainable state later.

“It comes down to ensuring agile delivery teams understand the vision of the product they’re building,” says Matthew Zach, director of software engineering at General Dynamics Information Technology’s Health Solutions. “The ability to decompose big projects into a roadmap of smaller components that can be delivered in an incremental manner requires skill in both software architecture and business acumen. Building a technically great solution that no one uses doesn’t benefit anyone. Likewise, if an agile team delivers needed functionality in a rapid fashion but without a strong design, the product will suffer in the long run. Incremental design and incremental delivery require a high amount of discipline.”

Still, it’s one thing to understand the concept of technical debt; it’s another to measure it. “If you can’t quantify it,” Ozkaya says, “what’s the point?”

Curtis agrees: “Management wants an understanding of what their future maintenance costs will be and which of their applications have the most technical debt, because they will need to allocate more resources there. And [they want to know] how much technical debt I will need to remove before I’m at a sustainable level.”

These challenges hit customers in every sector, from banks to internet giants to government agencies. Those relying solely on in-house developers can rally around specific tools and approaches to their use, but for government customers – where outside developers are the norm – that’s not the case.

One of the challenges in government is the number of players, notes Marc Jones, North American vice president for public sector at Cast Software. “Vendor A writes the software, so he’s concerned with functionality, then the sustainment contractors come on not knowing what technical debt is already accumulated,” he says. “And government is not in the position to tell them.”

Worse, if the vendors and the government customer all use different metrics to calculate that debt, no one will be able to agree on the scale of the challenge, let alone how to manage it. “The definitions need to be something both sides of the buy-sell equation can agree on,” Jones says.

Once a standard is set, technical debt can become a powerful management tool. Consider an agency with multiple case management solutions. Maintaining multiple systems is costly and narrowing to a single solution makes sense. Each system has its champions and each likely has built up a certain amount of technical debt over the years. Choosing which one to keep and which to jettison might typically involve internal debate and emotions built up around personal preferences. By analyzing each system’s code and calculating technical debt, however, managers can turn an emotional debate into an economic choice.

Establishing technical debt as a performance metric in IT contracts is also beneficial. Contracting officers can require technical debt be monitored and reported, giving program managers insights into the quality of the software under development, and also the impact of decisions – whether on the part of either party – on long-term maintainability, sustainability, security and cost. That’s valuable to both sides and helps everyone understand how design decisions, modifications and requirements can impact a program over the long haul, as well as at a given point in time.

“To get that into a contract is not the status quo,” Jones says. “Quality is hard to put in. This really is a call to leadership.” By focusing on the issue at the contract level, he says, “Agencies can communicate to developers that technically acceptable now includes a minimum of quality and security. Today, security is seen as a must, while quality is perceived as nice to have. But the reality is that you can’t secure bad code. Security is an element of quality, not the other way around.”

Adopting a technical debt metric with periodic reporting ensures that everyone – developers and managers, contractors and customers – share a window on progress. In an agile development process, that means every third or fourth sprint can be focused on fixing problems and retiring technical debt in order to ensure that the debt never reaches an unmanageable level. Alternatively, GDIT’s Zach says developers may also aim to retire a certain amount of technical debt on each successive sprint. “If technical debt can take up between 10 and 20 percent of every sprint scope,” he says, “that slow trickle of ‘debt payments’ will help to avoid having to invest large spikes of work later just to pay down principal.”
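
The arithmetic behind that “slow trickle” is simple to model. The sketch below uses entirely illustrative numbers for team capacity, debt load and new debt added per sprint; it only shows how a fixed pay-down share retires a backlog over time.

```python
# Back-of-the-envelope sketch of the "slow trickle of debt payments" idea:
# reserving a slice of each sprint's capacity retires debt gradually instead of
# requiring a large one-time remediation later. All numbers are illustrative.

sprint_capacity_hours = 400        # team capacity per sprint
debt_share = 0.15                  # 15% of each sprint reserved for debt pay-down
debt_hours = 3_000                 # current technical-debt principal, in hours
new_debt_per_sprint = 20           # debt unavoidably added each sprint

sprints = 0
while debt_hours > 0 and sprints < 200:
    debt_hours += new_debt_per_sprint
    debt_hours -= sprint_capacity_hours * debt_share
    sprints += 1

print(f"Debt retired after about {sprints} sprints")   # ~75 sprints at these rates
```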

For legacy systems, establishing a baseline and then working to reduce known technical debt is also valuable, especially in trying to decide whether it makes sense to keep that system, adapt it to cloud or abandon it in favor of another option.

“Although we modernize line by line, we don’t necessarily make decisions line by line,” says Ozkaya. By aggregating the effect of those line-by-line changes, managers gain a clearer view of the impact each individual decision has on the long-term health of a system. It’s not that going into debt for the right reasons doesn’t make sense, because it can. “Borrowing money to buy a house is a good thing,” Ozkaya says. “But borrowing too much can get you in trouble.”

It’s the same way with technical debt. Accepting imperfect code is reasonable as long as you have a plan to go back and fix it quickly. Choosing not to do so, though, is like paying just the minimum on your credit card bill. The debt keeps growing and can quickly get out of control.

“The impacts of technical debt are unavoidable,” Zach says.  “But what you do about it is a matter of choice. Managed properly, it helps you prioritize decisions and extend the longevity of your product or system. Quantifying the quality of a given code base is a powerful way to improve that prioritization. From there, real business decisions can be made.”

What the White House’s Final IT Modernization Report Could Look Like

Modernization is the hot topic in Washington tech circles these days. There are breakfasts, summits and roundtables almost weekly. Anticipation is building as the White House and its American Technology Council ready the final version of the Report to the President on IT Modernization, and as the next steps in the national cyber response plan near public release.

At the same time, flexible funding for IT modernization is also coming into view: the Modernizing Government Technology (MGT) bill unanimously passed the House in the spring and passed the Senate as part of the 2018 National Defense Authorization Act (NDAA). Barring any surprises, the measure will become law later this fall when the NDAA conference is complete, providing federal agencies with a revolving fund for modernization initiatives and a centralized mechanism for prioritizing projects across government.

The strategy and underlying policy for moving forward will flow from the final Report on IT Modernization. Released in draft form on Aug. 30, it generated 93 formal responses from industry groups, vendors and individuals. Most praised its focus on consolidated networks and common, cloud-based services, but also raised concerns about elements of the council’s approach. Among the themes to emerge from the formal responses:

  • The report’s aggressive schedule of data collection and reporting deadlines drew praise, but its emphasis on reporting – while necessary for transparency and accountability – was seen by some as emphasizing bureaucratic process ahead of results. “The implementation plan should do more than generate additional plans and reports,” wrote the Arlington, Va.-based Professional Services Council (PSC) in its comments. Added the American Council for Technology–Industry Advisory Council (ACT-IAC), of Fairfax, Va.: “Action-oriented recommendations could help set the stage for meaningful change.” For example, ACT-IAC recommended requiring agencies to implement Software Asset Management within six or nine months.
  • While the draft report suggests agencies “consider immediately pausing or halting upcoming procurement actions that further develop or enhance legacy IT systems,” commenters warned against that approach. “Given the difficulty in allocating resources and the length of the federal acquisition lifecycle, pausing procurements or reallocating resources to other procurements may be difficult to execute and could adversely impact agency operations,” warned PSC. “Delaying the work on such contracts could increase security exposure of the systems being modernized and negatively impact the continuity of services.”
  • The initial draft names Google, Salesforce, Amazon and Microsoft as potential partners in a pilot program to test a new way of acquiring software licenses across the federal sector, as well as specifying General Services Administration’s (GSA) new Enterprise Infrastructure Solutions (EIS) contract as a preferred contract vehicle not just for networking, but also shared services. Commenters emphasized that the White House should be focused on setting desired objectives at this stage rather than prescribing solutions. “The report should be vendor and product agnostic,” wrote Kenneth Allen, ACT-IAC executive director. “Not being so could result in contracting issues later, as well as possibly skew pilot outcomes.”
  • Responders generally praised the notion of consolidating agencies under a single IT network, but raised concerns about the risks of focusing too much on a notional perimeter rather than on end-to-end solutions for securing data, devices and identity management across that network. “Instead of beginning detection mitigation at the network perimeter a cloud security provider is able to begin mitigation closer to where threats begin” and often is better situated and equipped to respond, noted Akamai Technologies, of Cambridge, Mass. PSC added that the layered security approach recommended in the draft report should be extended to include security already built into cloud computing services.

Few would argue with the report’s assertion that “The current model of IT acquisition has contributed to a fractured IT landscape,” or with its advocacy for category management as a means to better manage the purchase and implementation of commodity IT products and services. But concerns did arise over the report’s concept to leverage the government’s EIS contract as a single, go-to source for a host of network cybersecurity products and services.

“The report does not provide guidance regarding other contract vehicles with scope similar to EIS,” says the IT Alliance for Public Sector (ITAPS), a division of the Information Technology Industry Council (ITIC), a trade group; vehicles such as Alliant and NITAAC’s CIO-CS and CIO-SP3 may offer agencies more options than EIS. PSC agreed: “While EIS provides a significant opportunity for all agencies, it is only one potential solution. The final report should encourage small agencies to evaluate resources available from not only GSA, but also other federal agencies. Rather than presuming that consolidation will lead to the desired outcomes, agencies should make an economic and business analysis to validate that presumption.”

The challenge is how to make modernization work effectively in an environment where different agencies have vastly different capabilities. The problem today, says Grant Schneider, acting federal chief information security officer, is that “we expect the smallest agencies to have the same capabilities as the Department of Defense or the Department of Homeland Security, and that’s not realistic.”

The American Technology Council Report attempts to address IT modernization at several levels, in terms of both architecture and acquisition. The challenge is clear, says Schneider: “We have a lot of very old stuff. So, as we’re looking at our IT modernization, we have to modernize in such a way that we don’t build the next decade’s legacy systems tomorrow. We are focused on how we change the way we deliver services, moving toward cloud as well as shared services.”

Standardizing and simplifying those services will be key, says Stan Tyliszczak, chief engineer with systems integrator General Dynamics Information Technology. “If you look at this from an enterprise perspective, it makes sense to standardize instead of continuing with a ‘to-each-his-own’ approach,” Tyliszczak says. “Standardization enables consolidation, simplification and automation, which in turn will increase security, improve performance and reduce costs. Those are the end goals everybody wants.”

One Big Risk With Big Data: Format Lock-In

Insider threat programs and other long-term Big Data projects demand users take a longer view than is necessary with most technologies.

If the rapid development of new technologies over the past three decades has taught us anything, it’s that each successive new technology will undoubtedly be replaced by another. Vinyl records gave way to cassettes and then compact discs and MP3 files; VHS tapes gave way to DVD and video streaming.

Saving and using large databases present similar challenges. As agencies retain security data to track behavior patterns over years and even decades, ensuring the information remains accessible for future audit and forensic investigations is critical. Today, agency requirements call for saving system logs for a minimum of five years. But there’s no magic to that timeframe, which is arguably not long enough.

The records of many notorious trusted insiders who later went rogue – from Aldrich Ames at the CIA to Robert Hanssen at the FBI to Harold T. Martin III at NSA – suggest the first indications of trouble began a decade or longer before they were caught. It stands to reason, then, that longer-term tracking should make it harder for moles to remain undetected.

But how can agencies ensure data saved today will still be readable in 20 or 30 years? The answer is in normalizing data and standardizing the way data is saved.

“This is actually going on now where you have to convert your ArcSight databases into Elastic,” says David Sarmanian, an enterprise architect with General Dynamics Information Technology (GDIT). The company helps manage a variety of programs involving large, longitudinal databases for government customers. “There is a process here of taking all the old data – where we have data that is seven years old – and converting that into a new format for Elastic.”

JavaScript Object Notation (JSON) is an open source standard for data interchange favored by many integrators and vendors. As a lightweight data-interchange format, it is easy for humans to read and write and easy for machines to parse and generate. Non-proprietary and widely used, it is common in web application development, Java programming and the popular Elasticsearch search engine.

To convert data to JSON for one customer, GDIT’s Sarmanian says, “We had to write a special script that did that conversion.”  Converting to a common, widely used standard helps ensure data will be accessible in the future, but history suggests that any format used today is likely to change in the future – as will file storage. Whether in an on-premises data center or in the cloud, agencies need to be concerned about how best to ensure long-term access to the data years or decades from now.
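
Such a conversion script is typically short. The sketch below is a generic stand-in, not the script Sarmanian’s team wrote: it reads legacy CSV-style log records, maps them to JSON documents and writes newline-delimited output in the form Elasticsearch’s bulk API expects. The field names and index name are assumptions.

```python
# Hedged sketch of a one-off legacy-log conversion: CSV-style records in,
# newline-delimited JSON (bulk-index format) out. Field names, index name and
# file names are illustrative assumptions, not the actual customer schema.

import csv
import json

def convert(csv_path: str, out_path: str, index: str = "legacy-logs") -> None:
    with open(csv_path, newline="") as src, open(out_path, "w") as dst:
        for row in csv.DictReader(src):
            doc = {
                "@timestamp": row.get("event_time"),
                "source_ip": row.get("src"),
                "user": row.get("user"),
                "message": row.get("msg"),
            }
            # Elasticsearch bulk format: an action line, then the document line.
            dst.write(json.dumps({"index": {"_index": index}}) + "\n")
            dst.write(json.dumps(doc) + "\n")

# convert("legacy_export.csv", "bulk.ndjson")  # then POST bulk.ndjson to the _bulk endpoint
```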

“If you put it in the cloud now, what happens in the future if you want to change? How do you get it out if you want to go from Amazon to Microsoft Azure – or the other way – 15 years from now? There could be something better than Hadoop or Google, but the cost could be prohibitive,” says Sarmanian.

JSON emerged as a favored standard, supported by a diverse range of vendors from Amazon Web Services to Elastic and IBM to Oracle, along with the Elasticsearch search engine. In a world where technologies and businesses can come and go rapidly, its wide use is reassuring to government customers with a long-range outlook.

“Elasticsearch is open source,” says Michael Paquette, director of products for security markets with Elastic, developer of the Elasticsearch distributed search and analytical engine. “Therefore, you can have it forever. If Elasticsearch ever stopped being used, you can keep an old copy of it and access data forever. If you choose to use encryption, then you take on the obligation of managing the keys that go with that encryption and decryption.”

In time, some new format may be favored, necessitating a conversion similar to what Sarmanian’s team is doing today to help its customer convert to JSON. Conversion itself will have a cost, of course. But by using an open source standard today, it’s far less likely that you’ll need custom code to make that conversion tomorrow.
