Open Source is Safe, But Not Risk Free

Open-source software can accelerate development schedules, cut licensing costs and tap a robust community of international developers. But those same strengths can also be exploited as security weaknesses.

For agencies trying to stretch their IT investment, the question isn’t simply whether to use Open Source. Rather, it’s how to do so safely and securely.

Scott Gregory, deputy director of the Office of Digital Innovation for the State of California’s Department of Technology, said Open Source software benefits from its large user communities, which have a shared interest in quickly fixing vulnerabilities. He’s fond of the saying “Given enough eyeballs, all bugs are shallow,” an axiom coined by Eric S. Raymond and dubbed Linus’s Law in honor of Linus Torvalds, the creator of the open-source Linux operating system.

More formally, Linus’s Law holds that getting code in front of enough developers and beta testers ensures that almost every problem will be characterized quickly and, through crowdsourcing, a simple fix will follow. Gregory says the concept works – if you choose Open Source software with a strong and vibrant user base.

“We’re kind of looking for that sweet spot, those that have been tried and true and have gained notoriety as a very stable platform,” Gregory said.

Still, it takes more than a crowd to secure platforms, said Mike Pittenger, vice president of security strategy at Black Duck Software, a Burlington, Mass., provider of specialized tools that secure and manage Open Source software. A Black Duck audit found that the vulnerabilities it identified in Open Source solutions were, on average, more than five years old, yet still remained embedded in shipping software.

“The issue is not that these vulnerabilities aren’t identified by security researchers,” Pittenger said. “Overwhelmingly, individuals analyzing Open Source projects are the source of these disclosures.”

Rather, the problem lies in instituting the changes needed to close those vulnerabilities. Reasons for delays included:

  • The number of discrete open source components in commercial applications turned out to be twice what code owners thought – averaging about 100 per application. That meant those in charge of tracking and fixing security problems were unaware of some modules used in their organizations’ software.
  • Open Source software doesn’t have a company behind it that takes responsibility for pushing out fixes the way proprietary software does. Instead, Open Source users must discover these updates, pull them into their systems and apply their own patches or updates – a workflow sketched in the example below.
  • The pace of published vulnerabilities continues to increase, as do instructions for exploiting them, posted both on the public Internet and on the dark web.
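
Checking an application’s declared components against known vulnerabilities can be partially automated. The Python sketch below is a minimal illustration, assuming dependencies are pinned in a pip-style requirements.txt; it queries the public OSV vulnerability database (osv.dev) for each one. A real inventory would come from a full software bill of materials and cover every ecosystem in use.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV API endpoint

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask OSV for known vulnerabilities affecting one pinned package."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

def audit(requirements_path: str = "requirements.txt") -> None:
    """Report known vulnerabilities for every 'name==version' line."""
    with open(requirements_path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments and unpinned requirements
            name, version = line.split("==", 1)
            for vuln in known_vulns(name, version):
                print(f"{name}=={version}: {vuln['id']} {vuln.get('summary', '')}")

if __name__ == "__main__":
    audit()
```

Even a script this small addresses the first two delays above: it enumerates the components an application actually declares and surfaces the fixes their users are expected to pull in themselves.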

“One of the ways Open Source can move so quickly from lab to production is by bypassing critical bottlenecks like security reviews, which are known to take a long time,” said Andy Ma, senior software architect for General Dynamics Information Technology, a provider of software solutions for the federal government. “Open Source can accelerate deployment, but developers and system owners still need to pay attention to security issues, since they are responsible for making sure that no vulnerabilities exist.”

With the growing popularity of Open Source, that poses a risk: Bad actors face a target-rich environment, making it relatively easy to test a known exploit of a commonly used Open Source component against a range of IP addresses to see which might be vulnerable.

Indeed, Pittenger predicts a 20 percent rise in the number of cyber-attacks on Open Source components over the next year. Yet that doesn’t mean Open Source is any more vulnerable than proprietary software, Pittenger said. All software has vulnerabilities. Close monitoring and rapid updates are just part of the deal when organizations adopt Open Source, as is sharing with other users.

“We should continue to encourage public and private sector organizations to contribute back to Open Source projects, [both] more research by individuals that uncover complex bugs and the responsible disclosure of vulnerabilities,” Pittenger said. “This will help make Open Source more secure, and better leverage the ‘many eyes’ theory by getting more ‘security eyes’ on the code.”

The Mozilla Open Source Support program does just that. Launched in the wake of such major security bugs as Heartbleed and Shellshock, which affected core pieces of popular Open Source software, the program aims to increase “security in the Open Source ecosystem” by:

  • Contracting to pay professional security firms to audit other projects’ code
  • Working with the project maintainers to support and implement fixes and manage disclosure
  • Investing in independent verification, to ensure that published fixes do in fact work

The organization recently examined five Open Source components, identifying one critical-, one high- and 12 medium-rated vulnerabilities.

Of course, it falls to individual system owners to apply those fixes to their own implementations.

In New York, the New York Office of Information Technology Services (ITS) standardized its WebNY project on Drupal, a popular Open Source content management system. To ensure security, ITS maintains a continuous review of security patching and leverages work done elsewhere that identifies potential risks and fixes.

At the national level, Ann Duncan, former chief information officer at the U.S. Environmental Protection Agency, agreed that more eyes and a bigger community should help ensure security, saying that popularity is important when selecting Open Source code.

“If you stay with an Open Source tool in the top 1 to 5 percent of solutions, which means the largest number of users and contributors, then you’re likely to be pretty safe,” Duncan said, “because there are lots of eyes on that technology and it’s used a whole lot.”

Large user bases usually spell faster updates, suggested Eric Mill, a senior advisor on technology for the GSA’s Technology Transformation Service.

“There are all sorts of patches being applied throughout the day as the community does its work,” Mill said. “That feels good from a security perspective, especially given that patch management and update management is a huge security problem in the enterprise.”

Open Source Software Carries Hidden Costs

Open source software can save bundles in licensing and development costs. Whether you’re using the open-source Linux operating system or a content management platform like WordPress or Drupal, open source software provides quality tools for little or no cost.

But that doesn’t mean it’s free.

“It’s free – like a puppy,” said Scott Gregory, deputy director of the Office of Digital Innovation for the State of California’s Department of Technology (CDT). “You’ve still got to give it shots. You’ve still got to care for it.”

For information technology departments, that means ensuring that system updates and security patches are installed and that applications and plug-ins remain up to date. And when something goes wrong, it’s you, the IT manager, who’s responsible for getting it fixed.

What Open Source provides is a means to focus resources. The New York Times and CNN, for example, both use WordPress to host their blogs. Roughly one in four websites runs on WordPress (including GovTechWorks), and that large installed base has created a vibrant worldwide community that’s constantly developing add-ons, extensions and improvements. Some are free, some are not. But either way, users have a lot of options.

That’s the magic of open source software.

CDT has built procedures and protocols to test open source WordPress plug-ins for an internal web publishing platform, centralizing a mandatory review process before any extensions are made available to its IT community. That testing takes time, money and resources – another reason open source software is not totally free.

Not all open source software is equal. Gregory studies every open source solution he examines to determine the strength and reliability of its user community. The larger and more active that community is, the more reliable the software will be. In addition to WordPress, he cited Drupal, a content management system with millions of actively engaged developers, as another good example.

Once open source software is cleared by the California Technology Department, Gregory said, the state tries to make it easy for users to try it out. A state-run Innovation Lab offers a secure cloud where government developers can build tools and applications using Open Source software, testing them in a virtualized environment. That sandbox gives developers an opportunity to see how things work without putting real systems at risk.

Eric Mill, a senior advisor on technology for GSA’s Technology Transformation Service (TTS), said that while Open Source is a critical piece of what the service is trying to do – pushing down costs and streamlining development timelines – it is not a cure-all. There are no great open source email systems, for example, so there may be no escaping large-scale proprietary solutions.

But Open Source excels in many areas. Much of the web and web services are built on open source software, he noted, so if a project involves creating and publishing a web site, his team is likely to do it using open source tools.

Case in point: analytics.usa.gov is a site that aggregates and displays web traffic on federal web sites. The site programming was written to keep the user-facing front end separate from the data-crunching back end, making it easier for others to reuse parts in subsequent projects. Code for the site was shared on GitHub, a site popular with developers for sharing open source software.
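
That decoupling is easy to emulate. The Python sketch below is a hypothetical illustration of the pattern: a back-end job crunches raw traffic records into flat JSON files, and a purely static front end renders whatever those files contain, so either half can be reused or replaced independently. (The file and field names here are invented, not taken from the actual analytics.usa.gov code.)

```python
import json
from collections import Counter
from pathlib import Path

REPORT_PATH = Path("data/top-pages.json")  # flat file the front end fetches

def crunch(raw_hits: list) -> dict:
    """Back end: aggregate raw hit records into a small, stable report."""
    counts = Counter(hit["page"] for hit in raw_hits)
    return {
        "report": "top-pages",
        "data": [{"page": p, "visits": n} for p, n in counts.most_common(10)],
    }

def publish(report: dict) -> None:
    """Write the report where the static front end can fetch it."""
    REPORT_PATH.parent.mkdir(parents=True, exist_ok=True)
    REPORT_PATH.write_text(json.dumps(report, indent=2))

if __name__ == "__main__":
    # The front end never runs this code; it only reads data/top-pages.json.
    publish(crunch([{"page": "/passports"}, {"page": "/jobs"},
                    {"page": "/passports"}]))
```

Because the front end depends only on the JSON contract, another team could swap in its own data source and reuse the display code unchanged.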

While it supplies open source software, GSA doesn’t commit to supporting it all by itself. That is left to the community, which is why, Mill noted, looking at the level of activity in a group is so important. If an available tool has not been updated in a while, it’s probably not a good candidate to depend on. Conversely, a lively community is much more likely to reliably support its open source platform.

For its part, GSA’s TTS actively contributes to the communities it draws from, illustrating how open source support works: people and organizations notice a problem, work out a fix, test it, and then give it to others.

Mill said such support communities are like an ecosystem in which everyone potentially has a role. “If you have the capability to participate in that ecosystem, then that’s something you can consider doing. The community, which in a lot of cases may be other companies, is collectively the support system for that project,” Mill said.

There’s also another important element: Because WordPress, Drupal and Linux are all well-known and widely used examples of open source software, the support demands on them are high. Most open source tools, however, are much less widely used and may be much more specific to the problems they solve. Such tools do not have the same community activity, or need the same level of support, as the big boys.

As for where Open Source may be headed in the public sector, and what that means in terms of support, Ann Duncan, chief information officer of the U.S. Environmental Protection Agency (EPA), said, “One of the things we’ve been trying to embrace is using open source software as much as we can, as much as makes sense in our work.”

Open Source programming is considered for all new programs and those undergoing complete replacement, she added. The motivation? Often the end result saves money, even if it’s necessary to purchase support.

The EPA has some specialized needs, such as its regulatory responsibilities. But it also has many business processes no different from those used elsewhere, Duncan said, and those processes could have open source solutions.

What’s important is for everyone to understand that the security of Open Source applications has to be treated as “unknown.” By definition, its sources are “untrusted.” And although the largest communities work hard to police themselves, there’s always the chance a bad actor could inject nefarious code into an Open Source platform.

“DHS recognized the need for software security early,” said Bernie Thuman, principal software solutions engineer with integrator General Dynamics Information Technology. “They created CarWash and SWAMP – their equivalent of security scanning – to help ensure the integrity of commercially developed applications.”

Open Source offers the opportunity for customization, which can be both a benefit and a curse. The allure of tweaking software to fit particular use cases is powerful, but can turn an open and supported system into a one-off solution. Eventually, the level of customization may reach a point where moving to the next version of the underlying software becomes effectively impossible.

Duncan noted that every place she’s worked considered itself special and deserving of specialized solutions. But in most cases, if organizations are going to reap the greatest benefits of Open Source, users should adapt to the new systems, not the other way around. Every organization hires people, pays bills, handles expense reports and the like. These are processes that lend themselves to open source solutions.

“You make that business process fit the software rather than the other way around, because software generally is going to follow industry best practices,” Duncan said. And save precious development resources for truly mission-critical requirements.

Do Cloud Certifications Pay Off? It Depends on Whom You Ask

Cloud expertise is in short supply. Everyone wants to get to the cloud, but hiring experts with the skills to efficiently get you there is difficult, because competition is stiff.

In the market for talent, there are two factors at play: certifications and experience. Those with both are the most in demand, but either one can add significantly to a solutions architect’s market value. While certifications are attractive, practical experience trumps “test knowledge” in the eyes of most hiring managers. In other words, certifications are good, but a proven track record is better.

Not all certifications are created equal, of course. According to IT training firm Global Knowledge, cloud certifications for Amazon Web Services (AWS) typically command greater value in the cloud talent marketplace than rival certs, such as ISACA’s Certified Information Security Manager and Certified in Risk and Information Systems Control, or (ISC)²’s Certified Information Systems Security Professional. Amazon offers five certifications: three introductory and two more advanced.

“Certifications are highly valued at Rackspace and in the technology industry at large, as they help solidify a skill set,” said Lee Meyer, senior manager for talent development enablement at Rackspace, a leading provider of managed cloud services, offering AWS, Microsoft, and OpenStack platforms.

The same is true at government IT specialists, such as General Dynamics Information Technology.

In the federal government sector, however, cloud certifications are far from required. The Defense Department spells out specific certification requirements in cybersecurity, and the Department of Homeland Security provides a range of free cyber training for federal employees and veterans. But neither agency spells out requirements for certificates in cloud solutions.

Consider the General Services Administration’s Office of Government-wide Policy: The office is the federal government’s managing partner in the Data Center Optimization Initiative (DCOI), which seeks to help agencies reduce from more than 10,000 data centers to a more efficient number, often through the use of cloud technologies.

“OGP is collaborating with GSA’s Federal Acquisition Service (FAS) and FedRAMP to make information available on cloud service providers, as well as collaborating with the National Institute of Standards and Technology and FAS to provide agencies with guidance on how to comply with mandates for transitioning to cloud services,” said Dominic Sale, deputy associate administrator of the GSA Office of Government-wide Policy. FedRAMP is the Federal Risk and Authorization Management Program, a GSA-led effort to implement government-wide security controls for cloud computing.

As part of this, Sale’s office is investigating best practices in selecting cloud services to store data online versus on-premises solutions, but it does not require that staff members hold cloud certifications to do that work, according to an agency spokesperson.

The same is true at the Financial Industry Regulatory Authority (FINRA), a non-governmental organization authorized by Congress to help regulate financial transactions. FINRA handles massive data sets of multiple petabytes each, and volumes can change threefold or more from one day to the next. To manage all that data, FINRA recently adopted an AWS-based cloud solution to capture, analyze and store nearly all of a daily influx of 75 billion records. The move saved an estimated $20 million annually over the previous on-premises system.

Yet FINRA does not require its IT staff to maintain cloud certifications, said spokesperson Ray Pellecchia. “Generally we tend not to focus on certifications – regardless of whether it’s cloud or one of the particular platforms – but instead focus on demonstrated experience and broader problem-solving skills,” he said.

Cloud services are still new, however, so finding experienced managers is challenging. Certifications can help employees – and prospective employees – to develop and demonstrate knowledge and expertise. In the cyber world, certifications are mandated, but experienced practitioners insist that practical experience beats by-the-book certifications all the time.

In cloud, the jury is still out. While similar sentiments arise, experienced people are so hard to find that certifications may be the only way to rapidly make up for that lack of first-hand knowledge. Indeed, the certifications themselves are still new, having all been developed since 2011. AWS launched its certification program in 2013.

The analyst firm IDC predicts cloud services will grow at a compound annual growth rate of around 20 percent through 2020. So competition for experienced – and certified – employees will remain intense for years to come. Certs for AWS – the leader in the infrastructure-as-a-service market – are now the most valued of any IT certification, according to Global Knowledge and Forbes.com.

Passing the AWS test is no easy feat.

“You sit there for four hours and do these pretty intense multiple choice [tests],” said Mark Ryland, chief solutions architect for AWS Worldwide Public Sector. “The professional exams are definitely challenging. It really tests your mettle.” Ryland holds all five AWS certifications.

Each AWS certification costs $150 or so; pricing is similar for cloud certifications from Microsoft, Rackspace and others. Study materials, course instruction and time away from the office can push the total cost into the thousands.

Kenyon Brown, a senior acquisitions editor at publisher John Wiley & Sons, said Wiley “has plans to publish study guides for all the AWS certifications.” The publishing house announced in August it was teaming with AWS to publish the first in a series of official AWS certification study guides.

Competition among employers – and between individual cloud experts – could change the landscape as the cloud employment market develops. Already, there appears to be something of an arms race going on. If professionals continue to make it a point to acquire and maintain all possible certifications, then introductory-level certs soon may not be enough to have an impact on pay. Instead, they become an entry level job requirement.

It’s also not clear where those skills will ultimately reside. Federal agencies could focus on higher-level management skills and look to systems integrators and IT support vendors to provide certified cloud experts as needed. Rackspace’s Meyer noted that certification is already something the company’s customers ask for.

“Certifications may be a part of their compliance requirements, service-level agreement requirements or even part of their business proposition to their [internal and external] customers,” Meyer said.

Yet despite that demand, certification doesn’t eliminate the need for experience. At both FINRA and GSA, experience is a highly valued, if not explicit, requirement.

System integrators who support public sector organizations make similar points. For example, Scott Rutler, General Dynamics Information Technology’s senior director in charge of AWS partnership efforts, says: “Certifications increase our credibility with customers. They help assure customers that best practices are being applied.”

Yet while some government customers are even starting to require cloud certifications for key positions, “coursework and test knowledge only gets you so far,” Rutler said. “In the end, there’s no substitute for real-world experience.”

Building a New Campus? ITSM Can Help

Most of the time, IT service management (ITSM) is employed in familiar places, with retrofits and upgrades. When organizations modernize, ITSM is useful in helping to manage the change.

But what about when an agency is building new – constructing a whole new building or campus?

These instances present organizations with a special opportunity to re-invent how they work and to infuse their processes with new approaches and technologies. In doing so, it’s important not to abandon those same ITSM principles.

The automaker Ford is currently transforming its more than 60-year-old facilities in Dearborn, Mich., building more than 7.5 million square feet of technology-enabled workspace for today’s – and tomorrow’s – highly connected, collaborative workforce.

“We want an open campus and for people to be able to move around,” said Phil Simonte, IT operating model manager at Ford.

From a service perspective, that means having widely available Wi-Fi capabilities, of course, but it also means anticipating wireless network demands not just for today, but for years from now. Simonte, an ITSM practitioner who holds ITIL and COBIT certifications, said the number of devices and the volume of data each consumes must be accounted for – and growth needs to be anticipated for years or even decades to come.
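
At its core, that forecast is arithmetic. The Python sketch below shows one hypothetical back-of-envelope model – occupants, devices per person, busy-hour throughput per device and a compound annual growth rate. All of the numbers are illustrative assumptions, not Ford’s actual planning figures.

```python
def wifi_capacity_gbps(people: int,
                       devices_per_person: float,
                       mbps_per_device: float,
                       annual_growth: float,
                       years_out: int) -> float:
    """Rough peak wireless demand in Gbit/s after compounding growth."""
    today_mbps = people * devices_per_person * mbps_per_device
    future_mbps = today_mbps * (1 + annual_growth) ** years_out
    return future_mbps / 1000.0

# Illustrative only: 5,000 occupants with 3 devices each, averaging
# 2 Mbit/s per device at the busy hour, growing 15 percent a year,
# planned 10 years ahead.
if __name__ == "__main__":
    demand = wifi_capacity_gbps(5000, 3, 2.0, 0.15, 10)
    print(f"Plan for roughly {demand:.0f} Gbit/s of aggregate Wi-Fi capacity")
```

Small changes in the growth assumption compound dramatically over a decade, which is why planners model it explicitly rather than simply over-provisioning today’s demand.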

Planning and building now for a future we can’t yet imagine is hard. But the key to success is staying rooted in your organization’s work objectives, and then incorporating only technology that helps you achieve them.

One key is not to get dazzled by shiny objects, cautioned Mark Storace, CEO of the IT Service Management Professionals Association. “Don’t just implement a tool,” he said. “It won’t work!”

Instead, Storace sounds a familiar ITSM consultant’s refrain: look at process, people and technology. For new buildings and campuses that need infrastructure and services flexible enough to remain viable decades from now, that’s even more important. Managers and planners must invest time on the front end of the project defining processes, roles and responsibilities. Those in turn lead to decisions about how to deploy resources throughout the facility.

It’s also essential to assemble a capable and formal transition change management team, he said. That “will make adoption and implementation much easier.” Everyone involved should have at least a basic level of understanding of ITSM concepts and terminology. He advocates ITIL, the IT Infrastructure Library, which is managed by Axelos, but other ITSM standards can also be used. The key is to have that common understanding and language.

‘Like Taking a Trip’
Matt Moore, practice manager at Fruition Partners, an ITSM consulting firm, has been involved in numerous public sector projects for agencies and departments in Illinois, Kentucky, Texas, Oklahoma and elsewhere.

“It’s like taking a trip to Wally World,” he said, referring to the fictional amusement park in the movie comedy National Lampoon’s Vacation. “The ultimate goal is to follow a road that gets somewhere.”

The car, in this analogy, is the technology platform. Over the years it will need to be serviced. Tires and belts will be replaced and the engine or transmission may even need to be rebuilt. Someday, it may have to be replaced completely.

Yet while buying the vehicle – or the technology platform – is exciting, it also can be distracting, according to Moore. It’s the destination that’s really important, not the mode of transport, he said. Planners must first assess existing resources (systems) and determine whether they are adequate for the journey. Only then is it time to examine the people, processes and technology needed to identify the gaps.

ITSM is useful in the process. New York University, which has an enrollment of more than 50,000 students, used ITSM techniques to reduce calls to its overloaded service desk by putting more information online. After seeing success there, Moore said, the university began to apply the concept in other areas.

One example directly relevant to government campuses: using technology to account for students in the midst of a crisis, such as a mass shooting or terrorist incident. NYU needed a solution that would take into account its affiliated institutions, New York University Abu Dhabi and New York University Shanghai, because students enrolled at the university can be located half a world away at another location. The solution: a polling system in which the university can identify users in a given area, then poll their mobile phones. Only those indicating a problem, or that fail to respond, are placed in line for the next level of checks.
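
The escalation rule is simple to express in code. Below is a minimal, hypothetical Python sketch – poll everyone in the affected area, and advance only those who report a problem or fail to respond. The class and field names are invented for illustration, not drawn from NYU’s system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    SAFE = auto()          # replied "I'm OK"
    NEEDS_HELP = auto()    # replied with a problem
    NO_RESPONSE = auto()   # poll timed out

@dataclass
class Student:
    name: str
    campus: str
    status: Status = Status.NO_RESPONSE

def escalation_list(roster: list, affected_campus: str) -> list:
    """Return only the students needing the next level of checks."""
    in_area = [s for s in roster if s.campus == affected_campus]
    # A real system would push a poll to each phone here; this sketch
    # assumes 'status' has already been set by those replies.
    return [s for s in in_area
            if s.status in (Status.NEEDS_HELP, Status.NO_RESPONSE)]

if __name__ == "__main__":
    roster = [Student("A", "Abu Dhabi", Status.SAFE),
              Student("B", "Abu Dhabi"),
              Student("C", "Shanghai", Status.NEEDS_HELP)]
    for s in escalation_list(roster, "Abu Dhabi"):
        print(f"Escalate: {s.name} ({s.status.name})")
```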

When it comes to a new campus or building, selecting the end goal gets to the heart of an enterprise’s mission. For example, Moore cited a hypothetical university with a strong business program and a weak reputation for the sciences. In designing and constructing a new building or campus, university decision makers could opt to concentrate even more on the core specialty, or use the opportunity to invest more in the science program.

External as well as internal factors must be considered: How will a more robust science footprint affect enrollment? What effect might a rival institution have by making a corresponding investment in its business school?

Having a technology platform that is flexible and can adapt to changing situations is crucial, both for the hypothetical university and for a dynamic government agency. For a new facility to be successfully embraced by its occupants, it’s not enough to have bigger windows and high-tech meeting rooms. If technology is changed without the users having a say, change may not equate with improvement, Moore said.

That’s what happens if “the people who are doing the day-to-day were not consulted,” he said. “In essence, they need to have a say in ‘Will the shiny buttons actually help them out?’”

Outside In
Ian Clayton, principal at consultancy Service Management 101, says many institutions start out technology- and infrastructure-centric, focusing too quickly on existing practices, policies and processes.

“As valuable as this is, I believe starting the journey to customer centricity from an inside-out perspective will fail,” Clayton said.

It’s better and more productive to work from the outside in, he said. Clayton says organizations should ask four questions:

  • Who are our customers?
  • What activities do they perform in pursuit of successful outcomes?
  • How do we help them with those activities?
  • How satisfied are they with our help?

“In my experience, organizations adopting an ‘outside-in’ philosophy can answer these types of questions easily, and are able to commit effort and resources where they will have the greatest positive impact on customer satisfaction and the value derived from using IT services,” he said.

Importantly, for those looking at new buildings and campuses, Clayton has found that an outside-in focused organization can easily adapt to changes in customer behavior and needs. They can do so in real time, making targeted improvements to their service management systems, operating models and service offerings, he said.

Clayton noted that inside-out efforts are worthwhile. They can be helpful, for instance, when improvements are needed in incident, change and configuration processes. The key is to ensure that everything is done in a customer centric way.

Applying ITSM practices during new campus development can also lead to significant cost savings, said Stan Tyliszczak, chief engineer and vice president for technology integration at General Dynamics Information Technology. “By understanding and addressing the business processes first, it’s possible to minimize or even eliminate duplicate and unnecessarily redundant services and capabilities.

“In our work with the federal government,” he added, “we find that campus development projects are ideal opportunities to rationalize applications and consolidate software licenses. The results are pretty substantial cost savings, while at the same time improving IT service performance.”

Starting with a blank slate is a wonderful and rare chance to re-invent how organizations work, said Ford’s Simonte. “It’s a good opportunity to be innovative, to think outside the box and to offer new services.”

But even so, the old fundamentals still apply, Simonte said. “The principles are the same whether it’s a new, green field project or if it’s a retrofit or an upgrade.”

In Age of Cloud, ITSM Still Matters

Even if everything is up in the air – or the cloud – the business of managing information systems doesn’t change: Routine maintenance and major upgrades must be handled in a disciplined and comprehensive fashion and metrics must be employed to evaluate system performance and customer satisfaction.

The discipline of IT service management (ITSM) applies no less when an organization outsources its information technology than when it manages it on its own. ITSM helps managers better understand user requirements for security, capacity and system availability, and build solutions to satisfy those needs.

The U.S. Air Force is a convert.

Even as it prepares for a major shift to outsource as much of its IT services as possible, the Air Force is rolling forward with a major push to inculcate ITSM across the enterprise. Col. Paul Young, chief of the U.S. Air Force’s Joint Information Environment Integration, said the Air Force’s transition to an IT service model will provide two big benefits:

  • “One is it really opens the aperture on who we can use to provide these services,” Young said. “We don’t have to do it internally. We can explain it in a way that everybody understands and we can broaden their horizons in terms of who you can go to for service provisioning.”
  • The second “is that we get standard service delivery no matter who provides the service.” By establishing metrics and requirements up front, the Air Force can establish its own standard model and make those standards a requirement of any contract.

For the Air Force, the cloud and outsourcing co-exist with ITSM, which in various forms has become a standard for many commercial organizations. One common ITSM approach is the IT Infrastructure Library (ITIL), a registered Axelos trademark, which was developed in the United Kingdom as an outgrowth of government ITSM implementations. Other ITSM frameworks include ISO/IEC 20000, FitSM, COBIT and the Microsoft Operations Framework.

ITIL is a framework of processes, approaches and capabilities used by many contractors as standard operating procedures, Young said. Because ITIL allows users to choose which elements to use in their own organizations in a descriptive, rather than prescriptive, way, it helps ensure the Air Force and its service suppliers are speaking the same language.

Standard metrics also help, ensuring provider and customer are on the same page when evaluating service performance.

The Air Force is less than a year into its enterprise-wide rollout of ITSM, with five internal groups substantially involved or touched by the implementation: SAF/CIO A6, Air Force Space Command, Air Force Network Integration Center, Air Force Life Cycle Management Center and the 24th Air Force. Officials said a sixth organization, the Air Force Installation and Mission Support Center, established only a year ago, will also be involved.

The plan is to apply the ITSM framework to new initiatives first, then as the concepts are proven, incorporate them into legacy system management, as well, Young said. As with many challenges today, the implementation is more of a cultural change than a technical one, because it represents a new way to do things.

The approach boils down to splitting responsibility along clear lines: “We want to make the decision about which services are most important and how do they underpin the Air Force’s core mission,” Young said. “Let the provider make the technical decisions about how to meet our service requirement.”

Going to School

Ohio State University adopted ITSM and ITIL years ago for a system-wide approach to supporting its 120,000 students, faculty and staff 24 hours a day, seven days a week. The university’s experience was outlined recently in an Axelos case study.

Bob Gribben, director of service operations, said Ohio State does not require vendors to be ITIL certified, as some government agencies might do, but the university does like to see experience with ITSM frameworks.

Certification in any ITSM approach helps ensure vendors employ recognized best practices and standards, Gribben noted.

“Knowing that this consultant works on the basis of the customer is first and I need to do things efficiently and effectively and economically – the three E’s – is kind of appealing, versus somebody who doesn’t have that,” he said.

Just knowing a vendor has invested in a framework instills confidence, Gribben added. “I think there is something to say about somebody who’s taken the time to get the certification to understand what ITSM involves – it doesn’t have to be ITIL, it could be any of them.”

That is the case whether the service in question is provided locally or in the cloud. Ohio State uses many cloud services, such as Box for storage and Office 365 for productivity. The cloud solutions cut costs, he said.

Using ITSM helps define customer requirements and identify appropriate solutions, and as technology evolves, it helps ensure that service provision can evolve with it. An email or streaming service can quickly go from cutting edge to outdated and expensive, lacking capabilities and performance characteristics that are common in newer offerings.

With a service model, solutions can be continually refreshed and kept up-to-date.

Lessons Learned
In Ohio State’s implementation, Gribben said his staff wanted a self-service portal to help solve user support challenges. The more those could be solved by individual users, the more time help desk specialists would have for more serious concerns.

“The Service Desk had for years been a part of the organization that customers did not want to work with,” Gribben told the case study authors. “We started with our immediate pain point, the Service Desk function and the Incident Management process. From there, we were able to see the benefits of adding Request and Change Management.”

They settled on developing a one-stop shop that could differentiate between user types. So users with Administrative Web Interface (AWI) accounts would see options designed for their level of access, while others – students, staff, online account managers and so on – would see options tailored for them.
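
Role-aware catalogs like that often boil down to a mapping from user type to the options that type may see. Here is a minimal Python sketch of the idea; the roles and option names are invented for illustration, not Ohio State’s actual catalog.

```python
# Map each user type to the self-service options it is allowed to see.
CATALOG = {
    "awi_admin": ["reset service account", "manage DNS",
                  "request firewall change"],
    "student":   ["reset password", "connect to Wi-Fi", "request software"],
    "staff":     ["reset password", "request software",
                  "open facilities ticket"],
}

# Anyone without a recognized role still gets the basics.
DEFAULT_OPTIONS = ["reset password", "contact service desk"]

def options_for(user_type: str) -> list:
    """Return the portal options tailored to one user type."""
    return CATALOG.get(user_type, DEFAULT_OPTIONS)

if __name__ == "__main__":
    for role in ("awi_admin", "student", "visitor"):
        print(role, "->", options_for(role))
```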

Then, using a combination of in-house and commercial tools, Ohio State built, tested and fielded the system. Within nine months, user traffic topped 1 million visits; the system has never failed, and feedback is consistently positive. It’s working, Gribben said, “pretty darn well!”

ITSM experience has taught Gribben the importance of ensuring the right people are on each team and that teams are properly sized for each project. In one case at Ohio State, having too large a team slowed down development time; a single tool took six months to develop because there were too many cooks in the kitchen, Gribben said.

But too few people can also be a problem, Gribben said: “Usually, if you get too small a team, the person sitting next to you has the same idea you do.”

A second critical lesson is ITIL’s service-centered mentality. The customer always comes first. IT supports the mission, and needs to flex to mission needs, not the other way around. For some organizations, that may require a cultural shift, Gribben said.

Similarly, ITSM may also require a whole new way to approach a project, says John Gilmore, director of IT services and solutions and ITSM subject matter expert at General Dynamics Information Technology. It’s important to begin with the end goal, such as the service one needs and the way to measure its effectiveness, rather than the conventional starting point, which often focuses on available infrastructure or tools.

By starting with desired outcomes and working backwards, teams can determine how to reach mission objectives based on the current state, Gilmore said. After that, deciding the way forward becomes easier.

The Marine Corps specified ITSM when launching its Marine Corps Enterprise Information Technology Services (MCEITS) environment. The program involves bringing hundreds of applications and processes in house, and developing a new organization and culture to manage it. General Dynamics Information Technology successfully managed the transition.

“MCEITS represents a transition from a contractor owned/contractor operated environment to a government owned/contractor supported environment and, ultimately, to a government owned/government operated environment,” Gilmore said.

“The Marines quickly realized that a critical element of transition success relies on ITSM best practices through the integration and maturation of processes, tools, and personnel,” he said. That, in turn, gave birth to the Enterprise IT Service Management (EITSM) efforts to establish a foundation for global IT management.

“Communications with all the stakeholders is extremely important,” he said. “We first performed a global current-state assessment that involved reviewing strategic plans and end-state objectives to develop an ITSM implementation roadmap. Our approach also included daily communications with key stakeholders, weekly and monthly status reports and a comprehensive communications and training program in support of customized processes and supporting tools.”

Keeping focused on the end-goals, reviewing progress, making necessary adjustments and communicating progress are all critical to ITSM implementation. Also critical: making sure end users – and not just system owners and managers – are informed along the way.

Users are focused on the end product, not the process used to make it, said the Air Force’s Young. They may not care whether a drill, saw or some other tool is used to cut a hole in a piece of wood.

What’s important is the outcome, Young said, extending the woodworking analogy. “Always remember that what the customer cares about is getting a hole that’s a half inch across.”
