How Machine Learning is Helping Combat Insider Threats

The volume of computer data federal agencies must collect, store and analyze to effectively manage insider threat programs is growing so fast that it threatens to outstrip agencies’ capacity to analyze it, potentially undermining those programs. Sophisticated tools can help, but implementing the needed levels of sophistication requires skilled people – those with years of experience not only with the tools themselves but with the analytics and analysis those tools enable.

Sorting through the data in a reasonable timeframe will require new ways of organizing it, as well as new tools for accelerating search. These new tools incorporate machine learning techniques to identify anomalies and patterns in broad data sets, speeding up investigations and threat detection, industry experts say.

Some of the federal government’s most damaging security breaches over the past decade have involved insiders – cleared government employees and contractors with direct internal access to government facilities, systems and information. In each case, these insiders downloaded massive amounts of classified data.

In the wake of those data breaches and other similar incidents, new standards for insider data collection require agencies to collect and analyze detailed information about the comings, goings and digital behavior of employees and contractors – essentially anyone with access to government systems and facilities. Security standards have now expanded the range of data collected to include physical access information, foreign contact information, foreign travel information, personnel security information and financial disclosure information.

From Gigabytes to Petabytes
All of that information translates into staggering volumes of data. Agencies capture every log-on and log-off, every file action (open, edit, print, share and more), every permission change (especially escalation) and every addition or deletion of users. “We’re talking 50,000 events per second today rising to 100,000 events per second before too long,” says David Sarmanian, an enterprise architect with General Dynamics Information Technology (GDIT).

Security Information Event Management (SIEM) systems are the tried-and-true tools for capturing and tracking all that data. Systems such as ArcSight have a proven track record of providing real-time analysis of security alerts generated by network hardware and applications. Based on a relational database backend, ArcSight allows security operations teams to gather events from applications, devices, servers and their networks. They then load the information into an SQL database where it can be analyzed.

But the exponential growth in collected data over just the past few years has left such systems struggling to keep up, some experts say.

“When the sum of all that data was measured in gigabytes of data per day, that was a good approach,” says Michael Paquette, director of products for security markets with Elastic. “The amount of security data is exploding so much today that large organizations, especially in the government, are not talking about terabytes per day, they are talking about petabytes per day.”

Elastic develops Elasticsearch, a distributed search and analytics engine used in a wide range of big data applications, from Netflix on the consumer side to security tools in the IT world.

“With ArcSight, analysts had to create a rule that might say something like ‘if this IP performs a certain action, notify me or if this user goes to this subnet or computer, notify me,’” says GDIT’s Sarmanian. “But now you have petabytes of data. How do you write a rule that is constantly looking for anomalies? You will crush the system because there is not enough RAM to run the system and do these types of queries.”
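
The kind of static rule Sarmanian describes can be sketched in a few lines of Python (the field names and values here are invented for illustration; real ArcSight rules are written in its own correlation syntax):

```python
# A hypothetical static correlation rule: alert when a watched user
# touches a sensitive subnet. User names, IPs and field names are
# illustrative only.
WATCHED_USERS = {"jdoe"}
SENSITIVE_SUBNET = "10.20.30."

def static_rule(event):
    """Return True if the event should raise an alert."""
    return (event["user"] in WATCHED_USERS
            and event["dest_ip"].startswith(SENSITIVE_SUBNET))

events = [
    {"user": "jdoe", "dest_ip": "10.20.30.7"},    # matches the rule
    {"user": "asmith", "dest_ip": "10.20.30.7"},  # user not watched
    {"user": "jdoe", "dest_ip": "192.168.1.5"},   # subnet not sensitive
]
alerts = [e for e in events if static_rule(e)]
```

Rules like this only fire on conditions an analyst anticipated in advance; writing one for every possible anomaly in petabytes of events is exactly what does not scale.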

Applying Machine Learning
Security analysts can analyze data through visualization, traditional security rules or via machine learning (ML), which can be built on top of fast search engine capability. ML employs artificial intelligence, allowing computers to learn or become more proficient at a given task over time. This learning can be either supervised, where analysts help train the system, or unsupervised, where it is autonomous.

Elastic’s engineers understood that the analysis problem associated with big data is fundamentally a search problem, Paquette explains. Security teams usually perform analysis on a portion of the data extracted by search technology.

Elasticsearch uses unsupervised learning and incorporates an ML job, Paquette says. This ML job builds on an index of data or fields within the data. Analysts can instruct the ML tool to model the behavior of that data and inform the tool to notify the security team whenever something unusual occurs. The ML job automatically runs in the background without any human participation, generating an alert or email each time it detects an anomaly.
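
The core idea of such an ML job – learn a baseline from the data, then flag deviations without human involvement – can be illustrated with a toy z-score model in pure Python (a simplification for illustration, not Elastic’s actual algorithm):

```python
import statistics

def build_baseline(history):
    """Model 'normal' as the mean and standard deviation of past counts."""
    return statistics.mean(history), statistics.pstdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hourly file-access counts for one user over a quiet stretch (invented data)...
history = [4, 6, 5, 7, 5, 6, 4, 5]
baseline = build_baseline(history)

# ...followed by the kind of burst a mass download would produce.
flagged = [v for v in (6, 5, 500, 4) if is_anomalous(v, baseline)]
```

A production ML job would re-learn the baseline continuously and model many fields at once, but the shape is the same: baseline in the background, alert only on the outlier.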

Splunk Inc., a San Francisco provider of systems and tools for managing vast amounts of machine data, is another leading player in this market. The Splunk platform performs anomaly detection, adaptive thresholding and predictive analytics by using pre-packaged or custom algorithms to forecast future events.

“Splunk provides a comprehensive answer to one of the biggest challenges facing modern organizations: How does it harness diverse and increasingly profuse amounts of data to gain valuable business insights,” analyst Jason Stamper writes in a report on Splunk and machine learning.

Splunk software can also pull data from the Hadoop data file system, traditional databases and any other relevant data source via APIs to provide additional context for the data, according to Stamper’s report. The software also creates correlations between disparate data sources and normalizes different data types at search time. For instance, a person might appear in log data as an employee number, but appear as a full name in a human resources system. Splunk software helps normalize the way the data is represented, allowing analysts to take full advantage of the software’s statistical analysis capabilities, which can help them monitor for activities that are statistical outliers at a variety of levels of standard deviation.
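
That normalization step amounts to a join on a shared identifier. A minimal sketch with invented field names and records (Splunk performs this with lookups at search time):

```python
# The HR system knows full names; the logs know only employee numbers.
hr_records = {
    "E1001": "Jane Doe",
    "E1002": "Alan Smith",
}

log_events = [
    {"emp_no": "E1001", "action": "print", "count": 3},
    {"emp_no": "E1002", "action": "share", "count": 1},
    {"emp_no": "E1001", "action": "share", "count": 9},
]

def normalize(events, hr):
    """Attach the HR full name to each log event so both sources
    refer to the same person the same way."""
    out = []
    for event in events:
        enriched = dict(event)
        enriched["full_name"] = hr.get(event["emp_no"], "<unknown>")
        out.append(enriched)
    return out

normalized = normalize(log_events, hr_records)
```

Once every source describes a person identically, statistical analysis can aggregate that person’s activity across all of them.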

GDIT security experts use both Splunk and Elastic ML capabilities in their work with federal agencies, to establish a baseline of behavior for anomaly and pattern detection, according to GDIT’s Sarmanian.

Hunting for Threats
Another tool gaining traction in the federal space is Sqrrl’s Threat Hunting Platform, built on a foundation of Apache Hadoop and Accumulo, an open source database originally developed at the National Security Agency, Sarmanian notes.

Sqrrl’s threat hunting capabilities are grounded in the concept of a behavior graph combining linked data with various types of advanced algorithms including ML, Bayesian statistics and graph algorithms. Sqrrl goes beyond simple log search and histograms. It provides linked data analysis and visualization capabilities, which involve the fusion of disparate data sources. These use defined ontologies to better enable ad hoc data interrogation, greater contextual awareness, faster search and more intuitive visualization, Sqrrl officials say.
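
At bottom, a behavior graph is linked data: entities such as users, hosts and files are nodes, and observed events are the edges between them. A minimal sketch of the idea (Sqrrl’s platform sits on Accumulo and uses far richer ontologies than this; the entities below are invented):

```python
from collections import defaultdict

# Edges: (entity, relation, entity) triples observed in event data.
edges = [
    ("user:jdoe", "logged_into", "host:fileserver1"),
    ("host:fileserver1", "stores", "file:plans.docx"),
    ("user:jdoe", "copied", "file:plans.docx"),
    ("file:plans.docx", "written_to", "device:usb-3f2a"),
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append((rel, dst))

def reachable(start):
    """Every entity reachable from `start` -- the linked data an analyst
    pivots through when hunting outward from one suspicious user."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for _, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

linked = reachable("user:jdoe")
```

Starting from a single user, the traversal surfaces the server, the file and the USB device the file ended up on, which is the contextual awareness a flat log search does not give.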

Machine Learning Savvy
Insider threats require different types of analytics, says Chris Meenan, director of QRadar Management and Strategy with IBM Security.

“It is not about a specific vulnerability being exploited or a specific malware that has a signature associated with it,” Meenan points out. “It is about someone who has gone rogue, who uses entitlement to do their job, but starts to use that in a malicious way. So that requires analytics that is more behavior-based.”

IBM’s QRadar is a SIEM platform that detects anomalies and uncovers advanced threats through the consolidation of log events, network flow data from thousands of devices and endpoints and applications distributed throughout a network. ML extends the predictive modeling capabilities of QRadar and its User Behavior Analytics application.

ML helps analysts build models of user behavior. One single threshold cannot be applied across an organization because different groups of people have different job functions, requirements and work habits. A marketing staffer might travel often and log into the network from remote locations multiple times per week, while an administrative employee might only log in from a desktop computer at work. Analysts can build models for these employee profiles that anticipate what kinds of work they do, what types of files and apps they access and whether they are authorized to use privileged accounts. Such model-driven security is becoming more common, Meenan says.
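
Meenan’s point – that no single threshold fits every job function – can be sketched by keeping a separate baseline per role (the roles and numbers here are invented):

```python
# Weekly remote-login counts, grouped by job role (illustrative data).
observed = {
    "marketing": [8, 10, 9, 11, 9],  # travels, logs in remotely often
    "admin":     [0, 0, 1, 0, 0],    # nearly always at a desk
}

def role_threshold(history, slack=3):
    """Per-role ceiling: the historical maximum plus some slack."""
    return max(history) + slack

thresholds = {role: role_threshold(h) for role, h in observed.items()}

def flag(role, weekly_logins):
    """Flag behavior unusual *for that role*, not for the org overall."""
    return weekly_logins > thresholds[role]

# Nine remote logins is routine for marketing but glaring for an admin.
results = (flag("marketing", 9), flag("admin", 9))
```

A single org-wide threshold would either drown analysts in false alarms on the marketing staff or miss the admin entirely; per-role models avoid both.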

Sven Krasser, chief scientist with CrowdStrike, a developer of ML-based endpoint security tools, says it’s relatively easy to build a machine learning model for such uses. The hard part is the integration of that machine learning model into a larger infrastructure. “You need to collect and normalize the data,” he says. “You need to have a data strategy where you are collecting data that is useful to machine learning and does not cause breakage.”

Finally, as valuable as ML technology is, it’s important to recognize that it’s a tool – not magic. ML is only effective in situations where there’s enough data generated to provide insight into IT network traffic.

“These are complex problems,” says GDIT’s Sarmanian. “You need to have the right data, you need to understand the complexity of the problem and you need to match the tools to the challenge in such a way that it can scale and adapt over time. Finally, you need to do all that within a finite budget. Get any of those wrong and things can spiral out of control in a heartbeat. This is one of those areas where experience makes a really big difference.”

Can Micro Certifications Match Government’s Real-World Needs?

Certifications are the baseline for assessing proficiency in information technology and cybersecurity skills. But many career cyber professionals say required certs underestimate the value of on-the-job training and experience.

Others complain about the cost of obtaining and maintaining certifications that often don’t match up to the narrow responsibilities of a given job. They see emerging micro certifications – short, online training programs followed by knowledge exams – as more practical in many cases.

At the same time, new national standards are taking root to try to better tie knowledge requirements to specific job categories and certifying organizations are revising their programs.

Anil Karmel Founder / Chief Executive C2 Labs

“Nothing replaces real world expertise,” said Anil Karmel, founder and chief executive at IT startup C2 Labs of Reston, Va., and a former deputy chief technology officer at the Energy Department’s National Nuclear Security Administration (NNSA). “Having on-the-job skills is invaluable, especially in the cyber realm when you are entrusted to protect our nation’s critical IT assets.” The aim, Karmel said, is to strike the right balance between real-world experience, certifications and training.

Certifications were crucial early in his career, Karmel said. As he moved into mid-to-senior level positions, however, his needs changed and so did the qualifications required for those jobs. The higher you go, the more experience and past performance define your capabilities.

Karmel was a solutions architect at the Energy Department’s Los Alamos National Laboratory in Los Alamos, N.M., where he helped develop and launch a private cloud that let researchers automatically request virtual servers on demand.

“As I grew and became a systems administrator, I focused on industry or vendor-specific certifications, such as VMware, which enabled me to build the private cloud at Los Alamos.”

Certifications can be seen as a baseline, onto which you may want to add additional skills. Micro training programs are one way to do that, Karmel noted.

For instance, a security analyst might need to move beyond incident response – reacting to events that could have a negative impact on an organization’s network – to learn about incident management. It focuses on preparing formal policies and procedures, as well as having the necessary tools in place, to thwart cyber threats. An online micro certification class on the Incident Management Lifecycle could meet that need.

Micro certification, a growing trend?
Micro certifications are narrowly focused, non-traditional skills training in which participants can earn a credential within a matter of days, versus months or years for traditional technical certification programs.

Micro certifications are well-liked by workers – and supervisors – according to a January 2017 Linux Academy/Cybrary survey of 6,000 IT pros. They allow employees to rapidly gain specific knowledge sets that answer specific needs.

Anthony James CEO and Founder Linux Academy

“The growing micro certification trend is driven predominantly by industries such as IT and cybersecurity that have a workforce skills gap where jobs can’t be filled because of a lack of qualified applicants,” according to Anthony James, CEO and founder of Linux Academy, an online Linux and cloud training firm based in Fort Worth, Texas.

The survey, conducted in partnership with Cybrary, a provider of no-cost, open source cybersecurity Massive Open Online Courses (MOOCs), found IT professionals use micro certification programs to keep up with changing technologies and also learn at their own pace. Some 86 percent of respondents said they prefer learning and testing in small increments to receive IT skill credentials.

Thirty-five percent of respondents said that micro certifications have either helped them get or advance in a job; 70 percent think their company would benefit from partnering with micro certification providers; and 85 percent would most likely pursue micro certifications if employers facilitated the offering.

Opinions on micro certification versus traditional IT training varied. More than half – 58 percent – of respondents said micro certifications convey the same level of technical proficiency as traditional training and more than 94 percent believe that micro certifications give entry-level job candidates an edge in competing for jobs.

In terms of cost, 82 percent of respondents said micro certifications are more affordable than traditional IT training. Fifty-eight percent of those surveyed paid $25 or more for their own micro certification courses. Most respondents believe their company spends an average of up to $25,000 annually on IT skills training for employees.

Difference Between Certificates and Certification
Many government and government contractor jobs require certifications from established organizations, such as the Certified Information Systems Security Professional (CISSP) certification conferred by (ISC)2, which offers a portfolio of credentials that are part of a holistic, programmatic approach to security. Candidates must have a minimum of five years of paid full-time work experience in two of the eight domains of the CISSP Common Body of Knowledge (CBK). It covers application development security, cloud computing, communications and network security, identity and access management, mobile security, risk management and more.

CISSP certification is costly, ranging from $2,000 to $4,000, depending on the choice of study – CISSP Boot Camp, regular classroom or online training. The six-hour exam alone costs $599.

But Dan Waddell, managing director for North America with (ISC)2, doesn’t see such certifications going away.

“I don’t believe the certification requirement is overkill and I believe most cybersecurity executives in the government would agree,” Waddell said.

According to the recently released federal results of the 2017 Global Information Security Workforce Study, 73 percent of federal agencies require their IT staff members to hold information security certifications. The survey of over 19,600 InfoSec professionals includes responses from 2,620 U.S. Department of Defense (DOD), federal civilian and federal contractor employees. The study was conducted by The Center for Cyber Safety and Education and sponsored by (ISC)2, Booz Allen Hamilton, and Alta Associates. Findings of the report will be released throughout 2017 in a series of dedicated reports.

To effectively retain existing InfoSec professionals and attract new hires, federal respondents indicated that offering training programs, paying for professional cybersecurity certifications, boosting compensation, and providing more flexible and remote work schedules and opportunities were among the most important initiatives.

Still, Waddell acknowledged that traditional certifications must evolve over time, and that (ISC)2 must develop ways to support government efforts to move toward a more performance-based certification system.

Micro certifications aren’t necessarily a replacement for baseline job requirements, however. Scott Cassity, senior director at the Maryland-based SANS Institute Global Information Assurance Certification (GIAC) center, said there is room for both in the complex and rapidly evolving world of cybersecurity.

“I can appreciate folks saying [they need] more bite-size micro certifications,” Cassity said. “I can appreciate that there might be some particular bite-size training we need on a particular tool, a particular technique.

“But if you back up and say: ‘Hey, I want someone who can be a defender. I want them to have a broad range of skills.’ Then, we don’t think that is something that will be absorbed in bite-size chunks,” Cassity continued. “It is going to be very rigorous and challenging training. It is studying above and beyond that training.”

Take the GIAC program, for example, which offers several dozen certifications for a range of different skill sets. Courses typically run four months and cost $1,249. Most students spend 40 to 50 hours studying outside of the classroom, Cassity said. Like CISSP, GIAC is a DOD-approved credentialing body, and its programs meet requirements laid out in the DOD Directive 8570, setting training, certification and management requirements of government employees involved with Information Assurance and security.

(ISC)2’s Waddell agrees there is a difference between a certificate, which covers practical cybersecurity knowledge or a specific skill set, and a professional certification, which more rigorously assesses a broader range of knowledge, skills and competencies.

The cybersecurity industry keeps evolving, Cassity said. Fundamental skills for information security will stand the test of time. But with mobile security, forensics and other rapidly growing technologies, job functions and certifications must change, as well.

Building a Skills-based Workforce
Federal agencies are looking to adopt the skills-based workforce definitions developed under the National Initiative for Cybersecurity Education (NICE), a partnership between government, academia, and the private sector that’s managed by the National Institute of Standards and Technology (NIST). NICE aims to standardize expectations for cybersecurity education, training, and workforce development across the industry to level-set expectations for employers and employees alike.

“We are not in favor of check-the-box for knowledge and skills,” said Rodney Petersen, NICE director at NIST.  “We really want a robust process for validating an employee’s knowledge, skills and abilities or a job seeker’s knowledge, skills, and abilities.”

The NICE Cybersecurity Workforce Framework (NCWF) – released by NIST in November 2016 – is the centerpiece, describing seven broad job categories: securely provision; operate and maintain; protect and defend; analyze; collect and operate; oversight and development; and investigate. It also includes 31 specialty areas and 50 work roles, each predicated on specific knowledge and skills, Petersen said.

NICE aims to improve education programs, co-curriculum experiences, training and certification to increase the quality of those credentials, he added.

NICE also impacts certifications. Defense Department Directive 8140, Cyberspace Workforce Management, issued in August 2015, sets the stage for replacing DOD’s certification-based requirements with skill-based assessments rooted in NICE.

According to the 2017 Global Information Security Workforce Study, 30 percent of federal respondents said their organizations have at least partially adopted the NICE Cybersecurity Workforce Framework.

The U.S. Department of Homeland Security (DHS) is using the NICE framework to build up its cybersecurity workforce. As a government-wide workforce framework, NICE “helps us to implement best practices, to identify, find and recruit the really good people,” Phyllis Schneck, DHS deputy undersecretary told GovTechWorks last year.

Some certifying organizations are starting to develop new “performance-based” certifications that are more in line with the NICE standard:  ISACA unveiled its Cyber Security Nexus Practitioner (CSXP) certification, which tests a candidate’s skills in a live, virtual cyber-lab, and CompTIA’s A+, Network+, Security+ and CompTIA Advanced Security Practitioner (CASP) certifications also include performance-based assessments.

Both ISACA and CompTIA are building their new hands-on programs around the NICE standards and definitions. NICE doesn’t undo the call for certifications, but instead emphasizes functional roles to better align candidates’ skills with specific job functions.

(ISC)2 began mapping CISSP certification requirements to the NICE Cybersecurity Workforce Framework last year, Waddell said.

“Certification is just the beginning,” he added. “You are now required to maintain that certification. You are required to set aside a certain number of hours per year to maintain that certification.” Those Continuing Professional Education (CPE) hours can include hands-on training or even skilled-based micro certs.

“In a perfect world,” Waddell said, “a certification program and certificate program can co-exist in a healthy way.”

Gotta Get Mobile: Citizens Drive Changes in .Gov Web Design

More than one in three visitors to Federal Websites use mobile or tablet devices today, following a national trend that saw mobile Web usage surpass personal computers for the first time in 2014, and has continued growing since.

Many Federal agencies are unprepared for the shift, however. Some have recently updated their sites to be mobile-friendly, but others are scrambling to update systems developed before mobile was a significant concern.

The heat rose last April, when Google changed its search engine to give preference to mobile-friendly sites. At the time, the government IT news site NextGov tested two dozen of the government’s largest websites using Google’s mobile-friendly webmaster tool, and eleven failed. Nine months later, only one of those sites passes Google’s mobile test.

Upgrading Websites is a challenge to begin with, and adding mobile to the mix is just one more wrinkle to be ironed out. The Department of Homeland Security’s U.S. Citizenship and Immigration Service (USCIS) has been working on its site update for more than a year, focusing on mobile first and using responsive design technology that will adapt to different screen sizes on the fly.

At the same time, the General Services Administration’s (GSA) 18F organization is working with the White House U.S. Digital Services group to develop government-wide standards for Federal Websites. The draft U.S. Design Standards, which are voluntary, aim to provide federal IT developers and industry partners with a roadmap and toolset to help spread clear design and navigation standards across all government sites.

“When the American people go online to access government services, they’re often met with confusing navigation systems, a cacophony of visual brands, and inconsistent interaction patterns,” explained project leader Mollie Ruskin, announcing the standards in an 18F blog post.

The new standards include open-source user interface (UI) components, a visual style guide and industry-standard guidance for accessibility and design.

Going Mobile

At USCIS, leaders pursued responsive design because mobile was what their constituents wanted, says Jeffrey Levy, chief of E-Communications in USCIS’ Office of Communications. The new site will give mobile users access to all the same features and functions as the desktop version, which isn’t the case today.

“You get a layout that works on any size screen so you don’t have to maintain two servers, two web sites,” he says.

Responsive design is part of a broader strategy to make it easy for customers to get answers to their questions and apply for services without requiring help from a representative in person or on the phone. The more customers can help themselves with routine matters, either online or using automated phone systems, the more time representatives will have to resolve more complex issues that require human intervention. Toward that end, USCIS launched a virtual assistant feature in December. Named “Emma,” it helps Web visitors quickly answer questions and locate information, easing call center workload.

Unhappy Customers

Improving citizen Web services to be as simple and intuitive as consumer sites, such as Amazon or Netflix, is a major goal of the Obama administration. Executive Order 13571, Streamlining Service Delivery and Improving Customer Service, declared in 2011 that “government managers must learn from what is working in the private sector and apply these best practices to deliver services better, faster, and at lower cost.” The order emphasizes the need to offer online and mobile solutions and to reduce “the overall need for customer inquiries and complaints.”

Meanwhile, citizens’ experience of government services has fallen short of those goals. According to an American Customer Satisfaction Index report released last year, citizen satisfaction with Federal government services plunged in 2014 to its lowest level since at least 2007.

Citizen Satisfaction With Federal Government Services

“Overall, the services of the federal government continue to deliver a level of customer satisfaction below the private sector and the downturn this year exacerbates the difference,” the report states.

Alan Webber, a research director with IDC Government Insights, says government agencies don’t benefit from the competitive pressures seen in consumer markets. The Social Security Administration, for example, is the only source of information about your benefits, so there’s little natural incentive for the agency to upgrade its online services.

For the public, Webber says, this plays out in diminished expectations. “We have the expectation that any engagement or any interaction with government isn’t necessarily going to be easy,” he says. “We want it to be easy, but it is not necessarily going to be that way.”

New Federal Standards

Changing those expectations is the driving force behind a joint effort of the General Services Administration’s 18F organization and the White House’s U.S. Digital Services group. The pair introduced the draft U.S. Web Design Standards in September 2015, a first-ever effort to provide federal IT developers and industry partners with clear design and navigation guidance for government sites.

Much of the guidance in these draft Web Design Standards incorporates open source designs, code and patterns from other civic and government organizations.

“Like any true alpha, this is a living product,” Ruskin wrote in the 18F blog. As a result, the 18F and U.S. Digital Services team will continue to test its UI decisions and assumptions with real-world feedback, letting the standards evolve over time. Designers are encouraged to explore the U.S. Web Design Standards, contribute their own code and ideas, and leave feedback on GitHub. The project team will use this input to improve the standards and make regular releases over the coming months, according to Ruskin.

Designers seeking other sources of insight and best practices can also look to the Content Managers Forum and the Social Media Community of Practice, both managed by GSA; the Federal Communicators Network; and the Federal User Experience Community, Levy says.

The new U.S. Web Design Standards do not come with a mandate or requirement; their use is entirely voluntary. But some sites have already begun to incorporate the standards, including a state voter registration portal, and others have folded elements of the standards into their own systems, according to 18F officials.

Whether or not every agency website should have the same look and feel or be distinctive in its own right is open to debate. Some say yes, others no, still others argue for a middle ground in which a basic set of building blocks, such as a narrow selection of available fonts, is used to establish a modicum of order and consistency. But even that is easier said than done, Levy says. Changing the font on websites can require programming and layout changes, which cost time and money. Within USCIS alone, he says, four or five platforms would have to be changed just to standardize fonts.

Putting the User First

Simply changing fonts on existing sites may not be worth the effort. Rather than look back, agencies should direct current effort toward future improvements that not only meet, but exceed, customer expectations.

David Simeon, chief of USCIS’s Innovation and Technology Division and myUSCIS Product Manager within the agency’s Customer Service and Public Engagement Directorate, says any site redesign needs to start with the customer.

“We start with user research, interviewing customers, understanding their needs and motivation. Then we determine what types of products [USCIS can develop] that would suit those needs,” Simeon says. The web development teams are made up of USCIS designers and contractors.

Once initial apps are designed, usability testing helps ferret out problems.

Simeon’s team has worked with Levy to build tools and applications for the USCIS website and add new capabilities to the page, enabling customers to check the status of their cases, or determine what options and benefits are available to them by entering information into a pull-down menu.

Based on that information, USCIS narrows the benefits and options to each customer’s specific needs and gives them links to various resources to begin the application process. For instance, by identifying themselves under Explore My Options as a citizen, a Green Card holder, an employer, a foreign national or an “individual without lawful immigration status,” customers see a customized menu of choices that gets them the information they need more quickly.

Other new tools help users complete citizenship applications and study for the citizenship exam, maintain appointments and receive text-message alerts. Coming soon: A secure message app will let visitors communicate online with immigration officers.

All of these improvements evolved from research, including the site’s emphasis on mobile devices, Simeon says. “We had to go responsive because mobile is the way they access the Internet.”

Now Simeon’s team is working to improve the citizenship e-filing experience so customers can submit immigration applications and petitions online. USCIS conducted usability testing for a new naturalization form by using real applicants from around the country. “They have given us pointers about what works and what doesn’t work,” he says.

The objective, USCIS E-Communications chief Levy says: “Help people to use our online resources in the way they think of it, rather than forcing them to think of it the way we think of it.”

State CIOs Push Accessibility and User Experience Standards

Nearly one in five citizens needs some kind of accommodation when accessing digital government services. Ensuring that every citizen has equal access to those services is the focus of a new guidelines initiative from the National Association of State CIOs (NASCIO) calling for increased understanding and use of accessibility standards.

At the Federal level, the requirements are laid out in Section 508 of the Rehabilitation Act of 1973 (amended in 1998) and in the Federal standards that implement the law. Broad in its definitions, Section 508 covers the blind and deaf, those who have lost limbs or the use of them, and people with color blindness, dyslexia, slow reaction times, short-term memory issues, cognitive disabilities and near-sightedness.

But while most states have similar legislation and virtually all have IT accessibility policies in place, adoption has been slow and implementation inconsistent. Jeff Kline, program director of Statewide Electronic and Information Resources Accessibility with the Texas Department of Information Resources, said state bureaucracies are complex, widespread and cover myriad agencies, many with their own IT departments. Getting everyone on board is challenging.

A Step Towards Change

NASCIO published a pair of position papers in July and August advocating a Policy-Driven Adoption for Accessibility (PDAA) approach, which aims to:

  • Make it harder to ignore accessibility
  • Establish non-technical methods and metrics to help measure compliance
  • Provide guidance on end goals, rather than how-to instruction
  • Help accelerate marketplace innovation.

Ultimately, accessibility is part of system design, said Kline. It needs to be addressed from the beginning of every development project, with IT accessibility woven into the fabric of a product from the setting of requirements through development and testing.

Sarah Bourne, director of IT accessibility with the Massachusetts Office of Information Technology (MassIT), who also contributed to the NASCIO position papers, agreed. “A lot of what PDAA is trying to do is get people to understand accessibility is not something you do at the end,” she said. “You can’t sprinkle magic accessibility dust and have it all be better at the end. You have to build it in.” That means paying attention to details, like choosing a Javascript library that meets accessibility standards, she said, and ensuring that color schemes have the necessary contrast to be viewed and understood by the colorblind.

“You don’t want to find out two weeks before the site goes live” that your colors don’t pass the test, she said.

Desarae Veit, a senior UI/UX designer with General Dynamics Information Technology, said government applications should display a color contrast ratio of at least 4.5:1 to ensure content is easily visible to color-blind users.

More important, color cannot be the only means of conveying a message to the user. The takeaway: messages must be communicated in multiple ways and be interpretable by assistive technology, such as the braille readers or automated audio that blind people use to navigate applications or websites.
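The 4.5:1 threshold comes from the WCAG 2.x Level AA success criterion for normal-size text. As a minimal sketch (assuming the standard WCAG relative-luminance formula for sRGB), a contrast check can be computed like this:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255, per WCAG 2.x."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; ranges from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on white easily passes AA (>= 4.5); light gray on white does not.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)   # False
```

Running a check like this during design review, rather than before launch, is exactly the “build it in” practice Bourne describes.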

Integrating Across an Enterprise

MassIT’s Bourne said it “would help a lot” if technical and business schools were including accessibility into the curriculum whenever they talk about compliance requirements.

“Accessibility is seen as being only applicable to government, so we don’t get people coming in who know anything about it,” she said. “We’ve done tons of outreach, awareness raising and training, but we always have new people coming and going.” Simply inserting the required accessibility language into IT contracts without really understanding what it means is not good enough.

Jay Wyant, Minnesota’s chief information accessibility officer and another contributor to NASCIO’s framework, acknowledges another challenge: bringing existing systems into compliance.

“Enterprise systems have been a struggle for us,” he said, adding that over the past three years, the state has consolidated IT systems, employees and administrators and established a statewide project management office to ensure greater accountability for Minnesota’s major technology investments and contracts.

The biggest accessibility challenge is the state’s huge Enterprise Resource Planning systems, which run accounting, human resources, workforce management and payroll, Wyant said. Enterprise software developers such as Oracle and SAP acquired other software companies through the years, and many enterprise applications have been bolted together without regard to accessibility or usability. They may depend on old browsers and hardware, which can hold system owners back from change and innovation.

Wyant cited three significant developments that help make older applications more compliant with accessibility guidelines:

  • The emergence of mobile technology
  • HTML 5, the markup language for creating web site content
  • Web interface technology

Still, progress is slow. Enterprise commercial software comprises millions of lines of code, much of it written before accessibility was a consideration. Kline said developers keep putting on Band-Aids until they reach the point where the product has to be entirely revamped.

In her state, MassIT’s Bourne said, the move to web interfaces gives application developers more ways to fix legacy accessibility shortfalls.

“Before, when it was a desktop system, that was the way it was. You were completely dependent on the vendor addressing whatever issues there were,” she said. “But with the web interfaces, you often have opportunities to make configurations or customizations that can address some of the problems that prevent people from using it to do their jobs.”

PDAA Adds Insight

Kline said adding policy-driven documentation requirements can help government managers better gauge vendors’ ability to deliver accessible systems. Both government offices and vendors need to be fully familiar with the requirements, so that each side understands what is expected and what it will take to deliver.

The same holds for development of a web site. Setting down and agreeing to requirements and metrics for achieving the desired results has to happen up front. It can’t be an afterthought, Kline said.

The policy-driven approach establishes criteria for accessibility training as well as integration into business processes and organization structure.

One looming challenge facing government IT managers: How to fit cloud-based solutions and services into the accessibility picture. As cloud applications proliferate, agencies will have to monitor not just their own processes and interfaces, but also those of third-party cloud providers.

Leading the Way

The Texas Department of Information Resources issued the first solicitation using NASCIO’s PDAA approach in August 2014, for IT product and service education. Now Minnesota is launching a pilot program with the approach, and working with a number of vendors as partners.

Kline keeps an eye on accessibility work in other spheres. He noted the U.S. Department of Justice is developing new rules for web site accessibility under the Americans with Disabilities Act, which should be codified next year.

Lawsuits related to accessibility are on the upswing. Implementing a policy-driven framework can help government agencies and vendors alike protect themselves and their products from legal challenges.

This “is really what is going to drive more seriousness in accessibility and drive a lot more innovation and tools to make accessible IT more efficient,” Kline said.
