Q&A: US Patent and Trademark Office's CIO on Cloud and DevSecOps

Jamie Holcombe talks about developing a "software factory" that draws on DevSecOps methodology and GitLab to modernize software development within his agency.

Jamie Holcombe, USPTO's Chief Information Officer. Credit: USPTO

Even the federal agency that puts its stamp on new inventions has had to update its infrastructure with DevSecOps for a cloud-based world.

The United States Patent and Trademark Office dealt with a software system outage in 2018 that disrupted the patent application filing process and exposed a need for more effective data recovery. The downing of the Patent Application Locating and Monitoring system, which tracks progress in the patent process, along with other legacy software applications, helped prompt changes at the federal agency.

Jamie Holcombe, the USPTO's Chief Information Officer, spoke with InformationWeek about using modern resources such as GitLab, along with DevSecOps methods, to speed delivery of IT updates, shift to the cloud, and improve resiliency.

What was happening at the USPTO that drove the changes you made? What was the pain point?

What was the burning platform? Why did you have to jump off into the sea because everything around was just going to hell? Well, what had happened was, before I even arrived at the agency, the Patent and Trademark Office experienced an 11-day outage where over 9,000 employees could not work.

Why couldn't they work? They were using old, outdated applications (which is okay; everybody uses old apps), but they didn't practice how to come back and be resilient. When they took out those backup files to lay over the top and bring the database up, they didn't know how, and they failed not once but twice.

On the third attempt, they were able to lay it over the top and got back continuity of operations. The only problem was it was 9 petabytes of information, and they had failed to back up the indices of the database. So, it took over eight days to rebuild the indices. That's a lesson learned.
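The interview doesn't say which database the USPTO was running, but the failure mode generalizes: a data-only backup that omits index definitions forces a full index rebuild on restore, and rebuild time grows with data volume. Here is a minimal sketch using Python's built-in sqlite3 module; the table, index, and scale are illustrative, not USPTO's actual schema:

```python
# Minimal illustration of restoring data without its indices,
# using Python's standard-library sqlite3 module.
import sqlite3

# Build a small "production" database with an indexed table.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE applications (id INTEGER PRIMARY KEY, status TEXT)")
src.execute("CREATE INDEX idx_status ON applications (status)")
src.executemany(
    "INSERT INTO applications (id, status) VALUES (?, ?)",
    [(i, "pending" if i % 2 else "granted") for i in range(10_000)],
)

# A data-only "backup" that forgets the index DDL: the failure mode
# described in the interview.
rows = src.execute("SELECT id, status FROM applications").fetchall()

# Restore: the data comes back, but queries that relied on the index
# fall back to full table scans until the index is recreated.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE applications (id INTEGER PRIMARY KEY, status TEXT)")
dst.executemany("INSERT INTO applications VALUES (?, ?)", rows)

# The rebuild step that cannot be skipped when indices weren't backed up.
dst.execute("CREATE INDEX idx_status ON applications (status)")
```

At toy scale the rebuild is instant; Holcombe's point is that at 9 petabytes, that final index-rebuild step stretched into more than eight days of downtime.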

That was the burning platform. Then there were a lot of complaints by the business that IT was slow and "You can never deliver on the new stuff."

Read the rest of this interview on InformationWeek.

About the Author(s)

Joao-Pierre S. Ruth

Senior writer, InformationWeek

Joao-Pierre S. Ruth has spent his career immersed in business and technology journalism. He first covered local industries in New Jersey and later became the New York editor for Xconomy, where he delved into the city's tech startup community. He also freelanced for such outlets as TheStreet, Investopedia and Street Fight. Joao-Pierre earned his bachelor's in English from Rutgers University. 

InformationWeek

InformationWeek, a sister site to ITPro Today, is a trusted source for CIOs and IT leaders seeking comprehensive and authentic coverage of the constantly evolving world of technology and its impact on business. Our experienced and ethical journalists conduct in-depth examinations of crucial issues and the impact of global events on IT operations and strategies, helping forward-thinking executives stay at the forefront of their industries. InformationWeek also provides a platform for enterprise IT leaders and leading tech companies to share their insights and experiences through exclusive interviews, opinion pieces, and events, offering firsthand accounts of strategies, trends, and innovations.
