What Is Razee, and Why IBM Open Sourced It

The continuous delivery software that's been doing the heavy lifting on IBM's global Kubernetes platform is now open source.


It's no secret that IBM wants to establish itself as a major hybrid cloud player. That was the major motivator behind its $34 billion Red Hat acquisition, which now appears to be in the final stages of the closing process.

Its cloud aspirations also seem to be behind a move the company made a few weeks ago when it open sourced Razee, a continuous delivery tool for managing applications in Kubernetes-based cluster deployments. It's probably no coincidence that the software seems to be ready-made for Red Hat to fold into its hybrid cloud offerings.

Kubernetes, of course, is all about the cloud, sitting at the center of the cloud-native software ecosystem. And Razee isn't some not-yet-ready-for-prime-time software, untried and untested in production. IBM has been using it to drive Kubernetes on its own cloud for a while now.

"The number one question I get is that this must be new, in its early days, and not hardened," Briana Frank, IBM's director of product management, told Data Center Knowledge about early reactions to the open source announcement. "What's so wonderful is that we've been using it for a year and a half, and it is hardened, and it is ready."

What Is Razee?

"Hardened" might be an understatement. In the same interview, Jason McGee, VP and CTO of IBM Cloud Platform, told us that Razee was originally developed to meet the demanding workloads of IBM's public cloud, and it's been performing trouble-free in production.

"Our team runs a managed Kubernetes service on IBM cloud, and underneath that service we're managing tens of thousands of Kubernetes clusters on behalf of our users," he said. "As you can imagine, when you're running tens of thousands of anything globally in over 35 data centers around the world, and you're trying to manage that environment and keep it up to date, that's a challenge. And like any kind of mature agile team, we're doing 50 to 100 updates a week into that environment, so we have a high rate of change."

McGee told us that IBM originally tried a traditional approach to pushing code updates, using the open source automation server Jenkins with build automation to build, test, and push changes into production.

"What we found was when you have a diverse set of micro-services (so, lots of small teams who are working together), and you have lots of locations that you're deploying to, the traditional approaches started to break down," he explained. "They were very complex; they were very slow to deploy; the rules about when you made updates to different environments were hard. It was hard for individual teams to update just their one service."

Razee has two major components. The first, RazeeDash, is a user interface that lets users see all of the clusters, resources, and micro-services being managed, including where they're running and when they were last updated. The second, Kapitan, does the heavy lifting as the core machinery for delivering updates.

"Kapitan is the core machinery for doing deliveries," McGee said. "There's a sub-component that runs in your cluster that's checking for updates and pulling and doing updates. All of that's managed by Kapitan."
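What McGee describes is essentially a pull-based reconcile loop: an agent inside each cluster periodically asks the delivery service what it should be running and applies any difference itself, rather than waiting for a central server to push changes. A minimal sketch of that idea (the function and parameter names here are invented for illustration and are not Razee's actual API):

```python
# Illustrative sketch of a pull-based update agent, as McGee describes
# Kapitan's in-cluster sub-component. Names are hypothetical, not Razee's API.

def reconcile_once(get_desired, get_current, apply_update):
    """Compare desired vs. running state and pull the update if they differ.

    Returns True if an update was applied, False if already up to date.
    """
    desired = get_desired()     # ask the delivery service what we should run
    current = get_current()     # inspect what the cluster is running now
    if desired != current:
        apply_update(desired)   # the cluster updates itself
        return True
    return False

# Demo: a cluster on v1 discovers v2 is desired and updates itself.
state = {"running": "v1"}
changed = reconcile_once(
    get_desired=lambda: "v2",
    get_current=lambda: state["running"],
    apply_update=lambda v: state.update(running=v),
)
print(changed, state["running"])  # True v2
```

In a real agent this check would run on a timer inside every cluster, which is what lets tens of thousands of clusters stay current without a central push pipeline touching each one.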

One of the things McGee finds powerful about Razee is its rule-based updates, which allow complicated rollouts to be automated with fine-grained control.

"If you're managing fifteen thousand of something, you can't manage any resource directly," he said and offered a hypothetical example. "When I roll out a new version of something, maybe I want to roll it out in Australia first, where I have a smaller user base, and I want to roll it to Europe second, and then to the US third. Being able to have those kinds of rules is hard in a traditional delivery system. With Razee, that's just rules. All the clusters that match Australia should get this version of the code. Those clusters will automatically detect that and pull down the updates and update themselves."
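The staged rollout McGee describes amounts to matching clusters against labels and assigning each match a code version. A rough sketch of the idea, assuming label-based rules checked in order (this is illustrative only, not Razee's actual rule syntax):

```python
# Illustrative sketch of label-based rollout rules, per McGee's
# Australia-first example. Not actual Razee syntax or API.

def select_version(cluster_labels, rules, default):
    """Return the version assigned by the first rule whose labels all match."""
    for required_labels, version in rules:
        if all(cluster_labels.get(k) == v for k, v in required_labels.items()):
            return version
    return default

# Staged rollout: clusters labeled as Australian get v2 first;
# everything else stays on v1 until the rules are widened.
rules = [
    ({"region": "au"}, "v2"),
]

print(select_version({"region": "au", "env": "prod"}, rules, default="v1"))  # v2
print(select_version({"region": "us", "env": "prod"}, rules, default="v1"))  # v1
```

Widening the rollout to Europe, and then to the US, is then just a rule change; the matching clusters detect the new assignment and pull the update themselves.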

Why Open Source?

McGee said there are a couple of key reasons IBM decided to offer Razee as open source, under the Apache 2.0 license.

"One of the core tenets of IBM cloud is we build our cloud on open technologies," he said, describing the first reason. "Anywhere where a technology or an API would touch how someone builds an application, we want that to be built as open source. This obviously would touch how people do their continuous delivery pipelines and how they do deployment into the cloud, so we thought there was value there. Secondly, I think that as an industry we're collaborating on open source to build the next generation cloud-native platform."

He pointed out that the cloud-native sphere has largely been driven by open source technologies, and named Kubernetes, Knative (for running serverless applications on Kubernetes), and Istio, which was launched by IBM and Google.

Frank said that IBM's experience with Istio is a good example of what the company hopes for Razee.

"A few years ago we actually open sourced a project called Amalgam8, which was basically the early days of Istio," she said. "It allowed us to put our opinion out there on how to run and manage microservices. Then we started talking with some of the engineers at Lyft and some of the engineers at Google, and that's how Istio was born. I think that the objective here was similar. Get our opinion out there and hopefully others will join and start to collaborate and contribute to make this system better."

So, is this part of a joint IBM/Red Hat play that the companies hope will bear fruit post-merger? Right now, nobody's saying, which isn't surprising: they can't offer many specifics about how they'll eventually work and play together until the last regulatory hurdles are cleared. When asked, McGee offered a noncommittal "I think it would be useful for anyone running Kubernetes, including people who are running RHEL."


About the Author(s)

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.

