
Red Hat Strengthens AI and Hybrid Cloud Push with Latest Software

At Red Hat Summit, the company showcased edge-focused updates to its OpenShift AI software, ‘policy-as-code’ tech to put AI guardrails in place, and several other improvements.

Red Hat has introduced new advancements across its software family that make it easier for enterprises to build, deploy, and manage AI applications across a hybrid cloud, including integrated generative AI capabilities.

At its Red Hat Summit in Denver, the company today (May 7) announced a new version of Red Hat OpenShift AI, a software platform for training, monitoring, and managing the lifecycle of AI/ML models and applications in a containerized Kubernetes environment on-premises and in the cloud.

New features include the ability to deploy AI models to remote, edge locations and “enhanced model serving,” which allows enterprises to deploy predictive and generative AI apps into production, the company said.

Red Hat also announced plans to expand its Lightspeed generative AI capabilities to its OpenShift Kubernetes container platform and Red Hat Enterprise Linux OS. To help address the skills gap as hybrid cloud adoption grows, OpenShift Lightspeed will assist IT administrators with tasks such as autoscaling clusters in the public cloud.

To further address the complexities of managing AI across the hybrid cloud, Red Hat also introduced “policy-as-code” as part of its Red Hat Ansible automation software, which automates IT management tasks.

Instead of building processes manually, which is time-consuming and can result in human error, enterprises can use code to enforce policies across their infrastructure to ensure they meet governance, risk, and compliance (GRC) requirements, says IDC Research vice president Jevin Jensen.

“It’s really needed for AI. It’s all about putting guardrails in place,” he told Data Center Knowledge.
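Red Hat has not yet detailed the exact syntax of the Ansible policy-as-code feature, which is still headed for tech preview, but the underlying idea can be sketched with an ordinary Ansible playbook that encodes a compliance rule and enforces it across an inventory. The rule and module choices below are illustrative assumptions, not Red Hat’s policy-as-code format:

    ---
    # Illustrative sketch only: enforcing a simple hardening rule as code.
    # Red Hat's forthcoming policy-as-code feature may use a different syntax.
    - name: Enforce SSH hardening policy
      hosts: all
      become: true
      tasks:
        - name: Disallow root login over SSH
          ansible.builtin.lineinfile:
            path: /etc/ssh/sshd_config
            regexp: '^#?PermitRootLogin'
            line: 'PermitRootLogin no'
          notify: Restart sshd
      handlers:
        - name: Restart sshd
          ansible.builtin.service:
            name: sshd
            state: restarted

Because the rule lives in version-controlled code rather than a manual runbook, the same policy can be applied and audited consistently across on-premises and cloud systems.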

The updates were well received by industry analysts, who said Red Hat's enhanced OpenShift AI and other software improvements hold significant strategic implications for businesses.

Red Hat’s AI Vision

In all, Red Hat made 28 announcements at Red Hat Summit today, and more than half of them (16 to be exact) mention AI in the headline. The big picture is that the pace of AI innovation is accelerating, which presents plenty of opportunities for enterprises – but it also creates risk. Challenges include the cost of running AI models in the cloud, which is why a hybrid cloud approach is important, said Steven Huels, vice president and general manager of Red Hat’s AI business unit, in a media briefing this week.

Customers want to perform inferencing across the cloud, on-premises, and the edge, Huels said. Red Hat’s latest developments give customers the flexibility and confidence to build and deploy their AI workloads across the hybrid cloud while mitigating risks, he added.  

“They may train on a data center, but then they want to deploy across multiple platforms. So, at this point, AI has become the ultimate hybrid workload, and customers are designing with that in mind,” Huels said. “This really plays to Red Hat’s hybrid cloud strategy. We’ve always been that enterprise-trusted source for these enterprise data center systems across multiple platforms.”

Analysts’ Response

Analysts say the new version of Red Hat’s OpenShift AI and other software improvements across the company’s portfolio are significant and can drive further adoption.

“It better positions Red Hat as a neutral, cloud-native platform player with a very focused eye on helping companies leverage AI – and generative AI in particular – in a very secure, (highly) performing, and easily governable manner,” says Brad Shimmin, Omdia’s chief analyst of AI and data analytics.

Red Hat competes against VMware, but open source options, IBM Watson, and companies with their own AI platforms, such as DataRobot, Dataiku, and Databricks, could also be seen as competitors, added Ritu Jyoti, group vice president of IDC’s worldwide AI and automation research.

Red Hat is also in “coopetition” with cloud vendors such as Amazon Web Services, Microsoft Azure, and Google Cloud. They compete because the cloud providers have their own AI and Kubernetes platforms. But they are also partners because they make Red Hat solutions available in the cloud.

Red Hat’s software solutions are attractive to organizations that want flexibility, choice, and the ability to operate in a hybrid cloud environment, Jyoti said.

Enterprises likely to gravitate toward and adopt OpenShift AI include those in the early stages of their AI journey, open source loyalists, and companies in highly regulated industries like financial services and healthcare, she said.

According to Jyoti, Red Hat’s large partner ecosystem and its support for a variety of GPUs and accelerator chips are also major selling points.

“Red Hat’s value proposition is flexibility and portability,” she said. “Their customers that I’ve spoken to say they love Red Hat OpenShift AI because it gives them the abilities that they want.”

New Red Hat OpenShift AI Features

Red Hat OpenShift AI version 2.9 is available this week. The new version’s support for edge deployments of AI models is important because of latency and performance requirements, Jyoti said. It is critical for some industries like manufacturing and healthcare to run their AI workloads in edge locations where their data resides, she said.

Another new feature – enhanced model serving – allows organizations to optimize the deployment of predictive and generative AI workloads through KServe, vLLM, and the Text Generation Inference Server (TGIS), the company said.

“If you have 100,000s of people inferencing on an app, you know your model is going to cost a ton of money, so you need to optimize that process. These serving frameworks will give customers the most bang for the buck in terms of their hardware,” Shimmin said.
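In KServe, a model is typically exposed by applying an InferenceService resource to the cluster, and a serving runtime (such as one based on vLLM or TGIS) then handles the actual inference. The minimal example below is a hypothetical sketch: the model name, storage location, and runtime name are placeholders, and the runtime names bundled with OpenShift AI may differ:

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: llm-demo                        # placeholder name
    spec:
      predictor:
        model:
          modelFormat:
            name: huggingface               # assumes a runtime for this format is installed
          runtime: vllm-runtime             # placeholder ServingRuntime name
          storageUri: s3://models/llm-demo  # placeholder model location
          resources:
            limits:
              nvidia.com/gpu: "1"

Tuning the runtime and the resource limits in a resource like this is where the "bang for the buck" Shimmin describes is won or lost, since it determines how efficiently each GPU is used at serving time.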

Other new OpenShift AI features include distributing workloads with Ray, model monitoring visualizations that track how AI models are performing, and new accelerator profiles that enable users to pick the appropriate accelerator chips for their specific workloads, the company said.

Red Hat Lightspeed

Red Hat said it will add its Lightspeed generative AI capabilities to OpenShift and Red Hat Enterprise Linux. The company first introduced Lightspeed last year in Ansible, where it helps novice users automate tasks by writing requests in plain English.

For example, an IT administrator who knows how to do a task on-premises but not on AWS can write a simple text query in Ansible Lightspeed to generate the YAML code and find the correct Ansible Playbooks to do that task on AWS.
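Ansible Lightspeed’s actual output depends on the prompt and the user’s environment, but the result is ordinary playbook YAML. As a rough, hypothetical illustration of what a generated AWS task might look like (the instance name, region, and AMI ID below are placeholders, not real Lightspeed output):

    ---
    # Hypothetical illustration only, not actual Ansible Lightspeed output.
    # A prompt such as "launch a t3.micro EC2 instance named web-server-01"
    # would be turned into ordinary playbook YAML along these lines.
    - name: Provision a web server on AWS
      hosts: localhost
      tasks:
        - name: Create an EC2 instance
          amazon.aws.ec2_instance:
            name: web-server-01
            instance_type: t3.micro
            image_id: ami-0123456789abcdef0   # placeholder AMI ID
            region: us-east-1
            state: running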

Similarly, OpenShift Lightspeed will be an AI-based virtual assistant to help IT administrators run, manage, troubleshoot, and fine-tune their OpenShift Kubernetes environment, Jensen said. For example, it can suggest fixes for a problem or suggest best practices, he said.

“It can compare a well-performing cluster versus a cluster that’s not performing as well and give you suggestions to improve performance,” Jensen said.

As for Lightspeed on Red Hat Enterprise Linux, the generative AI service could alert an IT staffer that a Red Hat Security Advisory with fixes was just released. The IT staffer can use simple commands to schedule patching during the next production maintenance window, the company said.

OpenShift Lightspeed will be available in late 2024. Red Hat Enterprise Linux Lightspeed is currently in the planning phase, the company said.

In other Red Hat Summit news, the company announced:

  • ‘Image mode’ for Red Hat Enterprise Linux. Image mode, available now as a tech preview, delivers the OS as a container image and enables the OS to be managed entirely through container-based tooling and concepts. DevOps teams can plug the OS into their CI/CD and GitOps workflows. Image mode also simplifies security management and allows solutions providers to more easily build, test, and distribute Red Hat Enterprise Linux-based apps.
  • Podman AI Lab. The Podman AI Lab, which is an extension for Podman Desktop, allows developers to build, test, and run generative AI apps in containers on their local workstation.
  • Policy as code for Ansible Automation Platform. This feature will be available as a tech preview in the coming months.

Red Hat’s announcements today include partnerships with Pure Storage, Run:ai, Elastic, and Stability AI, and the availability of Red Hat OpenShift on Oracle Cloud Infrastructure.

Red Hat also announced that OpenShift AI will support AMD’s Instinct GPUs, Intel’s Gaudi AI accelerators, and Intel Xeon processors, and will integrate with Nvidia’s NIM microservices, which speed the development of AI-powered enterprise applications.
