Scaling computing operations to enable AI applications can be a complicated task, but it's one that San Francisco-based Anyscale is taking on.
Anyscale announced on Dec. 8 that it raised $100 million in a Series C funding round to advance its work on the open-source Ray distributed computing framework. Ray got its start in 2018 at the RISELab computing laboratory at the University of California, Berkeley. Anyscale was founded a year later to commercialize the technology and provide commercial support.
Alongside the new funding, the company announced the general availability of the Anyscale platform, a managed service for Ray.
Robert Nishihara, CEO of Anyscale and one of the creators of Ray, told ITPro Today that over the last two years he has seen tremendous growth and momentum for both Ray and the Anyscale offering.
"Ray is increasingly the framework of choice for building the new generation of ML [machine learning] platforms," Nishihara said. "Given the growth in the community, the success of early customers and rapid revenue growth, we felt now was the right time to put our foot on the gas and prepare for rapid scaling in 2022."
Anyscale Helps Scale MLOps Across Multicloud Operations
According to Nishihara, organizations including Uber, OpenAI, Amazon and McKinsey have been using the open-source Ray framework for a range of use cases, from petabyte-scale data lake management to large-scale deep learning. Those open-source users deploy and manage Ray clusters on their own.
While the Ray project has many capabilities, Anyscale found several areas where it could improve the user experience, which is why it launched a managed platform.
"Ray is so simple compared to the alternatives for scaling AI workloads, but we want to do better," Nishihara said. "We believe a managed service is the way to do that because it allows us to take on a lot of the operational burden from the user."
Anyscale also wanted to make Ray accessible to teams that don't have the compute or operations resources to scale AI applications on their own. The Anyscale platform provides managed infrastructure run and operated by the same people who created Ray, Nishihara said. It also includes built-in observability dashboards and tooling for cost tracking and control.
The Intersection of Anyscale Platform and MLOps
A primary use case for Anyscale is to help scale up computing operations to enable machine learning workloads. That said, Nishihara emphasized that the Anyscale platform and MLOps are related but are not the same.
There are compute providers, and there are MLOps tools, he said. MLOps is a collection of tools and processes that enables data scientists to collaborate easily with DevOps teams to deploy their models in production. MLOps tools seek to track all of the metrics and metadata about what you're trying to do, and they might tie into or connect to the business. However, they often don't provide the compute themselves; they simply integrate with compute providers, which are typically the cloud platforms.
In contrast, Nishihara said that Ray and Anyscale are focused on the compute layer. They both aim to be the cloud-agnostic, general-purpose compute platform on which training happens, simulations are run and models are deployed, he added.
"This means that our users use Anyscale in conjunction with an MLOps platform," Nishihara said. "Anyscale aims to be the compute layer above the cloud providers, enabling users to scale and productionize their AI applications."