Anyscale: The Infinite Laptop for AI
When enterprises invest time and money in artificial intelligence (AI) and machine learning (ML), it's not about buzzwords or building out cool tech; it's about finding new business value. IDG's 2021 State of the CIO report states that CIOs are tasked with creating new revenue streams and uplifting existing ones. It's what public shareholders expect, and it's a main topic in boardroom conversations. Management teams in turn look to data scientists, data engineers, platform engineers, and the many other teams that touch the AI/ML lifecycle to implement new AI/ML practices, and they expect quick results. Yet while AI/ML continues to move through the Hype Cycle, there remains a disconnect between the business results expected from operationalizing AI/ML and the tooling and steps required to actually do so.
Meanwhile, some deeply technical realities are unfolding behind the scenes, largely out of view of those management teams:
- Data, and especially training data, is the oil that fuels the AI industry. Unlike oil in the physical world, the amount of data is growing exponentially, and so is the complexity of ML algorithms. The pace of innovation is exciting, and in some cases suffocating.
- More data and complex algorithms require larger machines with more compute and memory. Per research conducted by OpenAI in 2018, deep learning compute demand grew 35x every 18 months from 2012-18, and memory demand grew 20x every 18 months from 2016-18 (the release of GPT further increased this trend).
- The AI/ML pipeline continues to encompass many heterogeneous workloads. Exploration (training) and inference (production) phases have different characteristics. For instance, the exploration phase is batch-oriented, using huge amounts of data, while the inference phase is real-time on small amounts of data (e.g., one record or one image). The exploration phase is further split into pre-processing and ML training that each use different libraries and may use different distributed systems.
It’s the perfect storm: executive-driven business needs with difficult timelines, AI-centric business applications, increasing complexity and heterogeneity, technical limitations, and inadequate tooling.
What developers need is a cloud “easy button”, where an individual developer can write code once on their laptop and scale it to the cloud seamlessly without any refactoring or modifications. A serverless experience, without the limitations imposed by cloud service providers (CSPs) at run-time.
The Solution: Ray
At Intel, we have long collaborated with academic centers of excellence across the globe, including UC Berkeley. This collaboration extends from Intel Labs to various business unit engineers. We seek not only to conduct state of the art research, but also to apply that research to real-world problems. Of course, at Intel Capital we work closely with our partners in Intel Labs and broader Intel, tracking the research projects and corresponding open source communities from the earliest of days.
As sponsors of UC Berkeley AMPLab and early contributors to the BDAS stack, we had a deep understanding of the benefits of the new data analytics stack, but we also understood its limitations.
Five years ago, towards the end of AMPLab and the birth of UC Berkeley’s RISELab, Assaf Araki met with Robert Nishihara and Philipp Moritz, then PhD students at UC Berkeley working under Professor Ion Stoica on a new open-source project called Ray. In its infancy, Ray was an experimental distributed extension of Python that made it easy to write ML applications that run on a cluster while providing the development and debugging experience of working on a single machine. Ray rapidly became the skeleton of the RISELab stack, developing into a universal computing substrate that scales the most compute-intensive ML workloads with the simplicity of a laptop-like development experience, all while leveraging a broad set of libraries and integrations built on Ray’s distributed execution framework. In fact, Intel’s IT team was one of the early adopters of Ray, leveraging it to scale Python workloads, hyperparameter selection techniques, and auto-modeling processes with minimal code modifications. The Intel Artificial Intelligence and Analytics team further offered Ray as part of AnalyticsZoo and BigDL and contributed to the open-source project by creating RayOnSpark and SparkOnRay.
Here at Intel Capital, Ray reminds us of "Intel Inside", a key interoperability point between hardware and applications. Ray offers one unified, simple platform to scale all of your AI and analytics libraries. With a robust and growing ecosystem of native and third-party libraries, developers can use Ray as a single distributed platform for all of the heterogeneous tasks we outlined above: data ingestion, hyperparameter tuning, model training, model serving and so on. It reduces complexity, accelerates time to deployment, increases model accuracy, and shortens retraining time.
The growth of the Ray open-source community has also been incredible, speaking to the power of the platform. With more than 18,000 GitHub Stars for Ray core and contributor growth continuing to rise rapidly, we view Ray’s adoption as best-in-class.
Anyscale: Our Partnership Since Formation
As we monitored the development of Ray and the associated libraries within RISELab, we quickly developed the thesis that Ray was the solution to the perfect storm of ML operationalization complexities. The path to success is distribution, and the path to simplicity in distribution is Ray and Anyscale.
Both Assaf and Nick Washburn recall driving over to UC Berkeley in early 2019 to meet with Professor Stoica to discuss the incorporation of Anyscale and how Intel Capital would love to start our partnership from Day 0, building upon the collaboration that Intel Labs and Intel technologists had already established with Robert, Philipp, Ion and the team over the years. We were thrilled to participate in Anyscale’s Series A round in 2019 and Series B round in 2020, and we are equally excited to continue our strong support in today’s announcement of a $100M Series C round.
Since our initial investment in 2019, we’ve loved working with the Anyscale team and watching the growth not only of the community around Ray and its associated libraries and integrations, but also of Anyscale’s managed offering, which was just announced as generally available. Anyscale’s product truly creates the infinite-laptop experience for a developer: write an ML application on a laptop and, with zero code changes or refactoring, scale it across the cloud in a serverless experience without run-time limitations. It provides multi-cloud portability, letting users move workloads between Anyscale-managed cloud offerings with the click of a button. It is the easy on-ramp to the superhighway of ML workloads, offering simple APIs for job automation, dependency management, and cluster management.
More exciting work from the Anyscale team to come. Onward!