The ease.ml Project

AutoML

If ease.ml/snoopy says “Yes”, we can finally fire up our ML training process! Given a dataset, ease.ml contains an AutoML component that outputs an ML model without any user intervention. Three aspects of ease.ml/AutoML make it special.


The first aspect is that it is holistic: we aim to fully automate the end-to-end process of building ML applications, spanning feature engineering, model selection, architecture search, hyper-parameter tuning, and post-processing. Our system builds upon much seminal work on each of these sub-processes, while also contributing our own techniques and algorithms.

The second aspect is that it is multi-tenant and scalable: unlike many existing ML platforms in which users are isolated, ease.ml/AutoML views AutoML as an endless process; whenever researchers publish a new model architecture, all users' applications should be rerun! This view introduces a scalability problem, which ease.ml/AutoML handles by having multiple users share multiple devices: the users with the highest potential to improve their accuracy should occupy proportionally more devices. This opens up an interesting research problem of resource sharing and allocation for AutoML workloads, for which we developed a principled algorithm; a toy version of the proportional-allocation idea is sketched below.
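To make the device-sharing idea concrete, here is a minimal, illustrative sketch of proportional allocation. The function name and the use of per-user improvement scores are our own assumptions for illustration, not the actual ease.ml scheduler (which combines multi-armed bandits with Bayesian optimization, as described in the publications below).

```python
import numpy as np

def allocate_devices(improvement_scores, num_devices):
    """Assign devices to users in proportion to each user's estimated
    potential for accuracy improvement (hypothetical helper).

    improvement_scores: one non-negative score per user, e.g. a GP-EI value.
    Returns an integer device count per user, summing to num_devices.
    """
    weights = np.asarray(improvement_scores, dtype=float)
    weights = weights / weights.sum()                   # normalize to shares
    counts = np.floor(weights * num_devices).astype(int)
    # Distribute devices lost to rounding, largest remainder first.
    remainders = weights * num_devices - counts
    for i in np.argsort(-remainders)[: num_devices - counts.sum()]:
        counts[i] += 1
    return counts

# Three users sharing four devices: the most promising user gets the most.
print(allocate_devices([0.6, 0.3, 0.1], 4))  # -> [3 1 0]
```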

The third aspect is rooted in our belief that many future ML workloads can be solved by simply applying a transfer-learning-based approach. With the increasing availability of pre-trained ML models (via repositories such as TensorFlow Hub and PyTorch Hub), this approach becomes increasingly promising, while efficiently managing such large pools of pre-trained models becomes an emerging problem in its own right; a minimal version of this idea is sketched below.
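The following sketch shows one simple way such a pool could be used: freeze each candidate backbone, train only a linear classifier on its features, and keep the best one. The pool contents, function names, and the linear-probe criterion are our assumptions for illustration; they are not ease.ml's actual model-management machinery.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Stand-in pool of pre-trained models; a real pool might come from
# TensorFlow Hub or PyTorch Hub and hold hundreds of candidates.
POOL = ["resnet18", "resnet34"]

def linear_probe_accuracy(name, x_train, y_train, x_val, y_val):
    """Score one frozen backbone by training only a linear classifier
    on its penultimate features (a cheap transfer-learning proxy).
    Inputs are image tensors of shape (N, 3, H, W) and label arrays."""
    net = getattr(models, name)(weights="DEFAULT")  # torchvision >= 0.13
    net.fc = torch.nn.Identity()                    # drop the ImageNet head
    net.eval()
    with torch.no_grad():
        f_train, f_val = net(x_train).numpy(), net(x_val).numpy()
    clf = LogisticRegression(max_iter=1000).fit(f_train, y_train)
    return clf.score(f_val, y_val)

def pick_backbone(x_train, y_train, x_val, y_val):
    """Return the pool member with the best linear-probe accuracy."""
    return max(POOL, key=lambda name: linear_probe_accuracy(
        name, x_train, y_train, x_val, y_val))
```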

Input: An augmented, machine-readable dataset.

Output: An endless stream of models trained by the AutoML system.

Publications

2018

Ease.ml: Towards multi-tenant resource sharing for machine learning workloads
T Li, J Zhong, J Liu, W Wu, C Zhang
[VLDB] Proceedings of the VLDB Endowment
Abstract

We present ease.ml, a declarative machine learning service platform. With ease.ml, a user defines the high-level schema of an ML application and submits the task via a Web interface. The system then deals with the rest, such as model selection and data movement. The ultimate question we hope to answer is: as a “service provider” that manages a shared cluster of machines running machine learning workloads, what is the resource sharing strategy that maximizes the global satisfaction of all our users?

This paper does not completely answer this general question, but focuses on solving the first technical challenge we were facing when trying to build ease.ml. We observe that resource sharing is a critical yet subtle issue in this multi-tenant scenario, as we have to balance between efficiency and fairness. We first formalize the problem that we call multi-tenant model selection, aiming to minimize the total regret of all users running automatic model selection tasks. We then develop a novel algorithm that combines multi-armed bandits with Bayesian optimization and prove a regret bound under the multi-tenant setting. Finally, we report our evaluation of ease.ml on synthetic data and on two services we are providing to our users, namely, image classification with deep neural networks and binary classification with Azure ML Studio. Our experimental evaluation results show that our proposed solution can be up to 9.8x faster in achieving the same global average accuracy for all users as the two popular heuristics used by our users before ease.ml, and 4.1x faster than state-of-the-art systems.
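To give a flavor of multi-tenant model selection, here is a toy scheduling step built on per-user GP-EI scores, in the spirit of (but not identical to) the algorithm in the paper; the data layout of `users` is our own assumption.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(gp, candidates, best_so_far):
    """Standard GP-EI acquisition over a user's untried models;
    `gp` is e.g. a fitted sklearn GaussianProcessRegressor."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)            # avoid division by zero
    z = (mu - best_so_far) / sigma
    return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

def next_job(users):
    """Pick the (user, candidate-model) pair with the globally largest EI.
    `users`: list of dicts with keys "gp" (fitted GP), "candidates"
    (feature vectors of untried models), "best" (best accuracy so far)."""
    best = None
    for u, user in enumerate(users):
        ei = expected_improvement(user["gp"], user["candidates"], user["best"])
        i = int(np.argmax(ei))
        if best is None or ei[i] > best[2]:
            best = (u, i, ei[i])
    return best[0], best[1]   # which user to serve, which model to train
```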

ease.ml in action: towards multi-tenant declarative learning services
B Karlaš, J Liu, W Wu, C Zhang
[VLDB Demo] Proceedings of the VLDB Endowment
Abstract

We demonstrate ease.ml, a multi-tenant machine learning service we host at ETH Zurich for various research groups. Unlike existing machine learning services, ease.ml presents a novel architecture that supports multi-tenant, cost-aware model selection that optimizes for minimizing the total regret of all users. Moreover, it provides a novel user interface that enables declarative machine learning at a higher level: users only need to specify the input/output schemata of their learning tasks, and ease.ml can handle the rest. In this demonstration, we present the design principles of ease.ml, highlight the implementation of its key components, and showcase how ease.ml can help ease machine learning tasks that often perplex even experienced users.

2019

AutoML from Service Provider’s Perspective: Multi-device, Multi-tenant Model Selection with GP-EI
C Yu, B Karlaš, J Zhong, C Zhang, J Liu
[AISTATS] 22nd International Conference on Artificial Intelligence and Statistics
Abstract

AutoML has become a popular service that is provided by most leading cloud service providers today. In this paper, we focus on the AutoML problem from the service provider’s perspective, motivated by the following practical consideration: when an AutoML service needs to serve multiple users with multiple devices at the same time, how can we allocate these devices to users in an efficient way? We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, used by systems such as Google Vizier. The technical contribution of this paper is the first multi-device, multi-tenant algorithm for GP-EI that is aware of multiple computation devices and multiple users sharing the same set of computation devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O((\mathbf{MIU}(T,K)+M)\frac{N^2}{M})$, where $\mathbf{MIU}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that our algorithm significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
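A crude way to see the multi-device extension: rank all (user, candidate) pairs by EI and launch the top M jobs. This greedy sketch is our simplification for intuition, not the paper's algorithm or its regret-optimal schedule.

```python
def assign_devices(ei_per_user, num_devices):
    """Greedy multi-device step: rank every (user, candidate) pair by its
    EI score and start the top `num_devices` jobs.
    ei_per_user: one list of EI scores per user, e.g. from
    expected_improvement() above. Returns (user, candidate) index pairs."""
    pairs = [(u, i, score)
             for u, scores in enumerate(ei_per_user)
             for i, score in enumerate(scores)]
    pairs.sort(key=lambda p: -p[2])
    return [(u, i) for u, i, _ in pairs[:num_devices]]

# Two users, three devices: user 0 holds two of the three highest-EI jobs.
print(assign_devices([[0.9, 0.7], [0.8, 0.1]], 3))  # [(0, 0), (1, 0), (0, 1)]
```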

2020

Efficient Automatic CASH via Rising Bandits
Y Li, J Jiang, J Gao, Y Shao, C Zhang, B Cui
[AAAI] Proceedings of the AAAI Conference on Artificial Intelligence
Abstract

The Combined Algorithm Selection and Hyperparameter optimization (CASH) problem is one of the most fundamental problems in Automatic Machine Learning (AutoML). Existing Bayesian optimization (BO) based solutions turn the CASH problem into a Hyperparameter Optimization (HPO) problem by combining the hyperparameters of all machine learning (ML) algorithms, and use BO methods to solve it. As a result, these methods suffer from low efficiency due to the huge hyperparameter space in CASH. To alleviate this issue, we propose an alternating optimization framework, in which the HPO problem for each ML algorithm and the algorithm selection problem are optimized alternately. In this framework, BO methods are used to solve the HPO problem for each ML algorithm separately, over a much smaller hyperparameter space. Furthermore, we introduce Rising Bandits, a CASH-oriented Multi-Armed Bandits (MAB) variant, to model algorithm selection in CASH. This framework can take advantage of both BO, for solving the HPO problem over a relatively small hyperparameter space, and MABs, for accelerating algorithm selection. Moreover, we further develop an efficient online algorithm to solve the Rising Bandits with provable theoretical guarantees. Extensive experiments on 30 OpenML datasets demonstrate the superiority of the proposed approach over competitive baselines.
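A minimal sketch of the alternating idea follows. The `.suggest()`/`.evaluate()` interface and the optimistic "best seen plus recent slope" score are hypothetical simplifications, standing in for the paper's per-arm BO and rising-bandit upper bound.

```python
def cash_alternating(algorithms, budget):
    """Alternate between algorithm selection (bandit over arms) and
    per-algorithm HPO. `algorithms` maps a name to an object with
    .suggest() -> config and .evaluate(config) -> validation accuracy
    (a hypothetical interface; each arm would run its own small BO loop)."""
    history = {name: [] for name in algorithms}
    for t in range(budget):
        def optimistic(name):
            h = history[name]
            if len(h) < 2:
                return float("inf")            # force initial exploration
            slope = max(0.0, h[-1] - h[-2])    # crude "rising" estimate
            return max(h) + slope * (budget - t)
        arm = max(algorithms, key=optimistic)
        config = algorithms[arm].suggest()     # HPO step inside one arm
        history[arm].append(algorithms[arm].evaluate(config))
    best = max(history, key=lambda n: max(history[n], default=0.0))
    return best, max(history[best])
```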

TextNAS: A neural architecture search space tailored for text representation
Y Wang, Y Yang, Y Chen, J Bai, C Zhang, G Su, X Kou, Y Tong, M Yang, L Zhou
[AAAI] Proceedings of the AAAI Conference on Artificial Intelligence
Abstract

Learning text representation is crucial for text classification and other language-related tasks. There is a diverse set of text representation networks in the literature, and how to find the optimal one is a non-trivial problem. Recently, the emerging Neural Architecture Search (NAS) techniques have demonstrated good potential to solve the problem. Nevertheless, most of the existing work on NAS focuses on the search algorithms and pays little attention to the search space. In this paper, we argue that the search space is also an important source of human prior knowledge for the success of NAS in different applications. Thus, we propose a novel search space tailored for text representation. Through automatic search, the discovered network architecture outperforms state-of-the-art models on various public datasets on text classification and natural language inference tasks. Furthermore, some of the design principles found in the automatic network agree well with human intuition.
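To make "search space" concrete, the toy below spells one out: each layer chooses an operation from a mixed pool of convolutional, recurrent, attention, and pooling ops. The op names and the uniform sampling are purely illustrative; TextNAS's actual space and search procedure are defined in the paper.

```python
import random

# A toy text-representation search space: every layer picks one op.
OPS = ["conv1d_k3", "conv1d_k5", "gru", "multi_head_attention",
       "max_pool_k3", "skip_connect"]

def sample_architecture(num_layers=4, seed=None):
    """Draw one candidate architecture as a list of op names.
    A NAS algorithm would search this space rather than sample blindly."""
    rng = random.Random(seed)
    return [rng.choice(OPS) for _ in range(num_layers)]

print(sample_architecture(seed=0))  # a list of four op names drawn from OPS
```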

2021

VolcanoML: speeding up end-to-end AutoML via scalable search space decomposition
Y Li, Y Shen, W Zhang, J Jiang, B Ding, Y Li, J Zhou, Z Yang, W Wu, C Zhang, B Cui
[VLDB] Proceedings of the VLDB Endowment
Abstract

End-to-end AutoML has attracted intensive interest from both academia and industry; it automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning. Existing AutoML systems, however, suffer from scalability issues when applied to application domains with large, high-dimensional search spaces. We present VolcanoML, a scalable and extensible framework that facilitates systematic exploration of large AutoML search spaces. VolcanoML introduces and implements basic building blocks that decompose a large search space into smaller ones, and allows users to utilize these building blocks to compose an execution plan for the AutoML problem at hand. VolcanoML further supports a Volcano-style execution model, akin to the one supported by modern database systems, to execute the plan constructed. Our evaluation demonstrates that, not only does VolcanoML raise the level of expressiveness for search space decomposition in AutoML, it also leads to actual findings of decomposition strategies that are significantly more efficient than the ones employed by state-of-the-art AutoML systems such as auto-sklearn.
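The database analogy can be made concrete with a tiny pull-based operator interface: each building block exposes a next_() call, and composite blocks decompose the space across their children. The class names and the round-robin policy below are our illustration, not VolcanoML's actual API.

```python
class SpaceOp:
    """Volcano-style operator over a search space: each call to next_()
    yields one (configuration, score) pair, like a database operator's
    next(). Illustrative sketch only."""
    def next_(self):
        raise NotImplementedError

class RandomLeaf(SpaceOp):
    """Leaf block: explore one sub-space by random sampling (a real
    system would plug Bayesian optimization in here instead)."""
    def __init__(self, sample, evaluate):
        self.sample, self.evaluate = sample, evaluate
    def next_(self):
        config = self.sample()
        return config, self.evaluate(config)

class Alternate(SpaceOp):
    """Composite block: decompose a large space into sub-spaces and
    pull from the child operators round-robin (one possible policy)."""
    def __init__(self, children):
        self.children, self.t = children, 0
    def next_(self):
        child = self.children[self.t % len(self.children)]
        self.t += 1
        return child.next_()
```

An execution plan is then just a tree of such operators, e.g. `Alternate([leaf_for_algorithm_a, Alternate([leaf_for_features, leaf_for_hyperparams])])`, and the driver repeatedly calls next_() on the root until the budget is exhausted.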