1 code implementation • 12 Feb 2024 • Yifei Ming, Haoyue Bai, Julian Katz-Samuels, Yixuan Li
Out-of-distribution (OOD) generalization is critical for machine learning models deployed in the real world.
1 code implementation • 7 Feb 2022 • Julian Katz-Samuels, Julia Nakhleh, Robert Nowak, Yixuan Li
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
1 code implementation • 3 Feb 2022 • Jifan Zhang, Julian Katz-Samuels, Robert Nowak
Active learning is a label-efficient approach to training highly effective models by interactively selecting only small subsets of unlabeled data for labeling and training.
no code implementations • NeurIPS 2021 • Julian Katz-Samuels, Blake Mason, Kevin Jamieson, Rob Nowak
We begin our investigation with the observation that agnostic algorithms \emph{cannot} be minimax-optimal in the realizable setting.
no code implementations • 10 Sep 2021 • Yinglun Zhu, Julian Katz-Samuels, Robert Nowak
The core of our algorithms is a new optimization problem based on experimental design that leverages the geometry of the action set to identify a near-optimal hypothesis class.
no code implementations • 13 May 2021 • Julian Katz-Samuels, Jifan Zhang, Lalit Jain, Kevin Jamieson
We consider active learning for binary classification in the agnostic pool-based setting.
no code implementations • 12 May 2021 • Romain Camilleri, Julian Katz-Samuels, Kevin Jamieson
We also leverage our new approach in a new algorithm for kernelized bandits, obtaining state-of-the-art results for regret minimization and pure exploration.
no code implementations • 1 Nov 2020 • Andrew Wagenmaker, Julian Katz-Samuels, Kevin Jamieson
In this paper we propose a novel experimental-design-based algorithm to minimize regret in online stochastic linear and combinatorial bandits.
1 code implementation • 30 Jun 2020 • Cody Coleman, Edward Chou, Julian Katz-Samuels, Sean Culatana, Peter Bailis, Alexander C. Berg, Robert Nowak, Roshan Sumbaly, Matei Zaharia, I. Zeki Yalniz
Many active learning and search approaches are intractable for large-scale industrial settings with billions of unlabeled examples.
no code implementations • NeurIPS 2020 • Julian Katz-Samuels, Lalit Jain, Zohar Karnin, Kevin Jamieson
This paper proposes near-optimal algorithms for the pure-exploration linear bandit problem in the fixed confidence and fixed budget settings.
no code implementations • 15 Jun 2019 • Julian Katz-Samuels, Kevin Jamieson
We consider two multi-armed bandit problems with $n$ arms: (i) given an $\epsilon > 0$, identify an arm with mean that is within $\epsilon$ of the largest mean and (ii) given a threshold $\mu_0$ and integer $k$, identify $k$ arms with means larger than $\mu_0$.
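The two identification objectives can be made concrete with a small sketch. This is a hypothetical illustration only: it evaluates the objectives against *known* arm means, whereas in the bandit setting the means are unknown and must be estimated from noisy samples; the function names are invented for this example.

```python
import numpy as np

def epsilon_good_arms(means, epsilon):
    """Problem (i): arms whose mean is within epsilon of the largest mean."""
    means = np.asarray(means, dtype=float)
    return np.flatnonzero(means >= means.max() - epsilon)

def arms_above_threshold(means, mu0, k):
    """Problem (ii): up to k arms with means strictly larger than mu0,
    returned in order of decreasing mean."""
    means = np.asarray(means, dtype=float)
    good = np.flatnonzero(means > mu0)
    return good[np.argsort(-means[good])][:k]

means = [0.9, 0.85, 0.5, 0.2]
print(epsilon_good_arms(means, 0.1))        # arms 0 and 1 are 0.1-good
print(arms_above_threshold(means, 0.4, 2))  # two arms with mean > 0.4
```

The sampling question the paper studies is how many pulls are needed before estimated means certify answers like these with high confidence.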
no code implementations • ICML 2018 • Julian Katz-Samuels, Clay Scott
We introduce the feasible arm identification problem, a pure exploration multi-armed bandit problem where the agent is given a set of $D$-dimensional arms and a polyhedron $P = \{x : A x \leq b \} \subset R^D$.
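The feasibility condition at the heart of this problem is a simple linear-inequality check. Below is a minimal sketch (with an invented helper name) of testing whether a point lies in $P = \{x : Ax \leq b\}$; in the bandit setting the arm's mean vector $x$ is unknown and must be estimated from samples, which is what makes the problem nontrivial.

```python
import numpy as np

def is_feasible(x, A, b):
    """Return True when x lies in the polyhedron {x : Ax <= b}."""
    return bool(np.all(A @ x <= b))

# Example polyhedron: the unit box [0, 1]^2 written as Ax <= b.
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

print(is_feasible(np.array([0.5, 0.5]), A, b))  # True: inside the box
print(is_feasible(np.array([1.5, 0.5]), A, b))  # False: violates x_1 <= 1
```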
no code implementations • 30 Sep 2017 • Julian Katz-Samuels, Gilles Blanchard, Clayton Scott
Many machine learning problems can be characterized by mutual contamination models.
no code implementations • 24 May 2017 • Julian Katz-Samuels, Clayton Scott
We consider the task of collaborative preference completion: given a pool of items, a pool of users and a partially observed item-user rating matrix, the goal is to recover the \emph{personalized ranking} of each user over all of the items.
no code implementations • 19 Feb 2016 • Julian Katz-Samuels, Clayton Scott
We examine the decontamination problem in two mutual contamination models that describe popular machine learning tasks: recovering the base distributions up to a permutation in a mixed membership model, and recovering the base distributions exactly in a partial label model for classification.