no code implementations • 31 Oct 2023 • Lorenzo Luzi, Helen Jenne, Ryan Murray, Carlos Ortiz Marrero
The rapid advancement of Generative Adversarial Networks (GANs) necessitates robust evaluation of these models.
no code implementations • 4 Jul 2023 • Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models.
no code implementations • 20 Nov 2022 • Yehuda Dar, Lorenzo Luzi, Richard G. Baraniuk
We study how the generalization behavior of transfer learning is affected by the dataset size in the source and target tasks, the number of transferred layers that are kept frozen in the target DNN training, and the similarity between the source and target tasks.
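As a concrete illustration of the frozen-layer setup, here is a minimal PyTorch sketch. It assumes a torchvision ResNet-18 pretrained on the source task; the number of frozen blocks and the 10-class target head are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of frozen-layer transfer learning. Assumes a
# source-pretrained torchvision ResNet-18; `n_frozen` and the target
# head are illustrative choices.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the first `n_frozen` child blocks transferred from the source task.
n_frozen = 6  # hypothetical: sweep this to study its effect on generalization
for i, child in enumerate(model.children()):
    if i < n_frozen:
        for p in child.parameters():
            p.requires_grad = False

# Replace the classification head for a 10-class target task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the unfrozen parameters are updated during target training.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-2
)
```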
1 code implementation • 1 Nov 2022 • Lorenzo Luzi, Daniel LeJeune, Ali Siahkoohi, Sina Alemohammad, Vishwanath Saragadam, Hossein Babaei, Naiming Liu, Zichao Wang, Richard G. Baraniuk
We study the interpolation capabilities of implicit neural representations (INRs) of images.
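A minimal sketch of an INR, assuming a SIREN-style coordinate MLP: the network maps pixel coordinates to pixel values and is fit to a single image. Width, depth, and the sine activation are illustrative choices rather than the exact architecture studied in the paper.

```python
# Minimal implicit neural representation (INR) sketch: a coordinate MLP
# mapping (x, y) to RGB, fit to one image by regression on pixel values.
import torch
import torch.nn as nn

class INR(nn.Module):
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers, in_dim = [], 2  # input: (x, y) coordinate
        for _ in range(depth):
            layers.append(nn.Linear(in_dim, hidden))
            in_dim = hidden
        self.hidden = nn.ModuleList(layers)
        self.out = nn.Linear(hidden, 3)  # output: RGB value at that coordinate

    def forward(self, coords):
        h = coords
        for layer in self.hidden:
            h = torch.sin(layer(h))  # sinusoidal activations, as in SIREN
        return self.out(h)

# Fit one image: coordinates in [-1, 1]^2, one (coord, pixel) pair per pixel.
H = W = 32
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
pixels = torch.rand(H * W, 3)  # stand-in for a real image
inr = INR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-4)
for _ in range(200):
    opt.zero_grad()
    loss = ((inr(coords) - pixels) ** 2).mean()
    loss.backward()
    opt.step()
```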
no code implementations • 21 Oct 2022 • Lorenzo Luzi, Paul M Mayer, Josue Casco-Rodriguez, Ali Siahkoohi, Richard G. Baraniuk
As implied by its name, Boomerang local sampling involves adding noise to an input image, moving it closer to the latent space, and then mapping it back to the image manifold through a partial reverse diffusion process.
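A sketch of this procedure, assuming the Hugging Face `diffusers` API and the public `google/ddpm-cifar10-32` checkpoint; the boomerang depth `t_boom` is an illustrative parameter controlling how far the image is pushed toward the latent space.

```python
# Boomerang-style local sampling sketch: partially noise an image along
# the forward diffusion, then run the reverse process back from that
# intermediate timestep.
import torch
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")

image = torch.randn(1, 3, 32, 32)  # stand-in for a real input image in [-1, 1]
t_boom = 250  # how far toward the latent space to push the image

# Forward: add noise up to timestep t_boom (partial forward diffusion).
noise = torch.randn_like(image)
noisy = scheduler.add_noise(image, noise, torch.tensor([t_boom]))

# Reverse: denoise from t_boom back to 0 (partial reverse diffusion).
sample = noisy
for t in scheduler.timesteps[scheduler.timesteps <= t_boom]:
    with torch.no_grad():
        eps = model(sample, t).sample
    sample = scheduler.step(eps, t, sample).prev_sample
```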
1 code implementation • 11 Oct 2021 • Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk
To further improve interpretability at the level of individual layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network with a single neural tangent kernel.
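A minimal sketch of the per-layer kernel idea: compute an empirical NTK Gram matrix from each layer's parameter gradients and mix them with per-layer weights. The tiny MLP, data, and uniform weights are illustrative stand-ins, not the paper's architecture.

```python
# Per-layer empirical NTKs: K_l(x, x') = <grad_{theta_l} f(x), grad_{theta_l} f(x')>,
# one kernel per layer, combined with weights w_l.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
X = torch.randn(5, 8)  # five example inputs

layers = [m for m in net if isinstance(m, nn.Linear)]
# jacs[l][i]: flattened gradient of f(x_i) w.r.t. layer l's parameters
jacs = [[] for _ in layers]
for x in X:
    net.zero_grad()
    net(x).backward()
    for l, layer in enumerate(layers):
        g = torch.cat([p.grad.flatten() for p in layer.parameters()])
        jacs[l].append(g.clone())

# One n x n empirical NTK Gram matrix per layer.
K_layers = [torch.stack(j) @ torch.stack(j).T for j in jacs]

# Combined kernel: a weighted sum over layers (weights could be learned).
w = torch.ones(len(K_layers)) / len(K_layers)
K = sum(wi * Ki for wi, Ki in zip(w, K_layers))
```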
no code implementations • 8 Oct 2021 • Lorenzo Luzi, Carlos Ortiz Marrero, Nile Wynar, Richard G. Baraniuk, Michael J. Henry
We define a performance measure, which we call WaM, on two sets of images by using Inception-v3 (or another classifier) to featurize the images, estimating a GMM for each set, and comparing the two GMMs with the restricted $2$-Wasserstein distance.
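A sketch of this pipeline, with the Inception-v3 featurization elided and random features as stand-ins. The restricted $2$-Wasserstein distance between GMMs is computed here as discrete optimal transport between the mixture weights with the closed-form Gaussian $W_2^2$ as the cost; `scikit-learn` and the `POT` library are tooling assumptions, not the authors' implementation.

```python
# WaM-style GMM comparison sketch: fit a Gaussian mixture to each feature
# set, then compute the restricted 2-Wasserstein distance between mixtures.
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.linalg import sqrtm
from sklearn.mixture import GaussianMixture

def w2_gaussian_sq(m1, C1, m2, C2):
    """Squared 2-Wasserstein distance between two Gaussians (closed form)."""
    s1 = sqrtm(C1).real
    cross = sqrtm(s1 @ C2 @ s1).real
    return float(np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2 * cross))

feats_a = np.random.randn(500, 8)        # stand-ins for Inception features
feats_b = np.random.randn(500, 8) + 0.5
gmm_a = GaussianMixture(n_components=3).fit(feats_a)
gmm_b = GaussianMixture(n_components=3).fit(feats_b)

# Cost matrix: pairwise squared W2 between mixture components.
cost = np.array([
    [w2_gaussian_sq(ma, Ca, mb, Cb)
     for mb, Cb in zip(gmm_b.means_, gmm_b.covariances_)]
    for ma, Ca in zip(gmm_a.means_, gmm_a.covariances_)
])

# Restricted W2: optimal transport between the mixtures' weights.
mw2 = np.sqrt(ot.emd2(gmm_a.weights_, gmm_b.weights_, cost))
```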
no code implementations • 7 Jun 2021 • Lorenzo Luzi, Yehuda Dar, Richard Baraniuk
We show that overparameterization can improve generalization performance and accelerate the training process.
1 code implementation • 27 Oct 2020 • Sina Alemohammad, Hossein Babaei, Randall Balestriero, Matt Y. Cheung, Ahmed Imtiaz Humayun, Daniel LeJeune, Naiming Liu, Lorenzo Luzi, Jasper Tan, Zichao Wang, Richard G. Baraniuk
High dimensionality poses many challenges to the use of data, from visualization and interpretation to prediction and storage for historical preservation.
no code implementations • 25 Jun 2020 • Lorenzo Luzi, Randall Balestriero, Richard G. Baraniuk
They can be represented in two ways: with an ensemble of networks or with a single network with a truncated latent space.
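A toy sketch contrasting the two representations, with illustrative generator architectures: sample from a randomly chosen ensemble member, or sample a single generator with the latent distribution truncated to a restricted region.

```python
# Two ways to represent the same generative model: an ensemble of
# generators, or one generator with a truncated latent space.
import torch
import torch.nn as nn

def make_generator(latent_dim=16, out_dim=2):
    return nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

# (1) Ensemble: one generator per component; pick a member at random.
ensemble = [make_generator() for _ in range(3)]
z = torch.randn(1, 16)
g = ensemble[torch.randint(len(ensemble), (1,)).item()]
x_ensemble = g(z)

# (2) Single network, truncated latent space: only sample z from a
# restricted region (here, a ball of radius r) of the latent space.
single = make_generator()
r = 0.5
z = torch.randn(1, 16)
z = z * min(1.0, r / z.norm().item())  # truncate to the ball of radius r
x_truncated = single(z)
```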
no code implementations • ICML 2020 • Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk
We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.
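A small numerical sketch of this interpolation effect, with illustrative dimensions: once the fitted subspace dimension reaches the number of training examples, a principal-subspace fit reconstructs the training data exactly.

```python
# Overparameterized linear subspace fitting: when the fitted subspace
# dimension k reaches the number of training examples n, the training
# data is interpolated exactly (zero reconstruction error).
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 10                      # ambient dimension, training sample count
X = rng.standard_normal((n, d))    # rows are training examples

_, _, Vt = np.linalg.svd(X, full_matrices=True)
for k in (5, 10, 20):              # dimension of the fitted subspace
    U = Vt[:k].T                   # d x k orthonormal basis (top singular vectors)
    X_hat = X @ U @ U.T            # project training examples onto the subspace
    mse = np.mean((X - X_hat) ** 2)
    print(f"k={k:2d}  training reconstruction MSE = {mse:.2e}")
# Once k >= rank(X) = n = 10, the reconstruction error is (numerically) zero.
```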
no code implementations • 25 Sep 2019 • Lorenzo Luzi, Randall Balestriero, Richard Baraniuk
We define a goodness-of-fit measure for generative networks that captures how well the network can generate the training data, a necessary condition for learning the true data distribution.
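As an illustrative stand-in (not the measure defined in the paper), one simple way to probe how well a generator reproduces its training data is latent recovery: optimize min_z ||G(z) - x||^2 for each training point and report the residual.

```python
# Illustrative stand-in, NOT the paper's measure: latent recovery error
# as a crude probe of whether a generator can reproduce training points.
# The generator and data here are toy placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
for p in G.parameters():
    p.requires_grad_(False)  # optimize only the latent code, not G

train_data = torch.randn(16, 2)  # stand-in for real training examples

residuals = []
for x in train_data:
    z = torch.zeros(8, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-1)
    for _ in range(100):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).sum()  # ||G(z) - x||^2
        loss.backward()
        opt.step()
    residuals.append(loss.item())

print("mean recovery residual:", sum(residuals) / len(residuals))
```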