Decoder
3547 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find Decoder models and implementations.
Most implemented papers
Listen, Attend and Spell
Unlike traditional DNN-HMM models, this model learns all the components of a speech recognizer jointly.
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks.
End-to-End Object Detection with Transformers
We present a new method that views object detection as a direct set prediction problem.
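The core of the set-prediction view is a one-to-one bipartite matching between predicted and ground-truth objects before the loss is computed. The sketch below illustrates that idea with a brute-force matcher over a hypothetical 3x3 cost matrix (DETR itself uses the Hungarian algorithm and a combined classification/box cost); all names and numbers here are illustrative.

```python
from itertools import permutations

def min_cost_matching(cost):
    """Brute-force bipartite matching sketch: find the assignment of
    predictions to targets that minimizes the total matching cost.
    cost[i][j] = cost of matching prediction i to target j."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_perm, best_cost

# Hypothetical pairwise costs (e.g. classification + box distance terms)
cost = [
    [0.9, 0.1, 0.8],
    [0.2, 0.7, 0.9],
    [0.6, 0.8, 0.1],
]
match, total = min_cost_matching(cost)
# optimal: prediction 0 -> target 1, 1 -> 0, 2 -> 2 (total cost 0.4)
```

Once each prediction is matched to a unique target, the per-pair losses can be summed without any duplicate-suppression post-processing such as NMS.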
Adversarial Autoencoders
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution.
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder.
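Connecting the decoder to the encoder through attention means each decoder step scores every encoder position and forms a weighted context vector. The numpy sketch below uses plain dot-product attention as a simplification (GNMT's actual attention network is learned); shapes and values are illustrative.

```python
import numpy as np

def attention(decoder_state, encoder_outputs):
    """Dot-product attention sketch: score each encoder position against
    the current decoder state, normalize with a softmax over time, and
    return the weighted context vector."""
    scores = encoder_outputs @ decoder_state          # (T,)
    scores = scores - scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax
    context = weights @ encoder_outputs               # (d,)
    return context, weights

rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))   # 5 source positions, dim 8
dec = rng.standard_normal(8)        # current decoder state
context, weights = attention(dec, enc)
```

Because only the bottom decoder layer depends on the encoder, the remaining decoder layers can run without waiting on cross-attention, which is the parallelism point the abstract makes.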
Modeling Relational Data with Graph Convolutional Networks
We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification.
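An R-GCN layer differs from a plain GCN layer in that each relation type aggregates neighbor features with its own weight matrix, plus a self-loop term. A minimal numpy sketch, with toy adjacencies and random weights standing in for a trained model:

```python
import numpy as np

def rgcn_layer(H, adjs, Ws, W_self):
    """One R-GCN layer sketch: aggregate neighbor features separately per
    relation (each relation has its own weight matrix), add a self-loop
    transform, then apply ReLU. adjs[r] is the (normalized) adjacency
    matrix for relation r."""
    out = H @ W_self
    for A_r, W_r in zip(adjs, Ws):
        out = out + A_r @ H @ W_r
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(1)
N, d_in, d_out, R = 4, 3, 5, 2
H = rng.standard_normal((N, d_in))                         # node features
adjs = [np.eye(N)[rng.permutation(N)] for _ in range(R)]   # toy adjacencies
Ws = [rng.standard_normal((d_in, d_out)) for _ in range(R)]
W_self = rng.standard_normal((d_in, d_out))
H2 = rgcn_layer(H, adjs, Ws, W_self)                       # shape (4, 5)
```

For entity classification, stacking such layers and reading class logits off the final node representations is the stand-alone use the paper describes.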
Fast-SCNN: Fast Semantic Segmentation Network
The encoder-decoder framework is state-of-the-art for offline semantic image segmentation.
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders.
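The lightweight decoder amounts to linear projections and upsampling rather than heavy convolutional stages. The sketch below is a simplification of that idea with nearest-neighbor upsampling and random weights; the pyramid shapes and class count are hypothetical.

```python
import numpy as np

def mlp_decoder(features, W_proj, W_fuse):
    """All-MLP decoder sketch: project each multi-scale feature map to a
    common width, upsample everything to the finest resolution,
    concatenate, and fuse with a single linear layer."""
    target = features[0].shape[0]                  # finest (square) map size
    ups = []
    for F, W in zip(features, W_proj):
        P = F @ W                                  # per-scale linear projection
        s = target // P.shape[0]
        P = np.repeat(np.repeat(P, s, axis=0), s, axis=1)  # nearest upsample
        ups.append(P)
    return np.concatenate(ups, axis=-1) @ W_fuse   # fuse to class logits

rng = np.random.default_rng(0)
# Hypothetical pyramid: three square feature maps at 16x16, 8x8, 4x4
features = [rng.standard_normal((s, c * 0 + s, c))  # (s, s, c)
            for s, c in [(16, 32), (8, 64), (4, 128)]]
W_proj = [rng.standard_normal((c, 16)) for c in (32, 64, 128)]
W_fuse = rng.standard_normal((3 * 16, 2))          # 2 hypothetical classes
logits = mlp_decoder(features, W_proj, W_fuse)     # shape (16, 16, 2)
```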
Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making.
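Bayesian SegNet obtains its uncertainty estimates from Monte Carlo dropout: dropout stays active at test time and the spread across stochastic forward passes approximates model uncertainty. A minimal single-layer sketch of that procedure (the shapes and dropout placement here are illustrative, not the paper's architecture):

```python
import numpy as np

def mc_dropout_predict(x, W, p=0.5, n_samples=100, seed=0):
    """Monte Carlo dropout sketch: run several stochastic forward passes
    with dropout enabled; the mean is the prediction and the variance
    serves as a rough model-uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W.shape[0]) > p         # drop input units
        preds.append((x * mask / (1 - p)) @ W)    # inverted dropout scaling
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

rng = np.random.default_rng(42)
x = rng.standard_normal(16)
W = rng.standard_normal((16, 3))
mean, var = mc_dropout_predict(x, W)
```

In the segmentation setting the same averaging is done per pixel, so the variance map highlights regions where the model is unsure.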
Longformer: The Long-Document Transformer
To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.
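The linear scaling comes from restricting each token's attention to a local window (plus a few global tokens in the full model, omitted here). The sketch below builds a sliding-window attention mask and shows that the number of attended pairs is bounded by n·(2w+1), i.e. linear in sequence length rather than quadratic.

```python
import numpy as np

def sliding_window_mask(n, w):
    """Local attention mask sketch: token i may attend only to tokens
    within +/- w positions, so attended pairs grow linearly in n."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= w

mask = sliding_window_mask(n=1000, w=8)
# at most n * (2w + 1) = 17000 attended pairs, versus n**2 = 1_000_000
assert mask.sum() <= 1000 * (2 * 8 + 1)
```

Doubling the sequence length doubles the attention cost under this mask, which is what makes documents of thousands of tokens tractable.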