Accepted papers

A versatile and efficient approach to summarize speech into utterance-level representations
Towards Zero and Few-shot Knowledge-seeking Turn Detection in Task-orientated Dialogue Systems BEST PAPER AWARD
Consistent Accelerated Inference via Confident Adaptive Transformers
Communication-Efficient Federated Learning for Neural Machine Translation
Dynamic-TinyBERT: Further Enhance the Inference Efficiency of TinyBERT by Dynamic Sequence Length
CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models BEST POSTER AWARD
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators
Towards Textual Out-of-Domain Detection without any In-Domain Labels
Continual Few-Shot Named Entity Recognition via Data-Free Distillation
Efficient Variational Graph Autoencoders for Unsupervised Cross-domain Prerequisite Chains
Unsupervised Domain Adaptation with Adapter
Efficient Strategies of Few-Shot On-Device Voice Cloning
Adversarial Conversational Shaping for Intelligent Agents
Adaptive Fine-tuning for Vision and Language Pre-trained Models
Towards Continual Entity Learning in Language Models for Conversational Agents
Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning
A Short Study on Compressing Decoder-Based Language Models
Towards efficient end-to-end speech recognition with biologically-inspired neural networks
Compressing Pre-trained Language Models using Progressive Low Rank Decomposition
User-in-the-Loop Named Entity Recognition by Counterfactual Learning
Pruning Pretrained Encoders with a Multitask Objective
Undivided Attention: Are Intermediate Layers Necessary for BERT?
Prune Once for All: Sparse Pre-Trained Language Models
Kronecker Decomposition for GPT Compression
Evaluating robustness of You Only Hear Once (YOHO) Algorithm on noisy audios in the VOICe Dataset