  - split: video_50
    path: "labels/video_50/label-eval-0.tar"
---

# AVSRCocktail: Audio-Visual Speech Recognition for Cocktail Party Scenarios

**Official implementation** of "[Cocktail-Party Audio-Visual Speech Recognition](https://arxiv.org/abs/2506.02178)" (Interspeech 2025).

A robust audio-visual speech recognition system designed for multi-speaker environments and noisy cocktail party scenarios. The model combines lip reading and audio processing to achieve superior performance in challenging acoustic conditions with background noise and speaker interference.

## Getting Started

### Sections
1. <a href="#install">Installation</a>
2. <a href="#evaluation">Evaluation</a>
3. <a href="#training">Training</a>

## <a id="install">1. Installation</a>

Follow these steps:

```sh
# Clone the baseline code repo
git clone https://github.com/nguyenvulebinh/AVSRCocktail.git
cd AVSRCocktail

# Create a Conda environment
conda create --name AVSRCocktail python=3.11
conda activate AVSRCocktail

# Install FFmpeg if it's not already installed
conda install ffmpeg

# Install dependencies
pip install -r requirements.txt
```

## <a id="evaluation">2. Evaluation</a>

The evaluation script `script/evaluation.py` evaluates the AVSR Cocktail model on multiple datasets under various noise conditions and interference scenarios.

### Quick Start

**Basic evaluation on the LRS2 test set:**
```sh
python script/evaluation.py --model_type avsr_cocktail --dataset_name lrs2 --set_id test
```

**Evaluation on the AVCocktail dataset:**
```sh
python script/evaluation.py --model_type avsr_cocktail --dataset_name AVCocktail --set_id video_0
```

### Supported Datasets

#### 1. LRS2 Dataset
Evaluate on the LRS2 dataset under various noise conditions. The set names follow a regular pattern (see the naming sketch after this list):

**Available test sets:**
- `test`: Clean test set
- `test_snr_n5_interferer_1`: SNR -5 dB with 1 interferer
- `test_snr_n5_interferer_2`: SNR -5 dB with 2 interferers
- `test_snr_0_interferer_1`: SNR 0 dB with 1 interferer
- `test_snr_0_interferer_2`: SNR 0 dB with 2 interferers
- `test_snr_5_interferer_1`: SNR 5 dB with 1 interferer
- `test_snr_5_interferer_2`: SNR 5 dB with 2 interferers
- `test_snr_10_interferer_1`: SNR 10 dB with 1 interferer
- `test_snr_10_interferer_2`: SNR 10 dB with 2 interferers
- `*`: Evaluate on all test sets and report the average WER
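
Because the names are regular, sweeps over conditions are easy to script. The sketch below merely derives the `set_id` strings from the list above; it is not taken from `script/evaluation.py`:

```python
# Build the LRS2 set_id strings; "n5" encodes an SNR of -5 dB.
snr_levels = ["n5", "0", "5", "10"]
interferer_counts = [1, 2]

set_ids = ["test"] + [
    f"test_snr_{snr}_interferer_{k}"
    for snr in snr_levels
    for k in interferer_counts
]
print(set_ids)
# ['test', 'test_snr_n5_interferer_1', 'test_snr_n5_interferer_2', ...,
#  'test_snr_10_interferer_2']
```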

**Example:**
```sh
# Evaluate on the clean test set
python script/evaluation.py --model_type avsr_cocktail --dataset_name lrs2 --set_id test

# Evaluate under noisy conditions
python script/evaluation.py --model_type avsr_cocktail --dataset_name lrs2 --set_id test_snr_0_interferer_1

# Evaluate on all conditions
python script/evaluation.py --model_type avsr_cocktail --dataset_name lrs2 --set_id "*"
```

#### 2. AVCocktail Dataset
Evaluate on the AVCocktail cocktail party dataset:

**Available video sets:**
- `video_0` to `video_50`: Individual video sessions
- `*`: Evaluate on all video sessions and report the average WER

The evaluation reports WER for three chunking strategies:
- `asd_chunk`: Chunks based on Active Speaker Detection
- `fixed_chunk`: Fixed-duration chunks (see the sketch after this list)
- `gold_chunk`: Ground-truth optimal chunks
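
As a rough illustration of the simplest strategy, fixed-duration chunking can be sketched as splitting a recording into consecutive windows capped at `--max_length` seconds. This is an illustrative assumption about the idea, not the repository's actual `fixed_chunk` implementation:

```python
def fixed_chunks(duration_s: float, max_length_s: float = 15.0):
    """Split `duration_s` seconds of media into consecutive
    (start, end) windows of at most `max_length_s` seconds."""
    chunks, start = [], 0.0
    while start < duration_s:
        end = min(start + max_length_s, duration_s)
        chunks.append((start, end))
        start = end
    return chunks

print(fixed_chunks(40.0))  # [(0.0, 15.0), (15.0, 30.0), (30.0, 40.0)]
```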

**Example:**
```sh
# Evaluate on a specific video
python script/evaluation.py --model_type avsr_cocktail --dataset_name AVCocktail --set_id video_0

# Evaluate on all videos
python script/evaluation.py --model_type avsr_cocktail --dataset_name AVCocktail --set_id "*"
```

### Configuration Options

#### Model Configuration
- `--model_type`: Model architecture to use (use `avsr_cocktail` for the AVSR Cocktail model)
- `--checkpoint_path`: Path to a custom model checkpoint (default: the pretrained `nguyenvulebinh/AVSRCocktail`)
- `--cache_dir`: Directory to cache downloaded models (default: `./model-bin`)

#### Processing Parameters
- `--max_length`: Maximum length of video segments in seconds (default: 15)
- `--beam_size`: Beam size for beam-search decoding (default: 3)

#### Dataset Parameters
- `--dataset_name`: Dataset to evaluate on (`lrs2` or `AVCocktail`)
- `--set_id`: Specific subset to evaluate (see the dataset-specific options above)

#### Output Options
- `--verbose`: Enable verbose output during processing
- `--output_dir_name`: Name of the output directory for session processing (default: `output`)

### Advanced Usage

**Custom model checkpoint:**
```sh
python script/evaluation.py \
    --model_type avsr_cocktail \
    --dataset_name lrs2 \
    --set_id test \
    --checkpoint_path ./model-bin/my_custom_model \
    --cache_dir ./custom_cache
```

**Optimized inference settings:**
```sh
python script/evaluation.py \
    --model_type avsr_cocktail \
    --dataset_name AVCocktail \
    --set_id "*" \
    --max_length 10 \
    --beam_size 5 \
    --verbose
```

### Output Format

The evaluation script outputs Word Error Rate (WER) scores.

**LRS2 evaluation output:**
```
WER test: 0.1234
```

**AVCocktail evaluation output:**
```
WER video_0 asd_chunk: 0.1234
WER video_0 fixed_chunk: 0.1456
WER video_0 gold_chunk: 0.1123
```

When using `--set_id "*"`, the script reports both the individual and the average WER scores across all test conditions.
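
To reproduce this kind of aggregation offline, per-condition WER can be computed with a library such as `jiwer` (an illustrative choice, not a stated dependency of this repo) and then averaged:

```python
from jiwer import wer  # pip install jiwer

# Hypothetical references and hypotheses per condition; in practice these
# would be the ground-truth labels and the model's decoded transcripts.
results = {
    "test_snr_0_interferer_1": (["the cat sat on the mat"],
                                ["the cat sat on the mat"]),
    "test_snr_0_interferer_2": (["hello world again"],
                                ["hello word again"]),
}

scores = {set_id: wer(refs, hyps) for set_id, (refs, hyps) in results.items()}
for set_id, score in scores.items():
    print(f"WER {set_id}: {score:.4f}")
print(f"Average WER: {sum(scores.values()) / len(scores):.4f}")
```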

## <a id="training">3. Training</a>

### Model Architecture

- **Encoder**: Pre-trained AV-HuBERT large model (`nguyenvulebinh/avhubert_encoder_large_noise_pt_noise_ft_433h`)
- **Decoder**: Transformer decoder with joint CTC/attention training (see the sketch after this list)
- **Tokenization**: SentencePiece unigram tokenizer with a 5000-unit vocabulary
- **Input**: Video frames cropped to a 96 × 96 mouth region of interest; audio sampled at 16 kHz
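
For reference, the joint CTC/attention objective interpolates the two losses with a weight λ, as popularized by ESPnet-style recipes. The sketch below shows the standard formulation; the weight of 0.3 and the tensor layouts are illustrative assumptions, not values read from this repo:

```python
import torch.nn.functional as F

def joint_ctc_attention_loss(ctc_log_probs, targets, input_lengths,
                             target_lengths, att_logits, att_targets,
                             ctc_weight: float = 0.3):
    """L = lambda * L_CTC + (1 - lambda) * L_attention."""
    # ctc_log_probs: (T, N, C) log-probabilities over tokens for CTC
    ctc = F.ctc_loss(ctc_log_probs, targets, input_lengths, target_lengths)
    # att_logits: (N, L, C) decoder logits; cross_entropy expects (N, C, L)
    att = F.cross_entropy(att_logits.transpose(1, 2), att_targets)
    return ctc_weight * ctc + (1 - ctc_weight) * att
```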

### Training Data

The model is trained on multiple large-scale datasets that have been preprocessed and are ready for the training pipeline. All datasets are hosted on Hugging Face at [nguyenvulebinh/AVYT](https://huggingface.co/datasets/nguyenvulebinh/AVYT) and include:

| Dataset | Size |
|---------|------|
| **LRS2** | ~145k samples |
| **VoxCeleb2** | ~540k samples |
| **AVYT** | ~717k samples |
| **AVYT-mix** | ~483k samples |

Together these amount to roughly 1.9M samples; details can be found in the [Cocktail-Party Audio-Visual Speech Recognition](https://arxiv.org/abs/2506.02178) paper.

**Dataset Features:**
- **Preprocessed**: All audio-visual data is preprocessed and ready for direct input to the training pipeline
- **Multi-modal**: Each sample contains synchronized audio and video (mouth crop) data
- **Labeled**: Text transcriptions are included for supervised learning

The training pipeline handles dataset loading automatically, reading data in [streaming mode](https://huggingface.co/docs/datasets/stream). For faster and more stable training, however, it is recommended to download all datasets before running the training pipeline; storing them all requires approximately 1.46 TB.
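
A minimal sketch of both access patterns, using the standard `datasets` and `huggingface_hub` APIs (the split name and local directory below are illustrative assumptions; check the dataset card for the actual configuration):

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Option 1: stream samples on the fly, without the ~1.46 TB download.
stream = load_dataset("nguyenvulebinh/AVYT", streaming=True, split="train")
first_sample = next(iter(stream))

# Option 2: download the full dataset locally first (recommended for training).
local_path = snapshot_download(
    repo_id="nguyenvulebinh/AVYT",
    repo_type="dataset",
    local_dir="./data-bin/AVYT",
)
```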

### Training Process

The training script is available at `script/train.py`.

**Multi-GPU Distributed Training:**
```sh
# Set environment variables for distributed training
export NCCL_DEBUG=WARN
export OMP_NUM_THREADS=1
export CUDA_VISIBLE_DEVICES=0,1,2,3

# Run with torchrun for multi-GPU training (using default parameters)
torchrun --nproc_per_node 4 script/train.py

# Run with custom parameters
torchrun --nproc_per_node 4 script/train.py \
    --streaming_dataset \
    --batch_size 6 \
    --max_steps 400000 \
    --gradient_accumulation_steps 2 \
    --save_steps 2000 \
    --eval_steps 2000 \
    --learning_rate 1e-4 \
    --warmup_steps 4000 \
    --checkpoint_name avsr_avhubert_ctcattn \
    --model_name_or_path ./model-bin/avsr_cocktail \
    --output_dir ./model-bin
```

**Model Output:**
The trained model is saved to `model-bin/{checkpoint_name}/` (default: `model-bin/avsr_avhubert_ctcattn/`).

#### Configuration Options

You can customize training with the following command-line arguments:

**Dataset Options:**
- `--streaming_dataset`: Use streaming mode for datasets (default: False)

**Training Parameters:**
- `--batch_size`: Batch size per device (default: 6)
- `--max_steps`: Total training steps (default: 400000)
- `--learning_rate`: Initial learning rate (default: 1e-4)
- `--warmup_steps`: Learning-rate warmup steps (default: 4000)
- `--gradient_accumulation_steps`: Gradient accumulation steps (default: 2)

**Checkpoint and Logging:**
- `--save_steps`: Checkpoint saving frequency (default: 2000)
- `--eval_steps`: Evaluation frequency (default: 2000)
- `--log_interval`: Logging frequency (default: 25)
- `--checkpoint_name`: Name of the checkpoint directory (default: "avsr_avhubert_ctcattn")
- `--resume_from_checkpoint`: Resume training from the last checkpoint (default: False)

**Model and Output:**
- `--model_name_or_path`: Path to the pretrained model (default: "./model-bin/avsr_cocktail")
- `--output_dir`: Output directory for checkpoints (default: "./model-bin")
- `--report_to`: Logging backend, "wandb" or "none" (default: "none")

**Hardware Requirements:**
- **GPU Memory**: The default training configuration is designed to fit within **24 GB of GPU memory**
- **Training Time**: With 2x NVIDIA Titan RTX 24GB GPUs, training takes approximately **56 hours per epoch**
- **Convergence**: **200,000 steps** with a total batch size of 24 (2 GPUs × batch size 6 × 2 gradient-accumulation steps) is typically sufficient for model convergence

## Acknowledgement

This repository is built using the [auto_avsr](https://github.com/mpc001/auto_avsr), [espnet](https://github.com/espnet/espnet), and [avhubert](https://github.com/facebookresearch/av_hubert) repositories.

## Contact

nguyenvulebinh@gmail.com