Collections including paper arxiv:2404.09656

- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 89
- Aligning Teacher with Student Preferences for Tailored Training Data Generation
  Paper • 2406.19227 • Published • 25
- Self-Play Preference Optimization for Language Model Alignment
  Paper • 2405.00675 • Published • 28
- CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues
  Paper • 2404.03820 • Published • 25

- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 126
- Multi-Head Mixture-of-Experts
  Paper • 2404.15045 • Published • 60
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 259
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 89

- UltraFeedback: Boosting Language Models with High-quality Feedback
  Paper • 2310.01377 • Published • 5
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 89
- Natural Language Reinforcement Learning
  Paper • 2411.14251 • Published • 31
- Group Robust Preference Optimization in Reward-free RLHF
  Paper • 2405.20304 • Published • 1

- mDPO: Conditional Preference Optimization for Multimodal Large Language Models
  Paper • 2406.11839 • Published • 39
- Pandora: Towards General World Model with Natural Language Actions and Video States
  Paper • 2406.09455 • Published • 16
- WPO: Enhancing RLHF with Weighted Preference Optimization
  Paper • 2406.11827 • Published • 17
- In-Context Editing: Learning Knowledge from Self-Induced Distributions
  Paper • 2406.11194 • Published • 20

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 93
- VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
  Paper • 2404.10667 • Published • 24
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 26
- DoRA: Weight-Decomposed Low-Rank Adaptation
  Paper • 2402.09353 • Published • 30

- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 41
- Advancing LLM Reasoning Generalists with Preference Trees
  Paper • 2404.02078 • Published • 46
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 89
- mDPO: Conditional Preference Optimization for Multimodal Large Language Models
  Paper • 2406.11839 • Published • 39

- Transforming and Combining Rewards for Aligning Large Language Models
  Paper • 2402.00742 • Published • 12
- UltraFeedback: Boosting Language Models with High-quality Feedback
  Paper • 2310.01377 • Published • 5
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 89
- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
  Paper • 2405.01535 • Published • 123