
The paper addresses the "SFT plateau," a phenomenon where Supervised Fine-Tuning (SFT) performance of Large Language Models (LLMs) stops improving even as the dataset size increases [11, 22]. The authors use a specific corpus of chart-to-code data to demonstrate this limitation and propose Multimodal Structured Reinforcement Learning (MSRL) as a solution [11, 22].

2. Methodology

Supervised Fine-Tuning (SFT) Phase:

Baseline Model: Qwen2.5-VL-7B-Instruct [11, 22].

Dataset: The model is tested on subsets ranging from 200k to 2.8 million samples.

Finding: Increasing data from 2M to 2.8M results in no further performance gains, confirming the plateau [22].

Multimodal Structured Reinforcement Learning (MSRL):

To break the plateau, the authors implement a two-stage Reinforcement Learning (RL) process [11].

Reward Data: Uses 11k pairs with a balance of textual and visual rewards to ensure the generated code matches the visual intent [11].
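To make that balance concrete, here is a minimal Python sketch of such a mixed reward, assuming the generated code is a self-contained matplotlib script and taking the judge's visual scorer as a plain callable. The string-similarity textual reward, the rendering helper, and the 0.5/0.5 weighting are illustrative assumptions, not the paper's exact formulation.

```python
import difflib
import io
from typing import Callable

def textual_reward(generated_code: str, reference_code: str) -> float:
    # Proxy textual reward: normalized string similarity to the reference code.
    return difflib.SequenceMatcher(None, generated_code, reference_code).ratio()

def render_chart(code: str) -> bytes:
    # Execute the generated matplotlib script headlessly and capture the figure as PNG.
    import matplotlib
    matplotlib.use("Agg")
    import matplotlib.pyplot as plt
    exec(code, {})                      # NOTE: sandbox untrusted code in real training
    buf = io.BytesIO()
    plt.gcf().savefig(buf, format="png")
    plt.close("all")
    return buf.getvalue()

def combined_reward(code: str, ref_code: str, ref_png: bytes,
                    judge: Callable[[bytes, bytes], float],
                    w_text: float = 0.5, w_visual: float = 0.5) -> float:
    # Blend the textual and visual signals into a single scalar reward.
    try:
        visual = judge(render_chart(code), ref_png)
    except Exception:
        visual = 0.0                    # code that fails to render earns no visual reward
    return w_text * textual_reward(code, ref_code) + w_visual * visual
```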

Judge Model: Qwen2.5-VL-72B-Instruct is used as the judge model for calculating visual rewards during training [11].
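The judge call itself might look as follows, assuming Qwen2.5-VL-72B-Instruct is served behind an OpenAI-compatible endpoint (for example via vLLM); the endpoint URL, prompt wording, and score parsing are illustrative assumptions, not the paper's setup. The resulting `judge_score` can be passed as the `judge` argument of `combined_reward` above.

```python
import base64
from openai import OpenAI

# Assumed local OpenAI-compatible server hosting the judge model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def as_data_url(png: bytes) -> str:
    # Inline a PNG as a base64 data URL for the chat-completions image input.
    return "data:image/png;base64," + base64.b64encode(png).decode()

def judge_score(rendered_png: bytes, reference_png: bytes) -> float:
    # Ask the judge VLM how closely the rendered chart matches the reference,
    # then parse its reply as a score in [0, 1].
    reply = client.chat.completions.create(
        model="Qwen/Qwen2.5-VL-72B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Rate from 0 to 1 how closely the first chart "
                         "reproduces the second. Answer with the number only."},
                {"type": "image_url", "image_url": {"url": as_data_url(rendered_png)}},
                {"type": "image_url", "image_url": {"url": as_data_url(reference_png)}},
            ],
        }],
        temperature=0.0,
    )
    try:
        return min(max(float(reply.choices[0].message.content.strip()), 0.0), 1.0)
    except ValueError:
        return 0.0  # unparseable judge output earns no visual reward
```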

Compute: The SFT stage requires 60 hours of training on 16 H800 GPUs. The RL stages take an additional 34 hours on 24 H800 GPUs [11].

4. Experimental Results