add RAFT #363
base: main
Conversation
@DearAJ please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Contributor License Agreement: This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
self._balance_batch(positive_batch, metrics=metrics)

# Pad batch for distributed training
positive_batch, pad_size = pad_dataproto_to_divisor(positive_batch, self.actor_rollout_wg.world_size)
Why are three paddings needed (here and at L514 and L577)?
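For context on what the padding does: `pad_dataproto_to_divisor` pads a batch so its length is divisible by the worker-group world size, and returns the pad size so the padding can be stripped after the distributed call. Here is a minimal sketch of that idea with a hypothetical stand-in (the real verl util operates on `DataProto` objects, not lists):

```python
# Sketch of the pad-to-divisor idea behind verl's pad_dataproto_to_divisor.
# Hypothetical stand-in for illustration; not verl's actual implementation.

def pad_to_divisor(batch, divisor):
    """Pad `batch` (a list of samples) so divisor | len(batch)."""
    remainder = len(batch) % divisor
    pad_size = 0 if remainder == 0 else divisor - remainder
    # Cycle through existing samples to fill the pad.
    padded = batch + [batch[i % len(batch)] for i in range(pad_size)]
    return padded, pad_size

def unpad(batch, pad_size):
    """Drop the padding added by pad_to_divisor."""
    return batch[: len(batch) - pad_size] if pad_size else batch

batch = ["s0", "s1", "s2", "s3", "s4"]       # 5 samples
padded, pad_size = pad_to_divisor(batch, 4)  # world_size = 4
# len(padded) == 8, pad_size == 3, and unpad(padded, 3) recovers the original
```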
raft_batch.batch["returns"] = raft_batch.batch["advantages"].clone()

# Store labels in batch for potential use
raft_batch.batch["labels"] = labels
Labels are even specified here. Will these labels actually be used by the following update_actor method, and will they be used as expected?
# Update actor with pure SFT loss
# With advantages=1.0 and clip_ratio=1.0, this becomes standard cross-entropy
# This mimics SFTTrainer.compute_loss() behavior
actor_output = self.actor_rollout_wg.update_actor(raft_batch)
Is the update_actor function called here still the RL one, not the one in "SFTTrainer"? I'm not sure verl even has an SFTTrainer...
verl does not have an SFTTrainer. SFTTrainer inherits from transformers.Trainer and requires the complete HuggingFace Trainer infrastructure, whereas verl uses Ray distributed training and custom worker groups. Directly adopting SFTTrainer would disrupt verl's existing architecture. Additionally, SFTTrainer and verl use different data formats.
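To see why the RL update can mimic SFT here: the PPO policy loss is built from the ratio r = exp(logp - logp_old) scaled by the advantage. With the advantage fixed at 1.0 and clipping inactive, the loss is -exp(logp - logp_old), and in the on-policy case (logp == logp_old) its gradient equals the gradient of the cross-entropy loss -logp. A small sketch verifying this numerically (not verl's actual update_actor):

```python
# Sketch: PPO loss with advantage = 1.0 and no clipping has the same
# gradient as plain cross-entropy (SFT) when the policy is on-policy.
import torch

logp_old = torch.tensor([-1.2, -0.7, -2.3])

# PPO-style loss: -(ratio * advantage), advantage fixed at 1.0.
logp_ppo = logp_old.clone().requires_grad_(True)
ratio = torch.exp(logp_ppo - logp_old.detach())
(-(ratio * 1.0).sum()).backward()

# Plain cross-entropy (SFT) loss on the same log-probs.
logp_sft = logp_old.clone().requires_grad_(True)
(-logp_sft.sum()).backward()

# On-policy, both gradients are -1 per token, so the updates coincide.
print(torch.allclose(logp_ppo.grad, logp_sft.grad))  # True
```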
original_clip_low = self.config.actor_rollout_ref.actor.get("clip_ratio_low", 0.2)
original_clip_high = self.config.actor_rollout_ref.actor.get("clip_ratio_high", 0.3)

# Disable clipping: set both ratios to 1.0 (no clipping in pure SFT)
Why can clipping be disabled by setting them to 1.0? This might not be related to RAFT.
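One possible reading (this assumes clip_ratio_low/high are the epsilons in clamp(ratio, 1 - eps_low, 1 + eps_high), which is how such parameters are commonly defined; the PR authors should confirm): setting both to 1.0 widens the clip window to [0, 2], so on-policy ratios near 1 pass through unclipped. A sketch of that effect:

```python
# Sketch: with eps = 1.0 the clip window becomes [0, 2], so typical
# ratios are never clipped (assumes clip_ratio_low/high are the epsilons
# in clamp(ratio, 1 - eps_low, 1 + eps_high); illustrative only).
import torch

def clipped(ratio, eps_low, eps_high):
    return torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)

ratios = torch.tensor([0.5, 0.9, 1.0, 1.1, 1.5])

# Default window [0.8, 1.3]: 0.5 is clipped up, 1.5 is clipped down.
default = clipped(ratios, 0.2, 0.3)

# Window [0.0, 2.0]: every ratio passes through unchanged.
wide = clipped(ratios, 1.0, 1.0)
```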
Set "adv_estimator": "raft" in config.algorithm to enable support for the RAFT algorithm.
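Concretely, in the trainer's YAML config this would be a minimal fragment like the following (only adv_estimator comes from this PR; the surrounding layout assumes verl's usual algorithm config section):

```yaml
algorithm:
  adv_estimator: raft   # selects the RAFT path added by this PR
```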