Rick McEwen 1bf9985def Fix double LoRA application when loading fine-tuned model
The from_pretrained method was applying LoRA twice:
1. In the constructor via lora_r parameter
2. When loading with PeftModel.from_pretrained()

Now creates the model with lora_r=0 and loads the LoRA weights separately.

Note: The warning about "missing adapter keys" for layers 0-11 is expected,
since those layers are frozen and have no LoRA adapters.
2026-01-05 11:50:10 -05:00
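The fix above can be illustrated with a minimal, library-free sketch of the pattern: the constructor only applies LoRA when `lora_r > 0`, and the loading path builds the base model with `lora_r=0` before attaching the saved adapter, so LoRA is applied exactly once. All names here (`FineTunedModel`, `_apply_lora`, `adapter_rank`) are hypothetical stand-ins, not the project's real API; in the actual code the adapter is attached via PEFT's `PeftModel.from_pretrained`.

```python
# Illustrative sketch only -- class and method names are hypothetical,
# not the real project API. Stand-ins: _apply_lora ~ PEFT's
# get_peft_model, the from_pretrained body ~ PeftModel.from_pretrained.

class FineTunedModel:
    def __init__(self, lora_r=0):
        # Count how many times LoRA is wrapped, to expose the old bug.
        self.lora_applications = 0
        if lora_r > 0:
            # Fresh-training path: apply LoRA directly in the constructor.
            self._apply_lora(lora_r)

    def _apply_lora(self, lora_r):
        # Stand-in for wrapping the model's layers with LoRA adapters.
        self.lora_applications += 1

    @classmethod
    def from_pretrained(cls, adapter_rank):
        # The fix: build the base model WITHOUT LoRA (lora_r=0), then
        # load the saved adapter weights separately, so the adapter is
        # attached exactly once by the loading path.
        model = cls(lora_r=0)
        model._apply_lora(adapter_rank)
        return model

model = FineTunedModel.from_pretrained(adapter_rank=8)
print(model.lora_applications)  # -> 1 (the old code produced 2)
```

The buggy version passed a nonzero `lora_r` to the constructor *and* then loaded adapters, which is why LoRA was applied twice.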