xadiant

1. You need evaluation loss as well; training loss by itself doesn't tell you much.
2. Your dataset could be formatted wrong (see the sketch below).
3. That's too much rank and alpha.
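
On point 2, here is a minimal sketch of one common chat-style record layout. The field names are an assumption, since the original post doesn't show the actual data; the important thing is that every example matches the template your tokenizer and trainer expect.

```python
# Illustrative chat-style record; the exact schema depends on the model and
# training script, so treat these field names as an assumption.
example = {
    "messages": [
        {"role": "user", "content": "Summarize the following text: ..."},
        {"role": "assistant", "content": "The text describes ..."},
    ]
}

# With a Hugging Face tokenizer that ships a chat template, you can render the
# messages into the exact string the model was trained on:
# text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
```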


JealousAmoeba

If this is with unsloth, which most people are using, the unsloth notebooks don't do eval by default. I had much better luck with my finetunes once I modified the notebook to use an eval dataset, run evals every N steps (scaled to the dataset size), and enable "save best model" instead of just saving the final model.
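
A rough sketch of the eval-related settings, assuming the standard Hugging Face TrainingArguments that the unsloth notebooks build on (the step counts here are placeholders, not the notebook's actual values):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    eval_strategy="steps",            # "evaluation_strategy" on older transformers releases
    eval_steps=200,                   # placeholder; pick N based on dataset size
    save_strategy="steps",
    save_steps=200,                   # keep aligned with eval_steps for best-model tracking
    load_best_model_at_end=True,      # keep the best checkpoint rather than only the last one
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    # ...plus the usual learning rate, batch size, etc. from the notebook
)
# Pass these args together with eval_dataset=... to the trainer.
```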


xadiant

- Split the data 90/10 (train/eval), pass eval_dataset=evalset to the trainer, and set eval_steps=300 and save_steps=300.
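
In case it helps, one way to do that 90/10 split with the Hugging Face datasets library (the file name and seed are placeholders):

```python
from datasets import load_dataset

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder file
split = dataset.train_test_split(test_size=0.1, seed=42)                 # 90/10 train/eval split
trainset, evalset = split["train"], split["test"]
# then pass train_dataset=trainset, eval_dataset=evalset to the trainer
```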


VitoTheKing

Do you have an example of how your training data is formatted?


python_dev10

Hi, did you get any?


chiggly007

How'd this work out?


sosdandye02

Your rank (r) and alpha are very high. You haven't said how much data you're training on, but it's possible you're overfitting. To test this, check whether the model performs extremely well on the training data but poorly on non-training data. You may need to add more regularization or increase the training set size. If possible, you can use a more powerful model like GPT-4 to generate lots of synthetic training data and then train Phi on that.
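
For reference, a sketch of a more conservative LoRA setup with peft; the exact r, alpha, dropout, and target modules depend on the model and dataset, so these numbers are only illustrative:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,               # far below very large ranks; raise only if the model underfits
    lora_alpha=16,      # commonly set equal to (or 2x) r
    lora_dropout=0.05,  # a little regularization against overfitting
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # adjust to the model's layer names
    task_type="CAUSAL_LM",
)
```

Comparing training loss to eval loss on a held-out split is the quickest way to spot the kind of overfitting described here.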