>>57569182
This or the Llama 3 default. I change the temp as needed, but these are my saved settings. I tend to cut the max token count down to ~50 because sometimes it gives way too much response.
For Llama 3 I also set repetition penalty to 1.3 and top-p to 0.64; temp I move between 1.0 and 1.3 depending on what I want.
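For anyone curious how those three knobs interact, here's a toy sketch of one sampling step: repetition penalty first (the common CTRL-style divide-positive/multiply-negative form — backends may differ), then temperature, then top-p (nucleus) filtering. The function and token names are made up for illustration; real backends like llama.cpp do all this natively.

```python
import math
import random

def sample_next(logits, history, temp=1.0, top_p=0.64, rep_penalty=1.3):
    """Toy sketch: pick the next token from raw logits using a
    repetition penalty, temperature, and top-p filtering."""
    # Repetition penalty: weaken logits of tokens already generated
    # (divide positive logits, multiply negative ones).
    adj = {}
    for tok, logit in logits.items():
        if tok in history:
            adj[tok] = logit / rep_penalty if logit > 0 else logit * rep_penalty
        else:
            adj[tok] = logit
    # Temperature scaling, then softmax to probabilities.
    exps = {t: math.exp(l / temp) for t, l in adj.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # Top-p: keep the smallest high-probability set whose cumulative
    # mass reaches top_p, then sample from it (renormalized).
    kept, cum = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    r = random.random() * sum(p for _, p in kept)
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With top-p as low as 0.64, a single dominant token can pass the cutoff alone, so output gets near-greedy when the model is confident; raising temp flattens the distribution and lets more tokens into the nucleus, which is why nudging temp between 1.0 and 1.3 changes the feel so much.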