I know this AI thing is getting annoying for some so I won't spam the thread with it, but I did a first finetuning test on GPT-3 for Gura member post generation.
For now I fed it very little data, because I wanted to check that it actually works before spending a few euros on this.
It does work, but the results aren't great yet. We can definitely improve it a lot by feeding it more data. According to OpenAI's docs, performance tends to increase roughly linearly with the amount of training data, so I think I'll do that.
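For anyone curious, GPT-3 fine-tuning takes a JSONL file of prompt/completion pairs. Here's a rough sketch of how I'd build that file; the separator and stop token follow OpenAI's recommended conventions, and the posts themselves are just placeholder examples:

```python
import json

# Placeholder posts; the real data would be actual Gura member posts.
posts = [
    "a",
    "Good morning everyone! Stream later today, see you there.",
]

def build_finetune_jsonl(posts, path):
    # One JSON object per line with "prompt" and "completion" keys.
    # OpenAI recommends ending the prompt with a fixed separator and
    # starting the completion with a space, plus a stop sequence.
    with open(path, "w", encoding="utf-8") as f:
        for post in posts:
            record = {
                "prompt": "Gura member post:\n\n###\n\n",
                "completion": " " + post + " END",
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

build_finetune_jsonl(posts, "gura_posts.jsonl")
```

You'd then upload the file with the `openai` CLI and kick off the fine-tune job from there.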
I'm not sure how I would approach conversation generation, because we don't have many Gura conversations to train the model on.
One thing I could try is to give a few examples in the prompt and ask the finetuned member-posts model to complete it in the same style (that's called few-shot learning, and it usually works well with this type of large language model).
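The few-shot idea is basically just stacking a handful of example exchanges in the prompt and leaving the last turn open for the model to finish. A tiny sketch of the prompt builder (the speaker labels and example lines are made up, not anything from the actual model):

```python
def build_few_shot_prompt(examples, new_message):
    # examples: list of (chatter_message, gura_reply) pairs.
    # Stack the examples, then leave the final "Gura:" turn open so
    # the completion model fills in the reply.
    parts = []
    for chatter_msg, gura_reply in examples:
        parts.append(f"Chatter: {chatter_msg}\nGura: {gura_reply}")
    parts.append(f"Chatter: {new_message}\nGura:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    [("hi Gura!", "a"), ("what game today?", "shork game :)")],
    "are you streaming tomorrow?",
)
```

You'd then send `prompt` to the completions endpoint with a stop sequence like `"Chatter:"` so it doesn't keep generating both sides of the conversation.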