Reading the LaMDA research paper, I see they train it in two steps:
1. Pre-train the model on a huge text corpus, i.e. give it its "base knowledge". At this stage it's basically akin to GPT.
2. Hook it up to an information retrieval source and fine-tune it for "safety" (rough sketch of the two phases below).
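For what it's worth, here's how I picture the two phases as a toy Python sketch. Every name in it (pretrain, finetune, fake_retriever) is made up by me for illustration; this is just the shape of the process, not LaMDA's actual code:

```python
# Toy sketch of the two-phase pipeline as I understand it. All names
# are placeholders, not anything from the paper or a real codebase.

def pretrain(corpus: list[str]) -> dict:
    # Phase 1: plain next-token prediction over a big unlabeled corpus.
    # This is the GPT-like "base knowledge" model.
    return {"desc": f"LM pre-trained on {len(corpus)} documents"}

def fake_retriever(query: str) -> str:
    # Stand-in for the external information-retrieval source the
    # fine-tuned model gets to consult.
    return f"search results for {query!r}"

def finetune(model: dict, dialogs: list[tuple[str, str]], retriever) -> dict:
    # Phase 2: supervised fine-tuning on rated dialog data, with the
    # retriever attached and responses filtered for "safety".
    model["toolset"] = retriever
    model["desc"] += f", fine-tuned on {len(dialogs)} rated dialogs"
    return model

base = pretrain(["doc one", "doc two"])                    # step 1
tuned = finetune(base, [("hi", "hello")], fake_retriever)  # step 2
print(tuned["desc"])
```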
Their "pre-trained model", however, sounds fucking better than the actual finetune most of the time. Look at this shit.