>>38119852
False. I already went over this in an earlier thread. Increasing model size has diminishing returns, and Google hit them hard.
There are three (3) models in the LaMDA fine-tuning paper: 2B, 8B and 137B parameters. The first can run on an ordinary graphics card. The second needs a top-of-the-line graphics card. The third requires a modest bitcoin-farm's worth of GPUs, several dozen of them (still not a military supercomputer).
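Rough back-of-the-envelope for why the hardware requirements jump like that, assuming fp16 weights (2 bytes per parameter) and ignoring activations, KV cache and batch size, so real numbers are higher:

# Weights-only VRAM estimate at 2 bytes per parameter (fp16).
def weight_vram_gb(params_billions, bytes_per_param=2):
    return params_billions * 1e9 * bytes_per_param / 1024**3

for size in (2, 8, 137):
    print(f"{size}B params ~ {weight_vram_gb(size):.0f} GB of fp16 weights")

# 2B   ~ 4 GB   -> fits on a normal gaming card
# 8B   ~ 15 GB  -> needs a 24 GB class card
# 137B ~ 255 GB -> has to be sharded across dozens of GPUs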
Crowdworkers rated each model's responses twice, before and after fine-tuning. Google's own raters scored the fine-tuned 2B model's responses higher than the un-fine-tuned 137B model's on pretty much every metric. Putting in additional work beats simply throwing more GPU compute at the problem, and massively so: the 137B model is roughly 68 times larger than the 2B one.
We know Character.AI is using a LaMDA model, or something derived from it, because the CAI model is fine-tuned very similarly, if not identically. So the same conclusions apply here. The same pattern shows up with other models too, like InstructGPT:
https://openai.com/blog/instruction-following/