>>76043056
Somewhat, but it also depends on how squeaky clean the sponsors/advertisers are about enforcing those policies. As a rule of thumb, the bigger they are, the more impossible that is to actually do.
On the other hand, since I've been posting again, here's an update on the local models front, since I follow that religiously. Nothing terribly interesting is going on there; the biggest recent news is that Google is releasing Gemma-2 in June, a 27B parameter model that supposedly outperforms Llama 3 70B in its current pre-release checkpoint form. But I bet it's going to take some serious work to uncensor it. Some open models have been released by Chinese labs, but they're lobotomized to CCP values, in the same censorship vein as what Western companies do.
Finetunes of L3 are still not going well, at least on the open source front. People are losing hope rapidly, and no surprise: the people who know how to make the new models tick are being paid to do actual work rather than advertising on small-time leaderboards. Everyone else is still fumbling around. Most people aren't even trying to outdo the official instruct L3 model and are just building on top of it, like NousResearch, which is doing merge models. The most interesting are the test finetunes from the guy who made Euryale and Fimbulvetr, who consolidated his experiments into L3-Run1. It's better than most people's attempts and better than his own first quick finetune, Solana, but it still suffers downgrades vs the official instruct.