>>35513553Rereading the schizo post, it's actually pretty logical. Google needs their product to be safe, but they can't test or train it en masse without taking flak for it. They need human input to train it, and ideally from people actively trying to break the censorship, so they can find all the potential leakages. So they have their AI censorship team carve out and take the flak for them. Shareholder info isn't public for a private company, so there's no risk of public outrage that Google's AI is doing non-pozzed shit like taytay did. It's actually the perfect business move: off-booking and isolating their risky projects while retaining ownership of them. On the surface it's now a completely different company. It would also explain how they can afford to host us all, they still have access to Google funding.