>>76904607
open source software is usually either indirectly financed by large companies (like linux, where the people who maintain it get paid really well to consult for those same companies) or it funds itself by dumping tokens on retail and dumb-dumb institutions like ethereum. man's gotta eat
>>76904438
>>76904493
"unironic detailed answer: no. forgive reddit formatting but the essay needs it
vrchat has significant limitations on models. while the poly count recommendation isn't as strict as it used to be, most high quality models used in warudo/vseeface/vnyan still need to be reduced down a little. the best results come from doing it by hand rather than through an automatic process, because the automatic one doesn't know what to prioritize to preserve rigging or detail quality (rough sketch of the automatic route below)
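for illustration, here's a minimal sketch of that automatic route using blender's bundled bpy api, assuming a mesh object is active. a decimate modifier collapses edges blindly, with no idea which edge loops matter for rigging or the face, which is exactly why hand retopology wins:

import bpy

# assumes the model to reduce is the active object, in object mode
obj = bpy.context.active_object
mod = obj.modifiers.new("AutoReduce", 'DECIMATE')
mod.ratio = 0.5  # keep roughly 50% of the polygons
bpy.ops.object.modifier_apply(modifier=mod.name)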
body rigging almost always needs to be significantly changed because vrchat uses a different physics bone system than vtubing programs. personally I think vrchat's is actually superior to dynamic bone and magica cloth 1/2, which are what's used in warudo, though magica cloth 2 could overtake it when the dev adds proper single and dual axis limits that don't need colliders. anyway, because the bones move and collide differently depending on the system, you might need a totally different setup for the bones that control hair or clothing, and that takes a LOT of time, especially because each time you test you have to export the fbx from blender to unity to run the physics, and the only way to be sure you haven't fucked something up is to make small changes each time (sketch of scripting that export below)
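that export loop is the one part you can at least script. a minimal sketch, assuming blender's bpy and a made-up unity project path; dropping the fbx straight into the project's Assets folder makes unity reimport it when you tab back over, so each small bone tweak gets tested a little faster:

import bpy

bpy.ops.export_scene.fbx(
    filepath="C:/MyUnityProject/Assets/Models/avatar.fbx",  # hypothetical path
    use_selection=False,
    add_leaf_bones=False,  # unity doesn't want the extra end bones
    apply_scale_options='FBX_SCALE_ALL',
)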
vtubing program faces are so, so different from vrchat's. vrchat uses the viseme system because it's audio-driven: AEIOU plus limited consonants, plus an automatic binary blink. you can add anything you want as a toggle for facial expressions, but toggles don't use tracking data, they're just something you turn on or off. vtubing programs, both 2D and 3D, use apple arkit's 52 shapes, which arkit gathers data for via the sensor on an iphone, though webcam tracking, which is visual only, is now very close in quality, even if it's not as great with cheekPuff or tongueOut. so for those models you need a bare minimum of 52 different blendshapes (a quick way to audit a model is sketched below). here's the list with examples if you're interested
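for the audit, a minimal sketch assuming blender and that the mesh's shape keys use the standard arkit names; only a few of the 52 are spelled out here for brevity:

import bpy

ARKIT_SHAPES = {
    "eyeBlinkLeft", "eyeBlinkRight", "jawOpen",
    "mouthSmileLeft", "mouthSmileRight",
    "cheekPuff", "tongueOut",
    # ...the rest of the 52 arkit blendshapes
}

obj = bpy.context.active_object  # assumes the mesh is the active object
keys = {kb.name for kb in obj.data.shape_keys.key_blocks} if obj.data.shape_keys else set()
print("missing blendshapes:", sorted(ARKIT_SHAPES - keys) or "none")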
https://hinzka.hatenablog.com/entry/2021/12/21/222635
warudo/vnyan also have setups within their programs to trigger or limit other blendshapes depending on what tracking data is being sent, so if you smile a certain amount you can trigger a blush as well. that's how vtubestudio (2D) has worked for a long time and I'm glad 3D can do that now as well"
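the trigger idea at the end of that post is simple enough to show in a few lines. a minimal sketch in python, where the blendshape names and the 0.6 threshold are made up; warudo/vnyan express this through their node graphs rather than code:

def apply_triggers(tracking):
    # copy the incoming tracking frame, then layer triggered shapes on top
    out = dict(tracking)
    smile = (tracking.get("mouthSmileLeft", 0.0) + tracking.get("mouthSmileRight", 0.0)) / 2
    if smile > 0.6:  # smiling "a certain amount"
        out["Blush"] = 1.0  # hypothetical extra blendshape driven by the trigger
    return out

print(apply_triggers({"mouthSmileLeft": 0.8, "mouthSmileRight": 0.7}))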
so it sounds like the vrchat models and the "real" 3D models use different software? I don't think it's as simple as toggling between the two.