>>1527618
I wouldn't "rely" on them to do anything. When an LLM is unsure, it doesn't tell you, it confidently hallucinates at you. That makes them hard to just trust.
I would use one to write things you plan to proofread anyway: corporate e-mail responses, greetings to people you don't really care about, shitposts to friends about nonsense. Since those will get proofread and don't need to be perfectly accurate, I think LLMs are great for filler.
You can also set up a local LLM to autocomplete code for you if you're a developer, but I haven't done that, I'm wary of it.