>>61293330
>The responses don't have to be back-to-back
but then there is no point in this one-sentence-context, one-sentence-question format, as i said. chatters themselves will forget what they told neuro before by the time they ask questions about it later
>might only require 2x the resources
this is a very naive idea that doesn't see past the abstraction. that's an entire new module just to collect logs from chat and neuro at the same time, which by itself is a few days of work to get stable
>Neuro can be generating LLM responses while she's still in the middle of reading out her previous response
so deliberately reading way into the past? you can imagine all sorts of pitfalls with behaviour like that, since it just shifts the delay from generation into delay in reading chat messages
what i can tell you is that this is not how neuro currently works, which means effort to make it work that way
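to be clear about what's being proposed (and this is just a toy sketch, not how neuro actually works): overlapping generation with readout means a producer/consumer buffer between the LLM and the TTS. all names here are made up for illustration; the bounded queue is the part that matters, because without a cap she runs further and further ahead of live chat

```python
import queue
import threading
import time

# hypothetical sketch: decouple LLM generation from TTS readout with a
# bounded buffer, so the next response can be generated while the
# previous one is still being read out. small cap on purpose: the
# further generation runs ahead, the staler the chat she's replying to.
response_buffer = queue.Queue(maxsize=2)

def generator(prompts):
    # stand-in for the LLM call; real generation takes seconds
    for p in prompts:
        time.sleep(0.01)
        response_buffer.put(f"response to: {p}")
    response_buffer.put(None)  # sentinel: no more responses

def reader(spoken_out):
    # stand-in for TTS playback; consumes while the generator keeps working
    while True:
        r = response_buffer.get()
        if r is None:
            break
        spoken_out.append(r)

spoken = []
t = threading.Thread(target=generator, args=(["msg1", "msg2", "msg3"],))
t.start()
reader(spoken)
t.join()
```

note the tradeoff lives in `maxsize`: bigger buffer hides more generation latency but makes every reply reference older chat, which is exactly the "reading way into the past" problem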
>should solve that by making two requests to the LLM
nahhh, that's literally creating more delay to avoid delay.
the inherent randomness of the generation will make it impossible to seamlessly flow one into the other anyway
>making Neuro handle the context of multiple collab partners
that would be an entirely different system, because the communication methods are completely different.
other than that, there are different priorities for implementing anything like this for collab partners.
the point is, parasocialness of viewers is barely in his calculus. he will never make a decision like this based on whether he thinks it's profitable or not, because he doesn't think about it
>>61298786
>we