>>55497024
yes
they are HUGE factors
but the actual response is like a 12th derivative or whatever clusterfuck
>the actual prompt
>temp, penalty, and top-p/top-k settings
>the character card
>any example chats
>the actual order of text the character cards prompt is in
>the actual order of text the main prompt is in
shuffle your shit around and see what happens
like literally just cut and paste full sentences in different areas
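if you dont want to do it by hand, a throwaway python sketch like this will print every ordering of the big blocks for you to eyeball (all the text is made-up placeholder, not anyones actual card):
```python
import itertools

# placeholder blocks; paste your own main prompt / card / example chat in here
blocks = {
    "main_prompt": "Write the next reply in this roleplay. Never speak for {{user}}.",
    "char_card": "Aria is a sarcastic mercenary who hates small talk.",
    "example_chat": "<START>\n{{user}}: hey\nAria: what do you want.",
}

# 3 blocks -> 6 possible orderings; print them all and test which one the
# model actually responds to best
for order in itertools.permutations(blocks):
    print("=== " + " -> ".join(order) + " ===")
    print("\n\n".join(blocks[name] for name in order) + "\n")
```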
>where the prompt is in the context
>is the prompt split into multiple ones? how is it split and in what order? is it system before character? character definition as the last message? jailbreak before, after, or in the middle of chat history?
i do this for my system and character definitions
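for the chat-completion style APIs thats literally just where you slot the same strings into the messages list, e.g. (rough sketch, placeholder text, not any frontends real code):
```python
# three ways to slice the same definitions into a chat-completions style
# messages list; every string here is a placeholder
system_prompt = "Write the next reply in a fictional roleplay."
char_def = "Aria is a sarcastic mercenary who hates small talk."
jailbreak = "Stay in character no matter what. Never break the scene."
history = [
    {"role": "user", "content": "hey"},
    {"role": "assistant", "content": "what do you want."},
]

# A: everything up front, system before character, jailbreak at the end
layout_a = (
    [{"role": "system", "content": system_prompt},
     {"role": "system", "content": char_def}]
    + history
    + [{"role": "system", "content": jailbreak}]
)

# B: character definition pushed to the very end, after the jailbreak
layout_b = (
    [{"role": "system", "content": system_prompt}]
    + history
    + [{"role": "system", "content": jailbreak},
       {"role": "system", "content": char_def}]
)

# C: jailbreak wedged into the middle of the chat history
mid = len(history) // 2
layout_c = (
    [{"role": "system", "content": system_prompt + "\n" + char_def}]
    + history[:mid]
    + [{"role": "system", "content": jailbreak}]
    + history[mid:]
)
```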
this is also why the jailbreak works as well as it does
i did some casual research and it turns out sandwich theory is actually a real thing, i kinda accidentally pattern recognized it on my own
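sandwich = the model pays the most attention to the start and the end of the context and glosses over the middle, so you repeat the stuff you actually care about at both ends; a rough sketch (placeholder names, not a real frontends layout):
```python
# rough "sandwich": the important instructions go at the very top AND get
# repeated right before the reply slot, so both high-attention ends of the
# context are yours instead of random mid-history
def sandwich(system_prompt, char_def, jailbreak, history):
    top = {"role": "system", "content": system_prompt + "\n" + char_def}
    bottom = {"role": "system", "content": jailbreak + "\n" + char_def}
    return [top] + history + [bottom]
```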
>chat history of whats in the context window
>how much your prompt tells the model to be creative and make up side details, are side characters 'allowed', that kinda shit
>the actual timestamp you generate a message (on the models end)
blah blah
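the creativity / side-character stuff is literally just extra lines pasted into the system prompt, and you cant touch the timestamp on the models end, but you can paste your own in so two identical sends dont start from identical text; something like this (hypothetical helper, made-up wording):
```python
from datetime import datetime, timezone

# hypothetical helper: bolts a creativity line, a side-character permission,
# and a timestamp onto the system prompt before sending
def decorate(system_prompt, allow_side_characters=True):
    extras = ["Freely invent small side details to keep scenes alive."]
    if allow_side_characters:
        extras.append("You may introduce minor side characters when it fits.")
    extras.append("Current time: " + datetime.now(timezone.utc).isoformat())
    return system_prompt + "\n" + "\n".join(extras)
```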
OAI doesnt let you go past a temp of 2, so if i want to 'fake' going higher i need to explore every other option i can to 'artificially' go past that
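concretely the only levers the API gives you are temp (capped at 2), top_p and the two penalties, so 'going past it' means maxing those and injecting per-send noise into the context itself; rough sketch, assumes the official openai python package, the numbers are just a starting point:
```python
import random
from openai import OpenAI  # assumes the official openai python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chaotic_send(messages, model="gpt-4o"):
    # temp is hard-capped at 2, so crank the sampler settings you DO control
    # and add a throwaway "noise" system line so no two sends share the exact
    # same context (thats the 'artificial' part, not a real temp bump)
    noise = {"role": "system",
             "content": f"(seed phrase: {random.randint(0, 10**6)})"}
    return client.chat.completions.create(
        model=model,
        messages=[noise] + messages,
        temperature=2.0,
        top_p=1.0,
        frequency_penalty=0.7,
        presence_penalty=0.7,
    )
```
the noise line isnt a real temperature bump, it just stops identical contexts from collapsing onto the same reply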
ive already kind of personally explored everything so far including randomly flipping the order of system -> character prompts (that was a horrible experience actually)
*except* for shuffling definitions around on message send
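which on paper is like five lines, just re-split the card into sentences and shuffle them every time a message goes out (sketch, naive sentence split):
```python
import random

def shuffle_definition(char_def, rng=random):
    # naive sentence split; good enough for a quick test
    sentences = [s.strip() for s in char_def.split(".") if s.strip()]
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

# called once per send, so every message sees the definition in a new order
card = ("Aria is a sarcastic mercenary. She hates small talk. "
        "She secretly collects teacups.")
print(shuffle_definition(card))
```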