>>6168594
>>6168396
>>6168186
>tldr; pic related depicts Sam Altman reaching out with his homosexual hands to fondle the breasts of your AI girlfriend
Day after day I read that the latest iteration of OpenAI models, ScatCBT, powered by tens of thousands of Nvidia Hopper GPUs directly interlinked with the microarchitecture of Sam Altman's unadulterated raging homosexual libido, is breaking some new mathematical benchmark, ARC-AGI or whatever. All the experts, academics, mathematicians and computer scientists repeat this; they cannot all have been bribed or become fake and gay, so surely it must be true?
The other day I read that ChatGPT broke yet another super difficult mathematical benchmark. I was super hyped; I am not very good at maths, so maybe finally I could use the superhuman AI to augment myself and unlock the deep arcane mysteries of the vampire sorcery. I decided to try it. The problem I had was: "Find a simple arithmetic combination of pi and e that produces a near integer". If you put this query into something as basic as Google, it probably does not give you an exact answer, but it will link to a page like Heegner numbers on Wikipedia, or Wolfram Alpha, etc., and if you open it you will see a formula like
e^(pi*sqrt(43))=884736744
which is not exactly true hehe; you know that e and pi are transcendental numbers, so no such combination can give an exact integer, but given the finite precision of computer arithmetic it is close enough to fool most computation hehe.
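You can actually check this yourself in a couple of lines of Python (stdlib only; double precision is just barely enough to resolve the gap, which is on the order of 2e-4):

```python
import math

# e^(pi*sqrt(43)) is famously close to 960^3 + 744 = 884736744,
# but e and pi are transcendental, so it cannot be an exact integer.
x = math.exp(math.pi * math.sqrt(43))

print(round(x))           # 884736744
print(abs(x - round(x)))  # small but nonzero, on the order of 1e-4
```

Note that float64 carries roughly 16 significant digits, and the value itself eats 9 of them, so the remaining precision is only just enough to see that the fractional part is real and not rounding noise; for the deeper Heegner cases you would want arbitrary-precision arithmetic.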
Now, ChatGPT has surely read all these webpages, all the maths textbooks, and more besides. I tried this query and it gave me complete hallucinated incoherent nonsense, so I thought, aha, maybe my prompt is retarded, I need to use the more advanced ChatGPT model. So I put the query into the advanced one (the version with the mixture of experts capable of multi-step walkthrough reasoning) and it gave an answer like this:
1/ pi = 3.1415926...
2/ multiply by 1000000...
3/ you get 314159, this is an integer
Which is technically not incorrect, given the loose imprecise phrasing of my original prompt, but I lost it when I saw this output. AI is so fake and gay and retarded. Of course you can spend three hours finetuning the prompt to eventually extract a competent answer, but it genuinely trains you to become more stupid: with every prompt iteration you try your best to make it better and cleverer, and Sam Altman becomes more homosexual. This is not worth it.
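For the record, the sort of answer I actually wanted (things like the famous e^pi - pi ≈ 19.9991) can be brute-forced in a few lines. This is just my own throwaway sketch over linear combinations, not anything the model produced:

```python
import math

# Throwaway brute force (my own sketch, not ChatGPT's output): scan small
# integer combinations a*pi + b*e and rank them by distance to the nearest
# integer. Non-linear hits like e^pi - pi need a bigger expression space,
# but even this beats "multiply pi by a power of ten and truncate".
hits = []
for a in range(-20, 21):
    for b in range(-20, 21):
        if a == 0 and b == 0:
            continue
        v = a * math.pi + b * math.e
        hits.append((abs(v - round(v)), f"{a}*pi + {b}*e = {v:.6f}"))

hits.sort()
for gap, expr in hits[:5]:
    print(expr)
```

Within this tiny search space the best candidates get within about 0.01 of an integer, which is at least an answer to the question as asked.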