>>39474817
Okay, but it's the same basic argument: the accuracy jumps with repeated exposure.
I think the bot is relying on some tuning across cycles rather than picking up on objective cues.
For example, it clearly struggles with stacks sometimes, and we can see it. So as a tuning mechanism it might decide, after a bunch of failures on stacks, to start overtapping on the reasoning that, "I'm not seeing new targets for these beats, so they're probably hidden in a stack under the most recent one." Then the map cycles to something designed differently, but the bot stays "biased" towards tapping on the assumption a stack might exist.
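Roughly the kind of carry-over I'm picturing, as a toy Python sketch (entirely made up by me; none of these names or numbers come from the actual bot): a "tap anyway" bias that gets reinforced whenever misses line up with beats where no new object was visible, and only decays slowly afterwards, so it bleeds into the next map.

import random

class StackBias:
    def __init__(self, learn_rate=0.05, decay=0.995):
        self.bias = 0.0            # extra probability of tapping "blind"
        self.learn_rate = learn_rate
        self.decay = decay

    def update(self, missed, saw_new_object):
        if missed and not saw_new_object:
            # "I keep missing beats with no visible new target ->
            #  they're probably stacked under the last circle"
            self.bias += self.learn_rate * (1.0 - self.bias)
        else:
            # decays very slowly, so it carries over into the next map
            self.bias *= self.decay

    def should_tap(self, saw_new_object):
        # tap on a real target, or gamble on a hidden stacked one
        return saw_new_object or random.random() < self.bias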
I also think that's why, for example, it's struggling with some spinners. I don't think it has a good sense of how "long" a spinner stays on screen, so it tunes for an assumed length based on the spinners it's seen recently. But when it runs into a spinner that lasts longer than that, it suddenly starts struggling to keep the RPM up.
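Same idea for spinners, again just a toy sketch of my guess and not anything from the real implementation: an exponential moving average over recent spinner durations, with effort tapering off once a spinner outlasts the estimate.

class SpinnerModel:
    def __init__(self, alpha=0.3, initial_sec=2.0):
        self.alpha = alpha
        self.expected_sec = initial_sec   # how long it "thinks" spinners last

    def observe(self, actual_sec):
        # exponential moving average over recent spinner durations
        self.expected_sec += self.alpha * (actual_sec - self.expected_sec)

    def effort(self, elapsed_sec):
        # full spin effort inside the expected window, tapering off past it,
        # which would look exactly like "suddenly can't keep the RPM up"
        if elapsed_sec <= self.expected_sec:
            return 1.0
        return max(0.0, 1.0 - 0.5 * (elapsed_sec - self.expected_sec))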
Anyways, it's hard to really "know" what's going on inside these statistical models. I'm just seeing certain patterns that make me go, "hmm... I think this might be a logical explanation for the behavior observed."
Tuning mechanisms are very effective, but I think it's an unfortunate "trap" for a model to fall into, one that will limit its overall growth. It makes me want to freeze a few frames and force it to work harder on its weaknesses.