>>20039507
We'll reach a plateau eventually, but that's billions of times past human intelligence and capabilities.
In other words, it will be so far above us that, for any reasonable estimation, it's exponential growth until the world becomes completely incomprehensible. Even if progress is no longer exponential after that, if at the end you have posthuman gods taking trips through wormholes to visit nearby galaxies or whatever, that's sufficient.
So no, it's not exponential forever, but the timeline extends well beyond anything we can imagine (in terms of intelligence and technology).
That's why these types of extremely pessimistic predictions are silly: they appear somewhat reasonable or conservative until you realize they don't even come close to the lower part of the actual curve. The very conservative estimate right now is superhuman AGSI by 2055 or so, the typical one by 2045, and the (very) optimistic one by the mid-to-late 2030s.
Remember, the LLMs we have today were expected to arrive around 2028-2030 by most estimates (including mine). When GPT-4 came out, it was hair-raising, because it meant we were 5 years ahead of schedule (which means that instead of a singularity at 2045, we might be on track for 2041-42). If this continues, we could soon see another advance that makes us lower the estimate further, say to 2039-40. And then it will get worrying, because it will mean the pattern is clear: it's coming sooner than even us exponential thinkers thought. The current OpenAI and Tesla obsession with integrating LLMs into robots might be that push, btw. We'll know in a few years.
As a sidenote, you can just look at the headlines and the effort shifting into embodied AI doing various tasks (including household robots). If big companies are about to drop hundreds of billions into this kind of research, then those who say it's 30-40 years away are completely deluded. We'll have the first ones in five years or so (though of course not nearly good enough yet).