>>4080013
I can only conclude from some insight I've had. Neural networks require more computing power in software, special chips, or both. Early AI-assisted phone cameras took several seconds to turn multiple sequential or parallel shots into a single photo, often failed in the middle of the computation, required the general-purpose CPUs that phones offer but photo cameras lack, and drew a lot of power; today they require application-specific ICs and still consume a good amount of power and produce heat. On top of that, AI varies a lot and invents elements, so it's not just improving or strengthening existing ones. Neural network chains often have reducers and partitioners because some parts of the chain only process, say, 512x512 images, which are then picked up by nets that merge, augment and enlarge them to reach the output resolution. There are chains running parallel to the one described above to retain detail or elements of the original images. Mobile phones often achieve their performance just by spicing up mediocre data.
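To illustrate what I mean by that partition/merge structure, here's a minimal sketch in Python. Everything in it is illustrative, not how any actual phone pipeline works: enhance_tile is a trivial stand-in for the fixed-size network stage, 512x512 is just the example figure from above, and the detail_weight blend stands in for the parallel detail-retention chain.

import numpy as np

TILE = 512  # illustrative tile size; real pipelines vary

def enhance_tile(tile: np.ndarray) -> np.ndarray:
    """Stand-in for the fixed-size network stage; a simple per-tile
    contrast stretch so the sketch runs without a real model."""
    lo, hi = tile.min(), tile.max()
    return (tile - lo) / max(hi - lo, 1e-6)

def partition(img: np.ndarray, tile: int = TILE):
    """Split an HxWxC image into tile-sized patches, padding the edges."""
    h, w, _ = img.shape
    ph = (tile - h % tile) % tile
    pw = (tile - w % tile) % tile
    padded = np.pad(img, ((0, ph), (0, pw), (0, 0)), mode="reflect")
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            yield (y, x), padded[y:y + tile, x:x + tile]

def merge(patches, shape, tile: int = TILE) -> np.ndarray:
    """Reassemble processed tiles and crop back to the original size."""
    h, w, c = shape
    out = np.zeros((h + (tile - h % tile) % tile,
                    w + (tile - w % tile) % tile, c), dtype=np.float32)
    for (y, x), patch in patches:
        out[y:y + tile, x:x + tile] = patch
    return out[:h, :w]

def process(img: np.ndarray, detail_weight: float = 0.3) -> np.ndarray:
    """Main chain: partition -> per-tile net -> merge, then blend the
    original frame back in (the parallel detail-retention chain)."""
    processed = merge(
        (((y, x), enhance_tile(t)) for (y, x), t in partition(img)),
        img.shape)
    return (1 - detail_weight) * processed + detail_weight * img

if __name__ == "__main__":
    shot = np.random.rand(1080, 1920, 3).astype(np.float32)  # fake sensor frame
    print(process(shot).shape)  # (1080, 1920, 3)

The point is the shape of the pipeline: the heavy net only ever sees fixed-size tiles, and the original frame gets blended back in at the end so fine detail survives the round trip.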
Photo cameras, on the other hand, already perform at a high level without AI, and are operated by professionals who can be expected to mitigate shortcomings themselves and who care more about things like reliability, minimal lag, consistent and continuous shooting, battery performance, authenticity, and detail resolution. AI would compromise all of that. I also expect it would still be a challenge even in the latest and most computationally capable cameras, particularly since those sport even higher native resolutions. Last but not least, it's hard to optimize AI except at a high level. It's rather a wonder that AI applications do as well as they do; they only work their magic because of a huge amount of training data, so the application can expect to have "seen it all" and to hold a comprehensive view that generalizes correctly even for unknown input.