>>3360741
>The visual system is the largest single system in the human brain.
True, but the visual system of the human brain is doing a lot more as well (e.g., recognizing that the fuzzy thing barking at you should be connected to the neurons that know what a dog is, and to either the neurons that handle the concept of running away very fast or the neurons that handle the concept of yelling at Mr. Poopers to shut the fuck up while you're trying to eat dinner), plus it's doing it all in meat that was developed by random natural selection over millions of years.
For certain classes of processing problems, silicon can be a lot more efficient than meat.
>A phone that can do that level of processing is basically just a strong AI that you tell what kind of photo you want.
Not really. I'm not saying it's an *easy* problem, but it's not an *intractable* problem. Plus a lot of it has already been done.
Look at stuff that already exists: HDR, focus stacking, stacking for noise reduction, fake bokeh, and augmented reality toolkits.
HDR modes take multiple pictures and smush 'em together to increase dynamic range. They already have to deal with things like minor movement during the multiple exposures, although at a smaller scale.
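Rough sketch of the kind of thing an HDR mode does under the hood, using OpenCV's alignment and exposure-fusion pieces (the file names here are made up, and a real phone pipeline is way more tuned than this):

```python
import cv2
import numpy as np

# Load a burst of differently-exposed frames (placeholder file names).
frames = [cv2.imread(p) for p in ["under.jpg", "mid.jpg", "over.jpg"]]

# Align the frames first -- your hand moves a little between shots.
# AlignMTB shifts images using median threshold bitmaps, which works
# even though the exposures are different.
cv2.createAlignMTB().process(frames, frames)

# Exposure fusion (Mertens) blends the well-exposed parts of each frame
# directly, no camera response curve or tone mapping needed.
fused = cv2.createMergeMertens().process(frames)

# Result is float in roughly [0, 1]; scale back to 8-bit to save it.
cv2.imwrite("hdr_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```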
Focus stacking and noise-reduction stacking do the same thing over more pictures, including working out "which of these pixels is the sharpest" and "which of these pixels is most likely to be the 'real' noise-free value we want for this spot".
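The gist of both, assuming the frames are already aligned (real pipelines do a registration pass first, and use fancier sharpness measures than a plain Laplacian):

```python
import cv2
import numpy as np

def focus_stack(aligned):
    """Per pixel, take the value from whichever frame is sharpest there."""
    stack = np.stack(aligned)                                  # (N, H, W, 3)
    # Sharpness proxy: magnitude of the Laplacian of each frame's grayscale.
    sharpness = np.stack([
        np.abs(cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F))
        for f in aligned
    ])                                                         # (N, H, W)
    best = np.argmax(sharpness, axis=0)                        # sharpest frame per pixel
    return np.take_along_axis(stack, best[None, ..., None], axis=0)[0]

def noise_stack(aligned):
    """Median across frames: random sensor noise cancels out, real detail stays."""
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)
```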
The fake bokeh modes already handle figuring out a depth map of the scene, and the newest ones can even do it with a single camera and some cleverness.
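Toy version of the idea, assuming you already have a normalized depth map from somewhere (dual cameras, dual-pixel autofocus, or a learned depth network); real modes vary the blur radius with depth and feather the subject edges instead of using a hard cutoff like this:

```python
import cv2
import numpy as np

def fake_bokeh(image, depth, focus_depth, threshold=0.1, blur_ksize=31):
    """Blur everything whose depth is far from the subject's depth.

    `depth` is assumed to be a per-pixel depth map normalized to [0, 1];
    `focus_depth` is the depth of whatever you tapped on to focus.
    """
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    # Mask of pixels close to the focus plane; keep those sharp.
    in_focus = (np.abs(depth - focus_depth) < threshold)[..., None]
    return np.where(in_focus, image, blurred)
```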
And AR software, like Apple's ARKit SDK, handles tracking the movement of the phone and linking up what you see through the camera with actual physical space, so that if you drop an augmented-reality whatsit at point (X,Y,Z) in real space, it stays at point (X,Y,Z) in real space even as you move the phone around and walk around it.
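The core math behind "it stays put" is just reprojecting a fixed world-space point through whatever camera pose the tracker estimates each frame. This is a bare-bones sketch of that math, not ARKit's actual API; the anchor coordinates, intrinsics, and pose convention are made up for illustration:

```python
import numpy as np

def project_anchor(anchor_world, camera_pose, K):
    """Project a fixed world-space anchor into the current camera frame.

    `camera_pose` is the 4x4 camera-to-world transform the tracker estimates
    every frame; `K` is the 3x3 intrinsic matrix. The anchor's world
    coordinates never change -- only the pose does, which is exactly why the
    virtual object appears pinned in place as you walk around it.
    """
    world_to_camera = np.linalg.inv(camera_pose)
    p = world_to_camera @ np.append(anchor_world, 1.0)   # world -> camera coords
    uv = K @ p[:3]
    return uv[:2] / uv[2]                                # pixel coordinates

# Hypothetical usage: an anchor 2 m straight ahead of the session-start camera,
# using the usual computer-vision convention (camera looks down +z).
anchor = np.array([0.0, 0.0, 2.0])
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
pose = np.eye(4)                         # first frame: camera at the world origin
print(project_anchor(anchor, pose, K))   # lands at the image center, (960, 540)
```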