>>3020580
If an expert says a lens has 20 inches of DoF, they don't mean there's 20 inches where it will have perfect sharpness.
It means there are 20 inches where an image will be Acceptably Sharp.
Across 99.9% of those inches inside the DoF band the light still isn't perfectly aligned to land at exactly the same spot; it just doesn't have to hit Exactly the same spot. It's only when the light is significantly spread apart that we notice the blur, as details like a strand of hair become multiple times as wide, because the light that bounced off it got spread out by the lens shape and never got focused back together into one point.
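To put rough numbers on "acceptably sharp", here's a minimal thin-lens sketch in Python. The 50 mm f/8 lens, 2 m focus distance, and 0.03 mm blur-circle limit are my own illustrative assumptions, not anything from the post:

```python
# Rough thin-lens sketch of how defocus blur grows away from the focus plane.
# Assumed numbers (illustrative only): 50 mm lens at f/8, focused at 2 m,
# "acceptably sharp" = blur circle no wider than 0.03 mm on the sensor.

def blur_circle_mm(subject_mm, focus_mm=2000.0, f_mm=50.0, n=8.0):
    """Diameter of the blur circle on the sensor for a point at subject_mm."""
    aperture = f_mm / n                       # physical aperture diameter
    magnification = f_mm / (focus_mm - f_mm)  # image magnification at the focus plane
    return aperture * magnification * abs(subject_mm - focus_mm) / subject_mm

ACCEPTABLE = 0.03  # mm, a common full-frame circle-of-confusion limit

for d in (1600, 1800, 2000, 2200, 2600, 3200):
    c = blur_circle_mm(d)
    verdict = "sharp enough" if c <= ACCEPTABLE else "visibly blurred"
    print(f"{d/1000:.1f} m: blur {c*1000:5.1f} um -> {verdict}")
```

With those numbers, points for a fair stretch around the 2 m focus plane still come out "sharp enough" even though their rays never converge to a perfect point; only the nearer and farther distances cross the visible-blur threshold.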
Two light beams could hit slightly different parts of the sensor and still both be recorded as the exact same pixel; their intensities just get added together and that pixel reads brighter.
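A toy sketch of that binning, with a made-up 6 µm pixel pitch and made-up landing positions, just to show how two separate rays end up as one brighter pixel:

```python
# Toy illustration: two rays land 2 um apart, but on a sensor with 6 um pixels
# they fall into the same pixel bin, so their intensities simply add.

PIXEL_PITCH_UM = 6.0  # assumed pixel size; typical sensors are a few microns

def pixel_index(position_um):
    return int(position_um // PIXEL_PITCH_UM)

hits = [(97.0, 1.0), (99.0, 1.0)]  # (landing position in um, intensity)

pixels = {}
for pos, intensity in hits:
    idx = pixel_index(pos)
    pixels[idx] = pixels.get(idx, 0.0) + intensity

print(pixels)  # {16: 2.0} -> one pixel, twice as bright, no visible blur
```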
Or they could hit the same rod or cone in your eyeball even though they arrived slightly apart; below a certain separation it is physically impossible for you to tell them apart.
Then perceptually, the brain isn't going to differentiate between such microscopic variations once the image is in your head; it would be a massive waste of memory not to compress them away.
Essentially, past the hyperfocal distance, each far detail (like a strand of hair on a person a mile away) is packed into such a small angle (a tiny sliver of the view) that the light from that hair hits essentially the exact same small spot on the front of the lens and can't take vastly different paths through the lens that would defocus it. It lands on the same small area of the sensor and you get a low-resolution-but-not-blurred object.
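For reference, the usual hyperfocal-distance formula is H ≈ f²/(N·c) + f. A quick sketch with an assumed 0.03 mm circle of confusion (the lens/aperture combos are just examples):

```python
# Standard hyperfocal-distance formula: focus at H and everything from roughly
# H/2 out to infinity stays within the acceptable blur circle.
# Assumed circle of confusion: 0.03 mm (a common full-frame value).

def hyperfocal_m(f_mm, n, coc_mm=0.03):
    h_mm = f_mm * f_mm / (n * coc_mm) + f_mm
    return h_mm / 1000.0

for f_mm, n in [(24, 8), (50, 8), (50, 2.8), (200, 8)]:
    print(f"{f_mm} mm at f/{n}: hyperfocal ~ {hyperfocal_m(f_mm, n):.1f} m")
```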
The downside is that even though nothing beyond the hyperfocal distance is blurred by angles, it is still limited by sensor/display resolution: your 20-megapixel camera still won't have the pixels to produce a million-pixel portrait of someone a mile away. Instead of an in-focus shot of a million hairs, they all combine into a head-shaped blob, summing everything together.
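A back-of-the-envelope sketch of that blob, assuming a 50 mm lens on a 20-megapixel full-frame body (all numbers here are my own assumptions, not from the post):

```python
# How many pixels does a ~25 cm head cover from a mile away?
# Assumed: 50 mm lens, full-frame sensor 36 mm wide, ~20 MP (about 5470 x 3650).

HEAD_M      = 0.25
DISTANCE_M  = 1609.0   # one mile
FOCAL_MM    = 50.0
SENSOR_W_MM = 36.0
PIXELS_WIDE = 5470

image_size_mm  = FOCAL_MM * HEAD_M / DISTANCE_M   # size of the head on the sensor
pixel_pitch_mm = SENSOR_W_MM / PIXELS_WIDE
pixels_across  = image_size_mm / pixel_pitch_mm

print(f"head on sensor: {image_size_mm*1000:.1f} um, about {pixels_across:.1f} pixels across")
# -> roughly 8 um, barely more than one pixel: every hair sums into one blob
```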