Ray-CubeAce
Difficult for me to say, but I suspect it's an offshoot of digitizing animation characters from live-performance actors in the gaming and film industries. As in most instances, people judge how sharply the eyes are in focus, and eye colouring differs so much that some colours would be better than others from a shading point of view. I would imagine it has more to do with the triangular spacing between, say, the eyes and mouth, with possibly additional parameters depending on how much spare processing power there is to be had on the Pocket.
DJI has walked a very fine line with this.
Using such a wide aperture on such a small sensor is a very brave move on their part.
It does produce a slight background blur, but using such a small sensor and a relatively wide viewing angle makes tracking and keeping focus much more difficult than it would be with a longer focal length.
The wider the viewing angle, the more difficult it is to get good separation between foreground and background. That makes the contrast around whatever object it is supposed to be tracking weaker than it would be with a much narrower field of view, where depth is easier to separate. Phones manage it when taking still images by combining several shots and selectively blending them together, while when shooting video they tend to take advantage of their narrower apertures to keep both background and foreground relatively sharp, making it look as if they are doing a better job when they aren't.
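To put rough numbers on that separation argument, here's a little depth-of-field sketch using the standard hyperfocal approximation. The figures are illustrative assumptions, not official specs: roughly a 1/2.3" sensor with a ~4.6 mm lens (about a 26 mm equivalent) at f/2.0, against a full-frame sensor framed the same way with a 26 mm lens.

```python
# Illustrative depth-of-field comparison (approximate, assumed figures):
# a small sensor with a short wide lens vs a large sensor with a longer
# lens at the same f-number and the same framing.

def dof_limits(f_mm, n, coc_mm, subject_mm):
    """Near/far limits of acceptable sharpness (hyperfocal approximation)."""
    h = f_mm ** 2 / (n * coc_mm) + f_mm  # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    if subject_mm >= h:
        far = float("inf")
    else:
        far = subject_mm * (h - f_mm) / (h - subject_mm)
    return near, far

subject = 2000.0  # subject 2 m away, in mm

# ~1/2.3" sensor: ~4.6 mm lens, f/2.0, circle of confusion ~0.005 mm
small = dof_limits(4.6, 2.0, 0.005, subject)
# Full frame at the same framing: 26 mm lens, f/2.0, CoC ~0.03 mm
large = dof_limits(26.0, 2.0, 0.03, subject)

print(f"small sensor DoF: {small[0]:.0f} mm to {small[1]:.0f} mm")
print(f"large sensor DoF: {large[0]:.0f} mm to {large[1]:.0f} mm")
```

The small sensor keeps everything from about a metre to tens of metres acceptably sharp, while the larger sensor holds only roughly 1.7 m to 2.4 m, which is why the background barely separates on the small camera even at f/2.0.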
I'm surprised the Osmo Pocket does as good a job as it is doing.
It's the same with the focus-breathing problem some seem to be suffering from. They try focusing on a relatively bland section of the frame with no discernible strong contrast, and the result is that the camera hunts slightly to confirm it is focusing correctly. This is easier to control with larger sensors, or with smaller apertures on longer focal-length lenses, but difficult to achieve with wide-angle lenses with large apertures on small sensors.
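That hunting falls out naturally from how contrast-detection autofocus works. Here's a toy sketch (my own simplification, not any camera's actual firmware): the camera nudges focus back and forth and keeps the position where a sharpness metric peaks, but on a bland patch the metric is nearly flat, so no trial move ever clears the noise threshold and the search just dithers.

```python
# Toy contrast-detection autofocus (hypothetical, heavily simplified).
# `scene` scales how much contrast there is to measure; the true focal
# plane in this model sits at focus position 5.0.

def sharpness(scene, focus):
    # Stand-in metric: contrast peaks at the true focal plane and
    # falls off smoothly on either side.
    return scene / (1.0 + (focus - 5.0) ** 2)

def contrast_af(scene, focus=0.0, step=1.0, iters=20):
    best = sharpness(scene, focus)
    for _ in range(iters):
        trial = focus + step
        s = sharpness(scene, trial)
        if s > best + 0.01:        # needs a measurable gain to commit
            focus, best = trial, s
        else:
            step = -step * 0.5     # reverse and shrink: visible "hunting"
    return focus

print(contrast_af(scene=1.0))    # strong contrast: settles at the peak, 5.0
print(contrast_af(scene=0.001))  # bland scene: gains stay below the noise
                                 # floor, so focus never leaves 0.0
```

A bigger sensor or a contrastier subject raises the whole sharpness curve, so the same trial steps produce clear gains and the loop converges instead of dithering.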
I suspect this is why active tracking asks for a large box to be drawn around the whole object before it locks on. It needs some sharp edges of contrast to lock onto. If the subject then wanders close to another object with stronger contrast, the camera could switch to that object if it comes within the target box area.
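That failure mode is easy to demonstrate with a toy tracker (again, my own illustration, not DJI's algorithm): lock onto the highest-contrast spot inside the box, and watch the lock jump when a stronger edge drifts into the same box.

```python
# Toy 1-D tracker (hypothetical): locks onto the highest local contrast
# inside its search box, so a stronger-contrast distractor entering the
# box steals the lock.

def local_contrast(signal, i, radius=2):
    window = signal[max(0, i - radius): i + radius + 1]
    return max(window) - min(window)

def strongest_edge(signal, box):
    lo, hi = box
    return max(range(lo, hi), key=lambda i: local_contrast(signal, i))

scene = [0] * 30
scene[10] = 3                       # tracked subject: moderate contrast
print(strongest_edge(scene, (5, 15)))  # locks in the subject's neighbourhood

scene[13] = 9                       # stronger-contrast object enters the box
print(strongest_edge(scene, (5, 15)))  # lock shifts toward the distractor
```

Requiring a big box full of the subject's own edges before locking on gives the tracker more of the right contrast to hold, but it can't fully prevent a stronger edge inside the box from winning.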
I'm not saying these things can't be improved upon, but technically it is harder to achieve the more you miniaturize the equipment. This is also why mirrorless cameras are now incorporating phase-detection pixels within the image sensor: phase detection is generally better and faster, and stays more capable as lighting conditions become flatter or dimmer. I can only hope it will be possible in the near future to incorporate some of that technology into these smaller sensors.