Newswise — The research, published on May 8 in PLOS ONE, investigated the computations the human brain uses to judge the size of objects in our surroundings.

The study, led by Professor Tim Meese of the School of Optometry at Aston University and Dr Daniel Baker of the Department of Psychology at the University of York, sheds further light on how our visual system can exploit 'defocus blur' to infer perceptual scale, though it does so in an unsophisticated manner.

It is well established that to judge an object's size from the size of its retinal image, the visual system must estimate the object's distance. The retinal image contains many pictorial cues, such as linear perspective, that help the system determine the relative sizes of objects. But to determine an object's absolute size, the system must also know the spatial scale of the scene.
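The size-from-distance reasoning described above can be summarized by the standard size-distance invariance approximation, a textbook relation from vision science rather than a formula quoted from this study. For small angles:

```latex
% Size-distance invariance (small-angle approximation):
% S = inferred physical size, \theta = retinal (angular) size in radians,
% D = estimated viewing distance.
S \approx \theta \, D
```

Because the angular size θ alone is ambiguous, any error in the estimated distance D translates directly into an error in perceived size S, which is why the system needs an independent source of information about spatial scale.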

The visual system can accomplish this by exploiting defocus blur, akin to the blurred regions of a photograph that lie outside a camera's depth of focus. Other researchers have extensively studied the mathematics behind this cue, but this study asked a different question: does human vision actually use that mathematics?
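The geometric optics behind this cue can be summarized by the standard thin-lens blur-circle relation, a textbook formula from the earlier literature rather than one quoted from this study:

```latex
% Diameter c of the blur circle for an object at distance d_o,
% imaged by a lens of focal length f and aperture diameter A
% that is focused at distance d_f:
c = A \, \frac{f}{d_f - f} \cdot \frac{\lvert d_o - d_f \rvert}{d_o}
```

Objects farther from the focus distance are imaged with larger blur circles, so the pattern of blur across a scene carries information about absolute distances, and hence about scale.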

The researchers showed participants pairs of photographs: full-scale railway scenes treated with various forms of artificial blur, and small-scale model railway scenes shot with a long exposure and a small aperture to minimize defocus blur. The task was to identify which photograph in each pair showed the genuine full-scale scene.

Interestingly, when the artificial blur was oriented with the ground plane (the horizontal surface on which the viewer stands) in the full-scale scenes, participants were fooled into judging the small models to be the full-scale scenes. Remarkably, this did not require realistic gradients of blur: simple, uniform bands of blur at the top and bottom of the photographs produced almost identical miniaturization effects.
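The uniform-band manipulation is simple enough to sketch in code. The following is a hypothetical illustration, not the authors' stimulus-generation code: it blurs only the top and bottom strips of a grayscale image (represented as a list of pixel rows), leaving the central band sharp, which mimics the shallow depth of field of a close-up miniature.

```python
def box_blur_row(row, radius=2):
    """Horizontally box-blur one row of pixel intensities."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def add_blur_bands(image, band_frac=0.3, radius=2):
    """Blur the top and bottom band_frac of rows; keep the middle sharp."""
    h = len(image)
    band = int(h * band_frac)
    return [box_blur_row(row, radius) if (y < band or y >= h - band)
            else list(row)
            for y, row in enumerate(image)]

# Example: a 10-row image with a sharp vertical edge in every row.
img = [[0] * 4 + [255] * 4 for _ in range(10)]
result = add_blur_bands(img, band_frac=0.3, radius=2)
# Rows 0-2 and 7-9 now have a softened edge; rows 3-6 are untouched.
```

A real stimulus pipeline would apply a Gaussian blur in two dimensions to photographic images, but the principle is the same: the blur is spatially uniform within each band rather than following the true depth-dependent blur gradient of the scene.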

Professor Tim Meese, professor of vision science at Aston University, said: "Our results suggest that human vision can use defocus blur to deduce perceptual scale, but it does so in an unsophisticated manner – more of a rule of thumb than a precise analysis. In general, our discoveries offer fresh understanding into the computational mechanisms employed by the human brain when making perceptual judgments about our relationship with the outside world."

Dr Daniel Baker, a senior lecturer in psychology at the University of York, said: "These results show that our ability to perceive size is not flawless and can be impacted by other aspects of a scene. It also emphasizes the extraordinary adaptability of the visual system, which may be relevant in comprehending the computational principles that underlie our perception of the world, for instance when assessing the size and distance of potential hazards while driving."

Journal Link: PLOS ONE
