Newswise — Do you waste time in the morning looking for your keys?

Try writing the word “KEYS” on a light switch you use every morning, and you might find them a little quicker.

That suggestion comes from memory research in psychological and brain sciences at Washington University in St. Louis showing that where someone looks can be guided by their recent interactions with the environment.

Our visual world is cluttered, complex and confusing. “We can’t fully process everything in a scene, so we have to pick and choose the parts of the scene we want to process more fully,” said Richard Abrams, professor of psychological and brain sciences in Arts & Sciences. “That is what we call ‘attention.’”

So then, how do we choose which parts of a scene deserve our attention?

“We’re more likely to direct our attention to things that match objects that we’ve interacted with,” Abrams said. The new research shows that this holds even when the objects match only in meaning, not in appearance.

Previous research has shown that our attention is biased toward objects that share basic features — such as color — with something we have recently seen.

For example, finding your keys on a red key chain would be easier if you had previously reached for a red apple than if you had chosen a yellow banana for your snack. This effect is called “priming.” Priming also occurs for items that are only conceptually related: the word “KEYS” doesn’t look like your keys, yet the facilitation still occurs.

And if we want to strengthen that bias? Perform an action while being primed with an image or, according to the newest research, with a word, and you’ll find your keys even faster.

“Things we act on are, by definition, ‘important’ because we’ve chosen to make an action,” Abrams said. Making an action may produce a signal in the brain that what you’re seeing is more important than it would be if you had merely observed it passively.

In the study, published in the journal Psychonomic Bulletin & Review by Abrams and Blaire J. Weidler of the University of Toronto, participants performed a pair of ostensibly unrelated tasks.

Seated at a computer, participants first saw the screen flash either the word “Go” or “No.” If the screen flashed “Go,” the participant was to press a button when the priming word (e.g., “KEYS”) appeared. If they initially saw “No,” however, they were told just to look at the word.

Next, participants searched a “scene” on the screen that contained two pictures. They were told to find a left- or right-pointing arrow and report which one was present, while ignoring arrows that pointed up or down. The arrows were superimposed on the pictures, although the content of the pictures was irrelevant to the task. Importantly, one picture always depicted the priming word (a picture of keys); the other showed a different, unrelated object.

Although the pictures were irrelevant to the arrow-finding task, on some trials the arrow appeared on the prime-matching picture (the keys), whereas on other trials it appeared on the unrelated picture.
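For readers who want to picture the two-phase trial structure concretely, here is a minimal sketch in Python of how such a trial list might be assembled. Everything in it is an illustrative assumption (the stimulus file names, the 50/50 condition split, the helper names); it is not the researchers' actual materials or software.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the two-phase design described above.
# Stimulus names and trial counts are illustrative assumptions only.

PRIME_WORD = "KEYS"            # word shown in the priming phase
MATCH_PICTURE = "keys.png"     # picture conceptually matching the prime
OTHER_PICTURE = "stapler.png"  # unrelated picture (hypothetical example)

@dataclass
class Trial:
    go_trial: bool         # True: press a button to the prime ("Go"); False: just look ("No")
    target_arrow: str      # "left" or "right" (the response-relevant arrow)
    target_on_match: bool  # True: the target arrow sits on the prime-matching picture

def build_trials(n: int, seed: int = 0) -> list[Trial]:
    """Cross the two factors of interest: action vs. no action during priming,
    and whether the search target lands on the prime-matching picture."""
    rng = random.Random(seed)
    return [
        Trial(
            go_trial=rng.random() < 0.5,
            target_arrow=rng.choice(["left", "right"]),
            target_on_match=rng.random() < 0.5,
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    for t in build_trials(4):
        phase1 = (f'"Go" -> press button when "{PRIME_WORD}" appears'
                  if t.go_trial else f'"No" -> just view "{PRIME_WORD}"')
        target_pic = MATCH_PICTURE if t.target_on_match else OTHER_PICTURE
        print(f"Phase 1: {phase1}; Phase 2: find the {t.target_arrow} arrow on {target_pic}")
```

In an analysis of such a design, the comparison of interest would be search speed when the arrow sat on the prime-matching picture versus the unrelated picture, examined separately for the “Go” (action) and “No” (no action) primes.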

Participants located the left or right arrow faster when it was on the picture of keys than when it was on the unrelated picture, as would be expected from the established priming effect.

The important novel finding of this research was that participants located the arrow faster still if the priming had involved an action — that earlier button press.

In practice, this priming could have a number of applications. For example, Abrams said, imagine priming baggage screeners with the word “KNIFE.” That obviously wouldn’t allow screeners to see something that was invisible, but, he said, “That might draw their attention to a knife in their visual field,” something that may have otherwise gone undetected in the visual clutter of a disorganized suitcase.

“In order for us to behave efficiently in the world, we have to make good choices about which of the many objects in a scene we are going to be processing,” Abrams said. “These experiments reveal one mechanism that helps us make those choices.”

Journal Link: Psychonomic Bulletin & Review (2018)