You can practically smell the toast wafting from the kitchen. Over in the corner, you spot someone grabbing a butter knife. It's almost instinctive: you can predict what's coming next.
Your mind figures that the slice of bread will get some butter, and, well, it’s usually right. This kind of automatic reasoning helps keep situations from feeling chaotic.
This quick anticipation relies on a group of brain regions known as the action observation network (AON). This network lights up when we see others reach for, grab, or interact with objects.
Research, mostly involving brief video clips, has mapped out the basics of this network.
But in real life, actions rarely come in such short snippets. More often, they play out in a full sequence, layered with intention. This difference led to an intriguing new study.
Action observation network in the brain
A team from the Netherlands Institute for Neuroscience, led by Christian Keysers and Valeria Gazzola, took on this project.
Keysers succinctly states: “What we would do next becomes what our brain sees.” It’s a good reminder that prediction is central to how we perceive the world.
Previous studies suggested that information flows in just one direction: visual areas send information to parietal and premotor hubs, which plan the next action. This new study asked whether that flow reverses when we can anticipate what happens next.
To explore this, the researchers filmed everyday scenes, like making a sandwich or folding a shirt, in two versions.
In the logical version, actions unfolded in a sensible order; in the scrambled version, those same clips were mixed up. Participants viewed both while their brain activity was monitored.
Some of the participants were epilepsy patients who already had intracranial electrodes implanted, which allowed high-precision recording of electrical signals deep in the cortex.
What you and your brain see
When the actions made sense, the brain reacted differently than the one-way account predicts: feedback signals moved from higher motor regions back to the sensory cortex.
This “top-down” influence quieted the visual areas, almost as if the brain decided it could ease up since the next action was so predictable.
Conversely, the scrambled scenes forced the cortex to rely solely on incoming visual information, reinstating the traditional feed-forward flow from visual to motor regions.
The shift was most pronounced in the premotor cortex, an area known for preparing movements. It lit up first when the actions unfolded logically, and electrical rhythms then flowed back to regions involved in touch and vision.
The finding suggests that motor memories—like how we slice a roll—ready the brain for what our eyes are about to encounter.
Brain uses memory to ‘see’
Gazzola explains the reversal: “Now, information was actually flowing from the premotor regions, that know how we prepare breakfast ourselves, down to the parietal cortex, and suppressed activity in the visual cortex.”
She adds, “It’s as if they stopped seeing with their eyes and began to see what they would have done themselves.”
This perspective suggests that in familiar situations, how we perceive actions might rely more on what we already know rather than just what we see.
This aligns well with the concept of predictive coding, which holds that the brain continually compares its expectations against incoming information and issues error signals when the two don't match.
By demonstrating this mechanism in natural sequences, the research bolsters the notion that prediction isn’t just a feature for critical moments—it’s a default mode integrated into our daily interactions.
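For readers who want to see that comparison loop spelled out, here is a minimal toy sketch in Python, purely for illustration. It is not the study's model, and the numbers are invented: a running prediction is checked against each new input, the mismatch becomes an error signal, and the prediction is nudged accordingly. When the input stream is predictable, the error signal quickly dies away, a loose computational analogue of the quieted visual cortex.

```python
# Toy predictive-coding loop (illustration only, not the study's model).
# "prediction" stands in for a top-down expectation; "error" is the
# mismatch signal sent when input and expectation differ.

def surprise_per_step(inputs, learning_rate=0.5):
    prediction = 0.0
    errors = []
    for observed in inputs:
        error = observed - prediction          # compare expectation to input
        prediction += learning_rate * error    # nudge the expectation
        errors.append(abs(error))              # track how "surprised" we were
    return errors

predictable = [3, 3, 3, 3, 3, 3]   # a routine, logical sequence
scrambled   = [5, 1, 8, 2, 7, 3]   # the same ingredients, shuffled

for label, stream in [("predictable", predictable), ("scrambled", scrambled)]:
    errs = surprise_per_step(stream)
    print(f"{label}: total error = {sum(errs):.2f}")

# The predictable stream yields a far smaller total error: less correction
# work is needed once expectations match reality.
```

In this toy version, the predictable stream settles to near-zero error after a few steps, while the scrambled one keeps generating large corrections, echoing, very roughly, why predictable scenes let the brain ease up.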
Efficiency through feedback loops
Quieting the visual cortex during routine tasks might sound risky, but it appears to be a way to save energy and sharpen understanding.
Trusting its predictions, the brain cuts back on unnecessary sensory checks, allowing it to allocate resources for unexpected events—like a sudden ingredient change.
Electroencephalography and functional MRI data from the study pointed to lower metabolic demand when actions were predictable, reinforcing the idea that knowing what comes next lessens the brain's workload.
These feedback loops also clarify how we keep track of others in noisy or cluttered environments.
Relying on motor memories, our brains can weave together fragmented views into a cohesive scene—a crucial ability for teamwork on busy streets or during dinner conversations.
Why does this matter?
Grasping how the motor system influences perception might refine rehabilitation approaches post-stroke. Training methods that focus on anticipating movement sequences—rather than merely mimicking single motions—could more effectively rebuild damaged brain pathways.
The findings also inspire innovators developing assistive robots and augmented-reality glasses. Systems that can predict human intent just a moment in advance can navigate more safely, hand over tools correctly, or identify hazards before they become accidents.
Beyond healthcare and technology, this research invites a fresh appreciation for the unrecognized work our brains do. When we pass the salt or catch a thrown set of keys, intricate layers of neural forecasting smooth out the exchanges.
This detailed mapping shows that much of what we "see" is shaped by experience rather than strictly by what our eyes take in.
Future studies on how the brain sees
Upcoming studies will determine if this feedback pattern appears during more complex social interactions—like playing music together, learning a new sport, or interpreting facial expressions in fast-paced chats.
If motor-based prediction proves significant in those areas, training programs that broaden a person’s range of movements could also enhance perceptual abilities.
For now, the main takeaway is clear: when actions play out in a familiar way, the brain often lets memory take the lead. This shortcut keeps life flowing seamlessly, one well-timed prediction at a time.
The full study is published in the journal Cell Reports.