#time4action in its Ascendancy: It’s Time for the Ball

This #time4action special issue of Attention, Perception, & Psychophysics, edited by Joo-Hyun Song and Timothy Welsh, is a tour de force for which they should be applauded. To narrow my comments enough to fit in this space, I will focus mostly on the article by David Rosenbaum and Iman Feghhi, "The Time for Action is at Hand." Whereas Paul Cisek's contribution beautifully spans its purview across millions of years of biological evolution of motor activity, Rosenbaum and Feghhi's article masterfully spans its purview across millions of minutes of cultural evolution of the scientific study of motor activity.

Both articles perform a kind of informal time series analysis that reveals a steady progression from simple forms of motor processing to more complex ones. Rather than simply treating action as the consequent of perceptual processing, Rosenbaum and Feghhi (and many others in this special issue) show us that the scientific study of motor activity has revealed that action can also be the antecedent of perceptual processing.

Rosenbaum and Feghhi point out that when the late great Dick Neisser wrote the book on Cognitive Psychology back in 1967, he placed a marked focus on "information intake." That focus points to a clear bias toward one particular portion (and direction) of the complex flow of information between an organism and its environment; namely, how the organism absorbs information. It is worth noting that Neisser wrote that book before he got to know J. J. Gibson at Cornell. At the time, Gibson was building his theoretical framework of ecological perception, in which the focus of psychology was not on the brain or on action, but instead on the interface between the organism and the environment. After spending a few years in an office near Gibson's in Cornell's psychology department, Dick Neisser wrote a second book. In his Cognition and Reality (1976), Neisser recanted much of his previous enthusiasm for mechanistic feed-forward information-processing models, and imported principles from Gibson's ecological perception into cognitive psychology.

The result was the formation of what Neisser called the perceptual cycle, now often referred to as the “perception-action cycle.” In the laboratory, this term may be relatively appropriate since most laboratory tasks require the participant to wait until a stimulus is presented before they act. However, in real life and in ecologically valid environments, organisms often initiate their actions without waiting for a triggering stimulus. Therefore, calling it the “action-perception cycle” might be more appropriate―wherein action tends to be primary and is significantly responsible for many of the perceptual events that take place.

Take eye movements, for example. You make 3-4 of them per second (actually about 4-5 per second while reading), and that’s just counting saccades. Add to that the smooth pursuit eye movements for following a moving object, and the fixational pursuit movements for fixating a stationary object while your head moves, and it becomes clear that your eyes are moving practically all the time!

Sometimes an object does something that catches your attention and you make an eye movement to it, thus showing how perception causes action. But more often, objects in your field of view are sitting there motionless, and your goals and intentions are what drive your eye movements to change your perceptual input from one foveated object to another foveated object. That’s action causing perception―at least 100,000 times a day.

In keeping with most of this special issue, Rosenbaum and Feghhi focus not on eye movements but on reaching and grasping movements. As it turns out, the movement of your hands causes changes in your perception as well. And as a highly common method for externalizing our intentions, those actions can be pretty darn interesting in and of themselves. Rosenbaum and Feghhi begin by revisiting an old experiment in which Rosenbaum and colleagues first introduced "finger fumblers" (fast patterns with the fingers that can get tripped up just like a tongue twister). They showed that preparation for each next movement is more efficient when the previous movement can essentially be repeated with only a single parameter changed, compared to when the next movement must be prepared entirely from scratch.

Rosenbaum recounts how it was a violin exercise that first inspired him to explore that parameter remapping effect. One finds that different movements often blend into one another over time to constitute one action, and this blending is smoother when fewer parameters need to be modified from one movement to the next. By carefully observing ecologically valid complex actions that involve combinations of movements, Rosenbaum and Feghhi artfully teach us how you can discover action phenomena that would never have been imagined while studying simple movements in the laboratory alone.

In fact, throughout the article, this backdrop of everyday experiences outside the lab is shown to be replete with opportunities to explore action and perception. The lesson from these observations is to always be on the lookout for ecologically valid scenarios in which the action-perception cycle is doing something interesting, something that can be tweaked just enough to fit inside the constraints of the laboratory. For example, Rosenbaum regales us with the story of how repeatedly reaching for olives at a dinner party inspired him to develop the hand-path priming effect (Jax & Rosenbaum, 2007), whereby a reaching path can be primed for future use in a manner not unlike semantic or syntactic priming. He also tells of watching a waiter grasp an upside-down glass with an upside-down grasp, so that upon flipping it upright, the grasp too was now upright. This end-state comfort effect was then explored by Rosenbaum et al. (1990), who found that participants consistently grasped objects in a fashion that afforded the most comfortable grip at the end of the maneuver.

One of the lessons to be taken from this work is that, while biomechanics and kinematics need to be taken into account to understand motor movement, those hard constraints should be accompanied by soft constraints that influence the central tendencies of motor movement. Rosenbaum and Feghhi acknowledge that understanding the hard constraints can tell us what kinds of actions are possible in the first place, but the soft constraints are also needed so we can explain why it is typically only a subset of those possible actions that are actually used―especially in conjunction with perception.

Rosenbaum and Feghhi are, of course, not alone in noticing this important role of action in perception. If one takes the action-perception cycle seriously, as a continuous loop of information flow and transformation, then it stands to reason that attempting to identify elements in that loop that contain purely perceptual information or purely action information should be impossible. Perception and action are continuously blurred into one another in that cycle.

And that is exactly what Bernhard Hommel has been proposing since his 2001 article introducing the Theory of Event Coding. In Hommel's contribution to the present special issue, he updates his model to accommodate the mountains of recent evidence. Whatever an action representation is, it must be (at least partly) encoded in terms of its perceptual results. This is crucial for modifying an action midstream due to perceptual feedback, for learning how to optimize those actions from trial to trial, and for coordinating joint actions with others (e.g., Knoblich & Jordan, 2003).

In fact, this special issue contains quite a few elegant examples of action and perception interacting with one another. For instance, Agauas and Thomas demonstrate that your handshape in peripersonal space influences your visual perception and change detection (for precursors, see Abrams, Davoli, Du, Knapp, & Paul, 2008; Reed, Grubb, & Steele, 2006; Thomas, 2017; Tseng & Bridgeman, 2011). Halvorson, Bushinski, and Hilverman show that hand gestures are integral to the verbal memory encoding process, and Meghanathan, Nikolaev, and van Leeuwen show that eye movements are integral to the visual memory encoding process (see also Richardson et al., 2009, and even Pärnamets et al., 2015). And Smith, Davoli, Knapp, and Abrams show us that even just standing up can give you improved cognitive control!

Clearly, a wide variety of human movement systems, such as postural control, hand gestures, reaching movements and eye movements all have direct and immediate effects on perception (e.g., Witt, 2011). They are not merely output systems that slavishly follow instructions from cognition. They involve feedback processes (as part of the internal neurophysiology and also as part of the external action-perception cycle) that allow partially-prepared and partially-executed actions to influence perception.

The “partially” part there is crucial.

Allow me to return to J. J. Gibson, who showed us decades ago that, in ecologically valid real-world situations, there are no such things as stimuli and responses. The environment almost never delivers an individuated stimulus to you and then quietly waits for your response. By the same token, an organism doesn't generate an individuated response and then quietly wait for the next stimulus. Just as one pulse in the flow of stimulation often blends partially into the next pulse, one pulse in motor output often blends partially into the next pulse. If we didn't partially blend our movements, then we would all look like the robot dance from the 1970s, moving only one joint at a time. When we are partway through executing one movement, we are routinely partly beginning (or at least preparing) other movements that are partners of that movement, as Rosenbaum and Feghhi clearly show. As a result, there are often no discrete segmentations to be found between a set of movements, and thus no discrete point in time at which an action is complete and can finally influence the next perceptible event. Rather, in most realistic perception-action scenarios, the continuous flow of motor output is constantly influencing the continuous flow of sensory input (and vice versa).

This changes the calculus.

No longer can we pretend to label one stimulus with a formal logical symbol, connect it to a resulting response with its own formal logical symbol, and then connect that response to the next stimulus, and so on. Rather than a causal “chain” of events, where the chain links can be individuated for logical computational analysis, it is a causal “flow” of events―where the best tool for analysis just might be dynamical systems theory (e.g., Kelso, 1994; Port & Van Gelder, 1996; Smith & Thelen, 1994; Spivey, 2007; Turvey, 2018).

That temporal continuity that blurs one movement into the next, and one perception into the next, is now accompanied by a spatial continuity that blurs one sensorimotor mechanism into those neighboring it. For decades now, the neural and cognitive sciences have been gradually moving from the internal to the external: from "grandmother cells" (Barlow, 1972) to population codes (Georgopoulos, Schwartz, & Kettner, 1986) to networks of brain regions (Churchland & Sejnowski, 1992) to embodied cognition (Barsalou, 2008) to extended cognition (Clark, 2008) to group cognition (Fusaroli et al., 2012) and even to "Markov blankets all the way down" (Kirchhoff et al., 2018). Rather than zooming in deeper and deeper to find some kernel of cognitive origins, the scientific evidence has been pulling us toward zooming out wider and wider to find out what cognition is made of. In my upcoming book, Who You Are (2020), I guide the reader through a thick forest of scientific findings that gradually encourage you to include more and more stuff (e.g., your brain, your body's actions, the tools and toys around you, and even other people) as part of who you are. And in his recent book, Lectures on Perception: An Ecological Perspective (2018), Michael Turvey outlines how life forms of all kinds exhibit exactly this bidirectional relationship between information-intake and information-outflow. The ecological Dick Neisser would be proud.

The articles in this #time4action special issue provide some of the most up-to-date findings in support of this process of expanding the purview of what makes the mind. It’s not just the information taken in by the brain, it’s also the information expressed out by the brain and body. In 2005, when Rosenbaum called action and motor movement “the Cinderella of psychology,” he was right. It’s been 15 years since then, and many cognitive scientists have been theoretically embracing embodied cognition (e.g., Barsalou, 2008; Shapiro, 2019; Spivey, 2007) but not really learning enough about the physiology, the kinematics, and the dynamics of motor movement to allow them to study it properly. Rosenbaum and Feghhi make a compelling case that “the time is at hand” for the neural and cognitive sciences to fully embrace the study of motor movement as part of cognition. I say: If the golden slipper fits, wear it, girl. It’s time for the ball.
