Well, that was awkward—and here’s why

The awkward greeting. We’ve all seen it. Many of us have participated in it. For some, it may be a daily occurrence. It happens when you go in for a high-five but the other person fist bumps. Or when you wave at someone who is actually waving to a person behind you. Or when a handshake involves a sequence of hand movements that one party isn’t privy to. These are just a few of the ways a greeting can get awkward.

We all know it when we see it, but what really makes an interaction awkward? Given its subjective nature, awkwardness can be difficult to define. Akila Kadambi and colleagues set out to answer this question in a study published in the Psychonomic Society’s journal Attention, Perception, & Psychophysics earlier this year.

In their first experiment, they wanted to see whether people generally agree on what counts as awkward. To test this, they showed 30 participants 34 YouTube videos of greeting behaviors (without sound) and asked them to categorize each interaction as “awkward” or “not awkward.” The researchers used majority rule: if more than 50% of participants classified a video as awkward, they deemed it officially so. Agreement among the 30 participants was high (reliability coefficient r = 0.85). On this basis, they classified 24 videos as “awkward” and 10 as “natural.”
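The majority-rule step can be sketched in a few lines of code. This is an illustration only: the video IDs and votes below are invented, and the study itself used 30 raters and 34 videos.

```python
def classify_videos(responses):
    """Label each video by majority rule.

    responses: dict mapping a video id to a list of 0/1 votes
    (1 = the rater judged the clip "awkward").
    A video is labeled "awkward" if more than 50% of raters voted 1.
    """
    labels = {}
    for video, votes in responses.items():
        proportion = sum(votes) / len(votes)
        labels[video] = "awkward" if proportion > 0.5 else "natural"
    return labels

# Toy example with five hypothetical raters per clip
example = {
    "clip_01": [1, 1, 1, 0, 1],  # 4 of 5 raters -> "awkward"
    "clip_02": [0, 0, 1, 0, 0],  # 1 of 5 raters -> "natural"
}
print(classify_videos(example))
```

A tie (exactly 50%) falls on the “natural” side here because the rule requires *more than* half the votes; the paper’s 24/10 split suggests no video sat exactly on the boundary.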

When participants identified a video as awkward, they were then asked to describe why. This was an open-ended description task, and participants could freely explain what was happening in the encounters. Their text responses were entered into the online service Textalyser, which returned the 200 most frequent words across all responses. Kadambi and colleagues formed two subsets from this frequent-word list: motor-related words (e.g., “pull,” “grab,” “toward”) and social-related words (e.g., “try,” “want,” “confused”). They found that the number of motor words for a given video was significantly correlated with the proportion of participants who identified that video as awkward (r = .50, p = .012, observed power = .863), but there was no such relationship for social words. Motor cues, then, seem to have a strong influence on perceiving a situation as awkward.
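For readers who want to see what that correlation computes, here is a minimal Pearson correlation in plain Python. The per-video motor-word counts and awkwardness proportions below are invented toy data, not the study’s values (which yielded r = .50).

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: motor-word count per video (x) vs. the
# proportion of raters who called that video awkward (y)
motor_counts = [2, 5, 9, 14, 20]
awkward_prop = [0.10, 0.35, 0.50, 0.70, 0.95]
print(round(pearson_r(motor_counts, awkward_prop), 2))
```

A positive r, as in the study, means videos that elicited more motor vocabulary were also the ones more raters judged awkward; it says nothing about causation.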

The authors designed the word cloud below to display the motor-related and social-related words from the participant descriptions.

In their second experiment, Kadambi and colleagues altered the visual display of the same YouTube videos to determine the most salient visual cues that prompt onlookers to characterize an interaction as awkward. For this, they devised three different display types.

Patch display: This was a heavily pixelated filter that blurred the visual scene. Of the three display types, this version retained the most contextual information (i.e., discernible human figures and objects in the background).

Body display: This version displayed human forms against a black background. They used a deep learning model to highlight certain body regions (head, torso, upper arm, lower arm, upper leg, lower leg).

Skeleton display: This display showed stick figure versions of people against a black background. They used an algorithm to locate key body joints and extract kinematic movement from the videos.

Here’s an example of one video screenshot in all four versions: raw format, patch display, body display, and skeleton display.

In this experiment, the 66 participants were each assigned to one of the three altered displays. They watched all 34 videos in a random order and rated each one on an awkwardness scale from 1 (natural) to 6 (awkward). The ratings from this experiment were significantly correlated with the proportion of awkward responses from Experiment 1. In other words, participants generally agreed about awkwardness regardless of which display they saw.

Further analysis revealed that the videos with natural interactions were rated similarly across all display types. However, there was a key difference found in the videos with awkward interactions. The patch display yielded significantly higher awkwardness ratings than the body or skeleton displays. This indicates that the human characteristics and scene background may have a strong influence on determining awkwardness, as this information was available in the patch display but not the other two. The mean ratings by condition are presented in the graph below.

This pair of experiments draws a connection between lower-level perception and higher-level social judgments. That is, the kinematics of human movement can be used to determine whether an interaction is socially awkward. Raters consistently identified interactions as awkward even when the visual presentation was stripped to the bare bones of the skeleton display, where only kinematic movements were visible. Since the patch display yielded significantly higher awkwardness ratings, contextual information evidently also plays a role in this judgment. Furthermore, raters used two types of words when explaining awkward situations, motor descriptors and social descriptors, with motor descriptors carrying more weight. Taken together, this suggests that the perception of awkwardness relies on principles of human kinematics coupled with contextual cues.

The findings of this paper demonstrate that motor coordination is linked to social coordination when establishing awkwardness. Greeting behaviors are like a dance, and unfortunately, this dance is not choreographed. When two or more people engage in this kind of improvised form, there is bound to be some discoordination in their movements. Now we see that this lack of coordination is what makes someone say: “awkward….”

Psychonomic Society article featured in this post:

Kadambi, A., Ichien, N., Qiu, S., & Lu, H. (2020). Understanding the visual perception of awkward body movements: How interactions go awry. Attention, Perception, & Psychophysics. https://doi.org/10.3758/s13414-019-01948-5


Author

  • Brett Myers is an Assistant Professor in the Department of Communication Sciences and Disorders at the University of Utah. He received his doctorate from Vanderbilt University, where he studied with Duane Watson and Reyna Gordon. His research investigates planning processes during speech production, including parameters related to prosody, and their role in neural models of motor speech control.