inter/face-to-face
a speculative project on how we interact with screens in online learning
“Our love for the digital interface is out of control. And our obsession with it is ruining the future of innovation.”
— Golden Krishna
June 25, 2040
It’s my 44th birthday. And finally, I’ve completed my magnum opus, the thing that will let my name live forever in the field of science. The Quantifier. It’s a device that will let us humans enter the digital space. Literally, we are saving humanity from extinction.
I just heard on the news that, on top of climate change, Professor Motohiko Murakami of the National Space Agency has decided to drop the idea of space exploration and instead find a more sustainable place for humans. Murakami admitted that we are nearing the end. The earth will die, and humans will be doomed as a natural consequence, because the earth’s core is cooling faster than expected. Earth will soon be a wasteland.
This is what I’ve been telling them from the beginning. We need to survive. And the way is not to explore outer space but the digital space. I THINK I’VE FOUND A WAY TO BE FREE.
What a happy way to end this day: I cooked my favorite meal and shared it with my wife. We spent the whole evening sharing stories, laughter, and worries.
And by the way, she doesn’t want me to pursue this project.
I still have time to persuade her to be pixelized and join me in the digital world before the Great Decision happens, when every person on earth will decide whether to stay and suffer the wrath of nature or move to the digital earth to perpetuate the human species.
January 11, 2050
[start]
This is how it feels to be in the digital earth. I feel like… no, I feel…
data
[end]
August 11, 2122
[start]
[humanizing conversation] [special code]
This note is coded in a specialized form to hide it from the AIs.
To all pixelized humans, I am sorry. In my hopes of advancing technology and designing online tools to help our species survive, I did not consider the consequences: that we are simply amusing ourselves to death. If there is one thing in which I consider myself successful, it is keeping our virtual community alive — through manageable human stresses, interaction, inclusion, and genuine conversations with one another. Things may have changed over the years, technology may advance, and we may get old, but what makes us human will always be the same. And this will be our secret that no AI could copy or learn — the feeling of freedom.
[humanizing conversation] [special code]
[end]
So… that is an excerpt from the short story entitled Letters from the Age of Pseudo-Anthropocene, which I wrote for this project. I actually dreamed it. In my dream, I saw how humans transformed into pixels and then entered the digital earth, going from feeling everything to simply feeling data.
What if this becomes the new reality?
Which I think is gradually happening. Since the introduction of computers, our world has been heading toward making everything digital. And the COVID-19 pandemic acted as a catalyst, forcing us to abruptly shift all our societal transactions onto digital platforms. Look closely at how every technological innovation and anything associated with the future has a screen, an interface. Every social interaction we used to do in person now has a digital counterpart.
We shop now through screens — online shopping.
We manage our finances through screens — online banking.
We find partner/s in life through screens — online dating.
We can earn academic credentials through screens — online learning.
It seems that the problems we have today always have digital solutions after all.
Such digital solutions, like the use of videoconferencing tools to continue education despite lockdowns and health restrictions, paved the way for the construction of a new system of social control.
We are imprisoned on various levels: rural communities struggle against urban areas for a stable Internet connection, which depends on the economic conditions of the place; the poor struggle against the rich for device efficiency, which is eroded by constant system updates and upgrades; the newbie struggles against the expert to become familiar with online tools, which depends on the individual’s socio-economic and cultural context; and, finally, machines conflict with humans, which depends on how our technology is designed.
Technically speaking, machines are designed by someone like us. And that person is designing how we should behave with technology.
Neil Postman wrote that,
“…embedded in every tool is an ideological bias, a predisposition to construct the world as one thing rather than another, to value one thing over another, to amplify one sense or skill or attitude more loudly than another.”
I would reiterate my realization in my Critical and Radical Pedagogies paper that,
“The online learning tools that we have seem to grant us the power and freedom to choose what we want to do. To mute, unmute, open and close the camera, type in the chat, and so on… Intentionally not learning it would seem a self-imposition of disability, but in reality, it is this new technology that is being birthed by commercialism that wants to impose a new order of control on society.”
Everything is intentional. The design of technology is intentional. The tools or apps in it. And the form of interfaces that we have.
I posited in that same paper that, at the onset of the COVID-19 pandemic, we were trapped in our homes and in Zoom, and I cited a student saying that,
“Personally, I struggled emotionally since I felt like I was trapped in the four corners of our house. There is no way for me to vent out my stress and anxieties because my classmates or friends are not around physically… during virtual classes, simple tasks feel like a huge amount of burden… there is always an awkward feeling towards sharing our sentiments in private chats or group chats because we did not build our friendship or connection the natural way.”
If only that student could regain the freedom to connect with her/his peers and classmates without the awkwardness brought by the interface.
So, is there any other way of designing technology?
I initially tried to speculate on how I could do this. Maybe I would start by framing my question around how I can create a nurturing online learning experience. But later on, I realized how enormous the word “nurturing” is, so I zoomed in a bit and tried the word “engaging.”
I then found out that my students want a BIG classroom — a classroom with bearable pressure, one that is interactive and inclusive, and one with genuine conversations. That framework would guide me in designing the technology responsible for online learning experiences. Yet I was still confused about how to practically start this, so I tried to go back and reframe my question.
After all, it seems that what I am really trying to answer is what makes an online classroom humane, so that we can have a humanizing learning experience.
So, what if we start to humanize online learning?
How could we do that?
What will be its form?
If we apply the Freirean concept that teaching is a human act to this…
What if we start by examining how we interact with our screens in online learning?
What if the best way to start humanizing online learning is to put a physicality to it?
Literally changing the default form of technology.
What if the tiny cameras we are used to seeing just above our screens had the form and size of a “human head”?
What if it were really a talking head that could sense even non-verbal communication?
What if it could sense our genuine emotions?
What if we have something like this…
The plan, inter/face-to-face, is a laptop that makes a person feel physically connected to others while learning remotely. I intended to make the invisible values of “companionship” and “limitation” visible, as opposed to the commonly visible values in online learning, which are “isolation” and “flexibility.”
I got this idea because whenever I ask students around me what they like about online learning, they always point to its convenience. There is flexibility in online learning. But whenever I ask them what they do not like about it, their usual response is that the feeling of isolation, of being alone, is what they hate.
Going back to my speculative project: it aims to let the audience imagine and think about how they interact with their computer and its screen, and it explores the possible relationships we can form with computer screens.
If we look closely at the design of online learning tools such as Zoom, Google Meet, and MS Teams, there exist what Robby Nadler calls third skins.
“Third skins are about recognizing how the virtual worlds we navigate flatten our interactions due to spatial shifts between physical and virtual space.”
That means the background and the speaker become one. To appreciate this more, let us talk about first skins and second skins. The first skin is the literal skin. It tells something about the person: their origin, ethnicity, age, health, and so on. The second skin refers to what we wear. It tells something about our socioeconomic status, political and religious beliefs, interests, and so on. Now, in the third skin, the physical space, including all the sounds in it, becomes part of who we are. In the virtual space, the background and the speaker are tied to one another, or flattened.
So, with third skins, there is a loss of administrative agency over the design of the space or the choice of location for both parties — the sender and the receiver. They are in two different locations, and either place may influence the success of the communication, compared with the predetermined sites that in-person communication employs. There may also be competing spaces within the virtual world, like attending online classes while keeping tabs open for Facebook, Twitter, or Instagram, leading to what is known as Zoom fatigue or exhaustion.
In computer-mediated communication, there is a blurring of interstitial and surrounding spaces. There exists not one shared interstitial space but two personal interstitial spaces simply linked by technology. Technology serves as a spatial divider that leads to telecocooning — sites where the user cleaves from physical spatial reality — which can in turn lead to “absent presence,” or talking to people who are digitally present but are not really there.
Simply put, we get cognitively exhausted quickly in online interactions because of the third skins. Nadler’s colleague even said that,
“So no matter how many times I use my laptop to teach, it cannot feel the same as a physical classroom, not because the environments are different but because the spaces for possibilities are different. And if it does not feel the same, it may not be the same because spatial feelings influence user behavior.”
This is because, no matter what we do, everything we see on screens is flattened into third skins. Though we know that the person we speak to online is distinct from the background, we see no separation between the human and the tool, so we treat everything as one whole digital screen. There is no defining human element left that could make the conversation the same as face-to-face.
Now, what if we begin humanizing online learning by changing the form of our present technology? Let us add a talking head to it.
But why only put the head? Why not include a body or an arm?
Simhi and Yovel (2021) found that, at short distances, the face alone is enough for whole-person recognition, while at long distances people tend to recognize others through their body, and the distinctiveness of their gait increases person recognition.
So, since interacting with computers is usually done at short distances, I decided to include only the whole head, to aesthetically support the face.
I also wanted to integrate what Terayama Shūji calls active cinema, which elevates the role of the spectator or audience to that of active co-author of the film. Terayama believes that the projection space — the distance between the projector and the screen — is a creative space. Applying this form of breaking the fourth wall in film to online learning, the talking head is designed to provoke the spectator or user to engage with machines or the interface in the same way the user interacts with a real person. Paradoxically, provoking the person to think about her/his user-interface interaction suggests looking beyond the screen.
So, to test my speculation, I asked my colleagues to participate in my experiment.
I instructed the “talking head” to begin by stating his name, followed by the phrase “your personal tutor” and the question, “What do you want to ask today?” The conversation in between was natural and spontaneous. It ended with the prompt, “Are you satisfied with my answer? Pick a color that best suits your feelings today.”
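Purely as an illustration of how that scripted flow was ordered (the prototype itself relied on a person roleplaying the talking head, not software), here is a minimal sketch in Python; the function names and the tutor name are hypothetical and not part of the original project.

```python
# Hypothetical sketch of the scripted interaction described above:
# a fixed opening prompt, an unscripted middle, and a fixed closing prompt.

def opening_prompt(tutor_name: str) -> str:
    # The talking head introduces itself and invites a question.
    return f"I am {tutor_name}, your personal tutor. What do you want to ask today?"

def closing_prompt() -> str:
    # The session ends with a satisfaction check and a color-based mood check.
    return ("Are you satisfied with my answer? "
            "Pick a color that best suits your feelings today.")

def run_session(tutor_name: str, converse) -> None:
    """Run one session: opening prompt, free conversation, closing prompt."""
    print(opening_prompt(tutor_name))
    converse()  # the middle of the conversation is natural and spontaneous
    print(closing_prompt())

if __name__ == "__main__":
    # "Dan" is a placeholder name for whoever plays the talking head.
    run_session("Dan", converse=lambda: print("(natural, unscripted conversation)"))
```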
They said that the experience was funny and weird at first, but later on they enjoyed the conversation. They both affirmed that there was a feeling of emotional connection.
Interestingly, the one asking questions said that with the presence of the talking head, he did not look at the screen anymore.
On the matter of answering questions, though the answers were contextualized, they both agreed that they were not looking for correct answers. They just wanted someone to listen to them and to see a familiar face.
I got the same reaction when I presented this to my cohort in Designing Education. During my presentation, I asked a colleague to act as the talking head and another to act as the user. Given the same prompts and without any script, the performative experience that the whole class witnessed created an overwhelmingly positive classroom atmosphere. They liked the idea and affirmed that, while it was initially pretty weird and funny, an emotional connection existed between the user and the talking head.
Now, when I came back to the Philippines to conduct the second experimentation, unfortunately, all my plans changed because I was hospitalized with dengue fever. Luckily, I have very supportive friends and former students who helped me finish this project.
What we did was… I told them what my speculative project was all about. Then, since I was asking for a favor, I allowed them to creatively explore and interpret my speculation on online learning. They gave it their personal touch in the process of making the short film. The speculation came from me, but they made everything — screenplay, script, acting, camera angles, editing, effects, lighting, etc., including the title of the film.
Initially, I gathered them online to explain my speculation, and they asked me anything they wanted about it so they would understand what my goal was. In fact, at the end of the virtual meeting, we conducted a screen test to see who was the best fit to be the lead actors. Yet on the date of production, some could not attend because of personal extenuating circumstances.
We employed a process that is similar, I think, to codesigning.
They worked on their own without my supervision. They only asked me for some guidance on matters of the function and nature of my speculative project.
Along the way, they wanted to highlight the emotional connection between the talking head and the user. So, they dropped the prompts to make the conversation more natural, removed the color buttons from my initial prototype, and created a much simpler yet, I think, more sophisticated talking head.
The result was… the short film And Then Two Griefs Coincide.
I asked the project lead, the director, how they arrived at that and why they chose that title. She sent me a message through chat,
“There was a part in the script where the main character asked the talking head why the device is so good at “getting” and “understanding” him, and the talking head answered, ‘I’m not going to lie… it makes me feel like an anomaly most of the time. And I guess that’s how I’m able to get you… at some point in some random venn diagram, your grief and loneliness coincide with mine.’
“I feel like this part of the story where we saw the two was able to have an opportunity to discuss their emotions, revealed to us that (while I don’t believe there’s beauty in pain) even pain, grief, loneliness can connect us to people with whom we can build truest relationships. And I think that is the strongest message of this film and the two characters’ growing relationship if we really look at it. Grief and loneliness were very much felt during the onset of the pandemic, but then the two made it through with the presence of each other.”
What I can infer from their creative explorations is that they highlighted the interaction between humans and machines. They looked into the possibility of seeing machines as our best friends or confidants.
Upon reflecting on and trying to figure out what I learned from this speculative project, it is interesting to note that changing the form or material of technology can really elicit behavioral change among its users. Materiality affects interaction, most especially in how it transformed the social space to prioritize the emotional wellbeing of persons; hence, it is essential to start designing technology with humans at the center of the design.
What if we explore ways of changing the default form of technology by changing some of its shape, size, color, material, or texture?
How would our world be different now?
Technology can really never be neutral. Behind every technological innovation is a human brain; each designer has something in mind that includes someone and excludes someone else, even if unintentionally.
Putting something onto technology could alter the level of the user’s emotional and cognitive processing. Designers should choose the materials and the form of the technology they want to give birth to.
So, relating this to the kind of distance education/online learning that we have, learning could possibly go beyond the screens.
Bibliography
1. Arroyo, D., & Yilmaz, Y. (2018). An Open for Replication Study: The Role of Feedback Timing in Synchronous Computer‐Mediated Communication. Language Learning, 68(4), 942–972.
2. Bernstein, M., Oron, J., Sadeh, B., & Yovel, G. (2014). An Integrated Face–Body Representation in the Fusiform Gyrus but Not the Lateral Occipital Cortex. Journal of Cognitive Neuroscience, 26(11), 2469–2478.
3. Buckley, C., Campe, R., & Casetti, F. (2019). Screen Genealogies (MediaMatters). Amsterdam University Press.
4. De Angelis, E. (2019). Beyond the Screen. Annali di Ca’ Foscari: Rivista della Facoltà di Lingue e Letterature Straniere dell’Università di Venezia, (1).
5. Dutta, P., & Barman, A. (2020). Human Emotion Recognition from Face Images (Cognitive Intelligence and Robotics). Singapore: Springer.
6. Grossman, A., & Kimball, S. (2021). Beyond the Edges of the Screen: Longing for the Physical ‘Spaces Between’. Anthropology in Action, 28(3), 1.
7. Güneş, E., & Olguntürk, N. (2020). Color‐emotion associations in interiors. Color Research and Application, 45(1), 129–141.
8. Harriger, J., & Pfund, G. (2022). Looking beyond zoom fatigue: The relationship between video chatting and appearance satisfaction in men and women. The International Journal of Eating Disorders.
9. Heimgärtner, R. (2019). Intercultural User Interface Design (Human-Computer Interaction Series). Cham: Springer International Publishing.
10. Manago, A., Brown, G., Lawley, K., & Anderson, G. (2020). Adolescents’ Daily Face-to-Face and Computer-Mediated Communication: Associations With Autonomy and Closeness to Parents and Friends. Developmental Psychology, 56(1), 153–164.
11. Mansur, A., Hauer, T., Hussain, M., Alatwi, M., Tarazi, A., Khodadadi, M., & Tator, C. (2018). A Nonliquid Crystal Display Screen Computer for Treatment of Photosensitivity and Computer Screen Intolerance in Post-Concussion Syndrome. Journal of Neurotrauma, 35(16), 1886–1894.
12. Bouhaben, M. A. (2021). State-Screen, Society-Screen and Body-Screen: Videoartistic representations of the uses of technologies during the pandemic. Estudios de Teoría Literaria, 10(22), 41–55.
13. Moorhouse, B. (2020). Adaptations to a face-to-face initial teacher education course ‘forced’ online due to the COVID-19 pandemic. Journal of Education for Teaching : JET, 46(4), 609–611.
14. Nadler, R. (2020). Understanding “Zoom fatigue”: Theorizing spatial dynamics as third skins in computer-mediated communication. Computers and Composition, 58, 102613.
15. Nesher Shoshan, H., & Wehrt, W. (2021). Understanding “Zoom fatigue”: A mixed-method approach. Applied Psychology.
16. Ng, J. (2021). The Post-Screen Through Virtual Reality, Holograms and Light Projections: Where Screen Boundaries Lie (MediaMatters). Amsterdam: Amsterdam University Press.
17. Peper, E., Wilson, V., Martin, M., Rosegard, E., & Harvey, R. (2021). Avoid Zoom Fatigue, Be Present and Learn. NeuroRegulation, 8(1), 47–56.
18. Petriglieri, G. (2020). Musings on Zoom Fatigue. Psychoanalytic Dialogues, 30(5), 641.
19. Schwartz, L., & Yovel, G. (2019). Learning Faces as Concepts Rather Than Percepts Improves Face Recognition. Journal of Experimental Psychology. Learning, Memory, and Cognition, 45(10), 1733–1747.
20. Scott, H., Batton, J., & Kuhn, G. (2019). Why are you looking at me? It’s because I’m talking, but mostly because I’m staring or not doing much.
21. Simhi, N., & Yovel, G. (2021). Independent contributions of the face, body, and gait to the representation of the whole person. Attention, Perception & Psychophysics, 83(1), 199–214.
22. Takei, A., & Imaizumi, S. (2022). Effects of color–emotion association on facial expression judgments. Heliyon, 8(1), E08804.
23. Te Walvaart, M., Dhoest, A., & Van den Bulck, H. (2019). Production perspectives on audience participation in television: On, beyond and behind the screen. Convergence (London, England), 25(5–6), 1140–1154.
24. Veletsianos, G., Childs, E., Cox, R., Cordua-von Specht, I., Grundy, S., Hughes, J., . . . Willson, A. (2022). Person in environment: Focusing on the ecological aspects of online and distance learning. Distance Education, 43(2), 318–324.
25. Williams, N. (2021). Working through COVID-19: ‘Zoom’ gloom and ‘Zoom’ fatigue. Occupational Medicine (Oxford), 71(3), 164.
Footnotes
- First published on August 9, 2022, and revised to include the bibliography and footnotes on January 17, 2023.
- This speculative project is filed as Open Access in the repository of the Goldsmiths, University of London Online Research Collection. You can access it through this URL: https://research.gold.ac.uk/id/eprint/32134
- The speculative project’s name is “inter/face-to-face,” but the broadcast short film’s title is “and then two griefs coincide.”
- To cite this speculative project, you can use this format: Pangan, Emmanuel John. 2022. and then two griefs coincide. [Project]