Can we talk to computers using our bodies?

This is a long one... so please bear with me

As a human-centered designer, I believe the next few years will see a dynamic shift in interface design. Amplified by the pandemic, touchless and embodied interactions are set to disrupt customer as well as personal experiences. With gesture-based interaction as one of the frontiers in this new model of interaction, I wanted to explore the possibility of developing one holistic interaction model that integrates Natural User Interaction (NUI) as the primary mode of communication: an interaction system that is truly seamless, intuitive, and adaptive to the user. Incorporating such a model into HCI would shift the focus from having humans adapt to the language of technology to having technology adapt to and simulate human behaviors.

Breaking Down the Context

Whilst defining the scope, I wanted to ensure that I chose a context within which gestures could work as a primary model of interaction rather than an alternative to an already well-defined interaction ecosystem. One of the primary goals of NUI is to replicate the experience of natural interaction. Most of our interactions exist in three dimensions and require a wide range of movements from every part of our body, so it was important that the technology and scope accounted for the same range of motion. For the sake of the experiment, I wanted to focus only on the interaction module by taking a speculative leap in technological abilities and possibilities.
I have thereby narrowed my scope down to interactions in AR and MR environments. Within these environments, I am analyzing my line of inquiry within the context of interaction models of navigational and instructional modules, as they present a level of uniformity and form the initial, primitive map of the entire interaction system.

Looking at Sign Language for Inspiration

Traditionally, sign was often used synonymously with gesture due to their common manual modality. Sign was therefore not considered to be part of language, as it was perceived to be pictorial rather than symbolic.
It was said to lack precision, subtlety, and flexibility, and to be incapable of expressing abstract thought. Over time, however, sign language was adopted within the vast ecosystem of language. Sign language contains all the fundamental features of a language, with its own set of rules, word order, and word formation.
Because it has a systematic structure and rules governing sentence formation, sign can now unequivocally be placed under the same umbrella of language. The question therefore arises: since sign shares its physical modality with gestures while also having the systemic structure of a language, can sign be brought into the development of gesture-based interactions to create a more integrated HCI?

Types of Gestures

ILLUSTRATORS
Illustrate the spoken language that they accompany
EMBLEMS
Have a universally accepted meaning and do not require the assistance of spoken language
MANIPULATORS
Behaviors that are indicative of the individual's internal state

SWOT Analysis of Gestures as Interactions

Experiment (Draft 1)

Task 1
Your glasses are in normal mode; you want to activate the MR technology as you need the GPS
Action
Switch On
Switch Off
Task 2
A reminder has popped up while you are writing in your book, and you want to dismiss the notification as it is no longer of significance
Action
Dismiss or remove the notification
Task 3
You were following the steps whilst assembling your LEGO set; you missed a step, so you want to go back to the previous step and then move to the next one
Action
Back and Next
Task 4
A holographic figure is giving you instructions and talking to you; how would you tell that figure to hurry up?
Action
Hurry up / Increase speed
Task 5
There is music playing from your system, but someone is talking to you, so you want to pause the music and then play it again
Action
Pause
Play
Task 6
You are seeing an interesting fact about an object and want to share it with a friend who has the same glasses as you
Action
Share and Communicate
Task 7
There is an object in the distance but you want to get a closer look; your device has the ability to zoom into real space
Action
Zoom in / Zoom out

Participatory Body Storming

To carry out the pilot research, I engaged in a participatory bodystorming session with my peers to understand the limitations of the experiment, how to improve its structure, the reasoning behind their choices of modality and gestures, and any other feedback on how the experiment should be conducted.

Findings

Final Experiment Mapping

Experiment I

The objective of this experiment is to ask the participants to perform a set of tasks within the realm of Mixed Reality. Given the constraints of the physical modality and two opposing environmental settings, the participants are given goals that they need to complete. The common gestures for the same tasks are noted down to create the vocabulary set.
  • The participant is shown 10 seconds of the Microsoft Mesh video to give context about the Mixed Reality realm
  • A storyline is narrated to stitch all the interactions together
  • The technological feedback is performed by the experimenter
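The "noting down common gestures" step can be sketched as a simple tally: for each task, the gesture proposed most often by participants becomes the vocabulary entry. A minimal sketch in Python, where the task names and gesture labels are illustrative assumptions rather than the actual study data:

```python
from collections import Counter

def build_vocabulary(proposals):
    """Map each task to its most frequently proposed gesture.

    proposals: dict of task name -> list of gesture labels,
    one label per participant (illustrative data, not real results).
    """
    return {task: Counter(gestures).most_common(1)[0][0]
            for task, gestures in proposals.items()}

# Hypothetical elicitation log for two of the tasks
proposals = {
    "activate": ["double tap temple", "swipe up", "double tap temple"],
    "terminate": ["closed fist", "closed fist", "palm push"],
}
print(build_vocabulary(proposals))
# {'activate': 'double tap temple', 'terminate': 'closed fist'}
```

In a real elicitation study the tally would also track how strong each consensus is, since a gesture chosen by 3 of 20 participants is a much weaker vocabulary entry than one chosen by 15.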

Aim

To create and validate a gesture-based vocabulary set for Mixed Reality, whilst also noting the relevant inspirations and factors that influence it.

Participant Persona

Age Group: 20-24
Profession: Student Designers
Background: Upper Middle-Class Urban Upbringing
Average Digital Literacy: 7.5 (on a scale of 1-10)
Here, digital literacy is used to gauge the steepness of the learning curve that each participant goes through whilst interacting with technology.

Defining the Environment

Mixed Reality (MR), as its name implies, is a combination of AR and VR. It is also called Hybrid Reality, as it blends real-world and digital elements. While it is primarily a technology for mixing the physical and virtual worlds, the greatest strength of MR is the realistic interaction between users and digital objects.

Tasks

  • Activate and deactivate the technology
  • View the object from different perspectives
  • Scale the object to be bigger and smaller
  • Increase and Decrease the size of the object
  • Initiate and Terminate a process

Environmental Settings

  • In solitude
  • In a Crowded Space

Setting up the Context

You are part of the design team. Before your presentation, you need to fix a part of the chair design before presenting its hologram to your clients. You put on your glasses, activate (switch on) the device, and view the chair design. You view it from different perspectives, scale it up and down, and increase and decrease its size (for example, from 150 cm to 300 cm). Once happy with the final design, you initiate (start) the process of rendering the chair. Midway through the render, you realize that you need to make another change, so you terminate (stop) the process. After making the change, you initiate the process again and let it complete. Happy with your design, you deactivate (switch off) your device, take it off, and head to work.
Materials Used

Conducting the Experiment

Final Findings

Influences

  • Whilst referring to influence, the experiment accounts for the conscious influences that the participants drew from.
  • 55% of the participants drew their influences from interaction models that they had witnessed in movies. Of those, 45% drew their inspiration from one specific character and franchise: Iron Man (Tony Stark)
  • In fact, movies influenced interactions more than screen-based replications of gestures.
  • Surprisingly, interactions with the physical world had barely any conscious influence on the participants
  • Only 2 of the 20 participants indicated any resemblance to a familiar interaction with a physical entity

Developing and testing the Vocabulary set

This experiment is the reverse of Experiment I: here, the participant is informed about the technology, context, and space. The experimenter then performs the gestures from the vocabulary set, and the participant's task is to associate each gesture with the function they feel is relevant.
The participants chosen for this experiment are different from the set chosen to create the vocabulary.
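The scoring for this association test can be sketched as the fraction of participants who matched each performed gesture to its intended function. A minimal sketch, where the gesture and function names are illustrative assumptions rather than the actual study data:

```python
def association_accuracy(responses, intended):
    """Per-gesture fraction of participants who chose the intended function.

    responses: dict of gesture -> list of functions chosen by participants
    intended: dict of gesture -> the function the vocabulary assigns to it
    (All names here are illustrative, not real study data.)
    """
    return {gesture: sum(f == intended[gesture] for f in chosen) / len(chosen)
            for gesture, chosen in responses.items()}

# Hypothetical responses from four participants
responses = {
    "pinch and pull": ["zoom in", "zoom in", "scale up", "zoom in"],
    "palm push": ["terminate", "dismiss", "terminate", "terminate"],
}
intended = {"pinch and pull": "zoom in", "palm push": "terminate"}
print(association_accuracy(responses, intended))
# {'pinch and pull': 0.75, 'palm push': 0.75}
```

Gestures that score poorly in this reverse test are candidates for revision, since they are not legible even to users who know the context.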

Accounting for Biases

Final Conclusion

If you wish to view this project in further detail, check out my complete process book

Click Here
