Facial expressions enhance virtual reality for users with disabilities

AZoSensors talks to Arindam Dey from the University of Queensland and Prof. Mark Billinghurst from the University of South Australia about their research into using facial expressions to navigate virtual reality environments, making virtual reality more accessible to people with a wide range of disabilities.

Facial expressions have been a topic of interest in virtual reality for over a decade. However, the use of facial expressions as a means of interaction – navigating and manipulating objects – in virtual reality has never been explored before. Why is that?

Facial expressions in virtual reality have been the subject of research for a long time. However, using facial expressions to enable interactions has been difficult, mainly because most of the face is covered by the VR headset and therefore many facial features are hidden. Researchers first had to figure out how to detect facial expressions reliably and with high accuracy, and it took a long time to achieve that; in fact, it is still a topic of ongoing research.

Image Credit: Shutterstock.com/franz12

We now have sufficiently reliable technology to detect at least some facial expressions, which can be used for various purposes in virtual environments. However, most of that research has used facial expressions to make virtual avatars more realistic; so far, no one has used them for interaction.

One reason for this could be that facial expressions might not be a pleasant method of interaction; another is that most people are interested in developing VR technology for able-bodied users, who make up the majority of the consumer market.

One of our research directions has always been to use virtual reality “for good”, and we therefore decided to use facial expressions to open up virtual reality to users who otherwise could not use it.

What inspired your research on using facial expressions to influence objects in a virtual reality (VR) environment?

We have always been interested in “research for good” using virtual and augmented reality technologies, and in how they can be used to improve people’s lives. We have held several workshops over the past few years on this topic to motivate the research community to present and discuss more research in this direction.

One of the directions of this “research for good” is to make these technologies more accessible and inclusive.

A major assumption about VR users is that they will use handheld controllers that come with commercial VR headsets for interaction. We thought of other user groups in the community who cannot use their hands to interact in VR. That’s when we started planning an alternate method of interaction that doesn’t require hands.

Fortunately, around this time, three motivated students (Bowen Yuan, Aaron Goh and Gaurav Gupta) joined our research group at the University of Queensland for their senior-year research project, and when we presented the idea to them, they were very excited to take it on. With their help, using facial expressions for virtual reality interaction became a reality.

In conventional VR settings, it’s common to use touchpads or handheld controllers to move objects around. Can you provide insight into how your team was able to capture facial expressions used to trigger specific actions in VR environments?

We used an EEG device manufactured by Emotiv, which records brain activity through 14 different electrodes. At the same time, this device is capable of measuring certain muscle activity in the face, which is primarily used to detect “noise”, or unwanted data, in the recorded brain activity.

Interacting in VR with Facial Expressions

Video Credit: Arindam Dey/YouTube.com

Normally, data processing techniques are used to remove this noise from the neural data before any analysis is done. However, we used the noise in the data to detect facial expressions. The EEG device provides an interface to achieve this; they call them “smart artifacts.” So when the user clenches their teeth or makes other facial expressions, this causes noise in the EEG data, which can be detected and the expression recognized.

We then connected this EEG system to our VR system and used the three chosen facial expressions in the VR environments to interact with them. So, in other words, we did not develop the technology to detect the facial expressions, but we used it in a novel way in the VR system for the first time to enable interaction with facial expressions. More technical details of this interface are available in the paper.
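To make the idea concrete, here is a minimal sketch (Python, not taken from the paper) of the underlying principle: facial muscle activity injects broadband high-frequency power into EEG channels, so a simple band-power threshold on a short window can flag an event such as a jaw clench. The sampling rate, frequency band and threshold below are illustrative assumptions; in the study, the headset’s built-in detections were used rather than custom signal processing.

```python
# Illustrative sketch only: detect facial-muscle "noise" in raw EEG by
# thresholding high-frequency band power. All constants are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128                 # sampling rate in Hz (assumed for this sketch)
EMG_BAND = (30.0, 60.0)  # facial EMG dominates well above typical EEG rhythms
CLENCH_THRESHOLD = 25.0  # band-limited RMS in microvolts; purely illustrative


def emg_band_rms(eeg_window: np.ndarray) -> float:
    """RMS of the high-frequency band across one window (channels x samples)."""
    nyquist = FS / 2.0
    b, a = butter(4, [EMG_BAND[0] / nyquist, EMG_BAND[1] / nyquist], btype="band")
    filtered = filtfilt(b, a, eeg_window, axis=1)
    return float(np.sqrt(np.mean(filtered ** 2)))


def looks_like_clench(eeg_window: np.ndarray) -> bool:
    """A jaw clench shows up as a burst of high-frequency power in the EEG."""
    return emg_band_rms(eeg_window) > CLENCH_THRESHOLD
```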

Can you provide some examples of certain facial expressions and what actions they enabled in the VR settings?

There are seven different facial expressions that the EEG device can detect. We used only three of them – smile, frown and clench. The smile was used to start moving, and the frown was used to stop movement.

The clench was used to perform certain actions in the environment, such as picking up objects or shooting zombies. For example, in VR, a user could smile and would begin moving forward in the direction that they are looking and then frown to stop moving.
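As a rough illustration (not the authors’ code), that mapping could look like the sketch below: a small controller that starts gaze-directed movement on a smile, stops it on a frown, and fires a context action on a clench. The class, event names, callback and movement speed are all hypothetical.

```python
# Hypothetical sketch of mapping three detected expressions to VR commands.
from dataclasses import dataclass


@dataclass
class Vector3:
    x: float
    y: float
    z: float


class ExpressionLocomotion:
    def __init__(self, speed: float = 1.5):
        self.speed = speed   # metres per second (assumed value)
        self.moving = False

    def on_expression(self, expression: str, interact_at_gaze) -> None:
        """Map a detected expression to a navigation or manipulation command."""
        if expression == "smile":      # smile: start moving forward
            self.moving = True
        elif expression == "frown":    # frown: stop moving
            self.moving = False
        elif expression == "clench":   # clench: act on the gazed-at object,
            interact_at_gaze()         # e.g. pick up an item or shoot a zombie

    def update(self, gaze_direction: Vector3, dt: float) -> Vector3:
        """Per-frame displacement along the current gaze direction."""
        if not self.moving:
            return Vector3(0.0, 0.0, 0.0)
        return Vector3(gaze_direction.x * self.speed * dt,
                       gaze_direction.y * self.speed * dt,
                       gaze_direction.z * self.speed * dt)
```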

Using facial expressions to enhance virtual reality for disabled users.

Image Credit: Shutterstock.com/G-Stock Studio

The main reason for using these three expressions was that we found they were detected more reliably and accurately by the EEG system while wearing the headset, and users also found them easy to perform.

How did you measure the efficacy of your facial expression-based method? What were the three virtual environments that you used?

We ran a user study to measure the performance of the facial expression-based method compared to a traditional hand-held controller-based method. The primary measures were neural activities, physiological signals such as electrodermal activity, usability and the sense of presence or immersion in VR.

We used three different environments eliciting three different types of emotions. A happy environment exposed the participants to a bright open field with many butterflies flying around them; they had to catch the butterflies using a net. A neutral environment placed the participants in a bright warehouse where they had to pick up objects from the shelves. In the scary environment, the participants were placed in a dark warehouse with many zombies attacking them, and they had to survive by shooting (eliminating) the zombies. All of these environments had appropriate sound effects.

Our results indicated that, in general, the controller-based method performed well, but participants felt more immersed in the VR environments when they used the facial expressions.

It should be noted that our goal was not to prove that facial expressions were better than hand-held controllers for interacting with VR, but it was to test the viability of facial expressions as an interaction method.

This is because it will give a group of users the ability to use VR that was otherwise not possible for them.

Now that we have proven the viability, in the future, we want to improve the facial expression-based interaction so that it is more comparable to the traditional methods but with more advanced and less cumbersome technologies.

Did you come across any challenges during your research, and if so, how did you overcome them?

The main challenge was the pandemic. It restricted the use of lab facilities, and when we were ready and allowed to run the user study, we had to follow strict health and safety guidelines. Needless to say, finding participants to try our systems was also a challenge, as people were skeptical about stepping outside and using shared equipment, which is understandable.

Technically, there were some challenges in connecting the EEG device and its facial expression detections to the experimental system that we developed.

In terms of usability, how does using facial expression compare to conventional controllers?

In terms of usability, fully-abled people found it harder to make facial expressions for input compared to using a traditional hand-held controller. Compared to buttons on a controller, which are recognized every time they are pressed, facial expression recognition has its flaws.

However, for people who cannot use a hand-held controller, our system gives them an ability that they did not have before.

At present, it is difficult for disabled people to interact in a VR environment. How will your facial-expression-based method make VR more inclusive to amputees or those with motor neuron disease, for example?

As previously mentioned, our goal was not to revolutionize VR but instead to make it a more inclusive environment. If people have control over their facial expressions, they will be able to use our system to interact in VR.

In this way, we are making VR more accessible for people with a wide range of disabilities.

Are there any applications for this facial recognition technology beyond the entertainment industry?

Our technique could be used in many different VR applications, not just for gaming and entertainment. For example, the technology could be used for VR training experiences or social VR experiences. Basically, any VR application that uses simple movement or interaction could be adapted for our system.

What are the next steps for your research?

There are many next steps that we could take. We need to test the system with disabled people to see how usable it is for them. Until now, we have only been able to test the technology with fully-abled people.

We would also like to explore methods for facial recognition that are faster and more usable than the current technique. For example, we could use fewer EEG sensors or other sensors like EMG to measure muscle movements. There are many exciting directions that this research could go in.

About Arindam Dey

Arindam Dey is a computer scientist on a mission to make the Metaverse (AR/VR) better and more inclusive for users in various ways. He is currently an Honorary Academic at the University of Queensland. Until February 2022, he was a Lecturer at the University of Queensland’s School of ITEE, primarily focusing on Mixed Reality, Empathic Computing, and Human-Computer Interaction. He co-founded and directed the Empathic XR and Pervasive Computing Laboratory. He believes in designing solutions for users and putting users ahead of the technology. Most of his work involves user research and statistics.

Before joining the University of Queensland, he was a Research Fellow at the Empathic Computing Laboratory (UniSA), working with one of the world leaders in the field of Augmented Reality, Prof. Mark Billinghurst, between 2015 and 2018. Mark’s pioneering work in the field of Empathic Computing directed him to this enticing research area of utilizing emotion and cognition in extended reality (XR) interfaces. Earlier, he held postdoctoral positions at the University of Tasmania, Worcester Polytechnic Institute (USA), and James Cook University.

About Prof. Mark Billinghurst

Prof. Mark Billinghurst has a wealth of knowledge and expertise in human-computer interface technology, particularly in the area of Augmented Reality (the overlay of three-dimensional images on the real world).

In 2002, the former HIT Lab US Research Associate completed his Ph.D. in Electrical Engineering at the University of Washington, under the supervision of Professor Thomas Furness III and Professor Linda Shapiro. As part of the research for his thesis, titled Shared Space: Exploration in Collaborative Augmented Reality, Dr. Billinghurst invented the Magic Book – an animated children’s book that comes to life when viewed through a lightweight head-mounted display (HMD).

Not surprisingly, Dr. Billinghurst has achieved several accolades in recent years for his contribution to Human Interface Technology research. He was awarded a Discover Magazine Award in 2001 for Entertainment for creating the Magic Book technology. He was selected as one of eight leading New Zealand innovators and entrepreneurs to be showcased at the Carter Holt Harvey New Zealand Innovation Pavilion at the America’s Cup Village from November 2002 until March 2003. In 2004 he was nominated for a prestigious World Technology Network (WTN) World Technology Award in the education category and in 2005, he was appointed to the New Zealand Government’s Growth and Innovation Advisory Board.

Originally educated in New Zealand, Dr. Billinghurst is a two-time graduate of Waikato University, where he completed a BCMS (Bachelor of Computing and Mathematical Science)(first-class honors) in 1990 and a Master of Philosophy (Applied Mathematics & Physics) in 1992.

Research interests: Dr. Billinghurst’s research focuses primarily on advanced 3D user interfaces such as:

  • Wearable Computing – Spatial and collaborative interfaces for small portable computers. These interfaces address the idea of what is possible when you merge pervasive computing and on-body communications.
  • Shared Space – An interface that demonstrates how augmented reality, the superimposition of virtual objects on the real world, can radically improve face-to-face and remote collaboration.
  • Multimodal Input – Combination of natural language and artificial intelligence techniques to enable human-computer interaction with an intuitive mix of voice, gesture, speech, gaze and body movement.

Disclaimer: The opinions expressed here are those of the respondent and do not necessarily represent the opinions of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the terms of use of this website.
