The VR system uses a Meta Quest Pro headset equipped with advanced sensors that track facial expressions and head movements with high precision. Each frame, the sensors produce 63 blendshape values that capture the activity of individual facial muscle groups, such as eyebrow raises, cheek puffs, and lip movements. Every value lies between 0 and 1, where 0 means the muscle is fully relaxed and 1 means it is fully engaged.

Alongside the facial data, the system gathers several other inputs: the intensity of eye movement (for example, whether the user is looking up or down), the orientation of the head (tilt and rotation), and, optionally, the user's tone of voice, which adds further emotional context. Together, this raw data forms the foundation for emotion prediction.

The quality of the captured data varies with the sensors and the surrounding environment. Poor lighting or rapid head movements, for instance, introduce noise into the signal, so the data must be preprocessed before any analysis.
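As a rough illustration of what this input looks like before emotion prediction, here is a minimal Python sketch. It assumes NumPy is available; the TrackingFrame fields, the preprocess helper, and the smoothing constant alpha are illustrative choices for this write-up, not part of Meta's SDK.

```python
from dataclasses import dataclass
from typing import List, Optional

import numpy as np

NUM_BLENDSHAPES = 63  # one activation per tracked facial muscle group


@dataclass
class TrackingFrame:
    """One frame of raw headset input.

    Field names are illustrative, not the Meta SDK's actual API.
    """
    blendshapes: np.ndarray             # shape (63,); 0 = relaxed, 1 = fully engaged
    gaze_pitch: float                   # eye movement up/down (hypothetical field)
    head_orientation: np.ndarray        # (pitch, yaw, roll) in radians
    voice_tone: Optional[float] = None  # optional prosody feature, if audio is captured


def preprocess(frames: List[TrackingFrame], alpha: float = 0.3) -> np.ndarray:
    """Clamp out-of-range sensor values and smooth frame-to-frame noise
    (e.g. from rapid head movement) with an exponential moving average."""
    smoothed = []
    prev = None
    for frame in frames:
        x = np.clip(frame.blendshapes, 0.0, 1.0)  # glitches can fall outside [0, 1]
        prev = x if prev is None else alpha * x + (1.0 - alpha) * prev
        smoothed.append(prev)
    return np.stack(smoothed)  # shape (num_frames, 63), ready for a classifier
```

Exponential smoothing is only one plausible choice here; a median or Kalman filter would serve the same purpose of suppressing the transient spikes that fast head motion tends to produce.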