The Virtual Self 2.0

May 29, 2023

You read our blog post about Imagined Movements, and some of you strongly expressed your feelings about its current potential in Gaming.

You are heard. We hear you.

What you have experienced until now is slow, somewhat unreliable and slightly difficult to understand.

Plus, the current attempts at replacing thumb-guided movements require you to focus like Michael Jordan in his prime.

Like a real-life digital Jedi.

I don't know about you, but we aren't Jedi yet. What do we do until we literally make up our minds?

This post is about the Virtual Self: the digital embodiment of our imagination.

Be it Gaming, be it a Virtual social gathering - do we have all the tools to be who we want to be in Virtual Reality? Do we get all the bidirectional interaction that we wished for? What's the potential for more? What's the need for more?

NeuralEcho Labs | Blade Runner 9372 VR experience
Blade Runner 9372 VR experience

The Virtual Self status-quo

Body movements

The first thing that comes to mind: my Virtual Self should move like I move, when I move.

And the current VR setups can achieve this. How?

At first, through (A) tracked hand-controllers: your virtual arms would be driven by a spatial localisation system that uses radiowaves to estimate the position of your handheld controllers.

This led to (B) tracked body-controllers: what if, using similar localisation protocols, we added more trackers positioned across the body?

NeuralEcho Labs | Body tracking in VR
Left (A) - Handheld controller tracking; Mid-Right (B) - Full body radiowave tracking by Manus in 2020

Let's fast-forward through the iterations inspired by work in Visual Effects and advances in computing and optics.

(C) Visual body-tracking: an RGB or IR camera reads the real body to drive a virtual one. Aided by AI's Computer Vision, the system analyses the body's position from camera images, across visible and infrared wavelengths. Use a wide-FOV camera on the headset to see your hands, or a third-person camera to see your whole body.

NeuralEcho Labs | RGB camera tracking
(C) - RGB camera tracking using 2D inputs and 3D pose estimation
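To make "2D inputs and 3D pose estimation" concrete, here's a minimal sketch, assuming made-up camera intrinsics and a made-up detection: once a network has found a keypoint in the image and estimated its depth, a pinhole camera model lifts the pixel into 3D space.

```python
# Minimal sketch: lifting a 2D keypoint into 3D with a pinhole camera
# model. The intrinsics (fx, fy, cx, cy) and the detection are invented.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a known depth into camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A hypothetical wrist detection at pixel (820, 440), estimated 0.6 m away.
wrist_3d = backproject(820, 440, 0.6, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
print(wrist_3d)  # camera-space coordinates in metres
```

A real pipeline runs this over dozens of joints per frame and smooths the result over time, but the geometry at its core is this simple.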

That's cool.

Prices range from a few tens to a few thousands of dollars, depending on your appetite for realism and the depth of your pockets.

How do we go about not smashing the TV or our fibula while moving?

There is a crowd's favourite: the VR Treadmill - a curved, pressure-sensitive floor panel, with a harness for safety, and additional sensors to capture extra movements.

NeuralEcho Labs | VR Treadmill
Virtuix Omni-One VR treadmill at around $2000

All-in-one, the solutions on the market are as good as it gets today. This was the believable part of Ready Player One.

It's encouraging that we progressed from a thumb-driven Virtual Self to devices that capture a real body to drive a limitless, virtual one.

But it shouldn't, and we believe it won't stop there.

Let's move on though. What else is there in Virtual Reality today?

Facial expressions

When we're talking about a self we expect identity. Both in social interactions and in Gaming, the ability to express more than generic limb movements paves the way to a truly profound and authentic immersion.

Facial expression tracking is one of the most telling signals of an identity: How does one react to a situation? How does an environment affect a person? Are you ready to show who you are inside to the Virtual World? Is authenticity important to your virtual experience?

Again, AI's Computer Vision comes to help: using headset mounted wide Field of View cameras, (A) eye orientation and (B) facial expressions are reproduced in the Virtual Self.

As far as Gaming goes, PSVR2 is waiting for exciting titles that make use of the tech: from doors that only open if you don't blink, to NPCs that get shy if you stare at them. Brilliant.

NeuralEcho Labs | Facial expressions in VR
(A) - Eye-tracking for PSVR2 headset; (B) - Youtuber Virtual Reality Oasis tests HTCs face-tracker

There have been attempts with muscle electrodes that detect electromyographic (EMG) patterns to spot a smile, but it turns out that wearing patches on the face is not that comfortable. There is potential if the tech progresses to a more wearable state.
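For intuition, a smile detector over EMG can be as simple as thresholding the signal's RMS amplitude. This is a toy sketch with invented sample values and an invented threshold, not any shipping product's algorithm:

```python
import math

def rms(window):
    """Root-mean-square amplitude of an EMG sample window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def is_smiling(emg_window, threshold=0.3):
    """Crude detector: cheek-muscle EMG activity above a calibrated
    threshold. The threshold here is made up for illustration."""
    return rms(emg_window) > threshold

resting = [0.01, -0.02, 0.015, -0.01]   # low muscle activity
smiling = [0.4, -0.5, 0.45, -0.38]      # bursts during a smile
print(is_smiling(resting), is_smiling(smiling))  # False True
```

Real systems add filtering and per-user calibration, but the core signal-to-decision step is roughly this short.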

What else is there?


Emotions

We're reaching a more inward form of expression in the Virtual Self: emotions.

Let's gain some intuition about emotions. How do they work?

Short and quick, one recent theory: an emotion is itself a feedback loop of non-conscious reactions, as interpreted by the conscious mind.

How come?

A lion approaches and the non-conscious kicks in: adrenaline is pumped, heart rate increases, reaction time shortens - the brain wants the body to stay alive.

The conscious mind tries to make sense of it: The body is heating up, it's sweating - these are signs of danger. Let's see, I'm surrounded by large plants, there's a brown, bushy animal with large fangs looking at me. I'll run now, and next time I'll predict this better by consciously bringing back this memory repeatedly during the next few days, weeks, years.

So next time when we venture into the wilderness, the non-conscious triggers the same body reactions even at the sight of a Chow Chow dog. A much softer form of PTSD - a malformed automated process based on the conscious idea of Fear.

It doesn't really matter, after all, does it? We're humans, how much of this do we want to be represented by our Virtual Self?

Well, some games have used emotions to a creative, beneficial extent.

One of them uses the authentic emotions of the Self to strengthen the mind's control over its daily emotions. Using Heart-Rate Variability as a telltale sign of Fear and Stress, it monitors the biomarker through an Apple Watch.

NeuralEcho Labs | Nevermind - a game that is played by your emotions
Nevermind - A game that is played by your emotions

Show too much Fear or Stress and the game becomes harder to play.
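Heart-Rate Variability itself is straightforward to compute from beat-to-beat (RR) intervals; one common time-domain metric is RMSSD, where lower values tend to accompany stress. A hedged sketch with invented interval data:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms).
    A standard time-domain HRV metric; higher usually means more relaxed."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented interval data, in milliseconds between heartbeats.
calm   = [820, 840, 810, 850, 830]   # widely varying beat-to-beat intervals
stress = [650, 652, 649, 651, 650]   # rigid, fast heartbeat

print(rmssd(calm), rmssd(stress))  # higher RMSSD for the calm series
```

A game like Nevermind only needs to watch a number like this drop below a per-player baseline to know it's time to turn up the pressure.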

Feedback - or what about the other way?

Obviously, there are visual and auditory feedback channels. Covering your whole field of view and tickling the eardrums with surround audio gives satisfying goosebumps when done right.

What else?

Haptic feedback.

It's currently the main add-on feedback modality through which your Virtual Self can communicate with your mind.

Gloves, chest pieces, or full suits, armed with small vibrating motors, reproduce your interactions with the Virtual World's environment.

Some with body-tracking included, some with more or less haptic points.

Prices range from a few hundred dollars to more than ten thousand.

NeuralEcho Labs | Teslasuit full-body haptic and tracking suit
Teslasuit - All-in-one tracking and feedback for around $13,000

Concluding the immersion of today

With a budget ranging from $2,000 to $10,000, enough space and perseverance, there are plenty of exciting opportunities to try and truly be virtual.

Have any other thoughts? Get in touch with us, scroll to the bottom of the page and make yourself heard.

And now let's see: What's there to come? Why does it need to come?

Virtual Self 2.0

What can we do better?

Detecting a state of mind.

Games and other Virtual Worlds can infer a psychological state solely from your interaction: choosing a particular dialogue option, avoiding a door marked as "Dangerous", and so on.

Clearly, our Self is much more than that. The emotions taking place in the conscious mind greatly affect and define who we are.

Brain-Computer Interfaces can detect a range of states of mind, from anxiety, stress and fear, to empathy, calm and drowsiness.

By themselves, or combined with other bio-signals, they become clear mirrors.
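As a rough illustration of how such a detector could work (not any particular product's method), EEG band power is a classic feature: a strong alpha rhythm (8-12 Hz) relative to beta (13-30 Hz) is often associated with a relaxed state. A self-contained toy sketch, with a synthetic signal and an invented threshold:

```python
import math

def band_power(samples, fs, lo, hi):
    """Naive DFT band power: sum of |X[k]|^2 over bins whose
    frequency falls inside [lo, hi] Hz."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
            power += re * re + im * im
    return power

def classify_state(samples, fs):
    """Toy relaxation index: alpha power well above beta power suggests
    a calm state. The 1.5 factor is an invented, uncalibrated threshold."""
    alpha = band_power(samples, fs, 8, 12)
    beta = band_power(samples, fs, 13, 30)
    return "calm" if alpha > 1.5 * beta else "alert"

# One second of synthetic "EEG": a dominant 10 Hz alpha rhythm.
fs = 128
calm_signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(classify_state(calm_signal, fs))  # a strong alpha rhythm reads as calm
```

Production BCIs train per-user models over many such features, but the mirror starts with numbers this simple.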

One research paper dives into Brain-Computer-Interface-controlled Narrative Guidance, making use of empathy detection to drive a storyline.

NeuralEcho Labs | BCI Empathy detection in a story
Detection of empathy toward the main character of a story changes the story

By analogy, could you change a game by feeling something?

That sounds a bit like life, doesn't it? The Virtual Self could really be a reflection of yourself.

We'll delve deeper into the implications of such technology in a future blog post.

A teaser meanwhile: would the crosshairs of an anxious player wiggle more or less than those of a player who masters their mind?

Decision making

The prefrontal cortex is our reasoning unit: it analyses, compares and makes decisions.

The difference between tasing a criminal and controlling the urge when you see an innocent person is made there.

Obviously, that means that the visual information travels through the brain, is interpreted and analysed, and the decision is sent in turn to activate the right muscles for the desired outcome.

It happens so fast that, after a while, it bypasses the conscious domain.

Classifying between Action A and Action B can also be done with non-invasive Brain-Computer Interfaces.
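As a toy illustration of such two-class decoding (the features, calibration trials and values here are all invented), a nearest-centroid classifier over per-trial feature vectors is about the simplest thing that works:

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class TwoClassCentroid:
    """Nearest-centroid classifier over per-trial feature vectors,
    e.g. band powers over left/right motor-cortex channels."""

    def fit(self, trials_a, trials_b):
        n = len(trials_a[0])
        self.ca = [sum(t[i] for t in trials_a) / len(trials_a) for i in range(n)]
        self.cb = [sum(t[i] for t in trials_b) / len(trials_b) for i in range(n)]
        return self

    def predict(self, features):
        return "A" if dist2(features, self.ca) < dist2(features, self.cb) else "B"

# Made-up calibration trials: [left-channel power, right-channel power].
clf = TwoClassCentroid().fit(
    trials_a=[[0.9, 0.2], [0.8, 0.3]],   # recorded while intending Action A
    trials_b=[[0.2, 0.9], [0.3, 0.8]],   # recorded while intending Action B
)
print(clf.predict([0.85, 0.25]))  # a new trial close to the A centroid
```

Real decoders use richer features and stronger models, but this is the shape of the calibrate-then-predict loop a game would run.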

And yes, there are ways to shortcut half of the pathway that leads to an action of the body.

What do you think, does it have any use-cases in your favourite game?

Imagining the impossible and doing it. Virtually.

Imagined Movements rely on what we already know: move the right arm, bend the left knee.

At first, that won't reveal its true worth to everyone.

Why imagine something when I can do it and a suit reproduces my movements in the Virtual World?


What if there are more things to imagine?

Can you imagine flying, like a Superhero, to signal your Virtual Self to do that?


Could you imagine changing direction in a way that your Virtual Self does it faster than your own body?


Could you eventually end up controlling your Virtual Body with the same ease and intuition as controlling your Material Body?

We'll see.

This is what we're after.

Is 2.0 needed? Is it wanted?

The best products have a long, successful life because they're useful, they are simple, and they make sense.

Are the current VR add-ons and enhancements useful? Absolutely.

Are they simple? Maybe individually, but complexity grows when they're mixed.

Do they make sense? Let's answer this question together.

The current pathway for Virtual Reality inputs passes through a varied number of middlemen. The figure below explains it better.

NeuralEcho Labs | VR acquired signal pathway
The current road of a signal: From the Brain, through the Body, through one or more sensors, to finally reach the Game

Turns out, our Virtual Self is in fact a scripted, sequentially filtered representation of our mind.

Is that good? You tell us.

Are there alternatives? There are.

Is there more to it? There is.

A different, shorter path is to gain as much information as possible directly from the brain. And try to do it for 1/5th of the price of today's complete immersive systems.

And here's the catch. You might find yourself training to be a Jedi while playing games. Your mind might become something you never thought possible, and you might become the master, or the friend, of your mind.

Are you ready for that?
NeuralEcho Labs | The Neurogamer