You read our blog post about Imagined Movements, and some of you expressed strong feelings about their current potential in Gaming.
What you have experienced until now is slow, somewhat unreliable and slightly difficult to understand.
Plus, the current attempts at replacing thumb-guided movements require you to focus like Michael Jordan in his prime.
I don't know about you, but we aren't Jedi yet. What do we do until we literally make up our minds?
Be it Gaming, be it a Virtual social gathering - do we have all the tools to be who we want to be in Virtual Reality? Do we get all the bidirectional interaction that we wished for? What's the potential for more? What's the need for more?
The first thing that comes to mind: my Virtual Self should move like I move, when I move.
And the current VR setups can achieve this. How?
At first, through (A) tracked hand-controllers: your virtual arms are driven by a spatial localisation system that uses infrared light, cameras, or other signals to estimate the position of your handheld controllers.
Which led to (B) tracked body-controllers: what if, using similar localisation protocols, we added more trackers positioned across the body?
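As a toy sketch of how such a localisation system can turn distance measurements into a position, here is plain 2D trilateration. The anchor coordinates and the 2D setup are illustrative assumptions; real systems work in 3D with angle sweeps and sensor fusion.

```python
import math

def trilaterate_2d(anchors, dists):
    """Estimate a 2D position from three fixed anchors and measured distances.

    Subtracting the first circle equation from the other two removes the
    quadratic terms, leaving a 2x2 linear system solved with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical room: controller at (1.0, 2.0), three anchors in known spots.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [math.dist(a, (1.0, 2.0)) for a in anchors]
print(trilaterate_2d(anchors, dists))  # ≈ (1.0, 2.0)
```

With more anchors than unknowns, a real tracker would solve the overdetermined system in a least-squares sense instead.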
Let's fast-forward through the iterations inspired by work in Visual Effects and advances in computing and optics.
(C) Visual body-tracking: an RGB or IR camera reads the real body to drive a virtual one. Aided by Computer Vision models, it can even work outside the visible spectrum to estimate the body's pose. Use a wide-FOV camera on the headset to see your hands, or a third-person camera to see your whole body.
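As an illustration, once a pose tracker returns keypoints, a joint angle for the avatar can be recovered with basic vector maths. The pixel coordinates below are made up:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. the elbow
    angle from shoulder, elbow and wrist keypoints of a pose tracker."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical 2D keypoints (pixel coordinates) from one camera frame:
shoulder, elbow, wrist = (100, 100), (150, 100), (150, 150)
print(joint_angle(shoulder, elbow, wrist))  # 90.0 - drives the avatar's elbow
```

A full rig repeats this per joint, in 3D, and smooths the result over frames before posing the virtual skeleton.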
The prices range from a few tens to a few thousands of dollars, depending on your appetite for realism and the depth of your pockets.
Then there is the crowd favourite: the VR Treadmill - a curved, pressure-sensitive floor panel with a harness for safety and additional sensors to capture extra movements.
All-in-one, the solutions on the market are about as good as it gets today. This was the believable part of Ready Player One.
It's encouraging that we progressed from a thumb-driven Virtual Self to devices that capture a real body to drive a limitless, virtual one.
But it shouldn't, and we believe it won't stop there.
Let's move on though. What else is there in the Virtual Reality today?
When we're talking about a self we expect identity. Both in social interactions and in Gaming, the ability to express more than generic limb movements paves the way to a truly profound and authentic immersion.
Facial expression tracking is one of the most telling signals of an identity: How does one react to a situation? How does an environment affect a person? Are you ready to show who you are inside to the Virtual World? Is authenticity important to your virtual experience?
Again, AI's Computer Vision comes to help: using headset mounted wide Field of View cameras, (A) eye orientation and (B) facial expressions are reproduced in the Virtual Self.
As far as Gaming goes, PSVR2 is waiting for exciting titles that make use of the tech: from doors opening only if you don't blink, to NPCs that get shy if you stare at them. Brilliant.
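A stare-detection mechanic like that can be as simple as a dwell timer over per-frame gaze hits. The threshold and frame rate below are illustrative assumptions:

```python
def stare_detected(gaze_on_npc, dt, threshold_s=2.0):
    """Given a per-frame boolean stream 'gaze ray hits the NPC' and a frame
    time dt (seconds), return True once uninterrupted gaze lasts threshold_s.
    """
    dwell = 0.0
    for hit in gaze_on_npc:
        dwell = dwell + dt if hit else 0.0  # looking away resets the timer
        if dwell >= threshold_s:
            return True
    return False

# A 90 fps headset; the player stares at the NPC for 200 frames (~2.2 s).
frames = [False] * 30 + [True] * 200
print(stare_detected(frames, dt=1 / 90))  # True -> the NPC turns away
```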
There have been attempts with muscle electrodes that pick up electromyographic (EMG) patterns to detect a smile, but it turns out that wearing patches on the face is not that comfortable. There is potential if the tech progresses to a more wearable state.
What else is there?
We're reaching a more inner form of expression into a Virtual Self: emotions.
Short and quick, the latest theory: an emotion is in itself a feedback loop of non-conscious reactions, as interpreted by the conscious mind.
A lion approaches and the non-conscious, among other things, pumps adrenaline: heart-rate increases, reaction time decreases - the brain wants the body to stay alive.
The conscious mind tries to make sense of it: The body is heating up, it's sweating - these are signs of danger. Let's see, I'm surrounded by large plants, there's a brown, bushy animal with large fangs looking at me. I'll run now, and next time I'll predict this better by consciously bringing back this memory repeatedly during the next few days, weeks, years.
So next time when we venture into the wilderness, the non-conscious triggers the same body reactions even at the sight of a Chow Chow dog. A much softer form of PTSD - a malformed automated process based on the conscious idea of Fear.
It doesn't really matter, after all, does it? We're humans - how much of this do we want represented by our Virtual Self?
There are already products exploring this. One of them uses the authentic emotions of the Self to strengthen the mind's control over its daily emotions: with Heart-Rate Variability as a telltale sign of Fear and Stress, the biomarker is monitored through an Apple Watch.
Show too much Fear or Stress and the game becomes harder to play.
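A minimal sketch of that mechanic, assuming RMSSD as the HRV measure and illustrative thresholds (real products calibrate per user and per wearable):

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats - a
    common short-term HRV measure; lower values tend to accompany stress."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def difficulty(hrv_ms, calm_ms=60.0, stressed_ms=15.0):
    """Map HRV onto a 0..1 difficulty knob: the more stress the wearable
    reports (low HRV), the harder the game plays. Thresholds are made up."""
    t = (calm_ms - hrv_ms) / (calm_ms - stressed_ms)
    return max(0.0, min(1.0, t))

calm = [800, 840, 790, 850, 805, 845]      # jittery RR intervals: relaxed
stressed = [700, 702, 701, 703, 702, 701]  # metronome-like: stressed
print(difficulty(rmssd(calm)), difficulty(rmssd(stressed)))
```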
Obviously, there are visual and auditory feedback channels. Covering your whole eye-sight and tickling the ear-drums with surround audio gives satisfying goosebumps when it's done right.
Haptics is currently the main add-on feedback modality through which the Virtual World can communicate with your body.
Gloves, a chest piece, or a full suit, armed with small vibrating motors, reproduce the interactions with the environment of the Virtual World.
Some come with body-tracking included, some with more or fewer haptic points.
Prices range from a few hundred dollars to more than ten thousand.
Have any other thoughts? Get in touch with us, scroll to the bottom of the page and make yourself heard.
And now let's see: What's there to come? Why does it need to come?
What can we do better?
Games and other Virtual Worlds can infer a psychological state solely from your interaction: choosing a particular dialogue option, avoiding a door marked as "Dangerous", and so on.
Clearly, our Self is much more than that. The emotions taking place in the conscious mind greatly affect and define who we are.
Brain-Computer Interfaces can detect a range of states of mind, from anxiety, stress and fear, to empathy, calm and drowsiness.
By themselves, or combined with other bio-signals, they become clear mirrors.
A research paper dives into Brain-Computer-Interface-controlled Narrative Guidance, using empathy detection to drive a storyline.
By analogy, could you change a game by feeling something?
That sounds a bit like life, doesn't it? The Virtual Self could really be a reflection of yourself.
We'll delve deeper into the implications of such technology in a future blog post.
The pre-frontal cortex is our reasoning unit that analyses, compares and takes decisions.
The difference between tasing a criminal and controlling the urge when you see an innocent person is made there.
Obviously, that means that the visual information travels through the brain, is interpreted and analysed, and the decision is sent in turn to activate the right muscles for the desired outcome.
It happens so fast that, after a while, it bypasses the conscious domain.
Classifying Action A and Action B is also done with non-invasive Brain-Computer Interfaces.
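As a toy illustration of such classification, here is a nearest-centroid classifier over made-up band-power features. Real BCI pipelines work on filtered EEG with spatial filters and far more data; every number and label below is an assumption for the sketch.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Nearest-centroid classification: pick the label whose training
    centroid is closest (Euclidean distance) to the sample."""
    return min(centroids, key=lambda lbl: math.dist(sample, centroids[lbl]))

# Made-up band-power features [mu band at C3, mu band at C4] per class:
# imagining a right-hand move suppresses mu over the left motor cortex (C3).
train = {
    "imagine_right_hand": [[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]],
    "imagine_left_hand":  [[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]],
}
centroids = {lbl: centroid(vs) for lbl, vs in train.items()}
print(classify([0.3, 0.9], centroids))  # -> "imagine_right_hand"
```

The point is the shape of the problem, not the method: two imagined actions become two separable regions in feature space, and the game only needs to know which region the current brain signal falls into.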
And yes, there are ways to shortcut half of the pathway that leads to an action of the body.
What do you think, does it have any use-cases in your favourite game?
Imagined Movements rely on what we already know: move the right arm, bend the left knee.
At first, that won't reveal its true worth to everyone.
Why imagine something when I can do it and a suit reproduces my movements in the Virtual World?
What if there are more things to imagine?
Can you imagine flying, like a Superhero, to signal your Virtual Self to do that?
Could you imagine changing direction in a way that your Virtual Self does it faster than your own body?
Could you eventually end up controlling your Virtual Body with the same ease and intuition of controlling your Material Body?
This is what we're after.
The best products have a long, successful life because they're useful, they are simple, and they make sense.
Are the current VR add-ons and enhancements useful? Absolutely.
Are they simple? Maybe individually; complexity grows when they're mixed.
Do they make sense? Let's answer this question together.
The current pathway for Virtual Reality inputs runs through a varying number of middlemen. The figure below explains it better.
Turns out, our Virtual Self is in fact a scripted, sequentially filtered representation of our mind.
Is that good? You tell us.
Are there alternatives? There are.
Is there more to it? There is.
A different, shorter path is to gain as much information as possible directly from the brain. And try to do it for 1/5th of the price of today's complete immersive systems.