Remember how in Minority Report (and that dopey insurance ad) everything is really random and confusing? That is directly because of the air-gesture interface.
First impression – very nice presentation (Apple-esque design, packaging and quality); it works; it feels very strange for a concept that should be so natural; it is going to take some figuring out and practice (a lot of practice).
Next – where to put it? I am using the MacBook 17 on my desk, with the back raised to ~30 degrees for ergos.
Tried it on the desk below the keyboard and trackpad, like they show in the brochure. General pickup was good, detecting within about a 300 mm dome (FoV a bit over 45° from vertical). However the reach didn't feel right, and using the trackpad caused random pickups (it is hard to remember NOT to use the trackpad).
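For the curious, the pickup volume I measured works out to roughly this – a quick sketch, where the radius, angle and function name are my own rough numbers, not anything from a spec sheet:

```python
import math

# Rough model of the detection volume: a ~300 mm dome with a field of
# view of about 45 degrees off vertical. Sensor sits at the origin with
# the z axis pointing straight up off the desk. Numbers are my guesses.
DOME_RADIUS_MM = 300.0
FOV_DEG = 45.0

def in_detection_volume(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    if z <= 0 or r > DOME_RADIUS_MM:
        return False
    angle_off_vertical = math.degrees(math.acos(z / r))
    return angle_off_vertical <= FOV_DEG

print(in_detection_volume(0, 0, 200))    # hand directly above: True
print(in_detection_volume(250, 0, 100))  # low and far off-axis: False
```

That low off-axis point is exactly where a hand on the trackpad sits, which is probably why it keeps getting picked up.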
I put it just above the keyboard (with a little Blu-Tack). This seems better for reach, but because it is sloped towards me the virtual movements are out of square with the screen: poking forwards is detected as forward-plus-down, sliding off the mark.
Now that I have aligned it with the screen plane instead of the keyboard, it is more natural. Being so close to the panel, half the field range is lost and gestures sit quite close to the glass, but I can reach all corners of the display with enough z-movement to activate “clicks”. This would not be as nice for 3D interactions (I downloaded the 3D Molecules app – Sim will be amused), but as a 2.5D mouse replacement it is probably pretty good.
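The out-of-square problem is really just a tilted coordinate frame, and if the driver exposed raw coordinates a 3rd-party app could undo it with a single rotation. A sketch, assuming a ~30 degree tilt about the x axis (the angle, sign convention and function name are all mine, not the vendor's):

```python
import math

# Undo the mounting tilt: rotate sensor coordinates about the x axis so
# that "forward" in sensor space lines up with "towards the screen".
# The 30 degree figure matches my keyboard slope; convention assumed.
TILT_DEG = 30.0

def sensor_to_screen(x, y, z, tilt_deg=TILT_DEG):
    t = math.radians(tilt_deg)
    y_rot = y * math.cos(t) - z * math.sin(t)
    z_rot = y * math.sin(t) + z * math.cos(t)
    return x, y_rot, z_rot
```

With something like this in the pipeline, the sensor could stay flat on the sloped keyboard and a straight poke would still read as a straight poke.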
It explicitly did not like the window being open – the diagnostics reported suboptimal IR lighting, and dropping the blind seems to improve sensitivity. That is annoying, as the view and natural light are a major feature of this desk (and the pull-cord on the blind is broken).
Sensitivity is not pixel-level, but close enough for menu and button selection at 1920×1200 on the 17″ screen. Fine movements are quite jittery, but I won’t judge that too harshly until I get some muscle control. Holding a hand in the air for extended mousing could get very tiring, but being able to lift off the keyboard to quickly point and click seems nice – if this can become a touchscreen without fingerprints I will be happy.
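If the jitter doesn't improve with practice, the usual software fix is to smooth the cursor stream. A minimal exponential-smoothing sketch – generic filtering, not anything from the actual driver, and `alpha` is a made-up tuning knob:

```python
# Exponential smoothing for a jittery cursor: each new sample only pulls
# the cursor part-way towards it. Lower alpha = smoother but laggier.
class CursorSmoother:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.x = None
        self.y = None

    def update(self, x, y):
        if self.x is None:
            self.x, self.y = float(x), float(y)
        else:
            self.x += self.alpha * (x - self.x)
            self.y += self.alpha * (y - self.y)
        return self.x, self.y

s = CursorSmoother(alpha=0.5)
print(s.update(100, 100))  # (100.0, 100.0) – first sample passes through
print(s.update(110, 100))  # (105.0, 100.0) – halfway towards the new sample
```

The trade-off is lag: too much smoothing and the cursor feels like it is being dragged through treacle, which would undo the "quick point and click" appeal.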
It does detect 5 or 10 digits plus overall hand orientation quite well (limp-wristed jokes go here). Fingertips are tracked individually as long as they are reasonably separated and “out front”. Bending a finger under makes it disappear quite quickly, and the Vulcan salute confuses it. I think there is plenty of range for a LOT of gesture combos – just a shame I was never much good at Street Fighter.
Eye–hand–screen alignment is all over the place; it really needs some interactive calibration. There is a setting for “interaction height”, which is a simple scaling and helps a lot (I was holding my hand like a puppeteer, coaxing the cursor out of the dock bar towards the action, then kept dropping out of the field before finding the top). A manual corner-calibration system would be good – maybe it is tucked in here somewhere. The software is very simple so far, and the drivers only do the basic device handling through some proprietary API, relying on 3rd-party apps to convert to mouse emulation. I will put my 2c on the forums re better calibration. I just hope it doesn’t drift and need resetting every hour like some analog inputs of the past.
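The corner calibration I am wishing for could be as simple as recording the sensor reading while you point at each edge of the screen, then mapping linearly between them. A sketch of what a 3rd-party app could layer on top of the raw API – every name, reading and resolution here is a hypothetical example:

```python
# Map raw sensor x/y to screen pixels using readings captured while the
# user points at the screen edges. Clamping keeps the cursor on-screen
# even when the hand wanders outside the calibrated region.
def make_corner_mapper(left, right, top, bottom, screen_w=1920, screen_h=1200):
    def to_screen(sx, sy):
        u = (sx - left) / (right - left)
        v = (sy - top) / (bottom - top)
        u = min(max(u, 0.0), 1.0)  # clamp so you can't fall off the display
        v = min(max(v, 0.0), 1.0)
        return u * screen_w, v * screen_h
    return to_screen

# Example calibration: hand sweeps -120..120 mm across, and from 200 mm
# down to 40 mm of height between screen-top and screen-bottom (made up).
to_screen = make_corner_mapper(left=-120, right=120, top=200, bottom=40)
print(to_screen(0, 120))  # (960.0, 600.0) – dead centre of the screen
```

Storing those four edge readings per mounting position would also make it painless to re-run the calibration if the alignment drifts.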
This is experimental stuff really, so I think I might be cross-eyed with RSI before I can put the mouse away. Actually, I expect I will find some kind of flexible interchange between the Motion, trackpad, super mouse and the pen tablet on the side. Getting away from (or an alternative to) the trackpad and mouse would be good, as both give me cramps, but each has its strengths in different applications.
Well, more news as I get used to it. Cris, if you want to come and check it out, be my guest.