We're not trying to rebuild stores from the ground up, but to retrofit them with new technologies. All we need in order to operate is the right number of cameras, which are designed for easy installation and low power consumption.
Our research enables us to autonomously recognize activities such as picking up an item, opening a door, or eating an apple. These activities can be specialized to a particular domain or applied generically to any situation, depending on the action of interest.
V7 specializes in understanding human-object interactions, allowing us to recognize pick-up actions on objects from short or long distances.
Our AI adapts to lighting changes, lens distortion, occlusion, and other visual edge cases that may occur. Our neural networks work on any person and most product types, and are agnostic to the camera angle at which they operate.