We are developing a future where scientific research is understood and performed by machines, shifting the way we study the natural world from pen and paper to a machine-readable paradigm.
By adding vision AI to laboratories, we aim to fully digitize the life sciences so they can interface directly with the computational sciences.

To gain a full understanding of laboratory practice, we must understand activities and instrumentation as building blocks. Through deep learning we enable the perception of human hands and object pose, and ultimately agent control.
By simplifying the world into these perceivable units, we can virtualize activities and understand discrete actions as steps in a longer procedure.
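As an illustrative sketch only (all names here are hypothetical and not an actual product API), a laboratory activity can be modeled as an ordered series of discrete, perceivable actions:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One discrete, perceivable unit, e.g. a hand grasping a pipette."""
    actor: str   # detected agent, e.g. "hand"
    verb: str    # recognized action, e.g. "grasp", "aspirate"
    target: str  # object identified via pose estimation

@dataclass
class Activity:
    """A longer protocol virtualized as an ordered series of actions."""
    name: str
    steps: list[Action] = field(default_factory=list)

    def add(self, action: Action) -> "Activity":
        self.steps.append(action)
        return self

# A short protocol assembled from individually perceived steps
protocol = (
    Activity("serial_dilution")
    .add(Action("hand", "grasp", "pipette"))
    .add(Action("hand", "aspirate", "reagent_tube"))
    .add(Action("hand", "dispense", "well_plate"))
)
print(len(protocol.steps))  # 3 discrete steps recognized in sequence
```

The point of the sketch is the decomposition: once each step is machine-readable, whole procedures become data that software can store, compare, and replay.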

This ability to perceive entities and actions extends to appliances, creating a fully connected laboratory experience. We are developing the tools and framework to deploy this visual understanding at scale. Meanwhile, see some of our existing products below.