
As the years pass and newer and newer gear comes out, we often have to re-evaluate what we're using in our installations. The number of companies that have entered and exited the depth sensor market in the last few years is wild. In this post, we're going to look at the recent depth sensor news and offer recommendations based on what we've been using on installations.

Let's start with the tried and true: Microsoft Kinect. The Kinect 2.0, or Xbox One Kinect, is a staple of our industry. It's easy to use, integrates with almost any software at this point, is stable, has great tooling built around it, and has a good-quality sensor. Sounds good? Bad news is they actually discontinued the Kinect 2 at the end of 2017 / beginning of 2018. That's more than 3 years ago, and we're still using this sensor. I have a number of them in our office and continue to use them on installations.

Now, Microsoft isn't immune to drama in the depth sensor space. Then there's the Kinect Azure, the new kid on the block. It's what is supposed to replace the Kinect 2… problem is, it's technically still only a dev kit, and the tooling is still FAR behind what the Kinect 2 has. The promise of this device is that it's easier hardware to work with, has a wider field of view, and uses GPU acceleration in its processing. The GPU acceleration is great but does currently add latency, so it feels slower to use than a Kinect 2. And like all dev kits, you'll find the tooling involves a lot of DIY, command line, or workarounds. Not the end of the world, but not exactly thrilling for permanent installations. Even with all that said, the Kinect 2 is still my go-to sensor. At the moment, if you want to use a Kinect 2, you can still buy an Xbox One sensor on websites like Amazon, but annoyingly you'll have to buy a 3rd-party OEM adapter to connect it to your computer.

ZED Camera: Get Depth Map Value in Python

This sample (using the older pyzed 2.x Python API) opens the camera, grabs a frame, retrieves the depth map and point cloud, and prints the distance from the camera to the point at the center of the image:

```python
import pyzed.camera as zcam
import pyzed.defines as sl
import pyzed.types as tp
import pyzed.core as core
import math
import numpy as np
import sys
import time


def main():
    zed = zcam.PyZEDCamera()

    # Create a PyInitParameters object and set configuration parameters
    init_params = zcam.PyInitParameters()
    init_params.depth_mode = sl.PyDEPTH_MODE.PyDEPTH_MODE_PERFORMANCE  # Use PERFORMANCE depth mode
    init_params.coordinate_units = sl.PyUNIT.PyUNIT_MILLIMETER  # Use millimeter units (for depth measurements)

    if zed.open(init_params) != tp.PyERROR_CODE.PySUCCESS:
        sys.exit(1)

    # Create and set PyRuntimeParameters after opening the camera
    runtime_parameters = zcam.PyRuntimeParameters()
    runtime_parameters.sensing_mode = sl.PySENSING_MODE.PySENSING_MODE_STANDARD  # Use STANDARD sensing mode

    image = core.PyMat()
    depth = core.PyMat()
    point_cloud = core.PyMat()

    # A new image is available if grab() returns PySUCCESS
    if zed.grab(runtime_parameters) == tp.PyERROR_CODE.PySUCCESS:
        zed.retrieve_image(image, sl.PyVIEW.PyVIEW_LEFT)
        # Depth map is aligned on the left image
        zed.retrieve_measure(depth, sl.PyMEASURE.PyMEASURE_DEPTH)
        # Point cloud is aligned on the left image
        zed.retrieve_measure(point_cloud, sl.PyMEASURE.PyMEASURE_XYZRGBA)

        # Get and print distance value in mm at the center of the image
        x = round(image.get_width() / 2)
        y = round(image.get_height() / 2)

        start = time.time()
        cloud = point_cloud.get_data()  # whole point cloud as a numpy array
        end = time.time()
        print("Numpy conversion %f ms" % ((end - start) * 1000))

        start = time.time()
        err, point_cloud_value = point_cloud.get_value(x, y)
        end = time.time()
        print("Numpy get_value %f ms" % ((end - start) * 1000))

        # We measure the distance camera - object using Euclidean distance
        distance = math.sqrt(point_cloud_value[0] * point_cloud_value[0] +
                             point_cloud_value[1] * point_cloud_value[1] +
                             point_cloud_value[2] * point_cloud_value[2])

        if not np.isnan(distance) and not np.isinf(distance):
            print("Distance to Camera at ({0}, {1}): {2} mm\n".format(x, y, round(distance)))
        else:
            print("Can't estimate distance at this position, move the camera\n")

    zed.close()


if __name__ == "__main__":
    main()
```

Note that newer versions of the ZED SDK expose the same workflow through the `pyzed.sl` module (`sl.Camera`, `sl.Mat`, `sl.MEASURE.XYZRGBA`), but the structure of the sample is the same.
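The Euclidean-distance step in the sample can be sanity-checked without a camera attached. Below is a minimal sketch, assuming a point-cloud sample laid out as (X, Y, Z, RGBA) in millimeters, as the XYZRGBA measure returns; the helper name `point_distance_mm` is ours for illustration, not part of the ZED SDK:

```python
import math


def point_distance_mm(point_cloud_value):
    """Euclidean distance from the camera to one point-cloud sample.

    Expects an (X, Y, Z, RGBA) sequence in millimeters; only the first
    three components are used. Returns NaN when the sensor could not
    estimate depth at that pixel (the SDK reports NaN/inf there).
    """
    x, y, z = point_cloud_value[0], point_cloud_value[1], point_cloud_value[2]
    if any(math.isnan(v) or math.isinf(v) for v in (x, y, z)):
        return float("nan")
    return math.sqrt(x * x + y * y + z * z)


# A point 1 m straight ahead, 30 cm right, 40 cm up (units: mm)
print(round(point_distance_mm([300.0, 400.0, 1000.0, 0.0])))  # 1118
```

Guarding for NaN/inf before trusting the distance is the same check the full sample performs, so you can drop a helper like this straight into the grab loop.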
