Picoseconds

2018-01-03 09:15:29
Tags: FutureThink LiDAR

Various documents discussing measuring things with photons. Key terms: LIDAR, TAC, TDC.

There seem to be two approaches to measuring picoseconds: the TAC (Time to Amplitude Converter), which measures the voltage of a charging capacitor, and the TDC (Time to Digital Converter), which measures the propagation delay through a chain of cascading flip-flops.

TAC is extremely accurate, with measurements down to 1ps, but is difficult to manufacture.

TDC is less accurate, with a limit of around 60ps, possibly as low as 30ps. However, TDC circuits are relatively easy to design and extremely easy to manufacture; they can even be constructed from standard FPGA cells.
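A toy numerical model of the two approaches makes the trade-off concrete. All component values below (capacitance, charge current, ADC depth, tap delay) are illustrative assumptions, not taken from any real part:

    # Toy models of the two picosecond-timing approaches.
    # All component values are illustrative assumptions.

    def tac_measure(t, c=10e-12, i=1e-3, adc_bits=12, v_ref=2.0):
        """TAC: a constant current charges a capacitor for the interval t;
        the final voltage encodes the elapsed time."""
        v = min(i * t / c, v_ref)                    # V = I*t/C, clipped at full scale
        code = round(v / v_ref * (2**adc_bits - 1))  # digitize the voltage
        lsb = (v_ref * c / i) / (2**adc_bits - 1)    # ~4.9ps per ADC step here
        return code, lsb

    def tdc_measure(t, tap=60e-12):
        """TDC: count how many gate delays the start edge propagated through
        a flip-flop chain before the stop edge; resolution is one tap."""
        return int(t / tap), tap

With these (made-up) values, the TAC resolves ~5ps steps but only spans 20ns before the capacitor saturates; the TDC is coarser but extends its range by simply adding taps or a counter.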


Velodyne VLP-16 LiDAR

The Velodyne PUCK (VLP-16) is a commercial LIDAR time-of-flight (TOF) system available for retail purchase at $8000. This is my analysis, based on the datasheet and some wild guesses.


Datasheet

  • 16 channels
  • Range: 100m
  • Accuracy: ±3cm
  • Dual returns
  • Vertical FOV: 30°
  • Horizontal FOV: 360°
  • Vertical resolution: 2°
  • Horizontal resolution: 0.1° - 0.4°
  • Rotation rate: 5Hz - 20Hz
  • Output: Up to 300K points/second

Operation

The puck uses a fixed array of 16 lasers splayed in a vertical line, each laser covering 2° of the vertical resolution, providing a vertical field of view of ±15°. This array spins on a vertical axis to cover the horizontal field of view, collecting one vertical line every 0.1° at 5Hz (the best resolution). Each laser pulse generates a single "pixel" of range data: a pixel contains range and brightness (reflectivity) information, its position within the frame provides spatial (x,y) information, and the frame carries time and location information.

A frame is one complete revolution, generated at 5Hz, 200ms/frame.
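Under this model, each pixel becomes a 3D point from its laser index (which fixes elevation), the azimuth at fire time, and the measured range. A minimal sketch; the 2° spacing and ±15° span come from the datasheet, while the axis convention and names are my own:

    import math

    def pixel_to_xyz(laser_id, azimuth_deg, range_m):
        """Convert one return into Cartesian coordinates.
        laser_id 0..15 maps to elevations -15..+15 degrees in 2-degree steps."""
        el = math.radians(-15.0 + 2.0 * laser_id)
        az = math.radians(azimuth_deg)
        x = range_m * math.cos(el) * math.sin(az)
        y = range_m * math.cos(el) * math.cos(az)
        z = range_m * math.sin(el)
        return x, y, z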

The lasers need to be fired sequentially, as there is no way to distinguish the return signals of adjacent lasers. However, there is mention of "dual returns". I believe this means the lasers are arranged as two opposing banks of 8, with two return sensors facing 180° apart. The data from the two banks is interleaved much like the two fields of an NTSC video frame. This approach has the benefit of making the mechanical design easier and providing more time between the sequential firings of each laser in a bank (two opposing lasers can fire simultaneously without interfering), which increases the range and provides more time to read and reset the timing circuit.

The downside to interleaving is that it introduces temporal tearing in the frame, as adjacent vertical lines are collected 100ms apart. This would appear as "jaggies" in the horizontal features of moving objects, and as a "smearing" of the overall image if the PUCK itself is moving. This can be especially troublesome for computer vision systems that rely on detecting sharp edges in the image to define objects. The tearing can be mitigated by increasing the spin rate from 5Hz to 20Hz, sacrificing horizontal resolution (dropping from 0.1° to 0.4°, or 900 lines per frame) for a more coherent frame with lower resolution but better edges and more frames per second.
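The skew can be put in numbers (assuming the two-bank interleave guessed at above, where adjacent lines come from banks facing 180° apart):

    # Time skew between adjacent vertical lines: half a revolution.
    for spin_hz in (5, 20):
        skew_ms = 1000.0 / spin_hz / 2
        print(f"{spin_hz}Hz spin -> adjacent lines {skew_ms:.0f}ms apart")
    # 5Hz  -> 100ms apart (0.1° resolution, worst tearing)
    # 20Hz -> 25ms apart  (0.4° resolution, least tearing)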

A line of resolution is then 8 laser pixels along the vertical axis.

At 10 lines per degree (0.1° resolution) and 360° per revolution, the puck must collect 3600 lines every 200ms: 3600/0.2s = 18,000 lines/s. Each line is 8 pixels, so each sensor collects 144,000 pixels/s. Two sensors run in parallel, for a total of 288,000 pixels/s of data.
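The same arithmetic as a quick script:

    # Throughput at 5Hz spin, 0.1° horizontal resolution.
    lines_per_rev = 360 / 0.1            # 3600 vertical lines per frame
    lines_per_sec = lines_per_rev / 0.2  # 200ms/frame -> 18,000 lines/s
    pixels_per_sec = lines_per_sec * 8   # 8 lasers/line -> 144,000 pixels/s
    total = pixels_per_sec * 2           # two sensors -> 288,000 pixels/s
    print(lines_per_sec, pixels_per_sec, total)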

For each pixel, a number of things need to happen:

  • Fire the laser
  • Start the Time Of Flight (TOF) timer
  • Wait for the return signal (round trip)
  • Detect the returning photon(s)
  • Read the timer
  • Reset the timing circuit
  • Convert and store the timing data

All this must happen within 1/144,000s = 6.9us per pixel.

The advertised range of the system is 100m. Light travels at 300Mm/s, requiring 0.67us for the 200m round trip. This is roughly 10% of the time available, leaving 6us to do everything else. The datasheet indicates 0.1° of resolution at a 5Hz spin rate and 0.4° at 20Hz, which tells me the ~7us/pixel is a fixed value.
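Worked out with the numbers above:

    C = 3e8                        # speed of light, m/s
    pixel_s = 1 / 144_000          # ~6.9us available per pixel
    round_trip_s = 2 * 100 / C     # ~0.67us for a 100m target
    print(pixel_s - round_trip_s)  # ~6.3e-06 -> ~6us left for readout and reset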

Timing Circuit

The advertised depth accuracy is ±3cm, which would be set by the accuracy of the timing circuit. A 3cm error in range is a 6cm error in the round trip: 6cm/300Mm/s = 200ps. This is a bit surprising, as it seems 1ps should be achievable. The range of the timing circuit is then 200ps - 670ns (670,000ps), where 670ns is the 100m round trip computed above.

I would assume this is implemented as a primary first-stage timer and a set of secondary timers. The first stage is a free-running 500MHz clock producing 2ns ticks. The second stage is a bank of one-shot timers with 200ps resolution, most likely a TAC (Time to Amplitude Converter) using charging (or discharging) capacitors. On each 2ns tick, a tick counter is incremented, a secondary timer is started, and any previously started secondary timer can be reset. The time required to reset a secondary timer dictates how many redundant timers are required. When the return signal is detected, the voltage on the currently running secondary timer is read, converted to a time offset, and added to the tick count.
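A sketch of how such a two-stage timer would combine its readings, assuming the 2ns coarse tick and 200ps fine step guessed at above (the conversion details are my invention, not Velodyne's design):

    TICK_S = 2e-9        # coarse stage: free-running 500MHz clock
    TAC_LSB_S = 200e-12  # fine stage: digitized TAC voltage, ~200ps/step

    def time_of_flight(tick_count, tac_code):
        """Total TOF = whole coarse ticks + fine interpolation within a tick."""
        return tick_count * TICK_S + tac_code * TAC_LSB_S

    tof = time_of_flight(333, 2)  # 666.4ns round trip
    print(tof * 3e8 / 2)          # -> ~99.96m one-way range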

Cost

The retail cost of the VLP-16 is $8000. If the BOM markup is 4X, the parts budget would be $2000, and each laser unit would need to be well under $100 (more like $50), including the laser and outbound optics.

If the laser bank costs $800 ($50 × 16), this leaves $1200 for the mechanicals, sensor, timing circuit, and processing unit. A wild guess at the processor might be $200, and the mechanicals (including the housing) might be $300-$400, leaving about $600 for the two sensors and timing circuits (possibly less if the lasers cost more). If each sensor-plus-timer pair costs $300, the sensor might account for $150 and the timer for the other $150.

Reconstructing a single-point LIDAR would be:

  • Timer: $150
  • Sensor: $150
  • Laser: $50
  • Optical filter: $20
  • Processor: $200

For a total under $600. This would provide only a single pixel of range data. Once this system was working, building it back up to a full-frame LIDAR system would be a complicated, but doable, mechanical design problem.
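A quick sum of those guesses:

    bom = {"timer": 150, "sensor": 150, "laser": 50, "filter": 20, "processor": 200}
    print(sum(bom.values()))  # 570 -> under the $600 target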

Future

Autonomous cars will grind the cost of high-resolution, real-time, 3D ranging to less than $100 per node. What can be done with a hand-held, mass-market 3D ranging device? I can walk into any room and quickly generate a live 3D voxel map. This can be fed into my phone/laptop via Bluetooth to generate a live 3D model of my environment.

How will this be used?

  • Virtual reality
  • Teleconference
  • Architecture
  • Online shopping
  • Navigation assistance for the blind
  • Surveillance, burglar alarms
  • Retail customer tracking

