The Pelican Depth-Sensing Array

The team at Pelican Imaging has developed revolutionary new technology: a small, super-thin array camera that captures in 3D. The Pelican depth-sensing array calculates the depth of the scene and simultaneously marries the depth information with the RGB images of the primary camera in a mobile device.

This unique approach allows mobile handset manufacturers to choose their own primary camera module (whether it’s 8MP, 13MP, or 20MP) and benefit from excellent image quality paired with depth data from the scene. Because the Pelican depth sensor modules do not have autofocus actuators, the sensor modules are extremely thin (2.5-3 mm) and will not be the height-limiting factor for the mobile device.

When the Pelican depth-sensing array is combined with the primary camera’s autofocus mechanism, autofocus time can be substantially decreased (sub-100 ms), allowing faster first photos and shot-to-shot times.
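The idea behind depth-assisted autofocus is easy to picture in code. The sketch below is purely illustrative (none of these names are a real Pelican API): instead of sweeping the lens and measuring contrast, the camera reads the subject’s distance from the depth map and drives the lens straight to the matching position.

```python
# Hypothetical sketch of depth-assisted autofocus. `depth_map` is a
# 2D grid of distances in mm, `roi` the focus region, and
# `lens_position_for` an illustrative calibration function mapping a
# subject distance to a lens actuator position.

def focus_from_depth(depth_map, roi, lens_position_for):
    """Pick the median depth inside the focus ROI and return a lens position."""
    x0, y0, x1, y1 = roi
    region = [depth_map[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    region.sort()
    subject_distance_mm = region[len(region) // 2]  # median is robust to outliers
    return lens_position_for(subject_distance_mm)
```

Because no focus sweep is needed, the lens moves once, which is where the sub-100 ms figure comes from.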

Users will be able to capture 3D images and video from their handheld devices, giving them unprecedented freedom to produce their own 3D content for a wide range of applications: 3D selfies, 3D printing, gaming, virtual reality, etc.

Still images can be edited with ease. Because the image contains depth information, it’s extremely simple to refocus the photo or select multiple objects to keep in focus. Pelican’s software also enables users to capture quick distance measurements, adjust lighting, apply filters to all or part of the image, and easily replace backgrounds or combine photos. See Pelican’s 3D Image Viewer to view photos and depth maps captured with the array camera, and try out refocus and motion parallax.
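To see why a per-pixel depth map makes refocusing simple, here is a minimal sketch (not Pelican’s actual algorithm): pixels near the chosen focal depth stay sharp, and blur grows with distance from the focal plane. A real pipeline would use a proper lens-blur kernel; a box blur keeps the idea readable.

```python
# Depth-guided refocus sketch. `image` is a 2D list of gray values and
# `depth` an aligned 2D list of per-pixel depths (both illustrative).

def refocus(image, depth, focal_depth, max_radius=4):
    """Return a copy of `image` blurred in proportion to depth distance from focal_depth."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    depth_range = max(abs(d - focal_depth) for row in depth for d in row) or 1
    for y in range(h):
        for x in range(w):
            # blur radius scales with how far this pixel is from the focal plane
            r = round(max_radius * abs(depth[y][x] - focal_depth) / depth_range)
            total, count = 0.0, 0
            for yy in range(max(0, y - r), min(h, y + r + 1)):
                for xx in range(max(0, x - r), min(w, x + r + 1)):
                    total += image[yy][xx]
                    count += 1
            out[y][x] = total / count
    return out
```

Changing `focal_depth` re-renders the shot with a different plane in focus, which is exactly the kind of after-the-fact edit depth capture enables.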

Ecosystem Support

Pelican recently announced partnerships with AAC for the supply of high-quality lenses, and electronics giant Jabil for manufacturing the module in volume.

The Pelican depth-sensing array provides a flexible, scalable solution: it can be combined with any existing mobile device camera, or used as a standalone depth sensor.

A single 2x2 module captures highly accurate near-field depth, and multiple arrays can be combined to enable far-field 3D scanning. An optional patterned light source further increases depth accuracy.
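The near-field/far-field trade-off follows from basic stereo geometry. Assuming a simple pinhole model, triangulated depth is Z = f·B/d, where f is the focal length in pixels, B the baseline between cameras, and d the disparity in pixels. The numbers below are illustrative, not Pelican specifications:

```python
# Back-of-the-envelope stereo geometry: depth Z = f * B / d.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulated depth (mm) for a given pixel disparity, pinhole model."""
    return focal_px * baseline_mm / disparity_px

# With the small baseline inside one array, a single pixel of disparity
# corresponds to a few meters; a wider baseline across two arrays pushes
# the usable range much farther out.
near = depth_from_disparity(focal_px=1000, baseline_mm=3, disparity_px=1)   # 3000 mm
far = depth_from_disparity(focal_px=1000, baseline_mm=60, disparity_px=1)   # 60000 mm
```

This is why combining multiple arrays (a longer baseline B) extends depth sensing into the far field.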

Best Solution for Mobile

According to IDC, by 2018, 85% of the global image capture volume will come from mobile devices. Users increasingly want more features and functionality from the camera in their mobile phone, because that’s the ever-present camera in their pocket.

Mobile device manufacturers can differentiate their products from simple stereo camera arrangements by being one of the first to offer customers a meaningful depth capture solution.

Not only is the Pelican sensor perfectly suited for mobile by virtue of its super-thin form factor (2.5-3 mm), but because the sensor has no autofocus mechanism and no moving parts, it’s capable of extremely fast shooting. When paired with the primary camera in a mobile device, the Pelican array can also be used to decrease the shutter lag of the primary camera, while yielding beautiful, high-resolution, all-in-focus images.

For a closer look at how mobile handset makers can realize the benefits of the Pelican depth-sensing array, see the recently published IDC report: How Computational Photography Can Drive Profits in the Mobile Device Market.

The following video highlights the vision for depth-enabled imaging, and features Kartik Venkataraman (Pelican co-founder and CTO), Raj Talluri (SVP of Product Management at Qualcomm), and Hao Li (Assistant Professor of Computer Science at USC). Depth: the Future of Imaging.

Learn more...