Today there are two main approaches to producing a depth map of a scene: absolute measurement or relative estimation. Absolute depth measurement is a reliable method that achieves accurate results. However, this method, which relies on sensors such as LiDAR or stereo photography, is expensive, complex, and time-consuming.
In the second approach, relative depth estimation, the image is acquired with a simple RGB camera. Although prediction in this approach is much cheaper, simpler, and more widely available, the results are less accurate.
Our goal in this work is to fuse the two approaches above to estimate depth from a single image, and to examine the feasibility of creating a dense absolute depth map from a monochromatic image and sparse absolute depth points.
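To make the idea of combining a dense relative prediction with sparse absolute points concrete, here is a minimal sketch (not the method of this work) of one common baseline: fitting a global scale and shift so that the relative depth map agrees with the sparse absolute measurements in the least-squares sense. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a ground-truth absolute depth map (metres), a dense
# relative prediction that is correct only up to an affine transform, and
# a sparse absolute map with NaN where no sensor measurement exists.
h, w = 4, 4
true_abs = rng.uniform(1.0, 10.0, size=(h, w))
relative = 0.5 * true_abs + 2.0          # relative prediction: scale/shift ambiguity
sparse = np.full((h, w), np.nan)
idx = rng.choice(h * w, size=5, replace=False)
sparse.flat[idx] = true_abs.flat[idx]    # 5 sparse absolute samples

# Fit scale s and shift t so that s * relative + t matches the sparse
# absolute points in the least-squares sense.
mask = ~np.isnan(sparse)
A = np.stack([relative[mask], np.ones(mask.sum())], axis=1)
(s, t), *_ = np.linalg.lstsq(A, sparse[mask], rcond=None)

# Apply the fitted transform everywhere to obtain a dense absolute map.
dense_abs = s * relative + t
```

Because the synthetic relative map is exactly affine in the true depth, the fit recovers the transform and the dense result matches the ground truth; with a real network prediction the residual error would instead reflect how well the relative structure was estimated.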