Full Length Article
Fusion: Practice and Applications
Volume 15, Issue 1, pp. 144-156, 2024

Title

Improving Shape Transformations for RGB Cameras Using Photometric Stereo

H. I. Wahhab 1, A. N. Alanssari 2, Ahmed L. Khalaf 3, Ravi Sekhar 4*, Pritesh Shah 5, Jamal F. Tawfeq 6

1  College of Computer Science & Information Technology, University of Kerbala, Kerbala, 56002, Iraq
    (haider.wahhab@uokerbala.edu.iq)

2  Faculty of Basic Education, Department of Mathematics, University of Kufa, Najaf, Iraq
    (alaan.azeez@uokufa.edu.iq)

3  College of Engineering, Al-Iraqia University, Baghdad, 10054, Iraq
    (ahmed.l.khalaf@aliraqia.edu.iq)

4  Symbiosis Institute of Technology (SIT) Pune Campus, Symbiosis International (Deemed University) (SIU), Pune, 412115, Maharashtra, India
    (ravi.sekhar@sitpune.edu.in)

5  Symbiosis Institute of Technology (SIT) Pune Campus, Symbiosis International (Deemed University) (SIU), Pune, 412115, Maharashtra, India
    (pritesh.shah@sitpune.edu.in)

6  Department of Medical Instrumentation Technical Engineering, Medical Technical College, Al-Farahidi University, Baghdad, 10070, Iraq
    (jamaltawfeq55@gmail.com)


DOI: https://doi.org/10.54216/FPA.150112

Received: August 22, 2023; Revised: December 11, 2023; Accepted: February 27, 2024

Abstract:

The emergence of low-cost red, green, and blue (RGB) cameras has significantly impacted various computer vision tasks. However, these cameras often produce depth maps with limited object details, noise, and missing information. These limitations can adversely affect the quality of 3D reconstruction and the accuracy of camera trajectory estimation. Additionally, existing depth refinement methods struggle to distinguish shape from complex albedo, leading to visible artifacts in the refined depth maps. In this paper, we address these challenges by proposing two novel methods based on the theory of photometric stereo. The first method, the RGB ratio model, tackles the nonlinearity problem present in previous approaches and provides a closed-form solution. The second method, the robust multi-light model, overcomes the limitations of existing depth refinement methods by accurately estimating shape from imperfect depth data without relying on regularization. Furthermore, we demonstrate the effectiveness of combining these methods with image super-resolution to obtain high-quality, high-resolution depth maps. Through quantitative and qualitative experiments, we validate the robustness and effectiveness of our techniques in improving shape transformations for RGB cameras.
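For readers unfamiliar with the underlying theory, the sketch below illustrates classical Lambertian photometric stereo, the textbook formulation on which depth-refinement methods of this kind build: given several images of a static scene under known distant point lights, per-pixel surface normals and albedo are recovered by linear least squares. This is only a minimal illustration of the general principle, not the paper's RGB ratio model or robust multi-light model; the light directions, array shapes, and function names are assumptions made for the example.

```python
# Minimal sketch of classical Lambertian photometric stereo (illustrative only;
# NOT the paper's RGB ratio or robust multi-light model).
# Model: I_k = albedo * (n . l_k); stacking K lights gives I = L @ (albedo * n),
# which is solved per pixel by linear least squares.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale intensities; light_dirs: (K, 3) unit light vectors.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W)."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # (K, H*W) stacked intensities
    b, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # b = albedo * normal, shape (3, H*W)
    albedo = np.linalg.norm(b, axis=0)                   # per-pixel albedo, shape (H*W,)
    normals = np.divide(b, albedo, out=np.zeros_like(b), where=albedo > 1e-8)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

# Toy usage with synthetic data (hypothetical light directions, flat surface, albedo 0.7):
if __name__ == "__main__":
    L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866],
                  [0.0, 0.5, 0.866], [-0.5, 0.0, 0.866]])
    shading = (L @ np.array([0.0, 0.0, 1.0])).reshape(-1, 1, 1)  # true normal along +z
    imgs = 0.7 * shading * np.ones((4, 8, 8))
    n_est, rho = photometric_stereo(imgs, L)
    print(n_est[0, 0], rho[0, 0])  # expected: ~[0, 0, 1] and ~0.7
```

Ratio-based formulations, broadly speaking, divide image intensities so that the unknown albedo cancels before solving for shape, which is one way such models avoid the nonlinearity that the abstract refers to; the least-squares version above instead estimates albedo explicitly and is given purely as background.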

Keywords:

RGB cameras; photometric stereo; RGB ratio model; multi-light model; computer vision; RGB-Fusion
