Sitemap

A list of all the posts and pages found on the site. For robots, there is an XML version available for digesting as well.

Pages

OutCast: Outdoor Single Image Relighting with Cast Shadows

David Griffiths, Tobias Ritschel, Julien Philip

EuroGraphics 2022

We address the problem of single image relighting. Our work shows that monocular depth estimators can provide sufficient geometry when combined with our novel 3D shadow map prediction module.

Posts

Weighted point cloud augmentation for neural network training data class-imbalance

David Griffiths, Jan Boehm

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019

A key issue when training deep neural networks for outdoor point clouds is the inevitable large data imbalance. For example, a typical street scene will contain orders of magnitude more ground points than street furniture. We develop a novel solution that applies a weighted augmentation to reduce the class imbalance.

SynthCity: A Large Scale Synthetic Point Cloud

David Griffiths, Jan Boehm

arXiv preprint 2019

We release a synthetic Mobile Laser Scanning (MLS) point cloud named SynthCity. Every point has a per-class and per-instance classification, along with colour, return intensity, end-of-line indicator and time.

Finding Your (3D) Center: 3D Object Detection Using a Learnt Loss

David Griffiths, Jan Boehm, Tobias Ritschel

European Conference on Computer Vision (ECCV) 2020

We present a novel weakly-supervised approach for 3D object detection. Our method can be trained with up to 95% less labelled data and still benefits from unlabelled data.

Semantic Segmentation of Terrestrial LIDAR Data Using Co-Registered RGB Data

Erick Sanchez, David Griffiths, Jan Boehm

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2021

A pipeline which demonstrates that Terrestrial Laser Scanning (TLS) 3D data can be automatically labelled using off-the-shelf 2D semantic segmentation networks. With only a simple projection of a panoramic image, strong results can be generated with no additional training.

Curiosity-driven 3D Object Detection without Labels

David Griffiths, Jan Boehm, Tobias Ritschel

International Conference on 3D Vision (3DV) 2021

A novel method for self-supervised monocular 3D object detection. This is achieved through differentiable rendering and a GAN-like critic loss.

OutCast: Outdoor Single Image Relighting with Cast Shadows

David Griffiths, Tobias Ritschel, Julien Philip

EuroGraphics 2022

We address the problem of single image relighting. Our work shows that monocular depth estimators can provide sufficient geometry when combined with our novel 3D shadow map prediction module.

4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities

Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir

NeurIPS 2024

A scalable, open-sourced framework for training any-to-any multimodal foundation models across tens of modalities and tasks.

Cubify Anything: Scaling Indoor 3D Object Detection

Justin Lazarow, David Griffiths, Gefen Kohavi, Francisco Crespo, Afshin Dehghan

arXiv 2024

We scale 3D object detection to every object in indoor scenes. Our work demonstrates that, as we scale to smaller objects, 3D inductive priors become less valuable and a fully transformer-based architecture outperforms state-of-the-art 3D networks.

Patents

Publications
