About

Hi! I’m a Senior Applied Scientist at Apple. I work primarily in the fields of 3D and multimodal scene understanding. My previous research involved addressing the scarcity of manually labelled 3D data through alternative learning approaches. This resulted in several publications developing novel weakly- and self-supervised learning pipelines for image, point cloud and geometric mesh data. Prior to Apple, I was a Senior Research Scientist at Fujitsu Research of Europe.

I completed a Ph.D. in 3D computer vision at UCL, London, where I was supervised by Prof. Jan Boehm and collaborated closely with Prof. Tobias Ritschel. I spent the summer of 2021 at Adobe as an intern in the Creative Intelligence Lab, London. Whilst there, I worked on geometrically-driven single-image relighting, supervised by Dr. Julien Philip.

Download CV

 

Publications

4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities

Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir

NeurIPS 2024

A scalable, open-sourced framework for training any-to-any multimodal foundation models across tens of modalities and tasks.

OutCast: Outdoor Single Image Relighting with Cast Shadows

David Griffiths, Tobias Ritschel, Julien Philip

Eurographics 2022

We address the problem of single-image relighting. Our work shows that monocular depth estimators can provide sufficient geometry when combined with our novel 3D shadow map prediction module.

Curiosity-driven 3D Object Detection without Labels

David Griffiths, Jan Boehm, Tobias Ritschel

International Conference on 3D Vision (3DV) 2021

A novel method for self-supervised monocular 3D object detection. This is achieved through differentiable rendering and a GAN-like critic loss.

Semantic Segmentation of Terrestrial LIDAR Data Using Co-Registered RGB Data

Erick Sanchez, David Griffiths, Jan Boehm

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2021

A pipeline which demonstrates that Terrestrial Laser Scanning (TLS) 3D data can be automatically labelled using off-the-shelf 2D semantic segmentation networks. With only a simple projection of a panoramic image, strong results can be generated with no additional training.
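For readers unfamiliar with panoramic projection, the sketch below shows one way such a label transfer could work: scanner-centred 3D points are mapped to equirectangular pixel coordinates and pick up the label of the pixel they land on. This is an illustrative Python sketch, not the exact pipeline from the paper; the function names and the assumption of a scanner-centred coordinate frame are mine.

import numpy as np

def project_to_panorama(points, width, height):
    # Map scanner-centred 3D points (N x 3) to equirectangular pixel coordinates.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-8)
    azimuth = np.arctan2(y, x)              # range [-pi, pi]
    elevation = np.arcsin(z / r)            # range [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return u, v

def label_points(points, label_image):
    # Assign each 3D point the semantic label of the pixel it projects to.
    height, width = label_image.shape
    u, v = project_to_panorama(points, width, height)
    return label_image[v, u]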

Finding Your (3D) Center: 3D Object Detection using a Learnt Loss

David Griffiths, Jan Boehm, Tobias Ritschel

European Conference on Computer Vision (ECCV) 2020

We present a novel weakly-supervised approach for 3D object detection. Our method can be trained with up to 95% less labelled data and still benefits from unlabelled data.

SynthCity: A Large Scale Synthetic Point Cloud

David Griffiths, Jan Boehm

arXiv preprint 2019

We release a synthetic Mobile Laser Scanning (MLS) point cloud named SynthCity. Every point is assigned a per-class and per-instance label, along with colour, return intensity, an end-of-line indicator and time.

Weighted point cloud augmentation for neural network training data class-imbalance

David Griffiths, Jan Boehm

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019

A key issue when training deep neural networks on outdoor point clouds is the inevitable large class imbalance. For example, a typical street scene will contain orders of magnitude more ground points than street furniture points. We develop a novel weighted augmentation strategy to reduce this class imbalance.
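As a rough illustration of frequency-based weighting (an assumed sketch, not the augmentation scheme from the paper), rare classes can be given larger sampling weights so that their points are oversampled and jittered more often during training:

import numpy as np

def class_weights(labels):
    # Weight each class inversely to its frequency, normalised to sum to one.
    classes, counts = np.unique(labels, return_counts=True)
    inv_freq = counts.sum() / counts
    return dict(zip(classes, inv_freq / inv_freq.sum()))

def oversample_rare_points(points, labels, n_extra, rng=None):
    # Draw extra samples with probability proportional to their class weight,
    # then jitter them slightly to create new augmented points.
    rng = rng or np.random.default_rng()
    weights = class_weights(labels)
    p = np.array([weights[label] for label in labels])
    p /= p.sum()
    idx = rng.choice(len(points), size=n_extra, p=p)
    jitter = rng.normal(scale=0.01, size=(n_extra, points.shape[1]))
    return points[idx] + jitter, labels[idx]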