
I am a Professor in Computer Science at the University of Bath (since 2017). Previously I was a Lecturer (Assistant Prof./US) in Computer Science from 2012 and a Reader (Associate Prof./US) from 2014. I have also been fortunate to hold two Research Fellowships: a Royal Academy of Engineering Research Fellowship (2007-2012) and a Royal Society Industry Fellowship with Double Negative Visual Effects (2012-2016).

I am currently the Director of the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), funded by EPSRC/AHRC, with partner contributions from The Imaginarium, The Foundry, British Skeleton, Ministry of Defence and British Maritime Technologies.

Although primarily embedded in Visual Computing and AI, I'm a multi-disciplinary scientist interested in problems that cut across disciplines, which has resulted in papers published in fields as diverse as animation, vision, emotion perception and sport. I'm also interested in democratising technology to enable people to do great work - whether they are researchers or everyday people who just want to be more productive or have fun. Past applications of my research have included human motion analysis, recognition and synthesis, but I'm interested in any problem that involves understanding and modelling data. Applications have spanned the creative industries (e.g. we worked on some great projects with the BBC and Aardman - 'Is Anna OK?' and '11:11 Memories Retold'), healthcare (e.g. AI to manage disease) and sport (e.g. markerless biomechanical analysis), but I'm always looking for new and interesting ways to apply my research.

I'm always looking for bright potential PhD students and post-doctoral researchers to work on projects in CAMERA. Please get in touch if you are interested, even if positions are not advertised. Apologies in advance if I don't respond to every email, but be assured that if you have a strong background and the right project looks to be coming up, I will be in touch!

Research Areas

Below is an overview of my research organised by the different areas I have worked on. It is a representative list of papers by broad topic area only - for a full list see the research papers section.

Facial Analysis and Synthesis

Evolutionary Facial Animation - CGF 2019
Content Aware Deformation - CVMP 2015
Procedural Facial Animation - CGF 2018
Reading Between the Dots: Facial Capture - GI 2017
Dynamic Morphable Models: D3DFACS - ICCV 2011
4D Facial Movement for Biometrics - IEEE SMC 2010
Speech Driven Facial Animation - ICPR 2004

Performance Capture and Animation

RGBD-Dog: Predicting Canine Pose from RGBD - CVPR 2020
Scale Aware Performance Retargeting - CGI 2019
Markerless Motion Capture Survey - Sports Medicine 2018
Markerless Sprint Analysis - WACV 2018
Elastic Deformation - MIG 2018

Image and Video Processing

Image to Image Translation - NeurIPS 2018
Multi-task Learning - CVPR 2018
Blur Robust Optical Flow - PR 2017
Inferring Focal Length - CG 2015
Shadow Removal - BMVC 2014
Robust Feature Tracking - ACCV 2013
Camera Tracking in Visual Effects - DigiPro 2016
Mesh based Optical Flow - CVPR 2013
Non-Rigid Optical Flow Ground Truth - RAL 2016
Water Reconstruction from Video - TVCG 2015

Applied Perception for Vision and Graphics

Nonlinear 4D Facial Perception - ACM APGV / SAP 2011
Facial Dynamics and Trustworthiness - Emotion 2008
Perceptual Evaluation of Video Based Facial Animation - ACM TAP 2005
Evaluation of Foveated Rendering Methods - ACM SAP 2016
Evaluation of Gesture Based Interfaces - Pervasive 2011

Virtual and Augmented Reality

Real World Objects for Egocentric VR - IEEE VR 2020
Creating Virtual Props - ISMAR 2019
Real Time Object Deformation for VR - SIGGRAPH 2019 (poster)
Multi-Camera / RGBD Object Tracking - SCA 2019
Tracking Head Mounted Displays - VRST 2014
Latency Aware Foveated Rendering - CVMP 2015


Research and Engineering Team

Postdoctoral/Engineers:

Martin Parsons (CAMERA); Nadejda Roubtsova (CAMERA); Murray Evans (CAMERA); Yiguo Qiao (Living With/RUH/InnovateUK); Sinead Kearney (CAMERA)

PhD and EngD Students: Jack Saunders; George Fletcher; Maryam Naghizadeh; Jake Deane; Kyle Reed (Cubic Motion); Catherine Taylor (Marshmallow Laser Feast)

Alumni: Jose Serra (Industrial Light and Magic); Anamaria Ciucanu (MMU); Pedro Mendes; Shridhar Ravikumar (Amazon); Alastair Barber (The Foundry); Wenbin Li (Bath); Han Gong (UEA/Cambridge/Edinburgh/Apple); Charalampos Koniaris (Disney Research); Daniel Beale; Sinan Mutlu (Framestore); Nicholas Swafford

Commercial Projects

In CAMERA we translate research into impact through commercial projects. Below is a snapshot of some of the commercial projects the CAMERA team and I have been proud to help deliver - wherever possible using tools based on our own research for motion capture, rigging and animation.

Cosmos Within Us - with Satore Studios
11:11 Memories Retold - with Aardman and Bandai Namco
'Is Anna OK?' - with BBC and Aardman
Magic Butterfly - with REWIND and WNO

Research Funding and Awards

(PI) 2020-2025: CAMERA 2.0 - Centre for the Analysis of Motion, Entertainment Research and Applications (£4,151,614 FEC). EPSRC

(PI) 2019-2021: CAMERA Motion Capture Innovation Studio (£901,391) Horizon 2020

(PI) 2019-2022: A tool to reveal Individual Differences in Facial Perception (£402,113) Medical Research Council (MRC)

(PI) 2018-2020: Rheumatoid Arthritis Flare Profiler (£165,126, Total project value £663,290). Partners: Living With, NHS. InnovateUK

(Co-I) 2018-2022: Bristol and Bath Creative Cluster (~£4m). Partners: UWE, University of Bristol, Bath Spa University. AHRC

(PI) 2017-2019: DOVE: Deformable Objects for Virtual Environments (£128,746, Total project value £562,559 FEC). Partners: Marshmallow Laser Feast, Heston's Fat Duck. Innovate UK

(PI) 2016-2018: HARPC: HMC for Augmented Reality Performance Capture (£119,025, Total project value £517,616 FEC). Partner: The Imaginarium. Innovate UK

(PI) 2015-2020: Centre for the Analysis of Motion, Entertainment Research and Applications - CAMERA (£4,998,728 FEC, not including ~£5,000,000 of partner contributions). Partners: The Imaginarium, The Foundry, Ministry of Defence, British Maritime Technologies, British Skeleton. EPSRC/AHRC.

(PI) 2015-2017: Biped to Animal (£108,109 FEC). Partner: The Imaginarium. Innovate UK.

(PI) 2015: Goal Oriented Real Time Intelligent Performance Retargeting (£29,997 FEC). Partner: The Imaginarium. Innovate UK.

(Co-I) 2013-2016: Acquiring Complete and Editable Outdoor Models from Video and Images (£1,003,256 FEC). EPSRC.

(PI-Bath) 2014-2017: Visual Image Interpretation in Man and Machine (VIIMM) (£121,030 FEC). Partner: University of Birmingham. EPSRC

(PI) 2012-2016: Next Generation Facial Capture and Animation (£100,887 FEC). Partner: Double Negative Visual Effects. The Royal Society Industry Fellowship.

(PI) 2007-2012: Exploiting 4D Data for Creating Next Generation Facial Modelling and Animation Techniques (£460,640 FEC). The Royal Academy of Engineering Research Fellowship.

Other funding: PhD Studentships, EPSRC Innovation Acceleration Account (IAA), Nuffield Foundation.

Code and Data

RGBD-Dog

RGBD-Dog contains motion capture and multi-view (Sony) RGB and (Kinect) RGBD data for several dogs performing different actions, with all cameras and mo-cap synchronised and calibration data included. You can get the data, viewing code and the CVPR 2020 paper it is based on from our GitHub page. In our CVPR 2020 paper we use the data to train a model to predict dog pose from RGBD data; it also works pretty well on other animals. We will expand the data and code in the future as we publish more of our research.
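For anyone wondering what working with the data looks like, here is a minimal sketch of projecting one frame of the synchronised mo-cap skeleton into a calibrated camera view. The file names, JSON calibration layout and the load_calibration/project_joints helpers are hypothetical illustrations, not the dataset's actual API - see the GitHub page for the real formats and loading code.

```python
import json
import numpy as np

def load_calibration(path):
    """Load a per-camera calibration file.

    Assumes a simple JSON layout with a 3x3 intrinsic matrix 'K',
    a 3x3 rotation 'R' and a 3-vector translation 't' (world-to-camera).
    The actual RGBD-Dog calibration format may differ - check the repo.
    """
    with open(path) as f:
        calib = json.load(f)
    K = np.array(calib["K"])
    R = np.array(calib["R"])
    t = np.array(calib["t"]).reshape(3, 1)
    return K, R, t

def project_joints(joints_world, K, R, t):
    """Project Nx3 world-space mo-cap joints into 2D pixel coordinates."""
    cam = R @ joints_world.T + t   # 3xN points in camera space
    uvw = K @ cam                  # apply intrinsics
    return (uvw[:2] / uvw[2]).T    # perspective divide -> Nx2 pixels

# Example: overlay one frame of the skeleton on the matching RGB image
# (paths are hypothetical placeholders).
K, R, t = load_calibration("calibration/camera00.json")
joints = np.load("mocap/dog1_walk_frame0120.npy")  # hypothetical Nx3 joint positions
pixels = project_joints(joints, K, R, t)
print(pixels)
```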

D3DFACS

The D3DFACS Dataset contains over 500 FACS-coded dynamic 3D (4D) sequences from 10 individuals, including 3D meshes, stereo UV maps, colour camera images and calibration files. You can find out more about it in our ICCV 2011 paper "A FACS Valid 3D Dynamic Action Unit Database with Applications to 3D Dynamic Morphable Facial Modelling". If you would like to download the dataset for academic research, please visit the dataset website.
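As a rough illustration of the kind of analysis the 4D sequences support, the sketch below loads two frames of a mesh sequence and computes per-vertex displacement. It assumes the meshes have been converted to Wavefront OBJ with consistent vertex ordering across frames; the paths and the load_obj_vertices helper are hypothetical, not part of the dataset's actual tooling.

```python
import numpy as np

def load_obj_vertices(path):
    """Read vertex positions from a Wavefront OBJ file into an Nx3 array."""
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                verts.append([float(x) for x in line.split()[1:4]])
    return np.array(verts)

# Per-vertex displacement between a neutral frame and a later frame of a
# sequence - a simple way to visualise how much each part of the face moves.
# Paths are hypothetical placeholders.
neutral = load_obj_vertices("subject01/AU12/frame_0000.obj")
frame = load_obj_vertices("subject01/AU12/frame_0045.obj")
displacement = np.linalg.norm(frame - neutral, axis=1)
print(displacement.max(), displacement.mean())
```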

Shadow Removal Ground Truth and Evaluation

To encourage open comparison of single-image shadow removal in the community, we provide an online benchmark site and a dataset. Our quantitatively verified, high-quality dataset contains a wide range of ground truth data (214 test cases in total). Each case is rated on four attributes (texture, brokenness, colourfulness and softness), each in three perceptual degrees from weak to strong. To access the evaluation website, please visit here.
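As a simple illustration of comparing a result against ground truth, the sketch below computes a per-pixel RMSE between a shadow-removed image and its shadow-free reference. This is a generic metric with hypothetical file paths, not the benchmark's official scoring - please use the evaluation website for reported comparisons.

```python
import numpy as np
from PIL import Image

def rmse(result_path, ground_truth_path):
    """Root-mean-square error between a shadow removal result and the
    shadow-free ground truth, computed per pixel over all channels."""
    result = np.asarray(Image.open(result_path), dtype=np.float64)
    truth = np.asarray(Image.open(ground_truth_path), dtype=np.float64)
    return np.sqrt(np.mean((result - truth) ** 2))

# Hypothetical paths for one test case.
print(rmse("results/case_001.png", "ground_truth/case_001.png"))
```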

Other Code

I'm trying to archive and add new code to my GitHub page (under username dopomoc) when I get time - so please check that out now and again for updates!




Publications (Recent and Selected)

There are many better systems these days for keeping track of personal papers - e.g. my Google Scholar page or University of Bath Pure page. So, apologies if this page is not maintained as well as I would like and papers are missing!

Archive

Below is a collection of previous activities - including workshops (at CVPR, ACCV, etc.) and EPSRC research networks I have co-founded - kept here for future reference (mine as much as anything!).


EPSRC Network on Visual Image Interpretation in Humans and Machines (ViiHM)

Understanding the environment via the sense of vision represents a challenging problem in computer science. Yet biological vision, as evidenced in the human visual system, seems to process the visual environment effortlessly. This supports the notion that understanding biological vision will help to solve problems in machine vision. However, some of the biggest advances in our understanding of human vision have occurred as a direct result of modern computing techniques. We can only really say we understand a complex system fully when we can recreate or simulate it, test hypotheses on the simulation, and take the simulation to the limits of its validity. The aims of the EPSRC ViiHM Network are:

1. To foster communication and joint projects between relevant research groups, including those working on biological vision (human and non-human animals), computer vision and machine vision.
2. To establish a series of grand challenges focused around well-specified tasks where cross-over studies have a strong potential to provide robust solutions.
3. To foster joint cross-discipline grant applications.
4. To explore mechanisms to improve the utility of joint publications for both partners.
5. To equip individual PhD and post-doctoral scientists to be future leaders of cross-over research projects.
6. To establish a lasting vehicle for supporting cross-over biological and machine vision projects.
7. To increase public engagement with the concept of biologically inspired computer vision.

You can join and register for our first workshop at the same time, or just join the Network here: http://www.viihm.org.uk


EPSRC Network on Vision and Language (V&L Net)

The EPSRC Network on Vision and Language (V&L Net) is a forum for researchers from the fields of Computer Vision and Language Processing to meet, exchange ideas, expertise and technology, and form new partnerships. Our aim is to create a lasting interdisciplinary research community situated at the language-vision interface, jointly working towards solutions to some of today's most challenging computational problems, including image and video search, description of visual content and text-to-image generation. As a research collaboration forum, V&L Net has a real-life and a virtual dimension. We hold annual V&L Net meetings which combine the characteristics of an academic conference, a networking event and an exhibition. At the same time, the V&L Net website offers a wide variety of tools and resources, including networking tools and repositories of publications, data resources and software tools.


2nd Meeting of the EPSRC Network on Visual Image Interpretation in Humans and Machines (ViiHM), July 1st/2nd, 2015

We invite all academics and relevant industrial practitioners interested in fostering human and computer vision research to the second annual meeting of the EPSRC ViiHM Network. The meeting will focus on community building and will comprise plenary talks from internationally renowned human and computer vision researchers, networking and community building opportunities, and poster sessions. Full workshop details may be found here: http://www.viihm.org.uk/home/events/second-workshop/


2nd Workshop on User Centric Computer Vision (UCCV), 2014

UCCV 2014 is a workshop dedicated to research on interactive computer vision and methods for making computer vision more accessible to wider audiences. The workshop welcomes work on case studies, end-user applications, developer-centred approaches and many other aspects of computer vision.


1st Meeting of the EPSRC Network on Visual Image Interpretation in Humans and Machines (ViiHM), September 24th-25th, 2014

We invite all academics and relevant industrial practitioners interested in fostering human and computer vision research to the first annual meeting of the EPSRC ViiHM Network. The meeting will focus on community building and will comprise plenary talks from internationally renowned human and computer vision researchers, networking and community building opportunities, and poster sessions. Up to 80 applicants will be invited to attend the workshop, based on a balance of early, mid and advanced career researchers; we will also aim to balance the mix of disciplines. The meeting runs from midday on the 24th to the afternoon of the 25th of September. Those interested should complete the form below and send it to the Network Administrator by 30th June 2014. Workshop applicants may optionally request a poster presentation using the same form. Posters may represent new work, a review of past work, an outline of planned work or a position piece, or an outline of collaboration interests and opportunities. Full workshop details may be found here: http://viihm.org.uk/workshop.html


3rd Workshop On Vision And Language 2014 (VL'14), Dublin, 23rd August 2014

Fragments of natural language, in the form of tags, captions, subtitles, surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating visual appearance. In addition, labelled images are essential for training object or activity classifiers. On the other hand, visual data can help resolve challenges in language processing such as word sense disambiguation. Studying language and vision together can also provide new insight into cognition and universal representations of knowledge and meaning. Meanwhile, sign language and gestures are languages that require visual interpretation. We welcome papers describing original research combining language and vision. To encourage the sharing of novel and emerging ideas we also welcome papers describing new datasets, grand challenges, open problems, benchmarks and work in progress as well as survey papers. Full workshop details may be found here https://vision.cs.bath.ac.uk/VL_2014/


EPSRC Workshop on Vision and Language (2010-2013)

The EPSRC Network on Vision and Language (V&L Net) is a forum for researchers from the fields of Computer Vision and Language Processing to meet, exchange ideas, expertise and technology, and form new partnerships. Our aim is to create a lasting interdisciplinary research community situated at the language-vision interface, jointly working towards solutions to some of today's most challenging computational problems, including image and video search, description of visual content and text-to-image generation. As a research collaboration forum, V&L Net has a real-life and a virtual dimension. We hold annual V&L Net meetings which combine the characteristics of an academic conference, a networking event and an exhibition. At the same time, the V&L Net website offers a wide variety of tools and resources, including networking tools and repositories of publications, data resources and software tools. The network's home page may be found here.


Eurographics UK - Theory and Practice of Computer Graphics 2013

The 31st conference organised by the UK chapter of the Eurographics Association took place at the University of Bath on 5-6 September 2013. The aim of the conference was to focus on theoretical and practical aspects of Computer Graphics and to bring together top practitioners, users and researchers, inspiring further collaboration between participants, particularly between academia and industry. The meeting website contains more details of the event.

IEEE CVPR Workshop on Vision and Language 2013

The EPSRC Network on Vision and Language (V&L Net) has been set up to foster collaborative work in this area. It is a forum for researchers from the fields of Computer Vision and Natural Language Processing to meet, exchange ideas, expertise and technology, and form new partnerships. The aim is to create a lasting interdisciplinary research community situated at the language-vision interface, jointly working towards solutions for some of today's most challenging computational challenges, including image and video search, description of visual content and text-to-image generation. A workshop on this theme - held jointly with CVPR - took place in 2013. The meeting website may be found here.


Symposium on Facial Analysis and Animation (FAA), in Co-op with ACM, (2009, 2010, 2012)

The aim of this meeting is to bring together researchers and practitioners from both academia and industry - particularly in VFX and games - interested in all aspects of facial animation and related analysis. The meeting has previously been held in Edinburgh (2009/2010) and Vienna (2012). Watch this space for future meetings!

AVA/BMVA Biological and Computer Vision (2011,2012)

The studies of biological and machine vision share much common history (e.g. Marr), and each discipline has benefited enormously from findings and techniques in the other. In the UK (in contrast to elsewhere) discussion and collaboration between the sister disciplines seems to have declined. The aim of this meeting, organised jointly by the Applied Vision Association (AVA) (UK biological vision) and the British Machine Vision Association (BMVA) (UK computer vision), is to reignite conversations between these two fields. The meeting was held at Cardiff University (2011) and at Microsoft Research, Cambridge (2012). Watch this space for 2013 meeting information.