Hao Chen, Mario Valerio Giuffrida, Peter Doerner, Sotirios A. Tsaftaris

CVPPP Workshop (2018)

Hao Chen, Mario Valerio Giuffrida, Peter Doerner, Sotirios A. Tsaftaris (2018) “Root Gap Correction with a Deep Inpainting Model,” Computer Vision Problems in Plant Phenotyping.

Get Paper

Abstract

Imaging the roots of growing plants in a non-invasive and affordable fashion has been a long-standing problem in image-assisted plant breeding and phenotyping. One of the most affordable and widespread approaches is the use of mesocosms, in which plants are grown in soil against a glass surface that permits root visualization and imaging. However, because soil occludes parts of the root and the image is a 2D projection of a 3D object, portions of the root system remain hidden. As a result, even under perfect root segmentation, the resulting images contain several gaps that can hinder the extraction of fine-grained root system architecture traits.

We propose an effective deep neural network that recovers the gaps between disconnected root segments. We train a fully supervised encoder-decoder deep CNN that, given an image containing gaps as input, generates an inpainted version that recovers the missing parts. Since ground truth is lacking for real data, we train and evaluate our approach on synthetic root images that we artificially perturb by introducing gaps. We show that our network reduces root gaps in both dicot and monocot cases. We also show promising exemplary results on real data from chickpea root architectures.
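The training setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network depth, channel widths, loss, and the toy data (a synthetic binary root mask with an artificially introduced gap) are all assumptions made for the sake of a self-contained example.

```python
import torch
import torch.nn as nn

class RootInpainter(nn.Module):
    """Toy encoder-decoder CNN that maps a gapped root mask to an inpainted one."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample the gapped binary root mask (1 channel in).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to full resolution; sigmoid gives per-pixel
        # probabilities for the recovered (complete) root mask.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def training_step(model, optimizer, gapped, complete):
    """One fully supervised step: gapped mask in, complete mask as the target."""
    optimizer.zero_grad()
    pred = model(gapped)
    loss = nn.functional.binary_cross_entropy(pred, complete)
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic stand-in data: a "complete" root mask and a perturbed copy with a gap,
# mimicking how synthetic roots are artificially perturbed to create training pairs.
complete = torch.zeros(1, 1, 64, 64)
complete[:, :, :, 31:33] = 1.0      # a thin vertical "root"
gapped = complete.clone()
gapped[:, :, 20:28, :] = 0.0        # artificially introduced occlusion gap

model = RootInpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = training_step(model, opt, gapped, complete)
```

Because the gaps are introduced synthetically, the complete mask is known exactly, which is what makes fully supervised training possible despite the lack of ground truth in real mesocosm images.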