Machine learning (ML) has been recognized as central to artificial intelligence (AI) for many decades. The question of how things learnt in one context can be re-used and adapted in other related contexts, however, has only come to the attention of the wider ML research community over the past few years. In parallel (and sometimes preceding this), transfer learning has received increasing attention in other research areas, e.g., psychology.

In the deep learning context, problems are abstract concepts observed through data, which consists of instances and associated labels to learn from, while solutions are the parameters of the model learnt to solve the problem. Transfer learning and domain adaptation refer to the situation where a model is learnt in one setting and exploited to improve generalization in another setting. The transfer process begins with a) a target task to be learnt in a target context; b) a set of solutions to the source tasks (already learnt in the source contexts); and c) the transfer of knowledge based on the similarity between the target and source tasks. This is commonly understood in a supervised learning context, where the input is the same but the target may be of a different nature. If there is significantly more data in the first setting, it may help to learn representations that generalize quickly in the second, because many visual categories share low-level notions such as edges, visual shapes, and changes in lighting.

Recent works have therefore incorporated transfer learning into deep visual representations to combat the problem of insufficient training data. Pre-training CNNs on ImageNet or Places has become standard practice for other vision problems. However, the features learnt by pre-trained models are not perfectly suited to the target learning task. Using the pre-trained network as a feature extractor, or fine-tuning it, has become a common way to learn task-specific features, while extensive efforts have been made to better understand transfer learning itself.
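
To make the two standard strategies concrete, below is a minimal sketch of adapting an ImageNet-pretrained CNN to a new target task, either as a frozen feature extractor or via fine-tuning. It assumes PyTorch with a recent torchvision (>= 0.13 for the weights API); the backbone choice (ResNet-18), the number of target classes, and the optimizer settings are illustrative assumptions, not part of the call itself.

    # Sketch: transfer learning from ImageNet to a hypothetical 5-class target task.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_TARGET_CLASSES = 5        # hypothetical target task
    FEATURE_EXTRACTION = True     # True: freeze backbone; False: fine-tune all layers

    # Load a CNN pre-trained on ImageNet (the "source" solution).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    if FEATURE_EXTRACTION:
        # Feature extraction: keep the pre-trained representations fixed.
        for param in model.parameters():
            param.requires_grad = False

    # Replace the final classifier so it matches the target label space.
    model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

    # Only parameters with requires_grad=True are updated:
    # just the new head for feature extraction, the whole network when fine-tuning.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of target-domain images.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

In practice, the choice between the two regimes depends mainly on how much labelled target data is available: with little data, freezing the backbone reduces overfitting, while larger target datasets usually benefit from fine-tuning more (or all) of the layers, often with a smaller learning rate for the pre-trained weights.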

Therefore, this Special Issue welcomes new research contributions proposing novel (federated) transfer learning and domain adaptation approaches to real imaging-related problems, such as (but not limited to):

  • Medical imaging
  • Plant biology
  • Microscopy
  • Remote sensing
  • Hyperspectral imaging
  • Video surveillance
  • Human rights technology
  • COVID-19
  • Multi- and cross-modality

Submissions in these areas may address one (or more) machine learning tasks, such as classification, regression, segmentation, detection, etc.

Dr. Christos Chrysoulas
Dr. Mario Valerio Giuffrida
Prof. Dr. Aris Perperoglou
Dr. Grigorios Kalliatakis
Mr. Alexandros Stergiou
Guest Editors

