
Junior Research Group "Knowledge Extraction & Integration"

Knowledge is one of the cornerstones of societal progress. Likewise, humans and machines can benefit from each other by sharing knowledge. The junior research group "Knowledge Extraction & Integration", headed by Dr. Jörg Schlötterer, aims to foster knowledge exchange between machines as well as between humans and machines.


A basic step in making knowledge accessible to machines is extracting structured, machine-readable information from unstructured data sources. Typical unstructured sources are patient reports written in natural language, for which the research group develops methods of structured representation. A structured representation enables targeted search, access at various levels of granularity, and the computation of statistics (cf. cancer registry), and it simplifies downstream machine learning tasks.
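As a minimal sketch of this idea, the snippet below pulls a few structured fields out of a hypothetical free-text report using simple patterns. The report text, field names, and patterns are invented for illustration; production systems would use trained NLP/NER models, but the mapping from text to structured record is the same in spirit.

```python
import re

# Hypothetical example report; not real patient data.
REPORT = (
    "Patient presents with a 2.3 cm lesion in the upper left lobe. "
    "Histology: adenocarcinoma. TNM staging: T1c N0 M0."
)

def extract_fields(text: str) -> dict:
    """Map unstructured report text to a structured, machine-readable record."""
    fields = {}
    # lesion size in centimetres
    size = re.search(r"(\d+(?:\.\d+)?)\s*cm", text)
    if size:
        fields["lesion_size_cm"] = float(size.group(1))
    # histological diagnosis
    histology = re.search(r"Histology:\s*(\w+)", text)
    if histology:
        fields["histology"] = histology.group(1)
    # TNM staging triple
    tnm = re.search(r"\b(T\d\w?)\s+(N\d)\s+(M\d)\b", text)
    if tnm:
        fields["tnm"] = {"T": tnm.group(1), "N": tnm.group(2), "M": tnm.group(3)}
    return fields

record = extract_fields(REPORT)
# record → {"lesion_size_cm": 2.3, "histology": "adenocarcinoma",
#           "tnm": {"T": "T1c", "N": "N0", "M": "M0"}}
```

Once reports are in this form, targeted search and statistics become simple queries over the extracted fields.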


Knowledge exchange between machine learning models can be achieved by transfer learning, which is based on the assumption that the knowledge a model gains from solving task A is beneficial for solving task B. The research group investigates transfer learning in multimodal settings, both for separate and for end-to-end models. End-to-end multimodal transfer learning incorporates data from different modalities (e.g., text and images) into a single model, exploiting the information contained in one modality to complement the other and vice versa. Separate models learn condensed representations of the data, which are then combined in a predictive model, similar to a tumor board in which interdisciplinary experts discuss their findings to plan treatment rather than looking at the raw data.
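The "separate models" setting can be sketched as late fusion: each modality gets its own encoder producing a condensed representation, and a joint predictor combines them. The toy code below uses random weights as stand-ins for trained encoders, purely to illustrate the data flow; it is not the group's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_encoder(x_text: np.ndarray) -> np.ndarray:
    # stand-in for a trained text model; outputs a condensed 8-dim representation
    W = rng.normal(size=(x_text.shape[-1], 8))
    return np.tanh(x_text @ W)

def image_encoder(x_img: np.ndarray) -> np.ndarray:
    # stand-in for a trained image model; same condensed dimensionality
    W = rng.normal(size=(x_img.shape[-1], 8))
    return np.tanh(x_img @ W)

def fused_predictor(z_text: np.ndarray, z_img: np.ndarray) -> float:
    # late fusion: concatenate the condensed representations,
    # then apply a simple linear + sigmoid predictor
    z = np.concatenate([z_text, z_img])
    w = rng.normal(size=z.shape)
    return float(1.0 / (1.0 + np.exp(-(z @ w))))

# hypothetical inputs: a 16-dim text feature vector, a 32-dim image feature vector
score = fused_predictor(text_encoder(rng.normal(size=16)),
                        image_encoder(rng.normal(size=32)))
```

An end-to-end model would instead train the encoders and the predictor jointly, so that gradients from the fused prediction shape both modality representations at once.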


Trained on huge amounts of data, machine learning models are capable of identifying facts and interrelations that clinicians may not yet be aware of. However, this new knowledge is trapped in powerful but complex models, which, due to their complexity, are non-interpretable black boxes to humans. Explainable Artificial Intelligence (X-AI) provides insights into the decision-making process of black-box models, uncovering hidden knowledge. These insights also allow clinicians to judge the quality and decisions of a machine learning model and to correct the latter if necessary. The research group's vision is to go beyond correcting wrong model decisions and instead to correct the model's decision process itself, leveraging X-AI methods. Such correction enables the integration of clinical knowledge into machine learning models, closing the knowledge exchange cycle between humans and machines.


Head of Junior Research Group

Dr. Jörg Schlötterer

joerg.schloetterer@uni-due.de

Institut für künstliche Intelligenz
in der Medizin
Universitätsklinikum Essen
Girardetstraße 2, 45131 Essen