Laboratoire de Génie Informatique et d’Automatique de l’Artois

Siti MUTMAINAH

Ph.D. student
(Left the LGI2A in 2021)

International Conference with Review Committee

Improving an Evidential Source of Information Using Contextual Corrections Depending on Partial Decisions
International Conference on Belief Functions 2021, pp 247-256, Shanghai, China, 10/2021
On Learning Evidential Contextual Corrections from Soft Labels Using a Measure of Discrepancy Between Contour Functions
Proceedings of the 13th international conference on Scalable Uncertainty Management, SUM 2019, pp 382-389, Compiègne, France, 12/2019

French Conference with Review Committee

Corrections contextuelles crédibilistes en fonction de décisions partielles
30e Rencontres Francophones sur la Logique Floue et ses Applications, LFA 2021, pp 217-224, Paris, France, 10/2021
Siti MUTMAINAH, Frédéric PICHON, David MERCIER
Apprentissage de corrections contextuelles crédibilistes à partir de données partiellement étiquetées en utilisant la fonction de contour
Actes des 28èmes rencontres francophones sur la Logique Floue et ses Applications, LFA 2019, pp 157-164, Alès, France, Cépaduès, 11/2019

Author of the Ph.D. thesis "Learning to adjust an evidential source of information using partially labeled data and partial decisions"

2017 - 2021

The quality of the information provided by a source (e.g., a sensor or a classifier) plays an important role in the success of a pattern recognition task. Indeed, this information may turn out to be false, biased, or irrelevant.

In this thesis, this source adjustment problem is tackled within the framework of the Dempster-Shafer theory of belief functions, which provides a rich and flexible mathematical model for handling imperfect information and which generalizes, among others, probability theory. We also treat the source as a black box, meaning we do not know how it works; we only have access to the source and its outputs on a set of labeled data. This situation occurs, for example, when a company uses equipment from another company to perform a given task and the underlying technology is protected.
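As background, a belief function on a finite frame assigns mass to subsets of outcomes rather than to single outcomes. The following sketch (plain Python, illustrative only, not code from the thesis) computes belief and plausibility from a mass function, and shows that when all the mass sits on singletons the model reduces to an ordinary probability distribution:

```python
def belief(m, A):
    """Bel(A): total mass of focal sets contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def plausibility(m, A):
    """Pl(A): total mass of focal sets intersecting A."""
    return sum(v for B, v in m.items() if B & A)

# A mass function on the frame {a, b, c}: some mass on the singleton
# {a}, some on the ambiguous set {a, b}, the rest on total ignorance.
frame = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frame: 0.2}
print(belief(m, frozenset({"a"})), plausibility(m, frozenset({"a"})))  # 0.5 1.0

# Bayesian special case: all mass on singletons, so Bel = Pl = probability.
p = {frozenset({"a"}): 0.2, frozenset({"b"}): 0.5, frozenset({"c"}): 0.3}
print(belief(p, frozenset({"a", "b"})))  # 0.7 (= P({a, b}))
```

The gap between Bel(A) and Pl(A) encodes the ambiguity that a single probability distribution cannot express; it vanishes in the Bayesian case.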

Two main contributions are made in this manuscript to learn to adjust a source from data. First, we propose to improve the performance of contextual correction mechanisms by taking into account a partitioning defined by partial decisions associated with the source outputs. These contextual correction mechanisms make it possible to exploit fine-grained knowledge about the quality of a source, such as its relevance, meaning its ability to answer the question of interest, and its truthfulness, meaning its ability to say what it knows, this ability being either conscious (e.g., a lie) or unconscious (e.g., a bias). Second, we show how these corrections can be learned even when the data are only partially labeled. The advantages of the proposed methods are illustrated in numerical experiments on synthetic and real data.
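The correction-and-learning idea can be illustrated with a deliberately simplified sketch (plain Python). It uses the classical, non-contextual discounting rule and a least-squares fit of a single discount rate against soft labels; these are standard textbook ingredients, not the actual contextual-correction algorithms of the thesis, and `fit_alpha` is a hypothetical helper name introduced here for illustration:

```python
def contour(m, frame):
    """Contour function: pl(w) = Pl({w}) for each element of the frame."""
    return {w: sum(v for B, v in m.items() if w in B) for w in frame}

def discount(m, alpha, frame):
    """Classical discounting: transfer a fraction alpha of each mass
    to the whole frame (total ignorance)."""
    out = {B: (1 - alpha) * v for B, v in m.items()}
    out[frozenset(frame)] = out.get(frozenset(frame), 0.0) + alpha
    return out

def fit_alpha(pl_values, targets):
    """Least-squares estimate of alpha: the corrected contour value is
    (1 - alpha) * pl + alpha, so we minimize the squared discrepancy
    between corrected contours and soft labels, clipping alpha to [0, 1]."""
    num = sum((t - p) * (1 - p) for p, t in zip(pl_values, targets))
    den = sum((1 - p) ** 2 for p in pl_values)
    return min(1.0, max(0.0, num / den)) if den else 0.0

# Contour values of a source on three labeled instances, with soft labels.
pl_values = [0.2, 0.4, 0.6]
targets = [0.6, 0.7, 0.8]
alpha = fit_alpha(pl_values, targets)
print(round(alpha, 6))  # 0.5

# Applying the learned correction to a mass function shifts its contour
# toward total ignorance: pl'(w) = (1 - alpha) * pl(w) + alpha.
frame = frozenset({"a", "b"})
m = {frozenset({"a"}): 0.8, frame: 0.2}
pl = contour(discount(m, alpha, frame), frame)
print(round(pl["b"], 6))  # 0.6
```

The discrepancy is measured here between contour functions rather than full mass functions, which echoes the thesis's use of contour functions with soft labels, although the thesis's contextual corrections partition the data by context (and by partial decision) instead of fitting one global rate.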