Post-Doctoral Position in Interpretable Transformers
Organisation/Company: Le Mans Université
Department: LIUM
Research Field: Computer science » Informatics
Researcher Profile: Recognised Researcher (R2)
Positions: PhD Positions
Country: France
Application Deadline: 13 Oct 2024 - 16:00 (Europe/Paris)
Type of Contract: Temporary
Job Status: Full-time
Offer Starting Date: 4 Nov 2024
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Is the Job related to a staff position within a Research Infrastructure? No

Offer Description

Description of the research subject

Building on the representations learned with SINr, whose interpretable embedding approach serves as the first building block, the recruited candidate will be in charge of designing end-to-end interpretable neural classification architectures. The aim is to remain in an interpretable space throughout the classification. Deep mechanisms can thus be implemented based on the hierarchical structure of the embeddings produced by SINr, inspired for example by the work of Victoria Bourgeais [BZBHH21]. Dot-product attention mechanisms, as in Bahdanau et al. [BCB14], can use an attention vector dedicated to the task which, if it lies in the same space as the input, will also be interpretable. Other approaches are also possible for exploiting interpretability within more complex models such as transformers. Clark et al. [CKLM19] have highlighted the roles played by attention heads, and in particular their specialisation. Geva et al. [GSBL20] worked on the feed-forward modules of the transformer to determine their importance. Finally, Mickus et al. [MPC22] dissected the transformer to measure the contribution of each of its modules (attention, bias, feed-forward, initial embedding) to the output representations and to the prediction of the hidden word.
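As an illustration only (not part of the offer), a dot-product attention of the kind mentioned above, pooled by a learned task vector that lives in the same interpretable space as the inputs, could be sketched as follows. The names `task_attention`, `embeddings`, and `task_vector` are assumptions for this sketch, not part of SINr or the project's codebase:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def task_attention(embeddings, task_vector):
    """Dot-product attention pooled by a task-dedicated vector.

    embeddings : (seq_len, dim) token vectors in an interpretable space.
    task_vector: (dim,) learned query living in the SAME space, so its
                 largest coordinates can be read as the interpretable
                 dimensions the task attends to.
    Returns the pooled representation (dim,) and attention weights (seq_len,).
    """
    scores = embeddings @ task_vector   # one relevance score per token
    weights = softmax(scores)           # attention distribution over tokens
    pooled = weights @ embeddings       # weighted average; stays in the input space
    return pooled, weights

# Toy example: 4 tokens in a 3-dimensional interpretable space.
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 3))
q = rng.normal(size=3)
pooled, w = task_attention(E, q)
assert np.isclose(w.sum(), 1.0)
assert pooled.shape == (3,)
```

Because the pooled vector is a convex combination of the input embeddings, it remains in the same space, which is what makes both the weights and the output inspectable dimension by dimension.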
The state of the art has thus made progress on the explainability of transformers and their mechanisms, allowing us to envisage reduced, interpretable architectures inspired by them. To evaluate these architectures, we will consider classification tasks such as named entity recognition, polarity analysis and hate-content detection. The work will also involve developing an end-to-end interpretability evaluation framework.

Planning of the research project
First months: literature review, coding transformers.
Next 6 months: upgrading the code to make the models interpretable; evaluation.
Last 2 months: writing a paper.

Assigned activities and expected results
Literature review, coding transformers, upgrading the code to make the models interpretable, evaluation.

E-mail: nicolas.dugue@univ-lemans.fr

Requirements
Research Field: Computer science » Informatics
Education Level: PhD or equivalent

Specific Requirements
The contract is for 12 months from November 4, 2024. The gross salary is €2800.

Apply on-line: https://euraxess.ec.europa.eu/jobs/245387