Open Positions
Master Thesis: Uncertainty Quantification for Robust Auditory Attention Decoding
Project Description:
The ability to follow a conversation in a noisy environment—the “cocktail party problem”—is fundamental to social engagement and quality of life. For millions of individuals with hearing loss, this everyday challenge can lead to communication breakdown, social withdrawal, and has even been linked to an increased risk of accelerated cognitive decline and dementia.
A promising future solution is the concept of neuro-steered hearing technology. These putative systems would use brain signals (EEG) to decode a listener’s focus of attention, a process called Auditory Attention Decoding (AAD), and selectively amplify the target speaker [1, 2]. However, for such technology to ever become viable, its decisions must be highly reliable: an incorrect decoding can be highly disruptive.
This thesis addresses this challenge by exploring the integration of Uncertainty Quantification (UQ) into AAD models. By investigating methods for more reliable decoding, this research will inform the development of future AAD-based systems that can decide when to intervene, acting only when certain about the listener’s intent.
Thesis Outline and Tasks:
- Conduct a literature review on Auditory Attention Decoding and Uncertainty Quantification in deep learning.
- Formulate precise research questions and define suitable evaluation metrics to assess both decoding accuracy and the quality of uncertainty estimates.
- Implement a baseline uncertainty-estimation method for a linear model (e.g., ensembling).
- Implement and integrate two state-of-the-art UQ techniques (e.g., MC Dropout, Deep Ensembles) into a deep learning-based AAD classifier (CNN).
- Systematically evaluate and compare the UQ methods, first on EEG data from typically hearing subjects and then potentially on more challenging data from hearing-impaired individuals.
- Document the research process and results in a comprehensive thesis, with the potential for contributing to a conference or journal publication.
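To illustrate the baseline task above, the sketch below shows one way an ensemble of linear decoders could provide an uncertainty estimate: member models trained on bootstrap resamples disagree more on ambiguous inputs, and that disagreement is the uncertainty. All data, shapes, and function names here are purely illustrative stand-ins, not the project's actual pipeline or EEG features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 200 "trials" of 10 features with a binary label
# (e.g., which of two speakers was attended). Purely synthetic.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

def fit_linear(X, y):
    """Least-squares linear decoder with a bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

# Bootstrap ensemble: each member is fit on a resampled training set.
K = 20
members = []
for _ in range(K):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(fit_linear(X[idx], y[idx]))

preds = np.stack([predict(w, X) for w in members])  # shape (K, n_trials)
mean_pred = preds.mean(axis=0)   # ensemble decision score per trial
uncertainty = preds.std(axis=0)  # member disagreement = uncertainty
decisions = (mean_pred > 0.5).astype(float)
```

A neuro-steered system could then act only on trials whose `uncertainty` falls below a chosen threshold, which is exactly the "decide when to intervene" behavior described above; the deep-learning UQ methods (MC Dropout, Deep Ensembles) generalize the same mean/spread idea to a CNN classifier.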
Your Profile:
We are looking for a highly motivated student with a strong background in engineering or computer science, eager to work at the intersection of neuroscience, machine learning, and healthcare technology.
Required Skills:
- Proficiency in Python
- Hands-on experience with a deep learning framework (e.g., PyTorch, TensorFlow)
- A high degree of motivation and the ability to work independently on a challenging research topic
Beneficial Skills:
- Prior experience with biosignal processing (especially EEG)
- Familiarity with uncertainty quantification methods
- Experience with statistical analysis and testing
- Strong scientific writing skills
How to Apply:
Interested candidates are invited to send their application, including a CV and a current transcript of records, to:
Constantin Jehn (constantin.jehn@fau.de)
References:
[1] Hjortkjær, Jens et al. (2025). “Real-time control of a hearing instrument with EEG-based attention decoding”. In: Journal of Neural Engineering 22.1, p. 016027.
[2] Jehn, C. et al. (2025). “CNNs improve decoding of selective attention to speech in cochlear implant users”. In: Journal of Neural Engineering.
Master Thesis: Development of an MEG Experiment to Investigate Audiovisual Speech Processing
As part of this Master’s thesis, an experiment will be designed and prepared to investigate audiovisual speech processing. The aim is to use magnetoencephalography (MEG) to gain insights into the temporal dynamics of processing spoken language in combination with visual information.
The thesis will include the following tasks:
- Designing an experimental paradigm suitable for presenting audiovisual speech stimuli in an MEG environment
- Creating and editing appropriate video stimuli, e.g., featuring speaking individuals or artificially generated content
- Developing a presentation script in Python that allows for reliable and synchronous video playback during MEG recordings
- Conducting pilot measurements
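The presentation-script task hinges on timing: each video frame must be shown on schedule, and stimulus onsets must be marked in the MEG recording for offline alignment. A minimal sketch of such a timing loop is shown below; the function name, the trigger mechanism, and the frame-drawing step are hypothetical placeholders (a real script would drive a presentation backend and a hardware trigger port).

```python
import time

def play_with_triggers(n_frames, frame_rate, send_trigger=print):
    """Hypothetical playback loop: schedules frames against a monotonic
    clock and emits a trigger at stimulus onset so the MEG recording
    can be aligned offline. Returns the measured frame-onset times."""
    frame_interval = 1.0 / frame_rate
    t0 = time.perf_counter()
    send_trigger("stimulus_onset")  # placeholder for a hardware trigger
    onsets = []
    for i in range(n_frames):
        target = t0 + i * frame_interval
        while time.perf_counter() < target:
            pass  # busy-wait keeps timing jitter low for short intervals
        onsets.append(time.perf_counter() - t0)
        # here: draw/flip frame i in the presentation backend
    return onsets
```

Logging the returned onset times during pilot measurements makes it possible to verify synchrony (e.g., check for dropped or late frames) before running the actual MEG experiment.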
If you are interested in this project, please contact Jasmin Riegel (jasmin.riegel@fau.de).
Master Thesis: Investigating Attentional Modulation of Otoacoustic Emissions: Simulation and Experimental Comparison
Starting date: February 2025
Otoacoustic emissions (OAEs) are subtle sounds produced by the cochlea as a result of its active amplification processes. They are not only critical for understanding cochlear function but also serve as a valuable tool for non-invasive auditory diagnostics. Recent research suggests that OAEs may be influenced by higher-order processes such as auditory attention, making them a promising area of study to explore the interplay between cognitive and peripheral auditory mechanisms.
This Master’s thesis will utilize an existing cochlear model capable of simulating otoacoustic emissions. The candidate will feed the model stimuli that have been used in recent experiments, in order to compare simulation results with experimental data. A key focus will be implementing a mechanism within the model to simulate the ability to attend to a specific speaker in a “speech-in-noise” scenario, where male and female speakers are presented simultaneously. The goal is to determine whether attentional focus introduces measurable differences in the simulated OAEs.
Additionally, the thesis will aim to quantify these potential changes in OAEs due to attentional modulation and systematically compare the findings with experimental results. This project provides an exciting opportunity to bridge computational modeling with real-world auditory experiments, contributing to the understanding of how cognitive processes interact with auditory physiology.
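One simple way to quantify such attentional changes is to compare the level of an OAE spectral component between conditions. The toy sketch below illustrates this with synthetic sinusoids standing in for the model's ear-canal output; the signals, the probe frequency, and the amplitude difference are purely illustrative assumptions, not model results.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
f_probe = 1000.0  # illustrative OAE component frequency

# Toy stand-ins for simulated ear-canal signals in two attention
# conditions (real signals would come from the cochlear model).
oae_attended = 1.0e-3 * np.sin(2 * np.pi * f_probe * t)
oae_ignored = 0.8e-3 * np.sin(2 * np.pi * f_probe * t)

def component_level_db(x, f, fs):
    """Level (dB re 1) of the spectral component nearest frequency f,
    estimated with a Hann-windowed FFT."""
    w = np.hanning(len(x))
    spec = np.fft.rfft(x * w)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    k = np.argmin(np.abs(freqs - f))
    amp = 2 * np.abs(spec[k]) / np.sum(w)  # window-corrected amplitude
    return 20 * np.log10(amp)

# Attentional modulation expressed as a level difference in dB.
delta_db = (component_level_db(oae_attended, f_probe, fs)
            - component_level_db(oae_ignored, f_probe, fs))
```

The same component-level comparison could then be applied identically to simulated and experimentally recorded OAEs, giving a common metric for the systematic comparison described above.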
Skills and Qualifications:
- Programming proficiency: Advanced skills in MATLAB, Python, or similar programming environments for implementing models and analyzing data. The cochlear model is written in Python.
- Signal processing expertise: Understanding of time-frequency analysis, filtering, and noise handling techniques.
- Mathematics skills: Proficiency in mathematical concepts related to differential equations, linear systems, and data modeling.
- Understanding of electrodynamics and circuit theory: Familiarity with concepts such as resistances, admittances, and their role in modeling physical systems. The candidate should be able to apply these principles to understand and work with the cochlear model.
- Experience with computational modeling: Knowledge of simulating physiological processes, preferably within auditory systems.
Application and Contact:
- If you are interested, please send an email to Janna Steinebach (janna.steinebach@fau.de)
- Include your CV and a transcript of records in your application