PhD position in Sensory Neurotechnology
About us
The Chair for Sensory Neuroengineering at the new Department of Artificial Intelligence in Biomedical Engineering is looking for a PhD student to work on an innovative project in neurotechnology. Together with partners in the Cluster4Future SEMECO (TU Dresden, MED-EL, Fraunhofer), you will develop a brain-computer interface (BCI) for the next generation of hearing aids. To this end, you will develop AI methods for analyzing brain signals and for steering the processing of acoustic signals. The new hearing aids with neurofeedback will then be able to adjust automatically to the needs of their users and to adapt in real time.
Your tasks
- Non-invasive measurement of brain activity (EEG) during speech processing
- Development and application of AI methods for the analysis of EEG signals as well as acoustic signals
- Close collaboration with partners in the Cluster4Future SEMECO (TU Dresden, MED-EL, Fraunhofer)
- Analysis and communication of the obtained results
- Publication of the results in scientific journals and their presentation at national and international conferences
Your profile
- Excellent degree in computer science, mathematics, physics, engineering or a similar discipline
- Strong interest in neuroscientific research at the interface with AI
- Experience in one (or several) of these areas: signal processing, machine learning, neurobiology and neuroimaging
- Excellent organisational capabilities
- A high degree of initiative and the desire to work independently
- Excellent cooperation, communication and team skills
- Excellent knowledge of written and oral English
- Strong motivation to graduate within three years
Your benefits
AI at the interface to neurobiology is a highly timely topic of immense societal relevance. We offer you the possibility to actively shape this exciting and rapidly developing field. We support your work through:
- A lively scientific environment within the Department and the possibility to cooperate with excellent partners at the TU Dresden, at the Friedrich-Alexander-University Erlangen-Nürnberg as well as with industry partners
- An excellent environment to conduct leading science and to realise your own scientific ideas
- An excellent training in the development of AI methods for neuronal data
- Development of your personal strengths, for instance through extensive courses on personal development
- 26 days of holiday per year
Remuneration is according to pay grade 13 of the German public-sector collective agreement (TV-L E13), or an equivalent stipend.
How to apply
To apply, please send a letter detailing your motivation, a CV, copies of your degrees and the details of two referees. Please combine all documents into a single PDF and send it by email with the subject “Application PhD student” to Prof. Dr. Tobias Reichenbach (tobias.j.reichenbach(at)fau.de). Applications will be considered until the 10th of April 2026. The preferred start date of the PhD position is the 1st of October 2026.
Master Thesis: Uncertainty Quantification for Robust Auditory Attention Decoding
Project Description:
The ability to follow a conversation in a noisy environment—the “cocktail party problem”—is fundamental to social engagement and quality of life. For millions of individuals with hearing loss, this everyday challenge can lead to communication breakdown, social withdrawal, and has even been linked to an increased risk of accelerated cognitive decline and dementia.
A promising future solution is the concept of neuro-steered hearing technology. These putative systems would use brain signals (EEG) to decode a listener’s focus of attention (a process called auditory attention decoding, AAD) and selectively amplify the target speaker [1, 2]. However, for such technology to ever become viable, its decisions must be highly reliable; an incorrect decoding can be highly disruptive.
This thesis addresses this challenge by exploring the integration of Uncertainty Quantification (UQ) into AAD models. By investigating methods for more reliable decoding, this research will inform the development of future AAD-based systems that can decide when to intervene, acting only when certain about the listener’s intent.
Thesis Outline and Tasks:
- Conduct a literature review on Auditory Attention Decoding and Uncertainty Quantification in deep learning.
- Formulate precise research questions and define suitable evaluation metrics to assess both decoding accuracy and the quality of uncertainty estimates.
- Implement a method to estimate uncertainty for a linear model as baseline (e.g. ensembling).
- Implement and integrate two state-of-the-art UQ techniques (e.g., MC Dropout, Deep Ensembles) into a deep learning-based AAD classifier (CNN).
- Systematically evaluate and compare the UQ methods, first on EEG data from typically hearing subjects and then potentially on more challenging data from hearing-impaired individuals.
- Document the research process and results in a comprehensive thesis, with the potential for contributing to a conference or journal publication.
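The ensembling baseline from the tasks above can be illustrated with a small sketch: a bootstrap ensemble of linear (logistic-regression) decoders whose mean prediction gives the attention decision and whose spread across members gives a simple uncertainty estimate. All data, shapes and hyperparameters here are purely illustrative stand-ins, not part of the actual thesis setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: EEG-derived features X and a binary attention
# label y (1 = attended speaker A). Sizes are illustrative only.
n_trials, n_features = 200, 16
X = rng.standard_normal((n_trials, n_features))
w_true = rng.standard_normal(n_features)
y = (X @ w_true + 0.5 * rng.standard_normal(n_trials) > 0).astype(float)

def fit_logistic(X, y, lr=0.1, n_iter=500):
    """Plain gradient-descent logistic regression (the linear baseline)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Bootstrap ensemble: refit the linear decoder on resampled trials.
n_members = 10
weights = []
for _ in range(n_members):
    idx = rng.integers(0, n_trials, n_trials)
    weights.append(fit_logistic(X[idx], y[idx]))

# Mean probability across members is the decision; the standard deviation
# across members is a per-trial uncertainty score. A neuro-steered system
# could act only on trials whose uncertainty falls below a threshold.
probs = np.stack([1.0 / (1.0 + np.exp(-(X @ w))) for w in weights])
mean_prob = probs.mean(axis=0)
uncertainty = probs.std(axis=0)
```

The deep-learning UQ methods named above (MC Dropout, Deep Ensembles) follow the same pattern: multiple stochastic predictions per input, with their agreement serving as the confidence measure.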
Your Profile:
We are looking for a highly motivated student with a strong background in engineering or computer science, eager to work at the intersection of neuroscience, machine learning, and healthcare technology.
Required Skills:
- Proficiency in Python
- Hands-on experience with a deep learning framework (e.g., PyTorch, TensorFlow)
- A high degree of motivation and the ability to work independently on a challenging research topic
Beneficial Skills:
- Prior experience with biosignal processing (especially EEG)
- Familiarity with uncertainty quantification methods
- Experience with statistical analysis and testing
- Strong scientific writing skills
How to Apply:
Interested candidates are invited to send their application, including a CV and a current transcript of records, to:
Constantin Jehn (constantin.jehn@fau.de)
References:
[1] Hjortkjær, Jens et al. (2025). “Real-time control of a hearing instrument with EEG-based attention decoding”. In: Journal of Neural Engineering 22.1, p. 016027.
[2] Jehn, C. et al. (2025). “CNNs improve decoding of selective attention to speech in cochlear implant users”. In: Journal of Neural Engineering.