
Constantin Jehn
Professorship for Sensory Neuroengineering
Constantin is an engineer fascinated by the brain and its remarkable ability to make sense of complex signals. After earning his undergraduate degree in Mechanical Engineering at FAU and spending a year in industry, Constantin shifted his focus toward machine learning and optimization, which led him to pursue a double Master’s degree in Computational Engineering (FAU) and Computational Science (USI Lugano). His research explores how to improve the listening experience for hearing-impaired individuals in “cocktail party” environments. To pave the way toward tomorrow’s hearing technology, Constantin studies how neural data can be used to enhance the voices that users of hearing aids and cochlear implants want to hear. Outside of work he is still intrigued by complex signals and loves to practice the guitar or analyze the data from his latest bike ride.
- Jehn, Constantin, et al. “Attention decoding at the cocktail party: Preserved in hearing aid users, reduced in cochlear implant users.” NeuroImage (2026): 121771. https://www.sciencedirect.com/science/article/pii/S1053811926000893
- Jehn, Constantin, et al. “CNNs improve decoding of selective attention to speech in cochlear implant users.” Journal of Neural Engineering 22.3 (2025): 036034. https://iopscience.iop.org/article/10.1088/1741-2552/addb7b/meta
- Jehn, Constantin, Johanna P. Müller, and Bernhard Kainz. “Learnable Slice-to-volume Reconstruction for Motion Compensation in Fetal Magnetic Resonance Imaging.” BVM Workshop. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. https://link.springer.com/chapter/10.1007/978-3-658-41657-7_10
- Thornton, Mike, et al. “Detecting gamma-band responses to the speech envelope for the ICASSP 2024 Auditory EEG Decoding Signal Processing Grand Challenge.” 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). IEEE, 2024. https://ieeexplore.ieee.org/abstract/document/10626244
- Riegel, Jasmin, et al. “Talking avatars can differentially modulate cortical speech tracking in the high and in the low delta band.” bioRxiv (2026): 2026-01. https://www.biorxiv.org/content/10.64898/2026.01.07.695461v1.abstract