Schuller, Björn 1975-
Schuller, Björn
Björn Wolfgang Schuller
Schuller, Björn Wolfgang
Schuller, Björn W.
VIAF ID: 13454205 (Personal)
Permalink: http://viaf.org/viaf/13454205
Preferred Forms
- 100 0 _ ‡a Björn Wolfgang Schuller
- 100 1 _ ‡a Schuller, Bjorn
- 100 1 _ ‡a Schuller, Björn ‡d 1975-
- 100 1 _ ‡a Schuller, Björn, ‡d 1975-....
4xx's: Alternate Name Forms (15)
5xx's: Related Names (3)
- 510 2 _ ‡a Imperial College of Science, Technology and Medicine ‡4 affi ‡4 https://d-nb.info/standards/elementset/gnd#affiliation ‡e Affiliation
- 551 _ _ ‡a München ‡4 ortg ‡4 https://d-nb.info/standards/elementset/gnd#placeOfBirth
- 510 2 _ ‡a Universität Passau ‡4 affi ‡4 https://d-nb.info/standards/elementset/gnd#affiliation ‡e Affiliation
Works
Title | Sources |
---|---|
Affective Computing and Intelligent Interaction : Fourth International Conference, ACII 2011, Memphis, TN, USA, October 9–12, 2011, Proceedings, Part II | |
Automatic analysis of multimodal behaviors during asynchronous video interviews for recruitment | |
Face analysis using constrained multi-task regularization for learning a metric adapted to kernel regression. | |
Analysis and recognition of emotions during call-center conversations | |
Deep learning applied to emotion recognition in the voice | |
Audio-visual detection of emotional (laugh and smile) and attentional markers for elderly people in social interaction with a robot. | |
Automatic analysis of multimodal behaviors during asynchronous video interviews for recruitment. | |
Automatic emotions recognition during call center conversations. | |
Automatic prediction of emotions induced by movies | |
Automatic Recognition of Affective Dimensions in the Oral Human-Machine Interaction for Dependent People. | |
Automatic emotion recognition from speech and manual interaction | |
Computational methods for affect detection from natural language | |
Computational paralinguistics : emotion, affect and personality in speech and language processing | |
CovNet: a transfer learning framework for automatic COVID-19 detection from crowd-sourced cough sounds | |
Deep neural networks for source separation and noise-robust speech recognition. | |
Detection of Parkinson's disease by multimodal analysis combining handwriting and speech signals. | |
Detection of affective and attentional markers of elderly people interacting with a robot | |
Multimodal diarization: towards robust and fair models in real-world settings | |
Driver frustration detection from audio and video in the wild | |
Early detection of Parkinson's disease through voice analysis and correlations with neuroimaging. | |
An Evaluation of Speech-Based Recognition of Emotional and Physiological Markers of Stress | |
Foundations, user modeling, and common modality combinations | |
The handbook of multimodal-multisensor interfaces. | |
Intelligent Audio Analysis | |
Learning complementary representations via attention-based ensemble learning for cough-based COVID-19 recognition | |
Multi-task deep neural network with shared hidden layers: breaking down the wall between emotion representations | |
Multimodal diarization : towards robustness and fairness in the wild. | |
Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech | |
MuSe 2020 challenge and workshop: multimodal sentiment analysis, emotion-target engagement and trustworthiness detection in real-life media: emotional car reviews in-the-wild | |
On laughter and speech-laugh, based on observations of child-robot interaction | |
openBliSSART: user manual | |
Parkinson's disease detection by multimodal analysis combining handwriting and speech signals | |
Patterns, prototypes, performance: classifying emotional user states | |
Perceived emotion of isolated synthetic audio: the EmoSynth dataset and results | |
Perception of emotion in the singing voice: the understanding of music mood for music organisation | |
Personalised depression forecasting using mobile sensor data and ecological momentary assessment | |
Prosodic, spectral or voice quality? Feature type relevance for the discrimination of emotion pairs | |
A prototypical network approach for evaluating generated emotional speech | |
Reading Faces. Using Hard Multi-Task Metric Learning for Kernel Regression | |
Reading the author and speaker: towards a holistic and deep approach on automatic assessment of what is in one’s words | |
A real-time speech enhancement framework for multi-party meetings | |
Real-time speech separation by semi-supervised nonnegative matrix factorization | |
Recent advances in computer audition for diagnosing COVID-19: an overview | |
Recent advances in intelligent assistive technologies : paradigms and applications | |
Recognition of interest in human conversational speech | |
Automatic recognition of emotions induced by films. | |
Deep neural networks for source separation and robust speech recognition | |
Robust spelling and digit recognition in the car: switching models and their like | |
Robust vocabulary independent keyword spotting with graphical models | |
Snore-GANs: improving automatic snore sound classification with synthesized data | |
Speaker trait characterization in web videos: uniting speech, language, and facial features | |
Speech analysis for health: current state-of-the-art and the increasing impact of deep learning | |
Speech-based diagnosis of autism spectrum condition by generative adversarial network representations | |
Speech denoising and compensation for hearing aids using an FTCRN-based metric GAN | |
State of mind: classification through self-reported affect and word use in speech | |
A summary of the ComParE COVID-19 challenges | |
Supervised and semi-supervised suppression of background music in monaural speech recordings | |
Supervised contrastive learning for game-play frustration detection from speech | |
Supporting multi camera tracking by monocular deformable graph tracking | |
Survey of deep representation learning for speech emotion recognition | |
Switching linear dynamic models for recognition of emotionally colored and noisy speech | |
Synchronization in interpersonal speech | |
Teaching machines to know your depressive state: on physical activity in health and major depressive disorder | |
Toward detecting and addressing corner cases in deep learning based medical image segmentation | |
Towards automatic airborne pollen monitoring: from commercial devices to operational by mitigating class-imbalance in a deep learning approach | |
Towards automation of usability studies | |
Towards conditional adversarial training for predicting emotions from speech | |
Towards cross-lingual automatic diagnosis of autism spectrum condition in children's voices | |
Towards intelligent crowdsourcing for audio data annotation: integrating active learning in the real world | |
Towards silent paralinguistics: deriving speaking mode and speaker ID from electromyographic signals | |
Towards sonification in multimodal and user-friendly explainable artificial intelligence | |
Towards temporal modelling of categorical speech emotion recognition | |
Transfer learning emotion manifestation across music and speech | |
Transferring cross-corpus knowledge: an investigation on data augmentation for heart sound classification | |
Universal onset detection with bidirectional long-short term memory neural networks | |
Universum autoencoder-based domain adaptation for speech emotion recognition | |
Using multiple databases for training in emotion recognition: to unite or to vote? | |
Vocalisation repertoire at the end of the first year of life: an exploratory comparison of Rett syndrome and typical development | |
Voice Analysis for Neurological Disorder Recognition – A Systematic Review and Perspective on Emerging Trends | |
Voice of the body: why AI should listen to it and an archive | |
VoicePlay: an affective sports game operated by speech emotion recognition based on the component process model | |
VOTE versus ACLTE: comparison of two snore sound classifications using machine learning methods | |
Wavelet features for classification of VOTE snore sounds | |
Wavelets Revisited for the Classification of Acoustic Scenes | |
You sound like your counterpart: interpersonal speech analysis |