

2018

Approaches to overcome lab restraints in locomotion biomechanics

Abstract

I will provide a brief overview of work in the field of human locomotion biomechanics performed at the Institute of Biomechanics and Orthopaedics at the German Sport University Cologne. Here, we analyze humans performing all kinds of locomotor behaviors, including walking, running, sprinting, jumping and changing directions. In our studies, we try to elucidate the determinants of joint loading and performance with an emphasis on the role of technical devices (e.g. footwear, orthoses or prostheses).

Traditionally, we use state-of-the-art 3D motion analysis techniques and measure the external forces acting on the human body, e.g. via force platforms. Using these measurements, we can perform inverse dynamics calculations to determine joint loading, energy generation and absorption. The drawback of these techniques is that they are all lab-based, which challenges the ecological validity of these procedures. Therefore, the second purpose of my talk will be to suggest approaches to overcome lab restraints and collect data in the real world. This approach will strongly improve the validity of studies aiming to understand, for example, overuse injury development in sport, the development of osteoarthritis and the prevention of falls in the elderly.
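For orientation, the inverse-dynamics step mentioned above is commonly written for a single planar segment, such as the foot, via the Newton-Euler equations; this is a generic textbook sketch, not the institute's specific implementation:

```latex
% Planar Newton-Euler inverse dynamics for one distal segment (e.g. the foot):
% joint reaction force from linear momentum, net joint moment from angular momentum.
\mathbf{F}_{J} = m\,\mathbf{a}_{\mathrm{CoM}} - m\,\mathbf{g} - \mathbf{F}_{\mathrm{GRF}}
\qquad
M_{J} = I_{\mathrm{CoM}}\,\alpha
        - \left(\mathbf{r}_{\mathrm{GRF}} \times \mathbf{F}_{\mathrm{GRF}}\right)
        - \left(\mathbf{r}_{J} \times \mathbf{F}_{J}\right)
```

Here, r_GRF and r_J point from the segment's centre of mass to the centre of pressure and to the joint, respectively; proximal joints are then processed recursively up the kinematic chain.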


Biography

Steffen Willwacher is a postdoc at the German Sport University (GSU) Cologne, from which he received both his Diploma degree in 2009 and PhD in Sport Sciences in 2014. Since then he has been working at the Institute of Biomechanics of the GSU, where he analyzes human locomotion over a wide range of speeds (including walking, running, sprinting and jumping) and subject groups (from patients to elite athletes) with a focus on the biomechanical loading and performance of the lower extremities.

Willwacher is also a co-owner of the Institute of Functional Diagnostics in Cologne, where biomechanical analysis techniques are utilized to help patients with movement-related pain and injuries. He also lectures on biomechanics at the Coaches' Academy of the German Olympic Sports Confederation (DOSB).


Lecturer: Dr. Steffen Willwacher, Deutsche Sporthochschule Köln

Date: Friday, 9 November 2018, 14:00

Building/Room: EIHW social room

Analysing communication requirements for crowd-sourced backend generation of HD Maps used in automated driving

Abstract

Highly automated vehicles rely on high-definition maps to ensure both the safety and comfort of their passengers while driving. The maps provide a centimetre-accurate representation of the surrounding infrastructure to the car and ease tasks like localisation and object recognition by providing comparable reference material. Maintaining the maps with updates for the current traffic situation (e.g. traffic jams or construction work) is a challenging task.


In the German government-funded research project Cooperative Highly Automated Driving (Ko-HAF), we investigate to what extent vehicle sensors can be used to update these kinds of maps. In this work, we present our results and investigate the requirements on the cellular network infrastructure needed for highly automated driving. To the best of our knowledge, this work is one of the first to provide this correlation between data requirements and network infrastructure capabilities.


Lecturer: Josef Schmid

Date: Tuesday, 20 November 2018, 13:30

Building/Room: EIHW social room, 305

Multimodal Affective Computing and its Applications

Abstract

In this talk, I will give an overview of our research into developing multimodal technology that analyses the affective state and, more broadly, the behaviour of humans. Such technology is useful for a number of applications; healthcare, e.g. for mental health disorders, is a particular focus for us. Depression and other mood disorders are common and disabling. Their impact on individuals and families is profound. The WHO Global Burden of Disease reports quantify depression as the leading cause of disability worldwide. Despite this high prevalence, current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. There currently exist no laboratory-based measures of illness expression, course and recovery, and no objective markers of end-points for interventions in either clinical or research settings. Using a multimodal analysis of facial expressions and movements, body posture, head movements as well as vocal expressions, we are developing affective sensing technology that supports clinicians in the diagnosis and monitoring of treatment progress. Encouraging results from a recently completed pilot study demonstrate that this approach can achieve over 90% agreement with clinical assessment. Drawing on more than ten years of research, I will also talk about the lessons learnt in this project, such as measuring spontaneous expressions of affect, subtle expressions, and affect intensity using multimodal approaches.


Bio

Roland Goecke is Professor of Affective Computing at the University of Canberra, Australia, where he leads the Human-Centred Technology Research Centre. He received his Masters degree in Computer Science from the University of Rostock, Germany, in 1998 and his PhD in Computer Science from the Australian National University, Canberra, Australia, in 2004. Before joining UC in December 2008, Prof Goecke worked as a Senior Research Scientist with start-up Seeing Machines, as a Researcher at the NICTA Canberra Research Labs, and as a Research Fellow at the Fraunhofer Institute for Computer Graphics, Germany. His research interests are in affective computing, pattern recognition, computer vision, human-computer interaction, multimodal signal processing and e-research. Prof Goecke has authored or co-authored over 130 peer-reviewed publications. His research has been funded by grants from the Australian Research Council (ARC), the National Health and Medical Research Council (NHMRC), the National Science Foundation (NSF), the Australian National Data Service (ANDS) and the National eResearch Collaboration Tools and Resources project (NeCTAR).


Title: Multimodal Affective Computing and its Applications

Lecturer: Prof. Roland Goecke

Date: Monday, 17 September 2018, 11:00 am

Building/Room: EIHW Social Room

Talking about more than words.

Abstract

When we interact with each other, not only the content of the words (what we say) matters, but also the manner in which these words are spoken (how we speak), as well as our body language. Non-verbal behavior, including paralinguistic information, plays a key role in communicating affective and social information in human-human interaction. Non-verbal behavior is also informative of a speaker's mental wellbeing and physical state. With the increasing acceptance of technology in our daily lives, such as virtual agents and robots, the need for developing technology that can sense and give meaning to these non-verbal behaviors increases as well. In this talk, I will present some studies we have been carrying out investigating non-verbal elements in speech. In particular, I will present results from our studies on laughter in conversation and on exercise intensity detection in speech.

Bio

Khiet Truong is an assistant professor in the Human Media Interaction group, University of Twente, working in the fields of affective computing and social signal processing. Her interests lie in the automatic analysis and understanding of verbal and nonverbal (vocal) behaviors in human-human and human-machine interaction, and the design of socially interactive technology to support human needs. Khiet holds a master's degree in Computational Linguistics (Utrecht University) and a PhD in Computer Science. During her PhD, carried out at TNO, she investigated emotion recognition in speech and automatic laughter detection. She has served/is serving on numerous program committees and has held chairing positions at major conferences such as Interspeech, ACM ICMI, ICASSP, and ACII.


Title: Talking about more than words.

Lecturer: Asst. Prof. Dr. Khiet P. Truong

Date: 27 June 2018 / 17:30

Building/Room: 1057 N

Multimodal Integration in Speech Recognition and Speaker Localization

Abstract

Speech recognition has profited immensely from the recent developments in deep learning, and together with signal enhancement strategies for multiple microphones, it is now possible to employ speech recognition successfully even in difficult environments. This talk will focus on strategies for achieving even greater robustness by including visual information, i.e. lip movements, in addition to the acoustic channel alone. This is a strategy that is often employed by human listeners in noisy environments, and this talk will show how that capability can aid machine listening as well. Together with an appropriate stream weighting strategy, error rates of neural-network-based speech recognition can be cut in half in difficult situations by the addition of video information, while reliable improvements are achieved even in good acoustic conditions. The same strategy is also applicable to speaker localization, where, again, stream weighting is of significant value for gaining maximally from the availability of both sources of information. This talk will discuss the architecture of recognition and tracking systems that enable such improvements, the video features that can be employed, and, importantly, the adaptive stream weighting that allows one to profit from the addition of video information under all circumstances.

Bio

Prof. Dr.-Ing. Dorothea Kolossa has headed the Cognitive Signal Processing group at the University of Bochum since 2010. There, her group works on robust speech and pattern recognition, developing methods and algorithms that make pattern recognition usable even in difficult and changing environments. She first pursued this topic in her doctoral thesis at the Technical University of Berlin and subsequently during several research visits, including at NTT (Kyoto), at the University of Hong Kong, and, in 2009, as visiting faculty at UC Berkeley. More than eighty publications and patents as well as a book on robust speech recognition have arisen from this work, and current collaborations, including with the International Computer Science Institute (ICSI) in Berkeley, aim to make today's speech recognition technology reliable enough for everyday mobile use.
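As a compact reference for the stream-weighting strategy mentioned in the abstract above, one common formulation combines per-stream log-likelihoods with a time-varying reliability weight (a generic sketch; the talk's exact scheme may differ):

```latex
% Reliability-weighted fusion of acoustic (a) and visual (v) stream likelihoods
% for state q at time t, with stream weight \lambda_t \in [0, 1]:
\log p(\mathbf{x}_t \mid q) =
  \lambda_t \, \log p\!\left(\mathbf{x}^{a}_{t} \mid q\right)
  + \left(1 - \lambda_t\right) \log p\!\left(\mathbf{x}^{v}_{t} \mid q\right)
```

Setting the weight adaptively, e.g. lowering lambda when the acoustic signal-to-noise ratio drops, is what allows the system to profit from video under all acoustic conditions.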


Title: Multimodal Integration in Speech Recognition and Speaker Localization

Lecturer: Prof. Dr.-Ing. Dorothea Kolossa

Date: 22 June 2018 / 15:45

Building/Room: 1058 N

Passive monitoring and geo-based prediction of mobile network communication

Abstract

Predicting mobile network parameters while driving is quite a challenge. In this Oberseminar, both a measurement approach and geo-based prediction methods for such a mobile network connection are presented. Finally, these predictors are compared and a conclusion is drawn.


Title: Passive monitoring and geo-based prediction of mobile network communication

Lecturer: Josef Schmid

Date: 15 June 2018 / 14:00

Building/Room: Eichleitnerstraße 30 / F2 304

The SEWA Database - Results and Challenges

Abstract

The SEWA Database (DB) is an audio-visual corpus of naturalistic human-to-human interaction in the wild. It comprises recordings of 300 subjects from 6 different cultures in dyadic conversations, captured through a video chat platform. In the Oberseminar, the latest results on the SEWA database will be presented, and suitable features, their representations, and models will be discussed.


Title: The SEWA Database - Results and Challenges

Lecturer: Maximilian Schmitt

Date: 08 June 2018 / 14:00

Building/Room: Eichleitnerstraße 30 / F2 304

Introduction of an Elementary Insect Sound Database

Abstract

This seminar introduces an insect sound database based on Thomas J. Walker's contribution. The database includes over 5,000 audio clips covering hundreds of insect species. However, preliminary classification experiments showed that the database still needs to be improved before it meets the requirements of classification tasks. The pros and cons of the database, recent progress, and future work will be discussed at the end.


Title: Introduction of an Elementary Insect Sound Database

Lecturer: Zijang Yang

Date: 14:00, 01-06-2018

Building/Room: Eichleitnerstraße 30 / F2 304

Speech production and the source-filter model

This talk will give an overview of the key concepts of speech production and will review the source-filter model of speech production. Other concepts covered will include Fourier analysis, filtering, and the z-transform.
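As a pointer to the main idea, the source-filter model is usually written in the z-domain as a cascade of glottal source, vocal-tract filter, and lip radiation; this is the standard textbook formulation the talk is expected to build on:

```latex
% Source-filter model: excitation U shaped by an all-pole vocal-tract
% filter V (as estimated by linear prediction) and lip radiation R.
S(z) = U(z)\, V(z)\, R(z),
\qquad
V(z) = \frac{G}{1 - \sum_{k=1}^{p} a_k z^{-k}}
```

The all-pole form of V(z) is what connects the model to linear predictive analysis: the coefficients a_k are found by minimising the prediction error of each sample from its p predecessors.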


Title: Speech production and the source-filter model

Lecturer: Dr. Nicholas Cummins

Date: 14:00, 25-05-2018

Building/Room: Eichleitnerstraße 30 / F2 304

Tracking Authentic and In-the-wild Emotions using Speech

Talk on the paper accepted at ACII-Asia 2018, which Prof. Schuller is going to present.


Abstract

This first-of-its-kind study aims to track authentic affect representations in the wild. We use the 'Graz Real-life Affect in the Street and Supermarket (GRAS2)' corpus, featuring audiovisual recordings of random participants in non-laboratory conditions. The participants were initially unaware of being recorded. This paradigm enabled us to collect a wide range of authentic, spontaneous, and natural affective behaviours. Six raters annotated twenty-eight conversations averaging 2.5 minutes in duration, tracking the arousal and valence levels of the participants. We generate the gold standards through a novel, robust Evaluator Weighted Estimator (EWE) formulation. We train Support Vector Regressors (SVRs) and Recurrent Neural Networks (RNNs) with the low-level descriptors (LLDs) of the ComParE feature set in different derived representations, including bag-of-audio-words. Despite the challenging nature of this database, a fusion system achieved a highly promising concordance correlation coefficient (CCC) of .372 for the arousal dimension, while RNNs achieved a top CCC of .223 in predicting valence, using a bag-of-features representation.
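For context, the CCC reported above is the standard concordance correlation coefficient between predictions x and gold standard y; the EWE gold standard is, in essence, a rater average weighted by each rater's agreement with the others:

```latex
% Concordance correlation coefficient: penalises both decorrelation and bias,
% unlike the plain Pearson correlation \rho.
\rho_c = \frac{2 \rho \sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}
```

A CCC of 1 requires predictions that are both perfectly correlated with and scaled/shifted like the gold standard, which is why it is the preferred metric for continuous emotion tracking.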


Title: Tracking Authentic and In-the-wild Emotions using Speech

Lecturer: Vedhas Pandit

Date: 14:00, 18-05-2018

Building/Room: Eichleitnerstraße 30 / F2 304

Update on the De-Enigma project.

This talk will be a brief update on the Chair's ongoing work on the De-Enigma project. It will also highlight the remaining tasks and challenges associated with this project.


Title: Update on the De-Enigma project.

Lecturer: Dr. Nicholas Cummins

Date: 14:00, 11-05-2018

Building/Room: Eichleitnerstraße 30 / F2 304

Ethical considerations and the UAU Ethics application process

The ethical decisions behind the acquisition and analysis of multi-modal data harnessed for (deep) machine learning algorithms are an increasing concern for the Artificial Intelligence community. In this regard, we discuss the ethical considerations that should be in place when designing a data acquisition paradigm. Additionally, we offer a general outline of the procedure for applying for ethical clearance at UAU.


Title: Ethical considerations and the UAU Ethics application process

Lecturer: Alice Baird

Date: 14:00, 04-05-2018

Building/Room: Eichleitnerstraße 30 / F2 304

Robust Laughter Detection for Mobile Wellbeing Sensing on Wearable Devices

To build a noise-robust, online-capable laughter detector for behavioural monitoring on wearables, we incorporate context-sensitive Long Short-Term Memory Deep Neural Networks. We show our solution's improvements over a laughter detection baseline by integrating intelligent, noise-robust voice activity detection (VAD) into the same model. To this end, we add extensive artificially mixed VAD data without any laughter targets to a small laughter training set. The resulting laughter detection enhancements are stable even when frames are dropped, which happens in low-resource environments such as wearables. Thus, the outlined model generation potentially improves the detection of vocal cues when the amount of training data is small and robustness and efficiency are required.
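To illustrate the kind of model described, here is a minimal PyTorch sketch of a frame-wise recurrent network with joint laughter and VAD outputs; the layer sizes, feature dimensionality, and the exact multi-task formulation are illustrative assumptions, not the study's architecture:

```python
import torch
import torch.nn as nn

class LaughterVADNet(nn.Module):
    """Frame-wise LSTM with two output heads: laughter and voice activity.

    Sketch of a context-sensitive recurrent model in which VAD-only training
    material (no laughter targets) can still shape the shared layers.
    """

    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.laughter_head = nn.Linear(hidden, 1)  # P(laughter) per frame
        self.vad_head = nn.Linear(hidden, 1)       # P(voice) per frame

    def forward(self, x):                          # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return (torch.sigmoid(self.laughter_head(h)),
                torch.sigmoid(self.vad_head(h)))

# Toy forward pass: 8 utterances, 200 frames, 40 spectral features each.
model = LaughterVADNet()
laughter_prob, voice_prob = model(torch.randn(8, 200, 40))
```

For clips that carry only VAD annotations, the laughter head's loss term can simply be masked out, so the added data still trains the shared recurrent layers.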


Title: Robust Laughter Detection for Mobile Wellbeing Sensing on Wearable Devices

Lecturer: Gerhard Hagerer

Date: 14:00, 20-04-2018

Building/Room: Eichleitnerstraße 30 / F1 304

Learning Image-based Representations for Heart Sound Classification

Machine-learning-based heart sound classification represents an efficient technology that can help reduce the burden of manual auscultation through the automatic detection of abnormal heart sounds. In this regard, we investigate the efficacy of using Convolutional Neural Networks (CNNs) pre-trained on large-scale image data for the classification of Phonocardiogram (PCG) signals by learning deep PCG representations. First, the PCG files are segmented into chunks of equal length. Then, we extract a scalogram image from each chunk using a wavelet transformation. Next, the scalogram images are fed into either a pre-trained CNN, or the same network fine-tuned on heart sound data. Deep representations are then extracted from a fully connected layer of each network and classification is achieved by a static classifier. Alternatively, the scalogram images are fed into an end-to-end CNN formed by adapting a pre-trained network via transfer learning. Key results indicate that our deep PCG representations extracted from a fine-tuned CNN perform strongest, with 56.2 % mean accuracy, on our heart sound classification task. When compared to a baseline accuracy of 46.9 %, gained using conventional audio processing features and a support vector machine, this is a significant relative improvement of 19.8 % (p < .001 by one-tailed z-test).
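To make the pipeline concrete, here is a minimal sketch of the scalogram-plus-pretrained-CNN steps using the PyWavelets and torchvision packages; the wavelet, scales, network, and layer choices are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
import pywt                      # PyWavelets: continuous wavelet transform
import torch
from torchvision import models  # torchvision >= 0.13 for the weights enum

# 1) Compute a scalogram for one equal-length PCG chunk (stand-in signal here).
fs = 1000                                        # assumed sampling rate in Hz
pcg = np.random.randn(4 * fs)                    # placeholder for a 4 s chunk
coeffs, _ = pywt.cwt(pcg, scales=np.arange(1, 128), wavelet='morl')
scalogram = np.abs(coeffs)                       # (scales, time) image

# 2) Normalise and reshape into a 3-channel image tensor for the CNN.
img = torch.tensor(scalogram, dtype=torch.float32)
img = (img - img.min()) / (img.max() - img.min() + 1e-8)
img = img.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)          # (1, 3, H, W)
img = torch.nn.functional.interpolate(img, size=(224, 224))

# 3) Extract a deep representation from a fully connected layer.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
with torch.no_grad():
    feats = cnn.features(img).flatten(1)
    deep_repr = cnn.classifier[:4](feats)        # activations up to 2nd FC layer
print(deep_repr.shape)                           # torch.Size([1, 4096])
```

The extracted 4096-dimensional vectors would then be fed to a static classifier such as a support vector machine, as the abstract describes.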


Title: Learning Image-based Representations for Heart Sound Classification

Lecturer: Zhao Ren

Date: 14:00, 20-04-2018

Building/Room: Eichleitnerstraße 30 / F1 304

Towards Conditional Adversarial Training for Predicting Emotions from Speech

Motivated by the encouraging results recently obtained by generative adversarial networks in various image processing tasks, we propose a conditional adversarial training framework to predict dimensional representations of emotion, i.e., arousal and valence, from speech signals. The framework consists of two networks, trained in an adversarial manner: the first network tries to predict emotion from acoustic features, while the second network aims at distinguishing between the predictions provided by the first network and the emotion labels from the database, using the acoustic features as conditional information. We evaluate the performance of the proposed conditional adversarial training framework on the widely used emotion database RECOLA. Experimental results show that the proposed training strategy outperforms the conventional training method, and is comparable with, or even superior to, other recently reported approaches, including deep and end-to-end learning.
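A compressed PyTorch sketch of the described two-network scheme follows; the layer sizes, the 88-dimensional feature vector, and the optimiser settings are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

predictor = nn.Sequential(              # acoustic features -> (arousal, valence)
    nn.Linear(88, 64), nn.ReLU(), nn.Linear(64, 2))
discriminator = nn.Sequential(          # (features, emotion) -> real/fake score
    nn.Linear(88 + 2, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

x = torch.randn(32, 88)                 # toy batch of acoustic features
y = torch.rand(32, 2) * 2 - 1           # gold arousal/valence in [-1, 1]

# Discriminator step: gold (x, y) pairs are "real", predicted pairs are "fake".
y_hat = predictor(x).detach()
loss_d = (bce(discriminator(torch.cat([x, y], 1)), torch.ones(32, 1)) +
          bce(discriminator(torch.cat([x, y_hat], 1)), torch.zeros(32, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Predictor step: fool the discriminator, conditioned on the same features.
y_hat = predictor(x)
loss_p = bce(discriminator(torch.cat([x, y_hat], 1)), torch.ones(32, 1))
opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```

In practice, a supervised regression loss on (y_hat, y) is typically added to the predictor objective; the adversarial term then acts as a learned, feature-conditioned regulariser.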


Title: Towards Conditional Adversarial Training for Predicting Emotions from Speech

Lecturer: Jing Han

Date: 14:00, 13-04-2018

Building/Room: Eichleitnerstraße 30 / F1 304

What is my dog trying to tell me? The automatic recognition of the context and perceived emotion of dog barks

A wide range of research disciplines are deeply interested in the measurement of animal emotions, including evolutionary zoology, affective neuroscience, and comparative psychology. However, only a few studies have investigated the effect of phenomena such as emotion on the acoustic parameters of (non-human) mammalian species. In this contribution, we explore whether commonly used affective-computing-based acoustic feature sets can be used to classify either the context or the emotion, or to predict the emotional intensity, of dog bark sequences. This comparison study includes an in-depth analysis of the obtainable classification performances. The results presented indicate that the tested feature representations are suitable for the proposed recognition tasks. Of particular note are results demonstrating that machine-learning-based acoustic analysis can achieve above-human-level performance when classifying the context of a dog bark.
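As a concrete example of such a feature set, the ComParE functionals can be extracted with the openSMILE toolkit; a minimal sketch using the opensmile Python wrapper (the file name is a placeholder, and this is not necessarily the study's exact toolchain):

```python
import opensmile

# Extract the ComParE 2016 acoustic feature set (6 373 supra-segmental
# functionals per clip), as commonly used in computational paralinguistics.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file('dog_bark.wav')   # pandas DataFrame, 1 x 6373
print(features.shape)
```

The resulting fixed-length vectors can be fed directly to standard classifiers for the context, emotion, and intensity tasks described above.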


Title: What is my dog trying to tell me? The automatic recognition of the context and perceived emotion of dog barks

Lecturer: Simone Hantke

Date: 14:00, 13-04-2018

Building/Room: Eichleitnerstraße 30 / F1 304

Hierarchical Temporal Memory (HTM) Theory and Sparse Distributed Representation (SDR)

An alternative machine learning framework called 'Hierarchical Temporal Memory (HTM)' is claimed to be a much better abstraction of the human brain.
The theory I will present can be revisited by reading these papers (Paper 0, Paper 1, Paper 2) or via the accompanying playlist.


In a nutshell:

  • It does not use backpropagation.
  • It learns with a sensory-*motor* model, as we humans do.
  • HTM uses only binary sparse input representations (unlike classical NNs), called Sparse Distributed Representations (SDRs); see the sketch after this list.
  • Sparsity is the key to learning, similar to the sparse activations in our brain.
  • Learning takes place through reinforcement.
  • Each node also accepts inputs from the same layer and from the layer above (unlike 'classical' DL).
  • A node can be in one of three states: active, inactive, and predictive.
  • HTM ended up implementing ideas 'similar' to Hinton's Capsule Networks.
  • Its results are not yet as strong as those of deep learning approaches.
  • Implementations (NuPIC) are open source (https://github.com/numenta/nupic) and are currently maintained by Numenta: https://numenta.org/code/
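To make the SDR idea tangible, here is a tiny Python sketch of how SDRs are typically represented and compared by overlap (illustrative only; NuPIC's actual data structures and parameters differ):

```python
import numpy as np

def random_sdr(size=2048, n_active=40, rng=None):
    """A random SDR: 'size' bits of which only 'n_active' are 1 (~2 % sparsity)."""
    rng = rng or np.random.default_rng()
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, n_active, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return int(np.sum(a & b))

a, b = random_sdr(), random_sdr()
print(overlap(a, a))   # 40: identical SDRs share all active bits
print(overlap(a, b))   # near 0: random SDRs almost never collide
```

Because random SDRs almost never share many active bits, a high overlap is strong evidence of semantic similarity, which is what makes the representation robust to noise.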


Title: Hierarchical Temporal Memory (HTM) Theory and Sparse Distributed Representation (SDR)

Lecturer: Vedhas Pandit

Date: 11 AM on 06-02-2018

Building/Room: Eichleitnerstraße 30 / 207

Methods for biosignal data collection: A practical study for wellbeing applications

Title: Methods for biosignal data collection: A practical study for wellbeing applications

Lecturer: Miriam Berschneider

Date: 10 AM on 30-01-2018

Building/Room: Eichleitnerstraße 30 / 207

Contact: Supervised by Dr. Emilia Parada-Cabaleiro and Prof. Dr. habil. Björn Schuller

Collecting and analysing spoken language for an emotionally intelligent operating room

Title: Collecting and analysing spoken language for an emotionally intelligent operating room

Lecturer: Romeo Döring

Date: 12 noon on 30-01-2018

Building/Room: Eichleitnerstraße 30 / 207

Contact: Supervised by Simone Hantke, M. Sc., and Prof. Dr. habil. Björn Schuller

Influences of music on heart rate: A case study in sport informatics

Title: Influences of music on heart rate: A case study in sport informatics

Lecturer: Alexander Heimerl

Date: 10 AM on 30-01-2018

Building/Room: Eichleitnerstraße 30 / 207

Contact: Supervised by Dr. Emilia Parada-Cabaleiro and Prof. Dr. habil. Björn Schuller

Hybrid Convolutional Recurrent Neural Networks for Rare Acoustic Event Classification

Title: Hybrid Convolutional Recurrent Neural Networks for Rare Acoustic Event Classification

Lecturer: Sahib Julka

Date: 10 AM on 30-01-2018

Building/Room: Eichleitnerstraße 30 / 207

Contact: Supervised by Shahin Amiriparian, M. Sc., and Prof. Dr. habil. Björn Schuller

A Speech-Based Approach for Early Detection of Stroke

Title: A Speech-Based Approach for Early Detection of Stroke

Lecturer: Phillip Müller

Date: 10 AM on 30-01-2018

Building/Room: Eichleitnerstraße 30 / 207

Contact: Supervised by Shahin Amiriparian, M. Sc., and Prof. Dr. habil. Björn Schuller
