Multimodal System for Depression Analysis using Machine Learning Techniques by Observing Human Behaviour
This project aims to develop a multimodal system for depression analysis by observing human behaviour. The system applies machine learning techniques to data from three modalities: EEG (electroencephalography), speech/audio, and facial expressions/video.
The EEG module uses the MODMA (Multi-modal Open Dataset for Mental-disorder Analysis) dataset from Lanzhou University, which provides EEG recordings from clinically depressed patients and healthy controls.
Dataset: MODMA - Lanzhou University
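As a starting point for this module, the sketch below computes classical band-power features from an EEG epoch. It is a minimal illustration rather than the project's actual pipeline: the sampling rate, band edges, and channel count are assumptions, and random data stands in for a real MODMA recording.

```python
# Minimal sketch: band-power features from one EEG epoch.
# Assumes the recording is already loaded as a NumPy array of shape
# (n_channels, n_samples); sampling rate and band edges are
# illustrative choices, not values taken from the MODMA documentation.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumption; check the dataset docs)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(eeg: np.ndarray) -> np.ndarray:
    """Return one mean band-power value per (channel, band) pair."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power per channel
    return np.concatenate(feats)

# Random data standing in for a 10-second, 128-channel epoch:
epoch = np.random.randn(128, FS * 10)
print(band_power_features(epoch).shape)  # (512,) = 128 channels x 4 bands
```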
The speech/audio module analyzes participants' recorded speech for acoustic markers of depression.
Dataset: DAIC-WOZ (Distress Analysis Interview Corpus Wizard-of-Oz)
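For the speech module, a common baseline is to summarize MFCCs over a whole recording. The sketch below assumes librosa is available; the file path and the 16 kHz resampling rate are illustrative assumptions, not fixed choices of this project.

```python
# Minimal sketch: utterance-level speech features as MFCC statistics.
import numpy as np
import librosa

def speech_features(wav_path: str) -> np.ndarray:
    """Mean and std of 13 MFCCs over the whole recording."""
    y, sr = librosa.load(wav_path, sr=16000)  # resample to 16 kHz (assumption)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # (26,)

# Example (hypothetical session path; adapt to the DAIC-WOZ layout):
# feats = speech_features("data/daic_woz/300_AUDIO.wav")
```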
The facial expressions/video module analyzes facial behaviour captured during the interviews to assess depression severity.
Dataset: DAIC-WOZ (Distress Analysis Interview Corpus Wizard-of-Oz)
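For privacy reasons, DAIC-WOZ distributes pre-extracted facial features (OpenFace-derived landmarks, action units, gaze, and head pose) rather than raw video, so this module can operate on those time series directly. The sketch below summarizes an action-unit file per session; the file name and column naming are assumptions to be checked against the corpus documentation.

```python
# Minimal sketch: session-level statistics over a facial action-unit
# time series shipped with DAIC-WOZ as a CSV/TXT file.
import numpy as np
import pandas as pd

def facial_features(au_path: str) -> np.ndarray:
    """Mean and std of each action-unit column over the session."""
    df = pd.read_csv(au_path)
    au_cols = [c for c in df.columns if "AU" in c]  # assumed column naming
    values = df[au_cols].to_numpy(dtype=float)
    return np.concatenate([values.mean(axis=0), values.std(axis=0)])

# Example (hypothetical session path; adapt to the DAIC-WOZ layout):
# feats = facial_features("data/daic_woz/300_CLNF_AUs.txt")
```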
The multimodal system fuses the outputs of these three modules to provide a comprehensive analysis of an individual's depression level. By integrating complementary signals from multiple modalities, the system aims to be more accurate and reliable than any single modality alone.
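One simple way to realize this combination is decision-level (late) fusion: train one classifier per modality and average their predicted probabilities. The sketch below illustrates the idea with random arrays standing in for the three modules' feature outputs; the feature dimensions and the logistic-regression choice are assumptions, not the project's final design.

```python
# Minimal late-fusion sketch: one classifier per modality, final score
# taken as the average of the per-modality positive-class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100
y = rng.integers(0, 2, n)  # toy labels: 0 = control, 1 = depressed
modalities = {                      # placeholder feature matrices
    "eeg": rng.normal(size=(n, 512)),
    "speech": rng.normal(size=(n, 26)),
    "face": rng.normal(size=(n, 40)),
}

# Fit one model per modality on the same subjects.
models = {m: LogisticRegression(max_iter=1000).fit(X, y)
          for m, X in modalities.items()}

# Late fusion: average the positive-class probabilities across modalities.
probs = np.mean([models[m].predict_proba(X)[:, 1]
                 for m, X in modalities.items()], axis=0)
pred = (probs >= 0.5).astype(int)
print("fused training accuracy:", (pred == y).mean())
```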
Please refer to the respective modules for more detailed information on the datasets and the techniques used.