
Multimodal System for Depression Analysis using Machine Learning Techniques by Observing Human Behaviour

This project aims to develop a multimodal system for depression analysis by observing human behaviour. The system applies machine learning techniques to data from three modalities: EEG (electroencephalography), speech/audio, and facial expressions/video.

1. EEG Module

The EEG module is built on the MODMA dataset from Lanzhou University, which provides EEG recordings from clinically depressed patients and healthy controls.

Dataset: MODMA (Multi-modal Open Dataset for Mental-disorder Analysis), Lanzhou University
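A common first step for EEG-based depression analysis is spectral feature extraction. The sketch below is illustrative only: it assumes a recording has already been loaded as a NumPy array, and the band definitions and Welch parameters are assumptions rather than this module's documented settings.

```python
import numpy as np
from scipy.signal import welch

# Frequency bands commonly used in EEG depression studies (assumed here).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(eeg: np.ndarray, fs: float) -> np.ndarray:
    """Summarise one recording as mean power per frequency band.

    `eeg` is a (channels, samples) array and `fs` the sampling rate in Hz;
    both are hypothetical inputs standing in for however the raw MODMA
    files are loaded in this repository.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean())  # average over channels and bins
    return np.array(feats)
```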

2. Speech/Audio Module

The speech/audio module analyses vocal characteristics extracted from recorded interviews for indicators of depression.

Dataset: DAIC-WOZ (Distress Analysis Interview Corpus Wizard-of-Oz)
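As an illustrative sketch of audio feature extraction, the snippet below summarises one interview recording with MFCC statistics using librosa. The file path, the 16 kHz sampling rate, and the MFCC-statistics feature choice are assumptions, not necessarily the pipeline used in this repository.

```python
import numpy as np
import librosa

def mfcc_features(wav_path: str, n_mfcc: int = 20) -> np.ndarray:
    """Represent one DAIC-WOZ session as the mean/std of its MFCCs."""
    # Load mono audio, resampled to 16 kHz (an assumed rate).
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pool frame-level coefficients into one fixed-length session vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```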

3. Facial Expressions/Video Module

The facial expressions/video module analyses facial-expression features derived from interview video to assess depression levels.

Dataset: DAIC-WOZ (Distress Analysis Interview Corpus Wizard-of-Oz)
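For privacy reasons, DAIC-WOZ distributes per-frame facial descriptors (e.g., OpenFace action units, gaze, and head pose) rather than raw video. The sketch below pools such frame-level features into one vector per session; the CSV layout and the "AU" column prefix are assumptions about the released files, not code from this repository.

```python
import numpy as np
import pandas as pd

def facial_features(csv_path: str) -> np.ndarray:
    """Pool per-frame action-unit intensities into a session-level vector."""
    frames = pd.read_csv(csv_path)
    # Column names are assumed to follow the OpenFace "AU..." convention.
    au_cols = [c for c in frames.columns if c.strip().startswith("AU")]
    au = frames[au_cols].to_numpy(dtype=float)
    return np.concatenate([au.mean(axis=0), au.std(axis=0)])
```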

The multimodal system combines the outputs of these three modules into a single assessment of an individual's depression level. By integrating evidence from multiple modalities, the system aims to be more accurate and robust than any single-modality model; a minimal fusion sketch follows.
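One simple way to combine the modules is late fusion: train an independent classifier per modality and average their predicted probabilities. Logistic regression, equal weighting, and the 0.5 decision threshold below are assumptions for illustration, not the fusion strategy documented in this repository.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def late_fusion_predict(train_feats, labels, test_feats):
    """Average per-modality probabilities of the 'depressed' class.

    `train_feats` and `test_feats` are lists with one feature matrix per
    modality (EEG, audio, facial); `labels` holds binary training labels.
    """
    probs = []
    for X_train, X_test in zip(train_feats, test_feats):
        clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
        probs.append(clf.predict_proba(X_test)[:, 1])
    fused = np.mean(probs, axis=0)          # equal-weight average
    return (fused >= 0.5).astype(int)       # 1 = predicted depressed
```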

Please refer to the individual module directories for more detailed information on the datasets and the techniques used.
