
Overview

Date: February 26, 2024
Location: Vancouver Convention Centre, Vancouver, Canada

Representation learning has become a key research area in machine learning and artificial intelligence, with the goal of automatically learning useful and meaningful representations of data for a wide range of tasks. Powerful models such as GPT-4 and Stable Diffusion are trained in a self-supervised way to learn generalized representations. However, traditional representation learning approaches often fail to consider the human perspective and context, leading to representations that may not be interpretable or relevant to either models or humans. For example, in self-supervised learning, contrastive methods and masked autoencoders operate at the sample level and do not account for multiple views or modalities that may belong to the same person.
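To make the sample-level versus person-level distinction concrete, the following is a minimal sketch (illustrative only, not part of the workshop materials) of an InfoNCE-style contrastive loss in PyTorch where positives are any two views or modalities that share a subject ID, rather than two augmentations of the same sample. The function name, tensor shapes, and toy data are assumptions made for this example.

```python
# Minimal sketch (illustrative, not from the workshop): a subject-level
# InfoNCE-style contrastive loss where positives are any two views/modalities
# that share a person ID, rather than two augmentations of the same sample.
import torch
import torch.nn.functional as F


def subject_level_info_nce(embeddings, subject_ids, temperature=0.1):
    """embeddings: (N, D) view/modality embeddings; subject_ids: (N,) person IDs."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.T / temperature                                 # (N, N) cosine similarities
    self_mask = torch.eye(len(subject_ids), dtype=torch.bool)
    pos_mask = (subject_ids.unsqueeze(0) == subject_ids.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float("-inf"))          # exclude self-pairs
    log_prob = F.log_softmax(logits, dim=1)                     # negatives: all other samples in the batch
    # Average log-probability over each anchor's positives (same person, different view)
    per_anchor = log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -per_anchor[pos_mask.any(1)].mean()


# Toy usage: six embeddings from three people, two views/modalities each.
z = torch.randn(6, 128)
ids = torch.tensor([0, 0, 1, 1, 2, 2])
print(subject_level_info_nce(z, ids))
```

In a standard sample-level contrastive setup, the positive mask would instead be the diagonal pairing of two augmentations of the same sample; grouping by subject ID is one simple way to inject the human-centric structure the paragraph above describes.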

In this workshop at AAAI 2024, we aim to bring together researchers, practitioners, and industry experts to discuss original, unpublished research papers, case studies, and technical reports on all aspects of human-centric representation learning for real-world data, looking beyond commonly used benchmarks and modalities.

Schedule

The full schedule is available here. Keynote details can be found here. All accepted papers are available here.