Call for Papers
| Milestone | Date |
| --- | --- |
| Acceptance notification | December 11, 2023 |
| Early registration | December 20, 2023 |
| Workshop | February 26, 2024 |
Workshop Theme and Goals
Representation learning has become a key research area in artificial intelligence, with the goal of automatically learning meaningful representations of data for a wide range of tasks. However, existing approaches often fail to consider the human perspective, leading to representations that may not be interpretable or relevant to both models and humans. For example, in self-supervised learning, existing models operate at the sample level and do not account for multiple views/modalities belonging to the same person. We invite researchers, practitioners, and industry experts to submit original research papers on all aspects of representation learning, with a focus on human-centric data beyond commonly used ML benchmarks. Topics of interest include:
- Effectiveness of self-supervised, semi-supervised, or supervised representation learning approaches in a human-centric context, such as through user studies or benchmarking experiments.
- Learning and fine-tuning with human feedback and interaction (e.g., human-in-the-loop systems such as RLHF).
- Efficacy of multimodal data in learning approaches, including the integration of visual, audio, time-series, and text data sources.
- Representation learning for novel and underrepresented data sources.
- Explainable and interpretable aspects of the learned representations.
- Novel ways of encoding non-language data into pre-trained models and LLMs.
- Human-centric applications: Speech and audio processing, pose estimation, affective computing, activity recognition, egocentric perception, biosignal analysis (ECG, EEG, EMG, PPG, EDA, and others), electronic health records, imaging, and wearable data.
We encourage submissions from a wide range of disciplines, including machine learning, human-computer interaction, health data science, and related fields.
All papers should be at most 4 pages in length, plus additional pages for references and supplementary materials, using the AAAI'24 Author Kit. The workshop is non-archival, but all accepted papers will be hosted on our website (with permission). We welcome submissions currently under review at other venues. Submissions will go through a double-blind review process.
All submissions must be anonymized. Any questions should be emailed to email@example.com.
We will host a full-day workshop with multiple invited keynotes from academic and industry experts, oral presentations, and poster sessions to give researchers a chance to engage in discussion with the workshop attendees. Remote participation options will be available to registered attendees. The last session of the day will be an interactive panel on "training representation learning models in the real world" with all invited speakers engaging with the audience through an open Q&A session.
Dimitris Spathis (Nokia Bell Labs / University of Cambridge)
Aaqib Saeed (Eindhoven University of Technology)
Ali Etemad (Queen’s University)
Sana Tonekaboni (University of Toronto)
Stefanos Laskaridis (Brave)
Shohreh Deldari (University of New South Wales)
Ian Tang (University of Cambridge)
Patrick Schwab (GSK)
Shyam Tailor (Google)