Discrete abstractions such as objects, concepts, and events form the basis of our ability to perceive the world, relate its parts, and reason about their causal structure. The research communities of object-centric representation learning and causal machine learning have, largely independently, pursued a similar agenda of equipping machine learning models with more structured representations and reasoning capabilities. Despite their different languages, both fields operate under the assumption that, compared to a monolithic/black-box representation, a structured model will improve systematic generalization, robustness to distribution shifts, downstream learning efficiency, and interpretability. However, the two communities typically approach the problem from opposite directions. Work on causality often assumes a known (true) decomposition into causal factors and focuses on inferring and leveraging the interactions between them. Object-centric representation learning, on the other hand, typically starts from unstructured input and aims to infer a useful decomposition into meaningful factors; it has so far been less concerned with their interactions.

This workshop aims to bring together researchers from object-centric and causal representation learning. To help integrate ideas from these areas, we also invite perspectives from other fields, including cognitive psychology and neuroscience. We hope this creates opportunities for discussion, for presenting cutting-edge research, for establishing new collaborations, and for identifying future research directions.

In particular, we welcome contributions in the direction of:

  • Benchmarks that quantify the benefits of structured representations (e.g. systematic generalization, OOD performance, robustness w.r.t. interventions, etc.)
  • Methods for discovering / extracting abstract entities from raw data, especially self-supervised learning of structured representations
  • Integrating ideas from causality into neural network architectures
  • Applying tools from deep learning to more traditional causal discovery approaches, potentially at the cost of recovery guarantees
  • Structure inference (relations, interactions, compositions, etc.), especially between unobserved variables
  • Reasoning tasks, interventional and counterfactual questions
  • Theoretical work on the challenges of learning abstractions and invariances from data
  • Discovering or leveraging objects, concepts or causal structures for reinforcement learning (e.g. for exploration or model-learning)
  • Integration of neural networks and symbolic or probabilistic reasoning (e.g. neurosymbolic methods or probabilistic programming)
  • Applications of objects, structured representations, or causal reasoning (e.g. in computer vision, audio processing, robotics)


Program Committee

William Agnew Ondrej Biza Johann Brehmer
Michael Chang Taco Cohen Antonia Creswell
Fei Deng Yilun Du Martin Engelcke
Yanwei Fu Anand Gopalakrishnan Pim De Haan
Rishabh Kabra Nan Rosemary Ke T. Anderson Keller
Andrew Kyle Lampinen Phillip Lippe Anthony Zhe Liu
Yingru Liu Jiachen Lu
Sara Magliacane Jiayuan Mao Loic Matthey
Ricardo Pio Monti Yash Sharma Gautam Singh
Sungryull Sohn Wolfgang Stammer Aleksandar Stanić
Sjoerd van Steenkiste Frederik Träuble Vivek Veeriah
Ziyi Wu Tianjun Xiao Matej Zečević
Hang Zhang Yi Zhu Daniel Zoran

Outstanding reviewer: Matej Zečević

For questions / comments, reach out to: objects-structure-causality@googlegroups.com

Website template adapted from the ORLR/OOL workshops, originally based on the template of the BAICS workshop.