Workshop on the Elements of Reasoning:
Objects, Structure, and Causality (OSC)

April 28 or 29, 2022, Virtual ICLR 2022 Workshop

Discrete abstractions such as objects, concepts, and events form the basis of our ability to perceive the world, relate its parts, and reason about their causal structure. The research communities of object-centric representation learning and causal machine learning have – largely independently – pursued a similar agenda: equipping machine learning models with more structured representations and reasoning capabilities. Despite their different languages, both fields operate under the assumption that, compared to a monolithic/black-box representation, a structured model will improve systematic generalization, robustness to distribution shifts, downstream learning efficiency, and interpretability. However, the two communities typically approach the problem from opposite directions. Work on causality often assumes a known (true) decomposition into causal factors and focuses on inferring and leveraging the interactions between them. Object-centric representation learning, on the other hand, typically starts from unstructured input and aims to infer a useful decomposition into meaningful factors; it has so far been less concerned with their interactions.

This workshop aims to bring together researchers from object-centric and causal representation learning. To help integrate ideas from these areas, we also invite perspectives from other fields, including cognitive psychology and neuroscience. We hope this creates opportunities for discussion, for presenting cutting-edge research, for establishing new collaborations, and for identifying future research directions.

In particular, we welcome contributions on the following topics:

  • Benchmarks that quantify the benefits of structured representations (e.g. systematic generalization, OOD performance, robustness to interventions, etc.)
  • Methods for discovering / extracting abstract entities from raw data, especially self-supervised learning of structured representations
  • Integrating ideas from causality into neural network architectures
  • Applying tools from deep learning to more traditional causal discovery approaches, which may sacrifice recovery guarantees
  • Structure inference (relations, interactions, compositions, etc.), especially between unobserved variables
  • Reasoning tasks, including interventional and counterfactual questions
  • Theoretical work on the challenges of learning abstractions and invariances from data
  • Discovering or leveraging objects, concepts or causal structures for reinforcement learning (e.g. for exploration or model-learning)
  • Integration of neural networks and symbolic or probabilistic reasoning (e.g. neurosymbolic methods or probabilistic programming)
  • Applications of objects, structured representations, or causal reasoning (e.g. in computer vision, audio processing, robotics)


For questions / comments, reach out to:

Website template adapted from the ORLR/OOL workshops, originally based on the template of the BAICS workshop.