MolmoSpaces: A Rich Asset Hub for Robotics and Embodied AI
MolmoSpaces is a dataset released by the Allen Institute for AI (AI2) that bundles the asset data for the MolmoSpaces project: a comprehensive collection of 3‑D objects, robot models, scene definitions, grasp annotations, and benchmark suites aimed at research in robotics, embodied AI, and simulation.
The repository offers dozens of configurations covering both Isaac‑compatible USD assets and MuJoCo assets. Each config (e.g., `isaac__objects__objaverse__20260128` or `mujoco__robots__franka_droid__20260127`) contains parquet files with fields such as `path`, `shard_id`, `offset`, and `size`. The dataset size falls in the 100 K–1 M example range and is stored in optimized parquet format, making it suitable for loading with the Hugging Face `datasets`, `pandas`, or `polars` libraries. Tags highlight its focus on robotics, embodied AI, grasps, objects, scenes, and benchmarks.
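The `path`/`shard_id`/`offset`/`size` fields suggest each parquet row is an index entry pointing at a byte range inside a shard file. The exact shard layout and file naming are not documented in the card, so the sketch below is an assumption: a minimal record type plus a helper that seeks into a shard and reads one asset's bytes.

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class AssetRecord:
    """One index row of a MolmoSpaces config.

    Field names follow the dataset card; the shard file layout
    assumed by extract_asset() is hypothetical.
    """
    path: str      # logical asset path, e.g. an object or scene file
    shard_id: int  # which shard file holds the bytes
    offset: int    # byte offset of the asset within that shard
    size: int      # length of the asset in bytes


def extract_asset(shard_dir: Path, record: AssetRecord) -> bytes:
    """Read one asset's bytes out of its shard by seeking to its offset."""
    # Shard naming is illustrative, not taken from the dataset.
    shard_path = shard_dir / f"shard_{record.shard_id:05d}.bin"
    with shard_path.open("rb") as f:
        f.seek(record.offset)
        return f.read(record.size)
```

In practice you would load the index itself with `datasets`, `pandas`, or `polars` and iterate its rows into records like the one above; seeking by offset avoids reading whole shards when you only need a few assets.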
MolmoSpaces is distributed under two permissive licenses: the Objaverse subset uses ODC‑BY 1.0, while all other subsets use CC‑BY 4.0. The README advises using the provided `download.py` script (with dependencies like `zstandard`, `datasets`, and `tqdm`) to cache selected asset groups locally. The data is intended for research and educational purposes and follows AI2’s Responsible Use Guidelines.
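Getting started would look roughly like the following; the README names the dependencies, but the script's command-line interface is not described here, so no flags are shown.

```shell
# Install the dependencies the MolmoSpaces README lists for its download helper.
pip install zstandard datasets tqdm

# Run the provided helper to cache selected asset groups locally.
# Consult the script's own usage/help output for how to select groups.
python download.py
```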
A recent update (2026‑02‑16) added Isaac‑compatible USD objects and scenes, expanding the dataset’s cross‑simulation appeal. This addition, combined with the breadth of assets and benchmarks, has driven community interest and contributed to its trending status on the Hub.
Project Ideas
- Use the grasp configs to train a neural network that predicts viable grasp poses for novel objects in simulation.
- Build an end‑to‑end manipulation pipeline in Isaac or MuJoCo by loading the provided robot, object, and scene assets to evaluate pick‑and‑place policies.
- Create an embodied AI navigation benchmark using the scene configs, measuring agents' ability to traverse complex indoor environments.
- Develop a Blender rendering pipeline that reads the object paths to generate photorealistic image datasets for vision research.
- Study cross‑simulation transfer by training a control policy in MuJoCo with the robot assets and testing its performance on the Isaac USD equivalents.
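For the grasp-prediction idea, a first sanity check before running anything in simulation is to score predicted grasp poses against the dataset's grasp annotations. The sketch below is a deliberately simple proxy metric under assumed conventions (positions only, in metres); a real evaluation would also compare orientations and execute the grasps in Isaac or MuJoCo.

```python
import math

# Grasp position (x, y, z) in metres; orientation omitted for brevity.
Pose = tuple[float, float, float]


def grasp_success_rate(predicted: list[Pose],
                       annotated: list[Pose],
                       tol: float = 0.02) -> float:
    """Fraction of predicted grasps within `tol` metres of any
    annotated viable grasp. The tolerance value is an assumption."""
    if not predicted:
        return 0.0
    hits = sum(
        1 for p in predicted
        if any(math.dist(p, a) <= tol for a in annotated)
    )
    return hits / len(predicted)
```

Nearest-neighbour matching like this is cheap enough to run over the whole grasp config, which makes it useful for tracking training progress even though only simulated (or real) execution confirms a grasp actually succeeds.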