Bridging Latent Space Reasoning to External World Model Representation for Language Models with Iterative Hypothesis Cycles

This paper explores how language models generate and refine internal hypotheses while constructing world models, aiming to bridge their latent-space reasoning with structured external representations. By analyzing iterative hypothesis cycles, we investigate whether fundamental system rules emerge from latent-space dynamics, and we propose methods to extract and refine these representations for structured reasoning.

March 2025 · Diksha Shrivastava, Mann Acharya, Dr. Tapas Badal

Grounding Inferred Relationships in Complex World Models with Continual Reasoning

This paper proposes a Continual Reasoning framework to improve language models’ ability to infer relationships in complex world models, evaluated on benchmarks such as ARC-AGI and DABStep. By leveraging a structured external memory for hypothesis generation and refinement, our approach lets models iteratively learn relationships at inference time, enhancing their adaptability to out-of-distribution tasks.

February 2025 · Diksha Shrivastava, Mann Acharya, Dr. Tapas Badal