Bridging Latent Space Reasoning to External World Model Representation for Language Models with Iterative Hypothesis Cycles

This paper explores how language models generate and refine internal hypotheses while constructing world models, aiming to bridge their latent reasoning with structured external representations. By analyzing iterative hypothesis cycles, we investigate whether fundamental system rules emerge from latent space dynamics, and we propose methods to extract those emergent rules into external representations and refine them for structured reasoning.
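To make the propose, externalize, test, refine loop concrete, here is a minimal, self-contained sketch of one hypothesis cycle. Everything in it is an illustrative assumption rather than the paper's method: the hypothesis space is a toy threshold rule, and the "proposal" step is a random perturbation standing in for a language-model call over latent representations.

```python
# A toy hypothesis cycle: propose a candidate rule, externalize it in a
# structured form, score it against observations, and refine the best one.
import random
from dataclasses import dataclass

@dataclass
class Rule:
    """An external, structured representation of a candidate system rule."""
    threshold: float      # parameter of the symbolic form below
    score: float = 0.0    # agreement with observations in [0, 1]

    def structured(self) -> str:
        return f"linked(x, y) :- similarity(x, y) > {self.threshold:.2f}"

def score(rule: Rule, observations) -> float:
    """Fraction of (similarity, is_linked) observations the rule explains."""
    hits = sum((sim > rule.threshold) == linked for sim, linked in observations)
    return hits / len(observations)

def hypothesis_cycle(observations, n_iters=20, target=0.95):
    """Propose, test, and refine hypotheses; keep the best-scoring rule."""
    best = Rule(threshold=random.random())
    best.score = score(best, observations)
    for _ in range(n_iters):
        # "Latent" proposal step: perturb the current best hypothesis.
        step = random.uniform(-0.2, 0.2)
        candidate = Rule(threshold=min(1.0, max(0.0, best.threshold + step)))
        candidate.score = score(candidate, observations)
        if candidate.score > best.score:
            best = candidate              # refine: adopt the better hypothesis
        if best.score >= target:
            break
    return best

# Toy observations: entity pairs with a similarity value and a known link label.
obs = [(0.9, True), (0.8, True), (0.7, True), (0.4, False), (0.2, False)]
rule = hypothesis_cycle(obs)
print(rule.structured(), f"(score={rule.score:.2f})")
```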

March 2025 · Diksha Shrivastava, Mann Acharya, Dr. Tapas Badal

The Need for Hypothesis Generation Cycles, Similar Link Prediction & Agency for Dynamic Databases

A robust framework for reasoning requires more than memorization; it must dynamically form and refine hypotheses. Inspired by theorem-proving frameworks, I propose a dynamic database with static relationships and evolving entities, enabling hypothesis cycles and similar link prediction. This method allows LLMs to infer hidden relationships across subsystems, addressing challenges in AI-driven scientific discovery and decision-making.
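As a rough illustration of that design, the sketch below fixes a small relation schema (the static relationships), lets entity feature vectors be updated over time (the evolving entities), and frames similar link prediction as hypothesizing, for one entity, the links its most similar neighbor already holds. The names here (STATIC_RELATIONS, DynamicDB, predict_links) are hypothetical stand-ins, not the post's actual framework.

```python
# A minimal dynamic database: a fixed relation schema over evolving entities,
# with nearest-neighbor similar-link prediction to propose hidden links.
import math

STATIC_RELATIONS = {"part_of", "reports_to", "depends_on"}  # fixed schema

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

class DynamicDB:
    """Entities and their features evolve; the relation schema does not."""

    def __init__(self):
        self.entities = {}   # name -> feature vector (updated over time)
        self.links = set()   # (relation, source, target) triples

    def upsert_entity(self, name, features):
        self.entities[name] = features          # entities may change freely

    def add_link(self, relation, source, target):
        if relation not in STATIC_RELATIONS:    # relationships stay static
            raise ValueError(f"unknown relation: {relation}")
        self.links.add((relation, source, target))

    def predict_links(self, name):
        """Hypothesize hidden links for `name` from its most similar entity."""
        others = [e for e in self.entities if e != name]
        if not others:
            return set()
        me = self.entities[name]
        neighbor = max(others, key=lambda e: cosine(me, self.entities[e]))
        # Similar-link prediction: propose the neighbor's links, re-rooted here.
        return {(rel, name, dst) for rel, src, dst in self.links if src == neighbor}

db = DynamicDB()
db.upsert_entity("subsystem_a", [1.0, 0.2])
db.upsert_entity("subsystem_b", [0.9, 0.3])  # similar to subsystem_a
db.add_link("depends_on", "subsystem_b", "database")
print(db.predict_links("subsystem_a"))  # proposes ('depends_on', 'subsystem_a', 'database')
```

In a full system, the predicted links would feed back into a hypothesis cycle for validation rather than being committed directly.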

November 2024 · Diksha Shrivastava

Complete Blog: The Problem of Reasoning in Holistic Systems

This blog presents my research and engineering efforts in language model reasoning, abstract representation of linked entities, and link prediction. It explores my work at BMZ, where I developed agentic multi-hop reasoning systems for policy decisions, and how this experience led me to investigate hidden relationships in complex datasets. Through Swan AI, I examined whether language models can learn, predict, and represent unseen links in dynamic databases. The blog discusses experiments, insights from ARC-AGI, agency in dynamic learning pipelines, and the role of hypothesis cycles in continual learning, culminating in a proposed framework for link prediction and adaptive data representation.

September 2024 · Diksha Shrivastava