Bridging Latent Space Reasoning to External World Model Representation for Language Models with Iterative Hypothesis Cycles

This paper explores how language models generate and refine internal hypotheses while constructing world models, aiming to bridge their latent reasoning with structured external representations. By analyzing iterative hypothesis cycles, we investigate whether fundamental system rules emerge from latent space dynamics and propose methods to extract and refine these representations for structured reasoning.

March 2025 · Diksha Shrivastava, Mann Acharya, Dr. Tapas Badal

Grounding Inferred Relationships in Complex World Models with Continual Reasoning

This paper proposes a Continual Reasoning framework to improve language models’ ability to infer relationships in complex world models like ARC-AGI and DABStep. By leveraging a structured external memory for hypothesis generation and refinement, our approach allows models to iteratively learn relationships at inference time, enhancing their adaptability to out-of-distribution tasks.

February 2025 · Diksha Shrivastava, Mann Acharya, Dr. Tapas Badal
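
As a rough illustration of the loop this abstract describes, the sketch below shows an inference-time cycle in which a model proposes candidate relationships, records them in a structured external memory, and discards those contradicted by the task's worked examples. The `Hypothesis` and `HypothesisMemory` classes and the `model.propose`, `model.check`, and `model.answer` calls are illustrative placeholders, not the framework's actual interface, which the abstract does not specify.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A candidate relationship inferred from task examples (placeholder structure)."""
    statement: str        # e.g. "column A determines column B"
    support: int = 0      # number of examples consistent with the hypothesis
    refuted: bool = False

@dataclass
class HypothesisMemory:
    """Structured external memory persisted across reasoning cycles."""
    entries: list = field(default_factory=list)

    def add(self, hypothesis: Hypothesis) -> None:
        self.entries.append(hypothesis)

    def active(self) -> list:
        return [h for h in self.entries if not h.refuted]

def continual_reasoning(task, model, max_cycles: int = 5):
    """Iteratively propose, test, and refine hypotheses at inference time."""
    memory = HypothesisMemory()
    for _ in range(max_cycles):
        # Propose new hypotheses conditioned on the task and the surviving memory.
        for statement in model.propose(task, memory.active()):
            memory.add(Hypothesis(statement))
        # Test each surviving hypothesis against the task's worked examples.
        for h in memory.active():
            if all(model.check(h.statement, ex) for ex in task.examples):
                h.support += 1
            else:
                h.refuted = True
        if memory.active():
            break  # at least one hypothesis survives; use it to answer
    return model.answer(task, memory.active())
```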

Can Language Models Formulate ML Problems?

LLMs struggle to identify ML problems in real-world data, limiting their reliability for analytical tasks. While agentic systems offer partial solutions, true automation requires reasoning over complex systems. This blog examines these challenges and explores a new data representation model as a potential step forward.

November 2024 · Diksha Shrivastava

The Need for Hypotheses Generation Cycles, Similar Link Prediction & Agency for Dynamic Databases

A robust framework for reasoning requires more than memorization; it must dynamically form and refine hypotheses. Inspired by theorem-proving frameworks, I propose a dynamic database with static relationships and evolving entities, enabling hypothesis cycles and similar link prediction. This method allows LLMs to infer hidden relationships across subsystems, addressing challenges in AI-driven scientific discovery and decision-making.

November 2024 · Diksha Shrivastava
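
A minimal sketch of what such a dynamic database might look like, assuming a fixed set of relationship types over an evolving entity graph and a crude shared-neighbour heuristic standing in for similar link prediction. The `DynamicDatabase` class, the `networkx` backing store, and the scoring rule are illustrative choices, not the proposed design.

```python
import networkx as nx

# Relationship types are fixed up front (static schema); entities and links evolve.
RELATION_TYPES = {"depends_on", "reports_to", "derived_from"}

class DynamicDatabase:
    """Evolving entity graph over a static set of relationship types."""

    def __init__(self):
        self.graph = nx.MultiDiGraph()

    def add_entity(self, name: str, **attrs) -> None:
        self.graph.add_node(name, **attrs)

    def add_link(self, src: str, dst: str, relation: str) -> None:
        if relation not in RELATION_TYPES:
            raise ValueError(f"unknown relation type: {relation}")
        self.graph.add_edge(src, dst, relation=relation)

    def similar_link_candidates(self, relation: str):
        """Propose unseen (src, dst) pairs whose neighbourhoods resemble existing
        pairs linked by `relation`, using shared successors as a crude proxy."""
        existing = {(u, v) for u, v, d in self.graph.edges(data=True)
                    if d["relation"] == relation}
        candidates = []
        for u in self.graph.nodes:
            for v in self.graph.nodes:
                if u == v or (u, v) in existing:
                    continue
                shared = set(self.graph.successors(u)) & set(self.graph.successors(v))
                if shared:
                    candidates.append((u, v, len(shared)))
        return sorted(candidates, key=lambda c: -c[2])
```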

Developing Swan AI & the Six Graphical Representations for Complex Systems

I developed Swan AI to explore hybrid vector-graph representations for complex, interrelated systems. The goal was a data pipeline enabling AI to search, converse, and query while preserving hierarchical relationships. Existing knowledge graphs and vector databases lacked dynamic dependency modeling, which prompted my exploration of six graphical representations, including hybrid vector-graph models and TensorDB. The core research question: Can LLMs infer hidden relationships in unstructured, hierarchical data to automate decision-making?

October 2024 · Diksha Shrivastava
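
As one possible reading of a hybrid vector-graph representation, the sketch below attaches dense embeddings to the nodes of a hierarchy-preserving graph so that retrieval can combine cosine similarity with structural context. The `HybridVectorGraph` class and its methods are assumptions made for illustration; the six representations and TensorDB themselves are not detailed in this summary.

```python
import numpy as np
import networkx as nx

class HybridVectorGraph:
    """Hierarchy-preserving graph whose nodes also carry dense embeddings,
    so queries can mix vector similarity with graph traversal."""

    def __init__(self, dim: int = 768):
        self.dim = dim
        self.graph = nx.DiGraph()

    def add_node(self, node_id: str, embedding: np.ndarray, parent=None) -> None:
        # Store a unit-normalised embedding; link the node under its parent, if any.
        self.graph.add_node(node_id, embedding=embedding / np.linalg.norm(embedding))
        if parent is not None:
            self.graph.add_edge(parent, node_id, relation="contains")

    def vector_search(self, query: np.ndarray, k: int = 5):
        """Rank nodes by cosine similarity to the query embedding."""
        q = query / np.linalg.norm(query)
        scored = [(nid, float(q @ data["embedding"]))
                  for nid, data in self.graph.nodes(data=True)]
        return sorted(scored, key=lambda s: -s[1])[:k]

    def context_of(self, node_id: str):
        """Walk the hierarchy upwards so retrieved nodes keep their structural context."""
        return list(nx.ancestors(self.graph, node_id))
```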

Thought Experiment: Can LLMs Understand & Predict Similar Links in the 'Arc' World?

I tested LLMs’ ability to predict similar links in complex systems through a thought experiment in the ‘Arc’ World. Inspired by the ARC-AGI benchmark, I created hidden relationships in which the same objects carried different meanings. The experiment revealed that LLMs struggle with semantic ambiguity and with adapting to unseen structures, highlighting the challenge of enabling reasoning and generalization in AI.

October 2024 · Diksha Shrivastava

Introduction: The Problem with Holistic, Interrelated Systems

While working on BMZ’s AI system, I realized the problem required a merged vector-graph approach, but the closed nature of the project limited its broader impact. A friend’s advice to ‘generalize it’ led me to formalize holistic, interrelated systems—complex, multi-layered decision-making structures where subsystems interact dynamically. Inspired by Taikyoku Shogi, I explored how AI can infer hidden dependencies within unstructured data, a challenge spanning domains like global logistics, finance, and governance. The key research question emerged: Can we automate the discovery of implicit relationships in such systems?

October 2024 · Diksha Shrivastava