By Mahesh Sadupalli, M.Sc. Artificial Intelligence
Supervisor: Dr. Mahdi Taheri
Chair: Fakultät 1 – Fachgebiet Technische Informatik, BTU Cottbus-Senftenberg
Large Language Models (LLMs) like GPT-4 and Claude have transformed how we generate and understand human-like language. However, they come with notable limitations: their knowledge is fixed at training time, they can hallucinate plausible-sounding but incorrect details, and they lack depth in specialized technical domains.
Retrieval-Augmented Generation (RAG) addresses these limitations by combining LLMs with retrieval over an external knowledge base: relevant documents are fetched at query time and supplied to the model as grounding context for its answer.
This project applies the RAG+LLM architecture to hardware verification and EDA workflows, enabling smarter, explainable, and more accurate analysis pipelines.
Large Language Models (LLMs) like GPT or LLaMA are impressive in language generation, but their general training data often lacks the precision and depth required for technical domains like Electronic Design Automation (EDA). This mismatch results in hallucinations and limited reasoning when applied to tasks like formal verification, waveform interpretation, or RTL debugging.
Our project addresses this limitation by integrating Retrieval-Augmented Generation (RAG) into the LLM pipeline. By connecting LLMs to a curated knowledge base of EDA documents—such as SystemVerilog assertions, waveform logs, and simulation outputs—we empower them to reason with up-to-date, accurate, and domain-specific information.
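The core retrieval step can be sketched in a few lines. The snippet below is a minimal, self-contained illustration (not our production pipeline): it uses a toy bag-of-words cosine similarity in place of a real embedding model, and the document names and contents are invented stand-ins for EDA knowledge-base chunks.

```python
import math
import re
from collections import Counter

# Toy in-memory knowledge base; in the real pipeline these would be
# chunks of EDA documents (SVA references, waveform logs, sim output).
DOCS = {
    "sva_basics": "A SystemVerilog assertion checks a property over clock cycles using assert property",
    "waveform_log": "Waveform log shows signal req asserted at cycle 12 and ack never rising",
    "sim_output": "Simulation finished with 2 assertion failures in module fifo_ctrl",
}

def bow(text):
    """Bag-of-words vector: lowercased word counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k document ids most similar to the query."""
    q = bow(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, bow(DOCS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Assemble a grounded prompt: retrieved context + user question."""
    context = "\n".join(f"[{d}] {DOCS[d]}" for d in retrieve(query))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

print(build_prompt("Why did the assertion fail in fifo_ctrl?"))
```

Swapping the bag-of-words scorer for dense embeddings and a vector store turns this sketch into the usual RAG architecture; the prompt-assembly step stays essentially the same.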
This hybrid system not only improves factual accuracy but also introduces transparency through Explainable AI (XAI) techniques, offering insights into how the model arrived at each conclusion. Whether it’s detecting design bugs or suggesting testbench improvements, this tool represents a leap forward in intelligent automation for chip design.
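One simple form of this transparency is source attribution: because RAG keeps the retrieved chunks alongside the generated answer, we can report which document most supports the output. The sketch below shows a crude token-overlap score as an illustrative XAI signal; the source names and texts are hypothetical examples, not outputs of our system.

```python
def attribution(answer, sources):
    """Rank retrieved sources by token overlap with the answer
    (a crude explainability signal for a RAG pipeline)."""
    ans = set(answer.lower().split())
    scores = {name: len(ans & set(text.lower().split())) / len(ans)
              for name, text in sources.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical retrieved chunks and model answer.
sources = {
    "sim_output": "simulation reported 2 assertion failures in module fifo_ctrl",
    "style_guide": "use lowercase signal names and document every module port",
}
answer = "the fifo_ctrl module had 2 assertion failures during simulation"

for name, score in attribution(answer, sources):
    print(f"{name}: {score:.2f}")
```

Real deployments would replace the overlap heuristic with attention-based or gradient-based attribution, but even this simple score lets a user see which retrieved evidence an answer leans on.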
Explore the source code, architecture, and example queries on our GitHub repository.
(Coming soon!) We'll upload a diagram of the end-to-end RAG+LLM pipeline.
Questions or collaborations? Reach out via GitHub or email: maheshsadupalli [at] gmail.com