Project
Multi-Agentic Framework for Code Bug Summarisation and Evaluation
Objective
The increasing complexity of modern software systems makes error detection and correction challenging for developers, necessitating accurate and insightful error explanations. Generative Large Language Models (LLMs) offer automated solutions but often suffer from hallucination, lack of relevance, and repetitiveness, making manual evaluation labour-intensive and non-scalable. Existing approaches to evaluating machine-generated summaries also fall short in domain specificity, failing to capture the semantic precision needed for accurate error identification and resolution in code. To address these challenges, we introduce a multi-agentic framework designed to automate the generation and evaluation of erroneous code explanations, ensuring clarity, accuracy, and domain-specific relevance.
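As a rough illustration of the generate-and-evaluate idea described above, the sketch below pairs a generator agent (which explains a bug) with an evaluator agent (which judges the explanation and can request a revision). This is only a conceptual sketch, not the project's actual design; `call_llm`, the prompts, and the accept/revise protocol are hypothetical placeholders.

```python
# Conceptual sketch of a two-agent generate-then-evaluate loop for bug explanations.
# `call_llm` is a hypothetical placeholder for whatever LLM backend the project uses.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (model/API not specified here)."""
    raise NotImplementedError("Wire this to the chosen LLM service.")


@dataclass
class Evaluation:
    explanation: str
    verdict: str   # e.g. "accept" or "revise"
    feedback: str


def generator_agent(buggy_code: str, error_log: str) -> str:
    """Agent 1: produce a concise, developer-facing explanation of the error."""
    prompt = (
        "Explain the root cause of the error below in 2-3 sentences, "
        "referencing the relevant lines of code.\n\n"
        f"Code:\n{buggy_code}\n\nError log:\n{error_log}"
    )
    return call_llm(prompt)


def evaluator_agent(buggy_code: str, explanation: str) -> Evaluation:
    """Agent 2: judge the explanation for accuracy, relevance, and hallucination."""
    prompt = (
        "You are reviewing an explanation of a code bug. "
        "Reply with 'accept' or 'revise', then one sentence of feedback.\n\n"
        f"Code:\n{buggy_code}\n\nExplanation:\n{explanation}"
    )
    reply = call_llm(prompt)
    verdict = "accept" if reply.lower().startswith("accept") else "revise"
    return Evaluation(explanation=explanation, verdict=verdict, feedback=reply)


def explain_bug(buggy_code: str, error_log: str, max_rounds: int = 3) -> Evaluation:
    """Alternate generation and evaluation until the evaluator accepts or rounds run out."""
    explanation = generator_agent(buggy_code, error_log)
    result = evaluator_agent(buggy_code, explanation)
    for _ in range(max_rounds - 1):
        if result.verdict == "accept":
            break
        explanation = generator_agent(
            buggy_code, error_log + "\n\nReviewer feedback: " + result.feedback
        )
        result = evaluator_agent(buggy_code, explanation)
    return result
```

In this kind of loop, the evaluator's feedback is fed back into the generator's prompt, which is one common way to reduce hallucinated or irrelevant explanations without manual review of every output.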
Outcome
Paper
Apply By Date
18 Oct 2024
Students
1 / 1
Duration
4 Months
Mentor
Debanjana Kar
Tools-Technologies
NLP API, Spark
Platform
IBM Bluemix (www.bluemix.com)
College