AI-Driven Python Code Quality Assessment
This project explores the use of Artificial Intelligence (AI) to assess the quality of Python code. It includes a literature review on classical and AI-based methods for code evaluation, the development of AI agents for collaborative quality assessment, and the creation or use of labeled datasets for tool validation. The project also investigates gamification strategies to engage users in improving their code quality through feedback and motivation.
Keywords: AI, code quality, Python, Large Language Models, gamification, automation, dataset creation
In this project, you support the young ETH Spin-Off “nihito” on their mission to empower everyone to build software for everyone.
Code quality is crucial for maintainability, performance, and collaboration in software development. However, assessing (and improving) code quality remains a challenge, especially for less experienced developers. This project explores how AI, particularly Large Language Models (LLMs), can assist in automated Python code quality assessment.
The project will start with a literature review of existing methods, both classical and AI-based, to define quality metrics and assessment techniques. In a second step, one or more AI agents will be designed and implemented to collaboratively assess code quality against various criteria. To evaluate and test the developed tool, we need Python code for validation; if no suitable labeled datasets exist, we may have to create our own.
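As a rough illustration of this second step, the sketch below combines a classical, rule-based check (branch counting via Python's standard ast module, a crude stand-in for cyclomatic complexity) with the kind of agent interface an LLM-backed assessor could share. The names (Finding, ComplexityAgent, the 20-branch threshold) are illustrative assumptions, not project deliverables.

```python
# Minimal sketch, assuming a shared per-criterion "Finding" result type.
import ast
from dataclasses import dataclass

@dataclass
class Finding:
    criterion: str   # e.g. "complexity", "naming", "documentation"
    score: float     # normalised: 0.0 (poor) .. 1.0 (excellent)
    comment: str

def branch_count(source: str) -> int:
    """Count branching nodes as a crude stand-in for cyclomatic complexity."""
    tree = ast.parse(source)
    return sum(isinstance(node, (ast.If, ast.For, ast.While, ast.Try))
               for node in ast.walk(tree))

class ComplexityAgent:
    """Classical, rule-based agent: penalises heavily branched code."""
    def assess(self, source: str) -> Finding:
        branches = branch_count(source)
        score = max(0.0, 1.0 - branches / 20)  # assumed threshold, to be tuned
        return Finding("complexity", score, f"{branches} branching statements")

# An LLM-backed agent would implement the same assess() interface but delegate
# the judgement to a model prompt; it is omitted here because the concrete
# model and API are open design choices in this project.

if __name__ == "__main__":
    sample = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
    print(ComplexityAgent().assess(sample))
```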
Lastly, we will explore gamification strategies to engage potential users of this tool and motivate them to work on their code to improve their quality score.
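To make the gamification idea concrete, here is one hypothetical way per-criterion findings could be aggregated into a 0-100 quality score and mapped to badges. The weights, thresholds, and badge names are invented for illustration only.

```python
# Sketch of a gamified quality score, assuming per-criterion scores in 0..1.
from typing import Dict

LEVELS = [(90, "Gold"), (75, "Silver"), (50, "Bronze"), (0, "Getting started")]

def quality_score(findings: Dict[str, float],
                  weights: Dict[str, float]) -> float:
    """Weighted average of per-criterion scores, scaled to 0-100."""
    total_weight = sum(weights.get(c, 1.0) for c in findings)
    weighted = sum(score * weights.get(c, 1.0) for c, score in findings.items())
    return 100 * weighted / total_weight if total_weight else 0.0

def badge(score: float) -> str:
    """Map a score to the first level whose threshold it meets."""
    return next(name for threshold, name in LEVELS if score >= threshold)

if __name__ == "__main__":
    findings = {"complexity": 0.8, "naming": 0.6, "documentation": 0.4}
    weights = {"complexity": 2.0}   # assumed emphasis on complexity
    s = quality_score(findings, weights)
    print(f"score={s:.0f}, badge={badge(s)}")
```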
In addition to the technical aspects, this project offers a unique opportunity to gain insight into the workings of a young software start-up, giving you exposure to the fast-paced environment of an ETH Spin-Off.
- Define and quantify code quality through a literature review.
- Develop AI agents to assess code quality.
- Implement gamification to engage users.