IJSRP, Volume 15, Issue 7, July 2025 Edition [ISSN 2250-3153]
Him Raj Singh
Abstract:
The rise of large language models (LLMs) and agentic AI is reshaping software testing. Traditional test automation struggles with fragmented data and brittle scripts. The Model Context Protocol (MCP), introduced by Anthropic, is an open client-server protocol (likened to a USB-C port for AI) that standardizes how AI agents connect to real-world systems (databases, APIs, browsers, etc.), giving agents dynamic context. By leveraging MCP, AI-powered agents (e.g., ChatGPT, Claude, Copilot in agent mode) can generate, execute, and adapt tests from natural-language instructions. This article examines how MCP enables agentic AI for testing – the benefits (faster test creation, broader coverage, self-healing), the risks (security, reliability), the challenges of legacy testing, and future directions (industry adoption, new protocols, multi-agent workflows). We highlight practical examples and case studies, including AI-driven tools for Jira/Atlassian, GitHub, Playwright, and others, to illustrate MCP's transformative potential in QA.
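To make the client-server interaction concrete, the sketch below constructs the kind of JSON-RPC 2.0 messages MCP exchanges when an agent invokes a tool. The method name `tools/call` comes from the MCP specification; the `run_ui_test` tool, its arguments, and the server's reply text are hypothetical, chosen only to illustrate a testing scenario:

```python
import json

# MCP is layered on JSON-RPC 2.0: the client (an AI agent) asks an MCP
# server to invoke a tool the server has advertised. The "run_ui_test"
# tool and its arguments here are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for tool invocation
    "params": {
        "name": "run_ui_test",  # hypothetical tool on a test-automation server
        "arguments": {
            "url": "https://example.com/login",
            "scenario": "valid credentials",
        },
    },
}

# A conforming server replies with a result that echoes the request id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "PASS: login succeeded"}]},
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
print(json.loads(wire)["method"])
```

Because the envelope is plain JSON-RPC, any system that can serialize these messages (a Jira tracker, a GitHub repository, a Playwright browser session) can be exposed to an agent through the same uniform interface.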