# Testing Guide
Comprehensive testing documentation for the MCP Gateway Registry project.
## Table of Contents
- Quick Start
- Backend Tests (Python)
- Frontend Tests (TypeScript/React)
- Test Structure
- Running Tests
- Test Categories
- Coverage Requirements
- CI/CD Integration
- Troubleshooting
## Quick Start

### Backend (Python)

```bash
# Run all backend tests
make test

# Unit tests only (fast)
make test-unit

# With coverage
make test-coverage
```
### Frontend (TypeScript/React)

```bash
cd frontend

# Run all frontend tests
npm test

# Watch mode
npm run test:watch

# With coverage
npm run test:coverage
```
## Backend Tests (Python)

The backend uses pytest for testing. See below for detailed commands.
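For orientation, a minimal unit test in this suite's style might look like the following sketch (the function under test is a self-contained stand-in, not actual registry code):

```python
# tests/unit/test_example.py — sketch; the tested function is a stand-in
import pytest


def merge_tags(existing: list[str], new: list[str]) -> list[str]:
    """Stand-in for registry logic so this example is self-contained."""
    return sorted(set(existing) | set(new))


@pytest.mark.unit
def test_merge_tags_deduplicates():
    assert merge_tags(["search"], ["search", "health"]) == ["health", "search"]
```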
### Run Backend Tests

Run specific test categories:

```bash
# Unit tests only (fast)
make test-unit

# Integration tests
make test-integration

# E2E tests (slow)
make test-e2e

# With coverage report
make test-coverage
```

Run tests using pytest directly:

```bash
# All tests
uv run pytest

# Specific test file
uv run pytest tests/unit/test_server_service.py

# Specific test class
uv run pytest tests/unit/test_server_service.py::TestServerService

# Specific test function
uv run pytest tests/unit/test_server_service.py::TestServerService::test_register_server

# With verbose output
uv run pytest -v

# With coverage
uv run pytest --cov=registry --cov-report=html
```
## Frontend Tests (TypeScript/React)

The frontend uses Vitest with React Testing Library. For complete documentation, see the Frontend Testing Guide.
### Quick Commands

```bash
cd frontend

# Run all tests
npm test

# Watch mode (recommended for development)
npm run test:watch

# With Vitest UI
npm run test:ui

# With coverage report
npm run test:coverage

# Run specific test file
npm test -- tests/components/Toast.test.tsx

# CI mode
npm run test:ci
```
### Frontend Test Structure

```
frontend/
├── tests/
│   ├── components/        # Component tests
│   ├── hooks/             # Custom hook tests
│   └── unit/              # Utility function tests
│       ├── constants/
│       └── utils/
├── src/test/
│   ├── test-utils.tsx     # Custom render with providers
│   └── mocks/             # Mock utilities
├── vitest.config.ts       # Vitest configuration
└── vitest.setup.ts        # Test setup
```
### Frontend Test Statistics

| Metric | Value |
|---|---|
| Total Tests | 103 |
| Test Files | 6 |
| Duration | ~1.2s |
### Frontend Coverage

Coverage reports are generated at `frontend/tests/reports/coverage/`.

For detailed frontend testing documentation, see the Frontend Testing Guide, which covers:

- Writing component tests
- Testing hooks with fake timers
- Mocking axios requests
- Troubleshooting common issues
## Test Structure (Backend)

The backend test suite is organized into three main categories:

```
tests/
├── unit/                  # Unit tests (fast, isolated)
│   ├── services/          # Service layer tests
│   ├── api/               # API endpoint tests
│   ├── core/              # Core functionality tests
│   └── agents/            # Agent-specific tests
├── integration/           # Integration tests (slower)
│   ├── test_mongodb_connectivity.py
│   ├── test_search_integration.py
│   └── test_server_lifecycle.py
├── fixtures/              # Shared test fixtures
│   └── factories.py       # Factory functions for test data
├── conftest.py            # Shared pytest configuration
└── reports/               # Test reports and coverage data
```
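As a rough sketch of the factory pattern used by `tests/fixtures/factories.py` (the field names here are hypothetical, not the project's actual schema):

```python
# tests/fixtures/factories.py — sketch; real factories and fields may differ
import uuid


def make_server(**overrides) -> dict:
    """Build a server payload with sensible defaults for tests."""
    server = {
        "id": str(uuid.uuid4()),
        "name": "example-server",
        "url": "http://localhost:8000",
        "tags": ["test"],
    }
    server.update(overrides)  # each test overrides only the fields it cares about
    return server
```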
### Test File Organization

- **Unit tests**: Test individual components in isolation
  - Mock external dependencies
  - Fast execution (< 1 second per test)
  - High coverage of edge cases
- **Integration tests**: Test component interactions
  - May use real services (databases, files)
  - Moderate execution time (< 5 seconds per test)
  - Test realistic workflows
- **E2E tests**: Test complete user workflows
  - Test the entire system end-to-end
  - Slower execution (5-30 seconds per test)
  - Marked with `@pytest.mark.slow` (see the sketch after this list)
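As a rough illustration, an E2E test carrying these markers might look like this (the workflow and payload are invented for illustration, not taken from the actual suite):

```python
import pytest


@pytest.mark.e2e
@pytest.mark.slow
def test_server_registration_roundtrip():
    """Illustrative only: a real E2E test would drive the running gateway."""
    payload = {"name": "example-server", "url": "http://localhost:8000"}
    # A real test would register the server, poll its health endpoint,
    # and deregister it; here we only show the marker usage.
    assert payload["name"] == "example-server"
```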
## Running Tests

### Using Make Commands

The project includes convenient Make targets for running tests:

```bash
# Run all tests
make test

# Run only unit tests (fast)
make test-unit

# Run only integration tests
make test-integration

# Run E2E tests
make test-e2e

# Run with coverage report
make test-coverage

# Run and open HTML coverage report
make test-coverage-html
```
### Using Pytest Directly

For more control, use pytest commands:

```bash
# Run all tests
uv run pytest

# Run tests with specific markers
uv run pytest -m unit          # Only unit tests
uv run pytest -m integration   # Only integration tests
uv run pytest -m "not slow"    # Skip slow tests

# Run tests in parallel (faster; requires pytest-xdist)
uv run pytest -n auto          # Auto-detect CPU count

# Run with verbose output
uv run pytest -v

# Show print statements
uv run pytest -s

# Run specific tests by keyword
uv run pytest -k "server"      # All tests with "server" in the name

# Stop on first failure
uv run pytest -x

# Run last failed tests
uv run pytest --lf

# Run failed tests first
uv run pytest --ff
```
### Integration Test Requirements

Integration and E2E tests may require:

- **Authentication tokens**: Generate tokens before running:

  ```bash
  ./keycloak/setup/generate-agent-token.sh admin-bot
  ./keycloak/setup/generate-agent-token.sh lob1-bot
  ./keycloak/setup/generate-agent-token.sh lob2-bot
  ```

- **Running services**: Ensure the Docker containers are running (see Troubleshooting below)
- **Environment variables**: Set any configuration values the tests read from the environment
## Test Categories

Tests are organized using pytest markers:

### Available Markers

- `@pytest.mark.unit` - Unit tests (fast, isolated)
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.e2e` - End-to-end tests
- `@pytest.mark.slow` - Slow tests (> 5 seconds)
- `@pytest.mark.auth` - Authentication/authorization tests
- `@pytest.mark.servers` - Server management tests
- `@pytest.mark.repositories` - Repository layer tests
- `@pytest.mark.search` - Search functionality tests
- `@pytest.mark.health` - Health monitoring tests
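Custom markers like these are normally registered under `[tool.pytest.ini_options]` so pytest does not warn about unknown marks; a sketch of what that registration might look like (the project's actual marker list may differ):

```toml
# pyproject.toml — illustrative marker registration
[tool.pytest.ini_options]
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests that exercise real services",
    "e2e: end-to-end tests",
    "slow: tests taking more than 5 seconds",
]
```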
### Running Tests by Marker

```bash
# Run only unit tests
uv run pytest -m unit

# Run integration tests
uv run pytest -m integration

# Run E2E tests
uv run pytest -m e2e

# Skip slow tests
uv run pytest -m "not slow"

# Run auth or agent tests
uv run pytest -m "auth or agents"

# Run integration tests, excluding slow ones
uv run pytest -m "integration and not slow"
```
## Coverage Requirements

The project maintains a 35% minimum code coverage threshold (configured in `pyproject.toml`).

### Checking Coverage

```bash
# Run tests with coverage report
uv run pytest --cov=registry --cov-report=term-missing

# Generate HTML coverage report
uv run pytest --cov=registry --cov-report=html

# Open HTML report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```
### Coverage Configuration

Coverage settings are configured in `pyproject.toml`:

```toml
[tool.pytest.ini_options]
addopts = [
    "--cov=registry",
    "--cov-report=term-missing",
    "--cov-report=html:htmlcov",
    "--cov-fail-under=35",
]
```
### What Gets Covered

Coverage includes:

- All source code in the `registry/` directory
- Excludes: tests, migrations, and `__init__.py` files
- Reports missing lines for easy identification
## CI/CD Integration

Tests run automatically in CI/CD pipelines on:

- Every pull request
- Every push to the main branch
- Nightly scheduled runs

### GitHub Actions

The project uses GitHub Actions for CI/CD. Test workflows are defined in:

```
.github/workflows/
├── registry-test.yml         # Backend registry tests
├── frontend-test.yml         # Frontend Vitest tests
├── auth-server-test.yml      # Auth server tests
└── metrics-service-test.yml  # Metrics service tests
```
### Pre-commit Hooks

Install pre-commit hooks to run tests before commits:

```bash
# Install pre-commit (using uv)
uv add --dev pre-commit

# Install hooks
uv run pre-commit install

# Run hooks manually
uv run pre-commit run --all-files
```
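Hooks are declared in `.pre-commit-config.yaml`. A minimal sketch that runs the fast unit tests before each commit (illustrative, not the project's actual hook configuration):

```yaml
# .pre-commit-config.yaml — illustrative local hook
repos:
  - repo: local
    hooks:
      - id: unit-tests
        name: run unit tests
        entry: make test-unit
        language: system
        pass_filenames: false
```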
## Troubleshooting

### Common Issues

#### 1. Token File Not Found

**Error**: `Token file not found: .oauth-tokens/admin-bot-token.json`

**Solution**: Generate authentication tokens:
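```bash
./keycloak/setup/generate-agent-token.sh admin-bot
```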
#### 2. Docker Containers Not Running

**Error**: `Cannot connect to gateway at http://localhost`

**Solution**: Start the Docker containers. The exact command depends on your compose setup; for example:
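```bash
# Assuming a standard Docker Compose setup at the repository root;
# adjust to your environment
docker compose up -d
```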
#### 3. Import Errors

**Error**: `ModuleNotFoundError: No module named 'registry'`

**Solution**: Ensure you're running tests through `uv run` so the project's environment is used:
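```bash
uv run pytest
```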
#### 4. Fixture Not Found

**Error**: `fixture 'some_fixture' not found`

**Solution**: Check that the fixture is defined in one of the following (a minimal example follows the list):

- `tests/conftest.py` (shared fixtures)
- The test file's own conftest.py
- An import from the fixtures module
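A minimal shared fixture in `tests/conftest.py` might look like this (the fixture name and payload are hypothetical):

```python
# tests/conftest.py — illustrative shared fixture
import pytest


@pytest.fixture
def sample_server() -> dict:
    """Any test in the suite can request this fixture by name."""
    return {"name": "example-server", "url": "http://localhost:8000"}
```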
#### 5. Slow Tests

**Issue**: Tests taking too long

**Solution**: Skip slow tests during development:
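```bash
uv run pytest -m "not slow"
```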
#### 6. Failed Async Tests

**Error**: `RuntimeError: Event loop is closed`

**Solution**: Check that async fixtures and tests are properly defined, as in the sketch below:
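A sketch assuming the suite uses pytest-asyncio (the fixture and its payload are illustrative):

```python
# Assumes pytest-asyncio is installed; names are illustrative
import asyncio

import pytest
import pytest_asyncio


@pytest_asyncio.fixture
async def fake_client():
    """Async fixtures need pytest-asyncio's decorator, not plain @pytest.fixture."""
    yield object()  # stand-in for an async client


@pytest.mark.asyncio
async def test_uses_async_fixture(fake_client):
    await asyncio.sleep(0)  # any awaited work runs on the managed event loop
    assert fake_client is not None
```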
#### 7. Coverage Too Low

**Error**: `FAIL Required test coverage of 35% not reached`

**Solution**: Add tests for uncovered code:

```bash
# Check which lines are missing
uv run pytest --cov=registry --cov-report=term-missing

# Generate detailed HTML report
uv run pytest --cov=registry --cov-report=html
open htmlcov/index.html
```
### Debug Mode

Run tests in debug mode for detailed output:

```bash
# Show print statements
uv run pytest -s

# Verbose output
uv run pytest -v

# Very verbose (shows fixtures)
uv run pytest -vv

# Show local variables on failure
uv run pytest -l

# Enter debugger on failure
uv run pytest --pdb
```
### Logging During Tests

Enable logging output:

```bash
# Show all logs
uv run pytest --log-cli-level=DEBUG

# Show only INFO and above
uv run pytest --log-cli-level=INFO

# Log to file
uv run pytest --log-file=tests/reports/test.log
```
## Additional Resources

### Backend (Python)
- Writing Tests Guide - How to write effective tests
- Test Maintenance Guide - Maintaining test suite health
- Pytest Documentation - Official pytest docs
- Coverage.py Documentation - Coverage tool docs
### Frontend (TypeScript/React)
- Frontend Testing Guide - Complete Vitest testing documentation
- Frontend README - Quick reference
- Vitest Documentation - Official Vitest docs
- React Testing Library - RTL docs
### Getting Help
If you encounter issues:
- Check this troubleshooting guide
- Review test output for error messages
- Check relevant documentation
- Ask in team chat or create an issue
## Summary
Key commands to remember:
### Backend (Python)

```bash
# Development workflow
make test-unit                     # Quick unit tests
make test-coverage                 # Full test run with coverage
uv run pytest -m "not slow"        # Skip slow tests

# Before committing
make test                          # Run all tests
uv run pre-commit run --all-files  # Run all checks

# Debugging
uv run pytest -v -s                # Verbose with prints
uv run pytest --pdb                # Debug on failure
```