---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Testing AI features
---
This document highlights AI-specific testing considerations that complement GitLab's standard testing guidelines. It focuses on the challenges AI features bring to testing, such as non-deterministic responses from third-party providers, and includes examples for each testing level.
AI-powered features depend on system components outside the GitLab monolith, such as the AI Gateway and IDE extensions. In addition to these guidelines, consult any testing guidelines documented in each component project.
## Unit testing
Follow standard unit testing guidelines. For AI features, always mock third-party AI provider calls to ensure fast, reliable tests.
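The mocking approach can be sketched in plain Ruby. The class names below (`CodeCompletionTask`, `FakeAiClient`) are illustrative, not the real GitLab classes: the point is that the unit under test receives a stand-in for the third-party provider, so the test runs instantly and deterministically.

```ruby
# Hypothetical names for illustration only; in real GitLab specs this
# would typically use RSpec doubles instead of a hand-written fake.

# A feature class that depends on an injected AI client.
class CodeCompletionTask
  def initialize(client)
    @client = client
  end

  def complete(prefix)
    response = @client.complete(prefix: prefix)
    response.fetch(:choices).first
  end
end

# A fake client standing in for the third-party provider: it returns a
# canned payload with no network call.
class FakeAiClient
  def complete(prefix:)
    { choices: ["def hello\n  'world'\nend"] }
  end
end

task = CodeCompletionTask.new(FakeAiClient.new)
suggestion = task.complete('def hello')
```

Because the fake is injected rather than patched in globally, the same test setup works regardless of which provider backs the feature in production.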
### Unit test examples

- GitLab: `ee/spec/lib/code_suggestions/tasks/code_completion_spec.rb`
- VS Code extension: `code_suggestions/code_suggestions.test.ts`
## Integration tests
Use integration tests to verify request construction and response handling for AI providers. Mock AI provider responses to ensure predictable, fast tests that handle various responses, errors, and status codes.
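As a minimal sketch of what such a test exercises, the response handler below maps stubbed provider status codes to outcomes. `SuggestionResponseHandler` is a hypothetical name; real specs would stub the HTTP layer (for example with WebMock) rather than call the handler directly.

```ruby
require 'json'

# Hypothetical sketch: the logic an integration test would drive with
# stubbed provider responses covering success, throttling, and outages.
class SuggestionResponseHandler
  def handle(status:, body:)
    case status
    when 200
      # Success path: parse the provider payload.
      JSON.parse(body).fetch('choices', [])
    when 429
      # Provider throttled us; the caller should retry later.
      { error: :rate_limited, retry: true }
    when 500..599
      # Provider outage; also retryable.
      { error: :provider_unavailable, retry: true }
    else
      { error: :unexpected_status, retry: false }
    end
  end
end

handler = SuggestionResponseHandler.new
ok = handler.handle(status: 200, body: '{"choices": ["puts 1"]}')
throttled = handler.handle(status: 429, body: '')
```

Stubbing each status code keeps every branch of the error handling covered without depending on a live provider.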
### Integration test examples

- GitLab: `ee/spec/requests/api/code_suggestions_spec.rb`
- VS Code extension: `main/test/integration/chat.test.js`
## Frontend feature tests
Use frontend feature tests to validate AI features from an end-user perspective. Mock AI providers to maintain speed and reliability. Focus on happy paths with selective negative path testing for high-risk scenarios.
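The shape of such a test can be sketched as follows. `ChatView`, `FakeChatBackend`, and `FailingChatBackend` are hypothetical names, not real GitLab classes: the happy path uses a fake that returns a canned reply, and the high-risk negative path uses a fake that simulates a provider failure.

```ruby
# Hypothetical sketch: drive the feature through its user-facing surface
# while the AI backend is replaced by a fake.
class ChatView
  attr_reader :messages

  def initialize(backend)
    @backend = backend
    @messages = []
  end

  def ask(question)
    @messages << @backend.answer(question)
  rescue StandardError
    # Negative path: show a friendly error instead of crashing.
    @messages << 'Something went wrong. Please try again.'
  end
end

# Happy path: canned reply, no network.
class FakeChatBackend
  def answer(_question)
    'Here is how to create a merge request...'
  end
end

# Negative path: simulate a provider outage.
class FailingChatBackend
  def answer(_question)
    raise 'provider unavailable'
  end
end
```

A real feature spec would assert on the rendered page rather than on `messages`, but the structure — one happy-path scenario plus a targeted failure scenario — is the same.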
### Frontend feature test example

- GitLab Duo Chat: `ee/spec/features/duo_chat_spec.rb`
## End-to-end testing
Use end-to-end tests sparingly to verify AI features work with real provider responses. Key considerations:
- Keep tests minimal due to slower execution and potential provider outages.
- Account for non-deterministic AI responses in test design. For example, use deterministic assertions on controlled elements like chatbot names, not AI-generated content.
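The assertion strategy in the second point can be sketched as a small helper. `assert_chat_turn` and the hash keys are hypothetical, but the principle is real: assert on controlled, deterministic elements and never on the exact AI-generated wording.

```ruby
# Hypothetical sketch of a live E2E assertion helper: check the
# controlled parts of a chat turn, not the non-deterministic reply text.
def assert_chat_turn(rendered_turn)
  # Deterministic: the UI always labels replies with the bot's name.
  raise 'missing bot label' unless rendered_turn[:author] == 'GitLab Duo Chat'
  # Deterministic: some reply text must exist...
  raise 'empty reply' if rendered_turn[:text].to_s.strip.empty?
  # ...but never assert on the exact AI-generated wording here.
  true
end
```

This keeps the test stable across model and prompt changes, while still catching real regressions such as a missing or empty response.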
### E2E test examples

- GitLab: `specs/features/ee/browser_ui/3_create/web_ide/code_suggestions_in_web_ide_spec.rb`
- JetBrains: `test/kotlin/com/gitlab/plugin/e2eTest/tests/CodeSuggestionTest.kt`
## Live environment testing
- GitLab.com: We run minimal E2E tests continuously against staging and production environments. For example, Code Suggestions smoke tests.
- GitLab Self-Managed: We use the `gitlab-qa` orchestrator with AI Gateway scenarios to test AI features on self-managed installations.
## Exploratory testing
Perform exploratory testing before significant milestones to uncover bugs outside expected workflows and UX issues. This is especially important for AI features as they progress through experiment, beta, and GA phases.
## Dogfooding
We dogfood everything. This is especially important for AI features given the rapidly changing nature of the field. See the dogfooding process for details.