This document explains how to run tests in the analyzer-lsp project, including unit tests, E2E tests, and the make test-all workflow.
- Quick Start
- Test Types
- Running Unit Tests
- Running E2E Tests
- Understanding make test-all
- Writing Tests
- Debugging Failed Tests
## Quick Start

```bash
# Run all unit tests
go test ./...

# Run all E2E tests
make test-all

# Run specific provider E2E tests
make test-java
make test-generic
make test-yaml

# Run just the analyzer integration test
make test-analyzer
```

## Test Types

### Unit Tests

Unit tests are Go test files (`*_test.go`) located throughout the codebase. They test individual functions and packages in isolation.
Location: Alongside source files in each package
Examples:
- `engine/engine_test.go`
- `parser/rule_parser_test.go`
- `provider/provider_test.go`
### E2E Tests

E2E tests validate that providers work correctly by running the full analyzer with real rules and comparing output against expected results.

Location: `external-providers/*/e2e-tests/`
Structure:

```
external-providers/
├── java-external-provider/e2e-tests/
│   ├── rule-example.yaml        # Java-specific test rules
│   ├── demo-output.yaml         # Expected output
│   └── provider_settings.json   # Provider configuration
├── generic-external-provider/e2e-tests/
│   ├── golang-e2e/
│   ├── python-e2e/
│   └── nodejs-e2e/
└── yq-external-provider/e2e-tests/
    ├── rule-example.yaml
    ├── demo-output.yaml
    └── provider_settings.json
```
### Integration Tests

Integration tests run the complete analyzer with all providers to validate multi-provider scenarios.

Location: Root-level test files

- `rule-example.yaml`
- `demo-output.yaml`
- `provider_pod_local_settings.json`
### Benchmarks

Performance benchmarks measure the speed of critical operations.

Location: `/benchmarks`

Running benchmarks:

```bash
go test -bench=. -benchmem ./benchmarks/...

# Java dependency index benchmark
make run-index-benchmark
```

## Running Unit Tests

Run all unit tests:

```bash
go test ./...
```

Run tests for a specific package:

```bash
go test ./engine/...
go test ./parser/...
go test ./provider/...
```

Run with verbose output:

```bash
go test -v ./...
```

Run with coverage:

```bash
go test -cover ./...

# Generate coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
```

Run a single test:

```bash
go test ./engine -run TestProcessRule
```

## Running E2E Tests

E2E tests use container-based providers to simulate real-world usage.

### Prerequisites
1. Build the images:

   ```bash
   make build-external
   ```

   This builds:

   - `localhost/analyzer-lsp:latest` - Main analyzer
   - `localhost/java-provider:latest` - Java provider
   - `localhost/generic-provider:latest` - Go/Python/Node.js provider
   - `localhost/yq-provider:latest` - YAML provider

2. Ensure a container tool (e.g. Podman) is installed and running.
### Java Provider Tests

```bash
make test-java
```

What happens:

- Creates a `test-data` volume
- Copies Java example files to the volume
- Starts an `analyzer-java` pod with the Java provider container
- Runs the analyzer with Java-specific rules
- Compares output against `demo-output.yaml`
- Cleans up the pod and volume
Manual steps:

```bash
# Start provider
make run-java-provider-pod

# Run test
make run-demo-java

# Check logs if needed
podman logs java-provider

# Stop provider
make stop-java-provider-pod
```

### Generic Provider Tests (Go, Python, Node.js)

```bash
make test-generic
```

This runs tests for all three languages sequentially: `test-golang`, `test-python`, and `test-nodejs`.
Individual language tests:

```bash
make test-golang
make test-python
make test-nodejs
```

### YAML Provider Tests

```bash
make test-yaml
```

What happens:
- Starts YQ provider for YAML analysis
- Runs analyzer with YAML-specific rules
- Validates Kubernetes manifest detection
- Cleans up
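The Kubernetes manifest detection exercised here boils down to querying fields such as the top-level `kind` in a YAML document. The toy sketch below illustrates that idea with plain string scanning; it is not the YQ provider's actual query mechanism, and `kubernetesKind` is a hypothetical name:

```go
package main

import (
	"fmt"
	"strings"
)

// kubernetesKind extracts the top-level `kind:` value from a YAML
// manifest by scanning lines. A toy stand-in for the YQ provider's real
// query engine, just to show what the YAML rules check for.
func kubernetesKind(manifest string) (string, bool) {
	for _, line := range strings.Split(manifest, "\n") {
		// Only unindented lines are top-level keys.
		if strings.HasPrefix(line, "kind:") {
			return strings.TrimSpace(strings.TrimPrefix(line, "kind:")), true
		}
	}
	return "", false
}

func main() {
	manifest := "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: demo\n"
	if kind, ok := kubernetesKind(manifest); ok {
		fmt.Println("detected manifest kind:", kind) // detected manifest kind: Deployment
	}
}
```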
### All Provider Tests

```bash
make test-all-providers
```

Runs all provider-specific tests:
- Java
- Go
- Python
- Node.js
- YAML
### Integration Test

```bash
make test-analyzer
```

Runs the complete analyzer with all providers running simultaneously in a single pod.
## Understanding make test-all

The `make test-all` target is the comprehensive test suite that validates the entire system.

```bash
make test-all
```

Steps executed:

1. `test-all-providers` - Tests each provider individually:
   - `make test-java` - Java provider E2E tests
   - `make test-generic` - Generic provider E2E tests (Go, Python, Node.js)
   - `make test-yaml` - YAML provider E2E tests
2. `test-analyzer` - Full integration test:
   - `make run-external-providers-pod` - Start all providers in one pod
   - `make run-demo-image` - Run the analyzer with all providers
   - `make stop-external-providers-pod` - Clean up
### The E2E Test Pattern

Each E2E test follows this pattern:

1. **Setup Phase**

   ```bash
   # Create volume for test data
   podman volume create test-data

   # Copy example files to volume
   podman run --rm -v test-data:/target -v $(PWD)/examples:/src \
     --entrypoint=cp alpine -a /src/. /target/

   # Create pod
   podman pod create --name=analyzer-java

   # Start provider container
   podman run --pod analyzer-java --name java-provider -d \
     -v test-data:/analyzer-lsp/examples \
     localhost/java-provider:latest --port 14651
   ```

2. **Execution Phase**

   ```bash
   # Run analyzer in the same pod
   podman run --entrypoint /usr/local/bin/konveyor-analyzer \
     --pod=analyzer-java \
     -v test-data:/analyzer-lsp/examples \
     -v $(PWD)/external-providers/java-external-provider/e2e-tests/demo-output.yaml:/analyzer-lsp/output.yaml \
     -v $(PWD)/external-providers/java-external-provider/e2e-tests/provider_settings.json:/analyzer-lsp/provider_settings.json \
     -v $(PWD)/external-providers/java-external-provider/e2e-tests/rule-example.yaml:/analyzer-lsp/rule-example.yaml \
     localhost/analyzer-lsp:latest \
     --output-file=/analyzer-lsp/output.yaml \
     --rules=/analyzer-lsp/rule-example.yaml \
     --provider-settings=/analyzer-lsp/provider_settings.json
   ```

3. **Cleanup Phase**

   ```bash
   # Kill and remove pod
   podman pod kill analyzer-java
   podman pod rm analyzer-java

   # Remove volume
   podman volume rm test-data
   ```
Results are verified with git: after a run, `git diff` shows any divergence from the committed expected output. This workflow also lets you regenerate the expected results and commit them when behavior intentionally changes.
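Conceptually, the git-based check boils down to comparing the freshly written output against the committed file, ignoring insignificant whitespace. A minimal sketch of that idea; `sameOutput` is a hypothetical helper, not analyzer-lsp code:

```go
package main

import (
	"fmt"
	"strings"
)

// sameOutput reports whether two analyzer output files are equivalent,
// ignoring trailing whitespace, blank lines, and line-ending differences.
// Loosely mirrors what a `git diff` of the expected-output file surfaces;
// illustrative only.
func sameOutput(expected, actual string) bool {
	norm := func(s string) string {
		var lines []string
		for _, l := range strings.Split(strings.ReplaceAll(s, "\r\n", "\n"), "\n") {
			l = strings.TrimRight(l, " \t")
			if l != "" {
				lines = append(lines, l)
			}
		}
		return strings.Join(lines, "\n")
	}
	return norm(expected) == norm(actual)
}

func main() {
	expected := "- name: konveyor-analysis\n  violations: {}\n"
	actual := "- name: konveyor-analysis\n  violations: {}   \n\n"
	fmt.Println(sameOutput(expected, actual)) // prints true: only whitespace differs
}
```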
## Writing Tests

Each E2E test requires three files:

### rule-example.yaml

Defines rules specific to the provider being tested.
Example (Java):

```yaml
- ruleID: java-servlet-reference
  when:
    java.referenced:
      pattern: "javax.servlet.*"
  message: "Found reference to Java Servlet API"
  effort: 3
  category: mandatory
```

### demo-output.yaml

Expected output from the analyzer. This is what the test validates against.
Example structure:

```yaml
- name: konveyor-analysis
  violations:
    java-servlet-reference:
      description: "..."
      category: mandatory
      incidents:
      - uri: "file:///analyzer-lsp/examples/java/..."
        message: "Found reference to Java Servlet API"
        lineNumber: 42
```

### provider_settings.json

Provider configuration for the test.
Example (Java):
[
{
"name": "java",
"address": "127.0.0.1:14651",
"initConfig": [
{
"location": "/analyzer-lsp/examples/java",
"analysisMode": "full",
"providerSpecificConfig": {
"lspServerPath": "/jdtls/bin/jdtls",
"bundles": "/jdtls/java-analyzer-bundle/java-analyzer-bundle.core/target/java-analyzer-bundle.core-1.0.0-SNAPSHOT.jar"
}
}
]
}
]Unit tests follow standard Go testing conventions:
```go
// engine/engine_test.go
package engine

import "testing"

func TestProcessRule(t *testing.T) {
	// Setup
	rule := Rule{
		RuleID: "test-rule",
		When:   SimpleCondition{...},
	}

	// Execute
	result, err := processRule(ctx, rule, ruleCtx, logger)

	// Assert
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if !result.Matched {
		t.Error("expected rule to match")
	}
}
```

### Adding E2E Tests

To add a new E2E test for a provider:
1. Add a rule to `e2e-tests/rule-example.yaml`:

   ```yaml
   - ruleID: my-new-test
     when:
       java.referenced:
         pattern: "my.package.*"
     message: "Test message"
     effort: 1
   ```

2. Add the expected output to `e2e-tests/demo-output.yaml`:

   ```yaml
   - name: konveyor-analysis
     violations:
       my-new-test:
         description: "..."
         incidents:
         - uri: "file:///analyzer-lsp/examples/..."
           message: "Test message"
   ```

3. Run the test:

   ```bash
   make test-java  # or the appropriate provider
   ```
### Writing Benchmarks

```go
// benchmarks/rule_bench_test.go
package benchmarks

import (
	"context"
	"testing"
)

func BenchmarkRuleProcessing(b *testing.B) {
	// Setup
	engine := CreateRuleEngine(...)
	rules := loadTestRules()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		engine.RunRules(context.Background(), rules)
	}
}
```

## Debugging Failed Tests

### Unit Test Failures
1. Run the test with verbose output:

   ```bash
   go test -v ./engine -run TestFailingTest
   ```

2. Add debug logging:

   ```go
   t.Logf("Debug info: %+v", someValue)
   ```

3. Use a debugger (for full setup, see the Debugging documentation):

   ```bash
   # Install delve
   go install github.com/go-delve/delve/cmd/dlv@latest

   # Debug test
   dlv test ./engine -- -test.run TestFailingTest
   ```

### E2E Test Failures
1. Check provider logs:

   ```bash
   podman logs java-provider
   podman logs golang-provider
   ```

2. Inspect the actual output:

   ```bash
   # The output is written to the mounted demo-output.yaml
   cat external-providers/java-external-provider/e2e-tests/demo-output.yaml
   ```

3. Run the provider manually:

   ```bash
   # Start provider pod
   make run-java-provider-pod

   # Don't stop it - inspect while running
   podman exec -it java-provider /bin/sh

   # When done
   make stop-java-provider-pod
   ```

4. Compare expected vs. actual output:

   ```bash
   git diff main -- <path_to_output_file>
   ```

5. Check analyzer logs:

   ```bash
   # Analyzer runs in a container, so check its output
   # You may need to run it with additional verbosity
   # Edit the Makefile temporarily to add --log-level=9
   make run-demo-java
   ```
### Common Issues

**Pod already exists:**

```bash
# Error: pod already exists
make stop-java-provider-pod  # Clean up old pod
make test-java               # Try again
```

**Volume already exists:**

```bash
# Error: volume test-data already exists
podman volume rm test-data
make test-java
```

**Provider connection problems:**

```bash
# Check if port is already in use
netstat -an | grep 14651

# Check provider container logs
podman logs java-provider
```

**Output mismatch:**

Common causes:
- File paths differ (check URI formatting)
- Line numbers off by one (check line counting logic)
- Extra/missing incidents (check rule conditions)
- Order of incidents changed (output is sorted)
Fix:
- Review actual vs expected output carefully
- If the actual output is correct, update `demo-output.yaml`
- If the actual output is wrong, debug the provider or rule
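For ordering problems specifically, it can help to normalize incident order before eyeballing a diff. A sketch of that idea, assuming a simplified `Incident` shape (the real output schema carries more fields):

```go
package main

import (
	"fmt"
	"sort"
)

// Incident is a simplified stand-in for an analyzer incident; the real
// output schema has more fields.
type Incident struct {
	URI        string
	LineNumber int
}

// sortIncidents orders incidents by URI, then line number, giving the
// deterministic ordering a comparison against expected output relies on.
func sortIncidents(incidents []Incident) {
	sort.Slice(incidents, func(i, j int) bool {
		if incidents[i].URI != incidents[j].URI {
			return incidents[i].URI < incidents[j].URI
		}
		return incidents[i].LineNumber < incidents[j].LineNumber
	})
}

func main() {
	incidents := []Incident{
		{URI: "file:///b.java", LineNumber: 10},
		{URI: "file:///a.java", LineNumber: 42},
		{URI: "file:///a.java", LineNumber: 7},
	}
	sortIncidents(incidents)
	for _, inc := range incidents {
		fmt.Println(inc.URI, inc.LineNumber)
	}
}
```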
## Test Data

Test data is located in `/examples`:

```
examples/
├── java/      # Java test projects
├── golang/    # Go test projects
├── python/    # Python test projects
├── nodejs/    # Node.js test projects
├── yaml/      # YAML/K8s manifests
└── builtin/   # Test files for builtin provider
```

Providers may have additional examples:

```
external-providers/java-external-provider/examples/
```
## Continuous Integration

The test suite is designed to run in CI/CD pipelines:

```bash
# Full test suite for CI
make test-all
```

This runs:
- All provider E2E tests
- Full integration test
- Validates all providers work independently and together
## Test Coverage

To measure test coverage across the project:

```bash
# Generate coverage for all packages
go test -coverprofile=coverage.out ./...

# View coverage in browser
go tool cover -html=coverage.out

# View coverage by package
go tool cover -func=coverage.out
```

## Running Benchmarks

```bash
# Run all benchmarks
go test -bench=. -benchmem ./benchmarks/...

# Run specific benchmark
go test -bench=BenchmarkRuleProcessing -benchmem ./benchmarks/...

# Run with a longer benchtime
go test -bench=. -benchtime=10s ./benchmarks/...
```

Special benchmark for Java dependency indexing:

```bash
make run-index-benchmark
```

## Best Practices
1. Always run tests before committing:

   ```bash
   go test ./...
   ```

2. Add tests for new features:
   - Unit tests for new functions
   - E2E tests for new provider capabilities

3. Update E2E expected output when behavior changes:
   - Review changes carefully
   - Update the `demo-output.yaml` files

4. Clean up test resources:
   - Make targets handle cleanup
   - Don't leave pods/volumes running

5. Use descriptive test names:

   ```go
   func TestRuleEngineProcessesTaggingRulesFirst(t *testing.T)
   ```

6. Test error cases:

   ```go
   func TestRuleParserHandlesMalformedYAML(t *testing.T)
   ```
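As an illustration of the error-case practice above, here is a self-contained sketch. `validateRule` is a hypothetical stand-in for real parsing logic, but the shape — feed bad input, require a non-nil error — is what the convention asks for:

```go
package main

import (
	"errors"
	"fmt"
)

// Rule mirrors the minimal fields used in this document's examples.
type Rule struct {
	RuleID  string
	Message string
}

// validateRule is a hypothetical validator standing in for real parsing
// logic; it rejects rules that are missing a ruleID.
func validateRule(r Rule) error {
	if r.RuleID == "" {
		return errors.New("rule is missing ruleID")
	}
	return nil
}

func main() {
	// The error case: malformed input must produce a non-nil error.
	if err := validateRule(Rule{Message: "no id"}); err != nil {
		fmt.Println("rejected:", err)
	}
	// The happy path still passes.
	if err := validateRule(Rule{RuleID: "ok-rule"}); err == nil {
		fmt.Println("accepted: ok-rule")
	}
}
```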
## Related Documentation

- Development Setup - Set up your development environment
- Provider Development - Build new providers
- Architecture - Understand the codebase structure