Testing Strategy - Audit Trail Platform (ATP)¶
Test-driven quality assurance — ATP's comprehensive testing strategy ensures reliability, security, compliance, and performance through automated unit tests, integration tests, BDD acceptance tests, contract tests, chaos tests, and performance validation at every stage of the development lifecycle.
📋 Documentation Generation Plan¶
This document will be generated in 18 cycles. Current progress:
| Cycle | Topics | Estimated Lines | Status |
|---|---|---|---|
| Cycle 1 | Testing Strategy Overview & Philosophy (1-2) | ~3,000 | ⏳ Not Started |
| Cycle 2 | Unit Testing Fundamentals (3-4) | ~3,500 | ⏳ Not Started |
| Cycle 3 | Unit Testing ATP Components (5-6) | ~4,000 | ⏳ Not Started |
| Cycle 4 | Integration Testing Strategy (7-8) | ~3,500 | ⏳ Not Started |
| Cycle 5 | Acceptance Testing with BDD (9-10) | ~4,000 | ⏳ Not Started |
| Cycle 6 | Architecture & Compliance Tests (11-12) | ~3,000 | ⏳ Not Started |
| Cycle 7 | Contract Testing (REST & Messages) (13-14) | ~3,500 | ⏳ Not Started |
| Cycle 8 | Database & Persistence Testing (15-16) | ~3,000 | ⏳ Not Started |
| Cycle 9 | Messaging & Event Testing (17-18) | ~3,500 | ⏳ Not Started |
| Cycle 10 | Security Testing (19-20) | ~3,000 | ⏳ Not Started |
| Cycle 11 | Performance & Load Testing (21-22) | ~4,000 | ⏳ Not Started |
| Cycle 12 | Chaos Engineering & Resilience (23-24) | ~3,000 | ⏳ Not Started |
| Cycle 13 | End-to-End (E2E) Testing (25-26) | ~3,000 | ⏳ Not Started |
| Cycle 14 | Test Data Management (27-28) | ~2,500 | ⏳ Not Started |
| Cycle 15 | Code Coverage & Quality Metrics (29-30) | ~3,000 | ⏳ Not Started |
| Cycle 16 | CI/CD Test Integration (31-32) | ~3,000 | ⏳ Not Started |
| Cycle 17 | Test Maintenance & Flakiness (33-34) | ~2,500 | ⏳ Not Started |
| Cycle 18 | Best Practices & Anti-Patterns (35-36) | ~3,000 | ⏳ Not Started |
Total Estimated Lines: ~58,000
Purpose & Scope¶
This document defines ATP's comprehensive testing strategy covering all test types, frameworks, patterns, and automation practices to ensure reliability, security, compliance, and performance of the Audit Trail Platform across the entire software development lifecycle.
Key Testing Objectives
- Reliability: Prevent defects from reaching production via comprehensive test coverage
- Security: Validate authentication, authorization, encryption, and vulnerability absence
- Compliance: Ensure GDPR, HIPAA, SOC 2, PCI DSS requirements are tested and validated
- Performance: Verify SLO/SLA targets under load, stress, and spike conditions
- Maintainability: Keep tests fast, deterministic, and easy to diagnose
- Traceability: Link tests to requirements, user stories, and acceptance criteria
Testing Frameworks & Tools
- MSTest: Unit and integration tests (.NET)
- Reqnroll (SpecFlow successor): BDD acceptance tests with Gherkin
- NetArchTest: Architecture and dependency rule enforcement
- FluentNHibernate.Testing: ORM mapping validation (PersistenceSpecification)
- MassTransit.Testing: Message consumer/publisher testing
- ASP.NET Core TestServer: In-process API integration testing
- k6: Performance and load testing
- Chaos Mesh: Chaos engineering (see chaos-drills.md)
- Coverlet: Code coverage collection
- SonarQube: Code quality and coverage gates
- OWASP Dependency-Check: Vulnerability scanning
- Pact: Contract testing (API and messages)
Detailed Cycle Plan¶
CYCLE 1: Testing Strategy Overview & Philosophy (~3,000 lines)¶
Topic 1: ATP Testing Philosophy¶
What will be covered:
- Testing Pyramid for ATP
          /\
         /  \        Manual Exploratory
        /____\
       /      \      E2E Tests (Reqnroll)
      /________\
     /          \    Integration Tests
    /____________\
   /              \  Unit Tests
  /________________\
- Test-Driven Development (TDD)
  - Red → Green → Refactor cycle
  - Write test first, then implementation
  - ATP requirement: All new features must have tests before code review
- Behavior-Driven Development (BDD)
  - Gherkin scenarios (Given/When/Then)
  - Executable specifications
  - Collaboration between product, dev, QA
- Shift-Left Testing
  - Test as early as possible in lifecycle
  - Unit tests run on every commit
  - Integration tests run in CI pipeline
  - Performance tests in staging before production
- Quality Gates Integration
  - All tests must pass to merge PR
  - ≥70% code coverage required (see quality-gates.md)
  - 0 critical/high security vulnerabilities
  - 0 architecture rule violations
- Testing Principles
  - Fast: Unit tests < 1 second each, suite < 5 minutes total
  - Isolated: No shared state, no test dependencies
  - Repeatable: Same input → same output (deterministic)
  - Self-Validating: Clear pass/fail (no manual interpretation)
  - Timely: Written close to production code (not as afterthought)
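In miniature, the Red → Green → Refactor loop looks like this (a sketch; `RetentionCalculator` is a hypothetical class used purely to illustrate the workflow, not part of ATP's documented API):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Step 1 (Red): write this test first. It fails (in fact, does not compile)
// because RetentionCalculator does not exist yet.
[TestClass]
public class RetentionCalculatorTests
{
    [TestMethod]
    public void Until_Should_AddRetentionDays_To_CreatedAt()
    {
        var createdAt = new DateTime(2025, 1, 1, 0, 0, 0, DateTimeKind.Utc);

        var until = RetentionCalculator.Until(createdAt, retentionDays: 2555);

        Assert.AreEqual(createdAt.AddDays(2555), until);
    }
}

// Step 2 (Green): write the minimal implementation that makes the test pass.
// Step 3 (Refactor): improve the code while the test stays green.
public static class RetentionCalculator
{
    public static DateTime Until(DateTime createdAtUtc, int retentionDays) =>
        createdAtUtc.AddDays(retentionDays);
}
```
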
Code Examples:
- Testing pyramid visualization
- TDD workflow example (test-first)
- BDD Gherkin scenario example

Diagrams:
- ATP testing pyramid
- Shift-left testing timeline
- Quality gates integration

Deliverables:
- Testing philosophy document
- Test strategy matrix (type, count, frequency)
- Quality gate alignment
Topic 2: Test Project Structure & Organization¶
What will be covered:
- ATP Test Project Layout
ConnectSoft.Audit.sln
├── src/
│ ├── ConnectSoft.Audit.Ingestion/
│ ├── ConnectSoft.Audit.Query/
│ └── ... (other services)
├── tests/
│ ├── ConnectSoft.Audit.UnitTests/
│ │ ├── DomainModel/
│ │ │ ├── AggregateTests/
│ │ │ │ ├── AuditRecordTests.cs
│ │ │ │ └── IntegrityBlockTests.cs
│ │ │ └── ValidatorTests/
│ │ │ ├── AppendCommandValidatorTests.cs
│ │ │ └── ExportRequestValidatorTests.cs
│ │ ├── PersistenceModel/
│ │ │ ├── NHibernateClassMappings/
│ │ │ │ ├── AuditRecordEntityMapTests.cs
│ │ │ │ └── IntegrityBlockEntityMapTests.cs
│ │ │ └── Repositories/
│ │ │ └── AuditRecordRepositoryTests.cs
│ │ ├── MessagingModel/
│ │ │ ├── Consumers/
│ │ │ │ └── AuditAcceptedEventConsumerTests.cs
│ │ │ └── Sagas/
│ │ │ └── ExportWorkflowSagaTests.cs
│ │ └── ServiceModel/
│ │ ├── Validators/
│ │ └── Mappers/
│ │
│ ├── ConnectSoft.Audit.IntegrationTests/
│ │ ├── Ingestion/
│ │ │ └── IngestionEndpointTests.cs
│ │ ├── Query/
│ │ │ └── QueryEndpointTests.cs
│ │ ├── Messaging/
│ │ │ └── EventFlowIntegrationTests.cs
│ │ └── Persistence/
│ │ └── DatabaseIntegrationTests.cs
│ │
│ ├── ConnectSoft.Audit.AcceptanceTests/
│ │ ├── Features/
│ │ │ ├── IngestionWorkflow.feature
│ │ │ ├── QueryWorkflow.feature
│ │ │ ├── ExportWorkflow.feature
│ │ │ └── IntegrityVerification.feature
│ │ ├── Steps/
│ │ │ ├── IngestionSteps.cs
│ │ │ ├── QuerySteps.cs
│ │ │ └── ExportSteps.cs
│ │ └── Hooks/
│ │ └── BeforeAfterTestRunHooks.cs
│ │
│ ├── ConnectSoft.Audit.ArchitectureTests/
│ │ ├── LayeredArchitectureTests.cs
│ │ ├── DependencyRuleTests.cs
│ │ └── NamingConventionTests.cs
│ │
│ ├── ConnectSoft.Audit.ContractTests/
│ │ ├── RestAPI/
│ │ │ ├── IngestionAPIContractTests.cs
│ │ │ └── QueryAPIContractTests.cs
│ │ └── Messages/
│ │ ├── AuditAcceptedEventContractTests.cs
│ │ └── ProjectionUpdatedEventContractTests.cs
│ │
│ └── ConnectSoft.Audit.PerformanceTests/
│ ├── k6/
│ │ ├── load-test-ingestion.js
│ │ ├── stress-test-query.js
│ │ └── spike-test-export.js
│ └── NBomber/
│ └── IngestionLoadTest.cs
- Test Project Naming Conventions
  - UnitTests: {Service}.UnitTests
  - IntegrationTests: {Service}.IntegrationTests
  - AcceptanceTests: {Service}.AcceptanceTests
  - ArchitectureTests: {Service}.ArchitectureTests
  - ContractTests: {Service}.ContractTests
  - PerformanceTests: {Service}.PerformanceTests
- Test File Naming
  - Test class: {ClassUnderTest}Tests.cs
  - Feature file: {Feature Name}.feature
  - Step definition: {Feature}StepDefinitions.cs
- Test Method Naming
  - Pattern: {MethodUnderTest}_Should_{ExpectedBehavior}_When_{Condition}
  - Example: AppendAuditRecord_Should_ReturnSuccess_When_ValidInput
  - Or: {MethodUnderTest}Should{ExpectedBehavior}When{Condition} (PascalCase)
Code Examples:
- Test project structure (complete folder tree)
- Test class template (MSTest)
- Test naming examples (good vs. bad)

Diagrams:
- Test project organization
- Test pyramid with ATP coverage targets

Deliverables:
- Test project structure template
- Naming convention guide
- Organization best practices
CYCLE 2: Unit Testing Fundamentals (~3,500 lines)¶
Topic 3: MSTest Framework Basics¶
What will be covered:
- MSTest Attributes
[TestClass] // Mark class as containing tests
[TestMethod] // Mark method as test
[TestInitialize] // Run before each test
[TestCleanup] // Run after each test
[ClassInitialize] // Run once before all tests in class
[ClassCleanup] // Run once after all tests in class
[TestCategory("Unit")] // Categorize tests
[Ignore("WIP")] // Temporarily disable test
[Timeout(5000)] // Test must complete in 5 seconds
[DataTestMethod] // Parameterized test
[DataRow(1, 2, 3)] // Test data row
[ExpectedException(typeof(...))] // Expect exception (legacy)
- Assertions
  Assert.IsTrue(condition);
  Assert.IsFalse(condition);
  Assert.AreEqual(expected, actual);
  Assert.AreNotEqual(notExpected, actual);
  Assert.IsNull(value);
  Assert.IsNotNull(value);
  Assert.ThrowsException<TException>(() => action());
  await Assert.ThrowsExceptionAsync<TException>(() => actionAsync());
  CollectionAssert.AreEqual(expected, actual);
  StringAssert.Contains(value, substring);
- Test Structure (AAA Pattern)
[TestMethod]
public void AuditRecord_Should_HaveValidTimestamps_When_Created()
{
    // Arrange
    var tenantId = "test-tenant";
    var action = "resource.created";
    var createdAt = DateTime.UtcNow;
    // Act
    var auditRecord = AuditRecord.Create(tenantId, action, createdAt);
    // Assert
    Assert.IsNotNull(auditRecord);
    Assert.AreEqual(tenantId, auditRecord.TenantId);
    Assert.AreEqual(action, auditRecord.Action);
    Assert.IsTrue(auditRecord.ObservedAt >= createdAt);
}
- Parameterized Tests
[DataTestMethod]
[DataRow("PUBLIC", DataClassification.Public)]
[DataRow("SENSITIVE", DataClassification.Sensitive)]
[DataRow("PII", DataClassification.PersonallyIdentifiableInformation)]
[DataRow("SECRET", DataClassification.Secret)]
public void ClassifyAuditRecord_Should_MapCorrectly(
    string input, DataClassification expected)
{
    // Arrange & Act
    var result = ClassificationMapper.Map(input);
    // Assert
    Assert.AreEqual(expected, result);
}
- Async Test Patterns
[TestMethod]
public async Task AppendAsync_Should_ReturnAuditRecordId_When_ValidInput()
{
    // Arrange
    var command = new AppendCommand { ... };
    // Act
    var result = await _service.AppendAsync(command, CancellationToken.None);
    // Assert
    Assert.IsNotNull(result);
    Assert.IsFalse(string.IsNullOrEmpty(result.AuditRecordId));
}
Code Examples:
- MSTest attribute usage (complete)
- AAA pattern test template
- Parameterized test examples
- Async test patterns
- Collection and string assertions

Diagrams:
- AAA pattern visualization
- Test lifecycle (Initialize → Test → Cleanup)

Deliverables:
- MSTest reference guide
- Test template library
- Assertion patterns catalog
Topic 4: Test Doubles (Mocks, Stubs, Fakes)¶
What will be covered:
- Mocking Frameworks
  - Moq (primary for ATP)
  - NSubstitute (alternative)
  - FakeItEasy (alternative)
- Mock vs. Stub vs. Fake
  - Mock: Verify interactions (method called, parameters)
  - Stub: Provide canned responses (return predefined data)
  - Fake: Working implementation (in-memory repository)
- Moq Basics
[TestMethod]
public async Task AppendService_Should_CallRepository_When_ValidInput()
{
    // Arrange
    var mockRepository = new Mock<IAuditRecordRepository>();
    var mockUnitOfWork = new Mock<IUnitOfWork>();
    var service = new IngestionService(
        mockRepository.Object,
        mockUnitOfWork.Object);
    var command = new AppendCommand
    {
        TenantId = "test",
        Action = "test.action"
    };
    // Act
    await service.AppendAsync(command, CancellationToken.None);
    // Assert
    mockRepository.Verify(
        r => r.InsertAsync(It.IsAny<AuditRecord>(), It.IsAny<CancellationToken>()),
        Times.Once);
    mockUnitOfWork.Verify(
        u => u.ExecuteTransactionalAsync(
            It.IsAny<Func<Task>>(),
            It.IsAny<CancellationToken>()),
        Times.Once);
}
- Setup and Returns
// Setup method to return value
mockPolicyClient
    .Setup(p => p.EvaluateAsync(It.IsAny<string>(), It.IsAny<CancellationToken>()))
    .ReturnsAsync(new PolicyDecision
    {
        Classification = DataClassification.Sensitive,
        RetentionDays = 2555
    });

// Setup method to throw exception
mockRepository
    .Setup(r => r.GetByIdAsync(It.IsAny<string>(), It.IsAny<CancellationToken>()))
    .ThrowsAsync(new NotFoundException("Record not found"));
- Verify Interactions
// Verify method called once
mock.Verify(m => m.Method(), Times.Once);

// Verify method never called
mock.Verify(m => m.Method(), Times.Never);

// Verify method called with specific parameters
mock.Verify(m => m.Method(
    It.Is<string>(s => s.StartsWith("tenant-")),
    It.IsAny<CancellationToken>()),
    Times.Once);
- In-Memory Fakes
  - In-memory repository implementation for tests
  - In-memory event bus (MassTransit in-memory transport)
  - Fake time provider (deterministic timestamps)
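A deterministic clock fake along those lines might look like this (a minimal sketch; `ITimeProvider` and its member names are assumptions for illustration, not ATP's actual abstraction):

```csharp
using System;

// Hypothetical time abstraction: production code asks this interface for the
// current time instead of calling DateTime.UtcNow directly.
public interface ITimeProvider
{
    DateTime UtcNow { get; }
}

// Production implementation delegates to the system clock.
public sealed class SystemTimeProvider : ITimeProvider
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test fake: time only moves when the test says so, which makes
// timestamp-dependent assertions fully deterministic.
public sealed class FakeTimeProvider : ITimeProvider
{
    private DateTime _now;

    public FakeTimeProvider(DateTime start) => _now = start;

    public DateTime UtcNow => _now;

    // Advance the clock explicitly from the test.
    public void Advance(TimeSpan delta) => _now = _now.Add(delta);
}
```

A test can then construct the fake at a fixed instant, exercise the code under test, advance the clock, and assert on exact timestamps.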
Code Examples:
- Mock setup and verify (complete examples)
- Stub implementations
- In-memory fake repository
- Fake time provider pattern

Diagrams:
- Mock vs. Stub vs. Fake comparison
- Moq setup/verify flow

Deliverables:
- Mocking strategy guide
- Test double patterns
- Fake implementations
CYCLE 3: Unit Testing ATP Components (~4,000 lines)¶
Topic 5: Domain Model Unit Tests¶
What will be covered:
- Aggregate Root Tests
[TestClass]
public class AuditRecordTests
{
[TestMethod]
public void Create_Should_GenerateULID_When_NoIdProvided()
{
// Arrange
var tenantId = "test-tenant";
var action = "resource.created";
// Act
var auditRecord = AuditRecord.Create(tenantId, action);
// Assert
Assert.IsNotNull(auditRecord.AuditRecordId);
Assert.AreEqual(26, auditRecord.AuditRecordId.Length); // ULID length
}
[TestMethod]
public void Create_Should_ThrowException_When_TenantIdNull()
{
// Arrange & Act & Assert
Assert.ThrowsException<ArgumentNullException>(() =>
AuditRecord.Create(null, "action"));
}
[TestMethod]
public void ApplyClassification_Should_UpdateClassification_When_Valid()
{
// Arrange
var auditRecord = AuditRecord.Create("tenant", "action");
var classification = DataClassification.PersonallyIdentifiableInformation;
// Act
auditRecord.ApplyClassification(classification, policyVersion: 1);
// Assert
Assert.AreEqual(classification, auditRecord.Classification);
Assert.AreEqual(1, auditRecord.PolicyVersion);
}
[TestMethod]
public void ApplyRetention_Should_CalculateRetentionDate_When_PolicyApplied()
{
// Arrange
var auditRecord = AuditRecord.Create("tenant", "action");
var retentionDays = 2555; // 7 years
var expectedRetentionDate = auditRecord.CreatedAt.AddDays(retentionDays);
// Act
auditRecord.ApplyRetention("rp-2025-01", retentionDays);
// Assert
Assert.IsNotNull(auditRecord.RetentionUntil);
Assert.AreEqual(expectedRetentionDate.Date, auditRecord.RetentionUntil.Value.Date);
}
}
- Value Object Tests
  - Test immutability
  - Test equality (Equals, GetHashCode)
  - Test validation rules
  - Example: CorrelationContext, ResourceRef, ActorInfo
- Domain Event Tests
  - Test event creation
  - Test event properties
  - Test event serialization/deserialization
- Domain Service Tests
  - Test business logic
  - Mock dependencies (repositories, policies)
  - Test invariant enforcement
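The value-object expectations above (immutability, value equality, validation) can be sketched with a C# record; `ResourceRef` here is an illustrative shape, not necessarily ATP's actual type:

```csharp
using System;

// Illustrative value object: a sealed record gives value-based
// Equals/GetHashCode and no setters, which is exactly what the equality
// and immutability tests assert.
public sealed record ResourceRef
{
    public string ResourceType { get; }
    public string ResourceId { get; }

    public ResourceRef(string resourceType, string resourceId)
    {
        // Validation rules are part of the value object's contract.
        if (string.IsNullOrWhiteSpace(resourceType))
            throw new ArgumentException("ResourceType is required", nameof(resourceType));
        if (string.IsNullOrWhiteSpace(resourceId))
            throw new ArgumentException("ResourceId is required", nameof(resourceId));
        ResourceType = resourceType;
        ResourceId = resourceId;
    }
}
```

Tests then assert that two instances built from the same values are equal, share a hash code, and that invalid input is rejected at construction.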
Code Examples:
- Complete aggregate root test suite (AuditRecord)
- Value object tests (equality, immutability)
- Domain event tests
- Domain service tests with mocks

Diagrams:
- Aggregate test coverage map
- Domain model test hierarchy

Deliverables:
- Domain model test suites
- Aggregate test templates
- Value object test patterns
Topic 6: Validator & Business Rule Tests¶
What will be covered:
- FluentValidation Tests
[TestClass]
public class AppendCommandValidatorTests
{
private readonly AppendCommandValidator _validator = new();
[TestMethod]
public void Validator_Should_Pass_When_AllFieldsValid()
{
// Arrange
var command = new AppendCommand
{
TenantId = "valid-tenant",
Action = "resource.created",
ResourceType = "Order",
ResourceId = "12345",
ActorId = "user-123",
Payload = new { field = "value" }
};
// Act
var result = _validator.Validate(command);
// Assert
Assert.IsTrue(result.IsValid);
Assert.AreEqual(0, result.Errors.Count);
}
[TestMethod]
public void Validator_Should_Fail_When_TenantIdMissing()
{
// Arrange
var command = new AppendCommand { Action = "test" };
// Act
var result = _validator.Validate(command);
// Assert
Assert.IsFalse(result.IsValid);
Assert.IsTrue(result.Errors.Any(e => e.PropertyName == "TenantId"));
}
[DataTestMethod]
[DataRow("")]
[DataRow(" ")]
[DataRow(null)]
public void Validator_Should_Fail_When_ActionInvalid(string action)
{
// Arrange
var command = new AppendCommand
{
TenantId = "tenant",
Action = action
};
// Act
var result = _validator.Validate(command);
// Assert
Assert.IsFalse(result.IsValid);
}
}
- Business Rule Tests
  - Test retention policy application
  - Test classification rules
  - Test legal hold enforcement
  - Test data residency rules
- Invariant Enforcement Tests
  - Test aggregate invariants
  - Test immutability constraints
  - Test state transition rules
Code Examples:
- FluentValidation test suite (complete)
- Business rule tests
- Invariant enforcement tests
- Edge case scenarios

Diagrams:
- Validation flow
- Business rule test matrix

Deliverables:
- Validator test suites
- Business rule test catalog
- Edge case test scenarios
CYCLE 4: Integration Testing Strategy (~3,500 lines)¶
Topic 7: ASP.NET Core Integration Testing¶
What will be covered:
- WebApplicationFactory Pattern
[TestClass]
public class IngestionEndpointTests
{
private static WebApplicationFactory<Startup> _factory;
private static HttpClient _client;
[ClassInitialize]
public static void ClassInitialize(TestContext context)
{
_factory = new WebApplicationFactory<Startup>()
.WithWebHostBuilder(builder =>
{
builder.ConfigureServices(services =>
{
// Replace real dependencies with test doubles
services.RemoveAll<IAzureServiceBusClient>();
services.AddSingleton<IAzureServiceBusClient, InMemoryMessageBus>();
// Use in-memory database
services.RemoveAll<DbContext>();
services.AddDbContext<AuditDbContext>(options =>
options.UseInMemoryDatabase("TestDb"));
});
});
_client = _factory.CreateClient();
}
[TestMethod]
public async Task AppendAuditRecord_Should_Return202_When_ValidRequest()
{
// Arrange
var request = new AppendRequest
{
TenantId = "test-tenant",
Action = "resource.created",
ResourceType = "Order",
ResourceId = "12345",
Payload = new { field = "value" }
};
var content = new StringContent(
JsonSerializer.Serialize(request),
Encoding.UTF8,
"application/json");
// Act
var response = await _client.PostAsync("/api/v1/audit/append", content);
// Assert
Assert.AreEqual(HttpStatusCode.Accepted, response.StatusCode);
var result = await response.Content.ReadFromJsonAsync<AppendResponse>();
Assert.IsNotNull(result);
Assert.IsFalse(string.IsNullOrEmpty(result.AuditRecordId));
}
[ClassCleanup]
public static void ClassCleanup()
{
_client?.Dispose();
_factory?.Dispose();
}
}
- TestServer vs. Real Server
  - TestServer: In-process, faster, no network overhead
  - Real Server: Docker container, full E2E, slower
- Service Replacement
  - Replace external dependencies with test doubles
  - Replace real database with in-memory or test database
  - Replace message bus with in-memory transport
- Test Database Strategies
  - In-memory database (SQLite, EF Core InMemory)
  - Docker container (SQL Server, PostgreSQL)
  - Dedicated test database (shared or per-test)
Code Examples:
- WebApplicationFactory setup (complete)
- Service replacement patterns
- In-memory database setup
- Docker test database setup
- HTTP client test requests

Diagrams:
- WebApplicationFactory architecture
- Service replacement flow
- Test database strategies

Deliverables:
- Integration test setup guide
- WebApplicationFactory templates
- Test database strategies
Topic 8: Database Integration Tests¶
What will be covered:
- Docker SQL Server for Tests
# docker-compose.test.yml
version: '3.8'
services:
mssql:
image: mcr.microsoft.com/mssql/server:2022-latest
environment:
ACCEPT_EULA: Y
SA_PASSWORD: YourStrong!Passw0rd
ports:
- "1433:1433"
- FluentMigrator Test Setup
[ClassInitialize]
public static void ClassInitialize(TestContext context)
{
    var services = new ServiceCollection();
    var connectionString = "Server=localhost;Database=AuditTestDb;...";
    // Create database
    var dbHelper = new SqlServerDatabaseHelper();
    dbHelper.CreateIfNotExists(connectionString);
    // Run migrations
    services.AddFluentMigratorCore()
        .ConfigureRunner(rb => rb
            .AddSqlServer2016()
            .WithGlobalConnectionString(connectionString)
            .ScanIn(typeof(AuditMigration).Assembly)
            .For.Migrations());
    var serviceProvider = services.BuildServiceProvider();
    var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
    runner.MigrateUp();
}
- Transaction Rollback Pattern
[TestMethod]
public async Task Repository_Should_InsertAndRetrieve_When_ValidEntity()
{
    // Arrange
    using var transaction = _session.BeginTransaction();
    var repository = new AuditRecordRepository(_unitOfWork, _specLocator);
    var entity = CreateTestAuditRecord();
    try
    {
        // Act
        await repository.InsertAsync(entity, CancellationToken.None);
        var retrieved = await repository.GetByIdAsync(
            entity.AuditRecordId, CancellationToken.None);
        // Assert
        Assert.IsNotNull(retrieved);
        Assert.AreEqual(entity.TenantId, retrieved.TenantId);
    }
    finally
    {
        // Always rollback (don't persist test data)
        transaction.Rollback();
    }
}
- Test Data Cleanup
  - Rollback transactions (preferred)
  - Delete test data in [TestCleanup]
  - Recreate database per test class
  - Use unique identifiers (GUIDs, ULIDs)
Code Examples:
- Docker Compose for test database
- FluentMigrator test setup (complete)
- Transaction rollback pattern
- Test data cleanup strategies
- NHibernate integration test

Diagrams:
- Database integration test architecture
- Transaction rollback flow
- Test database lifecycle

Deliverables:
- Database integration test guide
- Docker Compose configurations
- Cleanup strategies
CYCLE 5: Acceptance Testing with BDD (~4,000 lines)¶
Topic 9: Reqnroll (SpecFlow) BDD Framework¶
What will be covered:
- Why BDD for ATP?
  - Living documentation (Gherkin scenarios)
  - Collaboration (product, dev, QA)
  - Executable specifications
  - Traceability to requirements
- Gherkin Syntax
Feature: Audit Record Ingestion
  As a system integrator
  I want to append audit records to ATP
  So that all user actions are captured for compliance

  Background:
    Given I have a valid tenant "acme"
    And I have a valid JWT token for tenant "acme"

  Scenario: Append audit record successfully
    Given I have a valid append request
      | Field        | Value         |
      | Action       | order.created |
      | ResourceType | Order         |
      | ResourceId   | 12345         |
      | ActorId      | user-123      |
    When I send the append request to ATP
    Then I should receive a 202 Accepted response
    And the response should contain a valid audit record ID
    And the audit record should be persisted in the database

  Scenario: Append audit record with PII classification
    Given I have an append request with PII data
      | Field        | Value                   |
      | Action       | patient.viewed          |
      | ResourceType | Patient                 |
      | ResourceId   | P-9981                  |
      | ActorId      | doctor-456              |
      | Payload      | { "ssn": "123-45-..." } |
    When I send the append request to ATP
    Then I should receive a 202 Accepted response
    And the audit record should be classified as "PII"
    And the retention policy should be "7 years"

  Scenario Outline: Invalid append requests
    Given I have an append request with <InvalidField>
    When I send the append request to ATP
    Then I should receive a <StatusCode> response
    And the error message should mention "<ErrorMessage>"

    Examples:
      | InvalidField   | StatusCode | ErrorMessage      |
      | Empty TenantId | 400        | TenantId required |
      | Empty Action   | 400        | Action required   |
      | Invalid JWT    | 401        | Unauthorized      |
- Step Definitions
[Binding]
public class IngestionSteps
{
    private readonly HttpClient _client;
    private AppendRequest _request;
    private HttpResponseMessage _response;

    public IngestionSteps()
    {
        _client = BeforeAfterTestRunHooks.ServerInstance.CreateClient();
    }

    [Given(@"I have a valid append request")]
    public void GivenValidAppendRequest(Table table)
    {
        _request = new AppendRequest
        {
            TenantId = "acme",
            Action = table.Rows[0]["Value"], // First row, "Action" column
            ResourceType = table.Rows[1]["Value"],
            ResourceId = table.Rows[2]["Value"],
            ActorId = table.Rows[3]["Value"]
        };
    }

    [When(@"I send the append request to ATP")]
    public async Task WhenSendAppendRequest()
    {
        var content = new StringContent(
            JsonSerializer.Serialize(_request),
            Encoding.UTF8,
            "application/json");
        _response = await _client.PostAsync("/api/v1/audit/append", content);
    }

    [Then(@"I should receive a (\d+) .* response")]
    public void ThenShouldReceiveStatusCode(int expectedStatusCode)
    {
        Assert.AreEqual(expectedStatusCode, (int)_response.StatusCode);
    }

    [Then(@"the response should contain a valid audit record ID")]
    public async Task ThenResponseShouldContainValidAuditRecordId()
    {
        var result = await _response.Content.ReadFromJsonAsync<AppendResponse>();
        Assert.IsNotNull(result);
        Assert.IsFalse(string.IsNullOrEmpty(result.AuditRecordId));
        Assert.AreEqual(26, result.AuditRecordId.Length); // ULID
    }
}
Code Examples:
- Complete Gherkin feature files (5+ scenarios)
- Step definition implementations
- Table parameter handling
- Scenario outline with examples

Diagrams:
- BDD workflow (write feature → implement steps → run tests)
- Gherkin syntax guide

Deliverables:
- BDD testing guide
- Feature file templates
- Step definition patterns
Topic 10: ATP Acceptance Test Scenarios¶
What will be covered:
- ATP Feature Files
  1. IngestionWorkflow.feature: Append, classify, store audit records
  2. QueryWorkflow.feature: Search, filter, paginate results
  3. ExportWorkflow.feature: Request export, generate package, download
  4. IntegrityVerification.feature: Verify hash chains, signatures
  5. PolicyManagement.feature: Create/update retention, classification policies
  6. MultiTenantIsolation.feature: Verify tenant isolation at all layers
  7. GdprRightToErasure.feature: Execute erasure, verify pseudonymization
  8. LegalHold.feature: Apply hold, verify retention override
- Scenario Coverage Matrix

| Feature | Happy Path | Error Cases | Edge Cases | Security | Performance |
|---|---|---|---|---|---|
| Ingestion Workflow | ✅ | ✅ | ✅ | ✅ | ❌ |
| Query Workflow | ✅ | ✅ | ✅ | ✅ | ❌ |
| Export Workflow | ✅ | ✅ | ✅ | ✅ | ❌ |
| Integrity Verification | ✅ | ✅ | ✅ | ✅ | ❌ |
| Policy Management | ✅ | ✅ | ⚠️ | ✅ | ❌ |
| Multi-Tenant Isolation | ✅ | ✅ | ✅ | ✅ | ❌ |
| GDPR Erasure | ✅ | ✅ | ✅ | ✅ | ❌ |
| Legal Hold | ✅ | ⚠️ | ⚠️ | ✅ | ❌ |
- Hooks & Setup
  - BeforeFeature: Start TestServer, setup database
  - AfterFeature: Stop TestServer, cleanup database
  - BeforeScenario: Reset state, clear caches
  - AfterScenario: Log results, capture screenshots (if UI)
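Wired up in Reqnroll, that hook set might look roughly like this (a sketch: `ServerInstance` mirrors the `BeforeAfterTestRunHooks.ServerInstance` used by the step definitions, while `TestDatabase` and its methods are hypothetical helpers):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Reqnroll;

[Binding]
public static class BeforeAfterTestRunHooks
{
    // Shared in-process server reused by all scenarios in a feature.
    public static WebApplicationFactory<Startup> ServerInstance { get; private set; }

    [BeforeFeature]
    public static async Task BeforeFeature()
    {
        ServerInstance = new WebApplicationFactory<Startup>();
        await TestDatabase.MigrateAsync(); // assumed helper: create schema
    }

    [BeforeScenario]
    public static async Task BeforeScenario()
    {
        await TestDatabase.ResetAsync(); // assumed helper: reset state, clear caches
    }

    [AfterScenario]
    public static void AfterScenario(ScenarioContext context)
    {
        // Log the outcome for traceability; screenshot capture would go here for UI tests.
        Console.WriteLine($"{context.ScenarioInfo.Title}: {context.ScenarioExecutionStatus}");
    }

    [AfterFeature]
    public static void AfterFeature() => ServerInstance?.Dispose();
}
```
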
Code Examples:
- Complete feature file (Ingestion Workflow)
- Step definitions for all scenarios
- Hooks implementation
- Test data builders

Diagrams:
- Feature coverage map
- Acceptance test execution flow

Deliverables:
- All ATP feature files
- Complete step definitions
- Hooks and setup code
CYCLE 6: Architecture & Compliance Tests (~3,000 lines)¶
Topic 11: Architecture Tests with NetArchTest¶
What will be covered:
- Why Architecture Tests?
  - Enforce architectural rules automatically
  - Prevent architectural erosion over time
  - Document architectural decisions as code
  - Fail builds when rules are violated
- NetArchTest Framework
[TestClass]
public class LayeredArchitectureTests
{
    [TestMethod]
    public void DomainLayer_Should_NotDependOn_Infrastructure()
    {
        // Arrange
        var domainAssembly = typeof(AuditRecord).Assembly;
        var result = Types.InAssembly(domainAssembly)
            .ShouldNot()
            .HaveDependencyOn("ConnectSoft.Audit.PersistenceModel")
            .GetResult();
        // Assert
        Assert.IsTrue(result.IsSuccessful,
            $"Domain should not depend on infrastructure: {string.Join(", ", result.FailingTypeNames)}");
    }

    [TestMethod]
    public void Controllers_Should_NotDirectlyReference_Repositories()
    {
        var result = Types.InAssembly(typeof(IngestionController).Assembly)
            .That().ResideInNamespace("ConnectSoft.Audit.ServiceModel")
            .And().HaveNameEndingWith("Controller")
            .ShouldNot()
            .HaveDependencyOn("ConnectSoft.Audit.PersistenceModel.Repositories")
            .GetResult();
        Assert.IsTrue(result.IsSuccessful);
    }

    [TestMethod]
    public void Aggregates_Should_BeSealed()
    {
        var result = Types.InAssembly(typeof(AuditRecord).Assembly)
            .That().ImplementInterface(typeof(IAggregateRoot))
            .Should().BeSealed()
            .GetResult();
        Assert.IsTrue(result.IsSuccessful);
    }
}
- ATP Architectural Rules
  - Domain layer has no infrastructure dependencies
  - Controllers don't directly reference repositories (use services/processors)
  - Aggregates are sealed (prevent inheritance)
  - Value objects are immutable (readonly properties)
  - Interfaces in abstractions layer only
  - No circular dependencies
- Naming Convention Tests
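Naming rules can be enforced in the same NetArchTest style; a sketch, where `AppendCommandValidator` and `AuditAcceptedEventConsumer` are taken from the project layout above and the exact rules are illustrative:

```csharp
using FluentValidation;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NetArchTest.Rules;

[TestClass]
public class NamingConventionTests
{
    [TestMethod]
    public void Validators_Should_HaveNameEndingWith_Validator()
    {
        // Every FluentValidation validator must follow the {Target}Validator convention.
        var result = Types.InAssembly(typeof(AppendCommandValidator).Assembly)
            .That().Inherit(typeof(AbstractValidator<>))
            .Should().HaveNameEndingWith("Validator")
            .GetResult();

        Assert.IsTrue(result.IsSuccessful);
    }

    [TestMethod]
    public void Consumers_Should_HaveNameEndingWith_Consumer()
    {
        // Message consumers live in the Consumers namespace and carry the suffix.
        var result = Types.InAssembly(typeof(AuditAcceptedEventConsumer).Assembly)
            .That().ResideInNamespace("ConnectSoft.Audit.MessagingModel.Consumers")
            .Should().HaveNameEndingWith("Consumer")
            .GetResult();

        Assert.IsTrue(result.IsSuccessful);
    }
}
```
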
Code Examples:
- NetArchTest suite (complete)
- Dependency rule tests
- Naming convention tests
- Immutability tests
- Architecture decision validation

Diagrams:
- Layered architecture with dependency rules
- Architecture test execution flow

Deliverables:
- Architecture test suite
- Architectural rule catalog
- Enforcement automation
Topic 12: Compliance Validation Tests¶
What will be covered:
- GDPR Compliance Tests
  - Test right to access (query all user data)
  - Test right to erasure (cryptographic erasure)
  - Test right to portability (export in standard format)
  - Test data minimization (PII redaction after N days)
  - Test breach notification (alert within 72 hours)
- HIPAA Compliance Tests
  - Test access controls (authentication, authorization)
  - Test audit trail completeness (all access logged)
  - Test encryption (at rest, in transit)
  - Test integrity controls (hash chains, signatures)
- SOC 2 Compliance Tests
  - Test logical access (authentication, MFA)
  - Test change management (immutable audit trail)
  - Test data backup (PITR, LTR verification)
  - Test incident response (DLQ monitoring, alerting)
- Retention Policy Tests
[TestMethod]
public async Task Retention_Should_MarkEligible_When_PeriodExpired()
{
    // Arrange
    var auditRecord = CreateTestRecord();
    auditRecord.RetentionUntil = DateTime.UtcNow.AddDays(-1); // Expired
    await _repository.InsertAsync(auditRecord);
    // Act
    await _retentionService.EvaluateRetentionAsync();
    // Assert
    var retrieved = await _repository.GetByIdAsync(auditRecord.AuditRecordId);
    Assert.AreEqual(RetentionStatus.Eligible, retrieved.RetentionStatus);
}
- Legal Hold Tests
  - Test hold application (prevents deletion)
  - Test hold release (allows deletion after retention)
  - Test hold audit trail (all operations logged)
Code Examples:
- GDPR compliance test suite
- HIPAA compliance test suite
- SOC 2 compliance test suite
- Retention policy tests
- Legal hold tests

Diagrams:
- Compliance test coverage matrix
- Retention lifecycle test scenarios

Deliverables:
- Compliance test suites
- Regulatory validation procedures
- Audit evidence generation
CYCLE 7: Contract Testing (REST & Messages) (~3,500 lines)¶
Topic 13: REST API Contract Testing¶
What will be covered:
- OpenAPI Contract Validation
  - Generate OpenAPI spec from code (Swashbuckle)
  - Validate requests/responses match spec
  - Detect breaking changes (openapi-diff)
  - Version compatibility testing
- Pact Contract Testing
[TestClass]
public class IngestionAPIContractTests
{
    [TestMethod]
    public async Task AppendEndpoint_Should_MatchContract()
    {
        // Consumer defines expected contract
        var pact = Pact.V3("QueryService", "IngestionService", _pactDir);
        pact.UponReceiving("a valid append request")
            .Given("tenant exists")
            .WithRequest(HttpMethod.Post, "/api/v1/audit/append")
            .WithHeader("Content-Type", "application/json")
            .WithJsonBody(new
            {
                tenantId = "test-tenant",
                action = "order.created",
                resourceType = "Order",
                resourceId = "12345"
            })
            .WillRespond()
            .WithStatus(202)
            .WithHeader("Content-Type", "application/json")
            .WithJsonBody(new
            {
                auditRecordId = Match.Regex("[0-9A-Z]{26}", "01JE...")
            });
        await pact.VerifyAsync(async ctx =>
        {
            var client = new HttpClient { BaseAddress = ctx.MockServerUri };
            var response = await client.PostAsJsonAsync("/api/v1/audit/append", new { ... });
            Assert.AreEqual(HttpStatusCode.Accepted, response.StatusCode);
        });
    }
}
- Schema Evolution Testing
- Test backward compatibility (old clients with new API)
- Test forward compatibility (new clients with old API)
- Version negotiation testing
- ATP API Contract Matrix
| API | Consumer | Provider | Contract File |
|---|---|---|---|
| Ingestion API | All producers | Ingestion Svc | ingestion-v1.json |
| Query API | Query clients | Query Service | query-v1.json |
| Export API | Export tools | Export Service | export-v1.json |
| Policy API | Admin tools | Policy Service | policy-v1.json |
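Tools such as openapi-diff automate breaking-change detection; the core comparison they perform can be sketched in a few lines. The sketch below is a language-neutral JavaScript illustration (ATP's tooling is .NET), not the openapi-diff API: a removed response property breaks old consumers, and a newly required request field breaks old clients.

```javascript
// Illustrative sketch: schemas are plain JSON-Schema-like objects.
function findBreakingChanges(oldSchema, newSchema) {
  const breaks = [];
  // A property removed from the schema breaks old consumers that read it.
  for (const prop of Object.keys(oldSchema.properties ?? {})) {
    if (!(prop in (newSchema.properties ?? {}))) {
      breaks.push(`property removed: ${prop}`);
    }
  }
  // A newly required field breaks old clients that never sent it.
  for (const req of newSchema.required ?? []) {
    if (!(oldSchema.required ?? []).includes(req)) {
      breaks.push(`newly required: ${req}`);
    }
  }
  return breaks;
}

const v1 = { properties: { tenantId: {}, action: {} }, required: ['tenantId'] };
const v2 = { properties: { tenantId: {} }, required: ['tenantId', 'traceId'] };
console.log(findBreakingChanges(v1, v2));
// ['property removed: action', 'newly required: traceId']
```

A CI gate would fail the build whenever this list is non-empty for a published API version.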
Code Examples: - Pact consumer tests - Pact provider verification - OpenAPI schema validation - Breaking change detection test
Diagrams: - Contract testing workflow - Consumer-driven contract flow - Schema evolution compatibility
Deliverables: - API contract test suites - Pact contract definitions - Schema evolution tests
Topic 14: Message Contract Testing¶
What will be covered: - AsyncAPI Contract Validation - Generate AsyncAPI spec from message classes - Validate published messages match spec - Version compatibility testing
- Message Schema Tests
[TestClass]
public class AuditAcceptedEventContractTests
{
    [TestMethod]
    public void Event_Should_MatchAsyncAPISchema()
    {
        // Arrange
        var @event = new AuditRecordAcceptedEvent
        {
            EventId = Ulid.NewUlid().ToString(),
            TenantId = "test-tenant",
            AuditRecordId = "01JE...",
            Action = "order.created"
        };
        // Act
        var json = JsonSerializer.Serialize(@event);
        var isValid = AsyncAPIValidator.Validate(json, "audit-accepted-v1.schema.json");
        // Assert
        Assert.IsTrue(isValid);
    }

    [TestMethod]
    public void Event_Should_BeDeserializable_By_OldConsumer()
    {
        // Test backward compatibility
        var newEventJson = "{ ... }"; // v1.2 event
        var oldEvent = JsonSerializer.Deserialize<AuditRecordAcceptedEventV1_0>(newEventJson);
        Assert.IsNotNull(oldEvent);
        Assert.AreEqual("test-tenant", oldEvent.TenantId);
    }
}
- Consumer/Producer Contract Tests
- Producer test: Verify published message matches contract
- Consumer test: Verify handler accepts contracted message
- Cross-version compatibility tests
Code Examples: - AsyncAPI schema validation - Message serialization tests - Backward compatibility tests - Consumer/producer contract tests
Diagrams: - Message contract testing flow - Schema evolution testing
Deliverables: - Message contract test suites - AsyncAPI schema validation - Compatibility test matrix
CYCLE 8: Database & Persistence Testing (~3,000 lines)¶
Topic 15: NHibernate Mapping Tests¶
What will be covered: - PersistenceSpecification Tests
[TestClass]
public class AuditRecordEntityMapTests
{
private ISessionSource _sessionSource;
[TestInitialize]
public void Setup()
{
var services = new ServiceCollection();
// Setup NHibernate with test database
services.AddNHibernateFromConfiguration(...);
var provider = services.BuildServiceProvider();
_sessionSource = provider.GetRequiredService<ISessionSource>();
}
[TestMethod]
public void CanCorrectlyMapAuditRecordEntity()
{
// Arrange & Act & Assert
new PersistenceSpecification<AuditRecordEntity>(_sessionSource)
.CheckProperty(e => e.AuditRecordId, Ulid.NewUlid().ToString())
.CheckProperty(e => e.TenantId, "test-tenant")
.CheckProperty(e => e.Action, "order.created")
.CheckProperty(e => e.CreatedAt, DateTime.UtcNow)
.CheckProperty(e => e.ObservedAt, DateTime.UtcNow)
.CheckProperty(e => e.PayloadJson, "{\"test\":\"data\"}")
.VerifyTheMappings();
}
}
- Repository Integration Tests
[TestMethod]
public async Task Repository_Should_InsertAndRetrieve_When_ValidEntity()
{
    using var transaction = _session.BeginTransaction();
    try
    {
        // Arrange
        var entity = CreateTestAuditRecord();
        // Act
        await _repository.InsertAsync(entity, CancellationToken.None);
        var retrieved = await _repository.GetByIdAsync(entity.AuditRecordId);
        // Assert
        Assert.IsNotNull(retrieved);
        Assert.AreEqual(entity.TenantId, retrieved.TenantId);
        Assert.AreEqual(entity.Action, retrieved.Action);
    }
    finally
    {
        transaction.Rollback();
    }
}
- Query & Specification Tests
- Test specification composition
- Test LINQ query translation
- Test pagination (Skip/Take)
- Test tenant filtering
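Specification composition means small predicates combine into one query filter. ATP's specifications are C#/LINQ; the composition idea is language-neutral and can be sketched in JavaScript (all names here are illustrative, not ATP's API):

```javascript
// A specification wraps a predicate and exposes combinators.
const spec = (predicate) => ({
  predicate,
  and: (other) => spec((x) => predicate(x) && other.predicate(x)),
  or: (other) => spec((x) => predicate(x) || other.predicate(x)),
});

const forTenant = (tenantId) => spec((r) => r.tenantId === tenantId);
const withAction = (action) => spec((r) => r.action === action);

const records = [
  { tenantId: 'acme', action: 'order.created' },
  { tenantId: 'acme', action: 'order.deleted' },
  { tenantId: 'globex', action: 'order.created' },
];

// Compose, then apply pagination (the Skip/Take analogue).
const composed = forTenant('acme').and(withAction('order.created'));
const page = records.filter(composed.predicate).slice(0, 10);
console.log(page.length); // 1
```

A specification test asserts that the composed predicate selects exactly the expected records, which is what the LINQ-translation tests above verify against a real database.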
Code Examples: - PersistenceSpecification tests (all entities) - Repository CRUD tests - Specification query tests - Complex query tests (joins, aggregations)
Diagrams: - Persistence testing architecture - Query testing flow
Deliverables: - Persistence test suites - Mapping validation tests - Query test patterns
Topic 16: Row-Level Security (RLS) Tests¶
What will be covered: - RLS Enforcement Tests
[TestMethod]
public async Task Query_Should_FilterByTenant_When_RLSEnabled()
{
// Arrange
await InsertTestRecords("tenant-a", count: 10);
await InsertTestRecords("tenant-b", count: 10);
// Act - Set session context for tenant-a
await _session.CreateSQLQuery(
"EXEC sp_set_session_context 'tenant_id', 'tenant-a'")
.ExecuteUpdateAsync();
var results = await _repository.GetAllAsync();
// Assert
Assert.AreEqual(10, results.Count());
Assert.IsTrue(results.All(r => r.TenantId == "tenant-a"));
}
[TestMethod]
public void Query_Should_Fail_When_SessionContextNotSet()
{
// Arrange - No session context
// Act & Assert
Assert.ThrowsException<SecurityException>(() =>
_repository.GetAll());
}
- Cross-Tenant Isolation Tests
- Verify tenant A cannot access tenant B data
- Test at repository layer
- Test at SQL layer (RLS predicates)
Code Examples: - RLS enforcement tests - Cross-tenant isolation tests - Session context tests
Diagrams: - RLS test scenarios - Tenant isolation validation
Deliverables: - RLS test suite - Isolation validation tests
CYCLE 9: Messaging & Event Testing (~3,500 lines)¶
Topic 17: MassTransit Test Harness¶
What will be covered: - Test Harness Setup
[TestClass]
public class AuditAcceptedEventConsumerTests
{
private ITestHarness _harness;
[TestInitialize]
public async Task Setup()
{
var provider = new ServiceCollection()
.AddMassTransitTestHarness(cfg =>
{
cfg.AddConsumer<AuditAcceptedEventConsumer>();
})
// Add mock dependencies
.AddScoped<IAuditEventProjector, MockProjector>()
.BuildServiceProvider(true);
_harness = provider.GetRequiredService<ITestHarness>();
await _harness.Start();
}
[TestMethod]
public async Task Consumer_Should_ProcessEvent_When_Published()
{
// Arrange
var @event = new AuditRecordAcceptedEvent
{
EventId = Ulid.NewUlid().ToString(),
TenantId = "test-tenant",
AuditRecordId = "01JE...",
};
// Act
await _harness.Bus.Publish(@event);
// Assert
Assert.IsTrue(await _harness.Consumed.Any<AuditRecordAcceptedEvent>());
var consumerHarness = _harness.GetConsumerHarness<AuditAcceptedEventConsumer>();
Assert.IsTrue(await consumerHarness.Consumed.Any<AuditRecordAcceptedEvent>());
}
[TestCleanup]
public async Task Cleanup()
{
await _harness.Stop();
}
}
- Testing Publishers
- Verify message published
- Verify message content
- Verify headers and metadata
- Testing Consumers
- Mock dependencies (projectors, repositories)
- Verify message processed
- Verify side effects (database writes)
- Verify error handling
Code Examples: - Test harness setup (complete) - Consumer test suite - Publisher test suite - Saga test examples
Diagrams: - Test harness architecture - Message testing flow
Deliverables: - Messaging test suites - Test harness templates - Consumer/publisher tests
Topic 18: Saga & Outbox/Inbox Tests¶
What will be covered: - Saga State Machine Tests
[TestMethod]
public async Task ExportSaga_Should_TransitionToComplete_When_AllStepsSucceed()
{
// Arrange
var sagaHarness = _harness.GetSagaStateMachineHarness<
ExportWorkflowSaga,
ExportWorkflowState>();
var exportJobId = Guid.NewGuid();
// Act
await _harness.Bus.Publish(new ExportRequestedEvent
{
ExportJobId = exportJobId,
TenantId = "test-tenant"
});
await _harness.Bus.Publish(new PackageCreatedEvent
{
ExportJobId = exportJobId
});
await _harness.Bus.Publish(new PackageSignedEvent
{
ExportJobId = exportJobId
});
// Assert
Assert.IsTrue(await sagaHarness.Consumed.Any<ExportRequestedEvent>());
Assert.IsTrue(await sagaHarness.Created.Any(s => s.CorrelationId == exportJobId));
var instance = sagaHarness.Created.ContainsInState(
exportJobId,
_harness.GetSagaStateMachine<ExportWorkflowSaga>().Final);
Assert.IsNotNull(instance);
}
- Outbox Pattern Tests
- Test atomic write + outbox insert
- Test outbox processor publishes messages
- Test retry logic on publish failure
- Inbox Pattern Tests
- Test idempotency (duplicate messages ignored)
- Test inbox deduplication
- Test cleanup of old inbox entries
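The idempotency behaviour these tests verify boils down to a processed-message store consulted before the handler runs. In ATP that store is a database table written in the same transaction as the side effect; the sketch below is a language-neutral JavaScript illustration with an in-memory stand-in:

```javascript
// Inbox-style deduplication: the wrapper makes any handler idempotent per messageId.
function createInbox(handler) {
  const processed = new Set(); // stands in for the inbox table
  return (message) => {
    if (processed.has(message.messageId)) return false; // duplicate: ignore
    handler(message);
    processed.add(message.messageId); // recorded atomically with the side effect in real code
    return true;
  };
}

let sideEffects = 0;
const consume = createInbox(() => { sideEffects += 1; });
consume({ messageId: 'm-1' });
consume({ messageId: 'm-1' }); // redelivery: no second side effect
console.log(sideEffects); // 1
```

An idempotency test publishes the same message twice and asserts exactly one side effect, mirroring the first and second calls above.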
Code Examples: - Saga state machine tests - Saga compensation tests - Outbox pattern tests - Inbox idempotency tests
Diagrams: - Saga test flow - Outbox/inbox test scenarios
Deliverables: - Saga test suites - Outbox/inbox test suites - State machine validation
CYCLE 10: Security Testing (~3,000 lines)¶
Topic 19: Authentication & Authorization Tests¶
What will be covered: - JWT Authentication Tests
[TestMethod]
public async Task API_Should_Return401_When_NoToken()
{
// Act
var response = await _client.GetAsync("/api/v1/audit/query");
// Assert
Assert.AreEqual(HttpStatusCode.Unauthorized, response.StatusCode);
}
[TestMethod]
public async Task API_Should_Return403_When_InvalidTenant()
{
// Arrange
var token = GenerateJWT(tenantId: "tenant-a");
_client.DefaultRequestHeaders.Authorization =
new AuthenticationHeaderValue("Bearer", token);
var request = new AppendRequest { TenantId = "tenant-b", ... };
// Act
var response = await _client.PostAsJsonAsync("/api/v1/audit/append", request);
// Assert
Assert.AreEqual(HttpStatusCode.Forbidden, response.StatusCode);
}
- Authorization Tests
- Test RBAC (role-based access control)
- Test tenant isolation (cannot access other tenant data)
- Test edition gates (features per edition)
- Cryptography Tests
- Test encryption/decryption
- Test hash computation (SHA-256)
- Test signature generation/verification
- Test key rotation scenarios
Code Examples: - Authentication test suite - Authorization test suite (RBAC, tenant isolation) - Cryptography unit tests - Key rotation tests
Diagrams: - Security test coverage matrix - Authentication flow testing
Deliverables: - Security test suites - Auth/authz validation - Cryptography tests
Topic 20: Vulnerability & Penetration Testing¶
What will be covered: - OWASP Top 10 Testing - SQL Injection prevention tests - XSS (Cross-Site Scripting) prevention - CSRF (Cross-Site Request Forgery) protection - Insecure deserialization prevention - Security misconfiguration detection
- Dependency Vulnerability Scanning
- OWASP Dependency-Check in CI
- Trivy container scanning
- Snyk vulnerability database
- NuGet package audit
- Secret Scanning
- Git pre-commit hooks (detect secrets)
- Gitleaks in CI pipeline
- Azure Key Vault usage validation (no hardcoded secrets)
- Penetration Testing
- Automated DAST (Dynamic Application Security Testing)
- OWASP ZAP for API testing
- Manual pen testing (quarterly)
Code Examples: - SQL injection test scenarios - XSS prevention tests - Dependency scan configuration - Secret scanning setup
Diagrams: - OWASP Top 10 test matrix - Vulnerability scanning pipeline
Deliverables: - Security vulnerability tests - Scanning automation - Pen testing procedures
CYCLE 11: Performance & Load Testing (~4,000 lines)¶
Topic 21: Load Testing with k6¶
What will be covered: - k6 Framework - JavaScript-based load testing - Virtual users (VUs) - Ramp-up/ramp-down stages - Thresholds and SLOs
- ATP Ingestion Load Test
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Trend } from 'k6/metrics';

const errorRate = new Rate('errors');
const ingestionDuration = new Trend('ingestion_duration');

export const options = {
  stages: [
    { duration: '2m', target: 100 }, // Ramp-up to 100 VUs
    { duration: '5m', target: 100 }, // Stay at 100 VUs
    { duration: '2m', target: 200 }, // Ramp-up to 200 VUs
    { duration: '5m', target: 200 }, // Stay at 200 VUs
    { duration: '2m', target: 0 },   // Ramp-down
  ],
  thresholds: {
    'http_req_duration': ['p(95)<500'], // 95% of requests < 500ms
    'http_req_failed': ['rate<0.01'],   // Error rate < 1%
    'errors': ['rate<0.01'],
  },
};

export default function () {
  const url = 'https://atp-staging.azure.com/api/v1/audit/append';
  const payload = JSON.stringify({
    tenantId: 'load-test-tenant',
    action: 'load.test',
    resourceType: 'LoadTest',
    resourceId: `test-${__VU}-${__ITER}`,
    actorId: `user-${__VU}`,
    payload: { iteration: __ITER, vu: __VU }
  });
  const params = {
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${__ENV.JWT_TOKEN}`,
      'X-Idempotency-Key': `${__VU}-${__ITER}-${Date.now()}`
    },
  };
  const response = http.post(url, payload, params);
  errorRate.add(response.status !== 202);
  ingestionDuration.add(response.timings.duration);
  check(response, {
    'status is 202': (r) => r.status === 202,
    'response has recordId': (r) => JSON.parse(r.body).auditRecordId !== undefined,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
  sleep(1); // Think time
}
- Performance Test Types
- Load Test: Sustained load (100-500 RPS for 10 minutes)
- Stress Test: Beyond capacity (1000+ RPS until failure)
- Spike Test: Sudden traffic burst (0 → 500 RPS in 10 seconds)
- Soak Test: Long duration (moderate load for 2-4 hours)
- ATP Performance SLOs
| Endpoint | p50 Latency | p95 Latency | p99 Latency | Throughput |
|---|---|---|---|---|
| POST /audit/append | < 100ms | < 300ms | < 500ms | 1000 RPS |
| GET /audit/query | < 200ms | < 500ms | < 1000ms | 500 RPS |
| POST /export/request | < 500ms | < 1000ms | < 2000ms | 10 RPS |
| GET /integrity/verify | < 1000ms | < 2000ms | < 5000ms | 50 RPS |
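An SLO threshold like "p95 < 500ms" is evaluated against raw latency samples. The JavaScript sketch below shows the nearest-rank percentile calculation (simplified relative to k6's internals) so it is clear what the threshold actually tests:

```javascript
// Nearest-rank percentile: the smallest sample such that p% of samples are <= it.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// 20 latency samples (ms) from a hypothetical append-endpoint run.
const latenciesMs = [80, 85, 90, 95, 100, 110, 120, 130, 140, 150,
                     160, 180, 200, 220, 260, 300, 350, 420, 480, 900];
const p95 = percentile(latenciesMs, 95);
console.log(p95, p95 < 500 ? 'SLO met' : 'SLO violated'); // 480 'SLO met'
```

Note the single 900ms outlier does not break the p95 SLO; a p99 or max threshold would catch it, which is why the SLO table specifies several percentiles per endpoint.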
Code Examples: - k6 load test scripts (complete) - Stress test script - Spike test script - Soak test script - SLO threshold configuration
Diagrams: - Load test stages visualization - Performance SLO dashboard - Throughput vs. latency graph
Deliverables: - k6 load test scripts - Performance test suites - SLO validation tests
Topic 22: Performance Monitoring & Analysis¶
What will be covered: - NBomber (.NET Load Testing)
[TestMethod]
public void LoadTest_Ingestion_Should_MeetSLO()
{
var scenario = Scenario.Create("ingestion_load", async context =>
{
var request = new AppendRequest { ... };
var response = await _httpClient.PostAsJsonAsync("/api/v1/audit/append", request);
return response.IsSuccessStatusCode
? Response.Ok(statusCode: (int)response.StatusCode)
: Response.Fail(statusCode: (int)response.StatusCode);
})
.WithLoadSimulations(
Simulation.Inject(rate: 100, interval: TimeSpan.FromSeconds(1), during: TimeSpan.FromMinutes(5))
);
var stats = NBomberRunner
.RegisterScenarios(scenario)
.Run();
// Assert SLO thresholds
var scenarioStats = stats.ScenarioStats[0];
Assert.IsTrue(scenarioStats.Ok.Latency.Percent95 < 500,
$"p95 latency {scenarioStats.Ok.Latency.Percent95}ms exceeds SLO 500ms");
Assert.IsTrue(scenarioStats.Fail.Request.Percent < 1,
$"Error rate {scenarioStats.Fail.Request.Percent}% exceeds SLO 1%");
}
- Database Performance Tests
- Query execution time benchmarks
- Index effectiveness validation
- Batch insert performance
- Message Throughput Tests
- Publish rate (messages/second)
- Consume rate (messages/second)
- End-to-end latency (publish → consume)
Code Examples: - NBomber test scenarios - Database benchmark tests - Message throughput tests - Performance analysis queries
Diagrams: - Performance test results analysis - Bottleneck identification
Deliverables: - NBomber test suite - Performance benchmarks - Analysis procedures
CYCLE 12: Chaos Engineering & Resilience (~3,000 lines)¶
Topic 23: Chaos Testing Scenarios¶
What will be covered: - Fault Injection Tests - Pod/container failures (kill random pod) - Network latency injection - Database connection failures - Message bus unavailability - External API timeouts
- Resilience Validation
[TestMethod]
public async Task Ingestion_Should_UseOutbox_When_MessageBusFails()
{
    // Arrange
    var mockBus = new Mock<IPublishEndpoint>();
    mockBus.Setup(b => b.Publish(It.IsAny<object>(), It.IsAny<CancellationToken>()))
        .ThrowsAsync(new TimeoutException("Service Bus unavailable"));
    var service = new IngestionService(_repository, _unitOfWork, mockBus.Object);
    // Act
    var result = await service.AppendAsync(new AppendCommand { ... });
    // Assert - Should succeed despite bus failure (outbox pattern)
    Assert.IsNotNull(result);
    // Verify outbox entry created
    var outboxMessages = await _outboxRepository.GetPendingAsync();
    Assert.AreEqual(1, outboxMessages.Count);
}
- Retry & Circuit Breaker Tests
- Test exponential backoff
- Test circuit breaker opens after N failures
- Test circuit breaker recovers (half-open → closed)
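An exponential backoff test usually asserts the exact delay schedule the policy produces. The JavaScript sketch below shows that schedule for illustrative constants (base 100ms, factor 2, capped at 5s); jitter is omitted so the test stays deterministic, which is itself a common testing technique for retry policies:

```javascript
// Expected delay for attempt N under exponential backoff with a cap.
function backoffDelayMs(attempt, base = 100, factor = 2, capMs = 5000) {
  return Math.min(base * factor ** (attempt - 1), capMs);
}

const schedule = [1, 2, 3, 4, 5, 6, 7].map((a) => backoffDelayMs(a));
console.log(schedule); // [100, 200, 400, 800, 1600, 3200, 5000]
```

The cap is what a circuit-breaker or back-pressure test cares about: delays must stop growing so retries cannot starve the consumer indefinitely.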
- Back-Pressure Tests
- Test consumer throttling under high load
- Test queue depth limits
- Test message rejection when overwhelmed
Code Examples: - Fault injection tests - Retry policy validation - Circuit breaker tests - Back-pressure tests - Chaos Mesh integration (see chaos-drills.md)
Diagrams: - Chaos test scenarios - Resilience validation flow
Deliverables: - Chaos test suite - Resilience validation tests - Fault injection scripts
Topic 24: Disaster Recovery Tests¶
What will be covered: - Backup & Restore Tests - Test PITR (Point-in-Time Restore) - Test backup integrity (checksums) - Test restore procedures (runbook validation)
- Failover Tests
- Test regional failover (primary → DR region)
- Test database failover (SQL Server AG)
- Test message bus failover
- Data Integrity After Recovery
- Verify hash chains intact
- Verify signatures valid
- Verify no data loss (within RPO)
Code Examples: - Backup restore validation test - Failover simulation test - Integrity verification after restore
Diagrams: - DR test scenarios - Failover validation flow
Deliverables: - DR test suite - Failover validation tests - Runbook validation
CYCLE 13: End-to-End (E2E) Testing (~3,000 lines)¶
Topic 25: E2E Test Scenarios¶
What will be covered: - Complete User Journeys
Feature: Complete Audit Trail Workflow
Scenario: Ingest, query, export audit records end-to-end
Given I have integrated my application with ATP
When I perform the following actions:
| Action | Details |
| Create Order | Order ID 12345 |
| Update Order | Changed status to Shipped |
| Delete Order Item | Removed item SKU-789 |
Then ATP should have captured 3 audit records
When I query audit records for Order 12345
Then I should see all 3 audit records
When I request an export for Order 12345
Then I should receive an export package
And the package should contain all 3 audit records
And the package should be cryptographically signed
And I should be able to verify the signature
- Multi-Service Workflows
- Ingestion → Projection → Query
- Ingestion → Integrity → Verification
- Query → Export → Download → Verify
- Real Dependencies (Staging Environment)
- Real Azure SQL Database
- Real Azure Service Bus
- Real Azure Blob Storage (WORM)
- Real Azure Key Vault
Code Examples: - Complete E2E test scenarios (Gherkin) - Step definitions for E2E workflows - Staging environment setup - Real dependency configuration
Diagrams: - E2E test architecture - Multi-service workflow
Deliverables: - E2E test suites - Staging test setup - Workflow validation
Topic 26: Cross-Service Integration Tests¶
What will be covered: - Service-to-Service Communication - Test REST API calls between services - Test message-driven communication - Test request/response patterns
- Distributed Transaction Tests
- Test saga completion
- Test compensation logic
- Test timeout handling
- Observability Validation
- Test distributed tracing (correlation IDs)
- Test log aggregation
- Test metrics collection
Code Examples: - Cross-service integration tests - Distributed tracing validation - Observability tests
Diagrams: - Cross-service test flow - Distributed transaction testing
Deliverables: - Cross-service test suite - Integration validation
CYCLE 14: Test Data Management (~2,500 lines)¶
Topic 27: Test Data Strategies¶
What will be covered: - Test Data Approaches - Synthetic Data: Generated programmatically (preferred for ATP) - Masked Production Data: Real data with PII redacted (rare, compliance concerns) - Golden Datasets: Curated test data (regression testing)
- Test Data Builders
public class AuditRecordTestBuilder
{
    private string _tenantId = "test-tenant";
    private string _action = "test.action";
    private string _resourceType = "TestResource";
    private string _resourceId = Guid.NewGuid().ToString();

    public AuditRecordTestBuilder WithTenantId(string tenantId)
    {
        _tenantId = tenantId;
        return this;
    }

    public AuditRecordTestBuilder WithAction(string action)
    {
        _action = action;
        return this;
    }

    public AuditRecord Build()
    {
        return AuditRecord.Create(
            tenantId: _tenantId,
            action: _action,
            resourceType: _resourceType,
            resourceId: _resourceId,
            actorId: "test-actor");
    }
}

// Usage:
var auditRecord = new AuditRecordTestBuilder()
    .WithTenantId("acme")
    .WithAction("order.created")
    .Build();
- Fixture Management
- Shared fixtures (class-level setup)
- Test-specific data (method-level)
- Cleanup strategies
- Data Anonymization for Tests
- Generate realistic but fake PII
- Bogus library for .NET
- Faker.js for JavaScript
Code Examples: - Test data builders (all ATP entities) - Fixture management patterns - Bogus data generation - Golden dataset setup
Diagrams: - Test data strategy matrix - Builder pattern flow
Deliverables: - Test data builder library - Fixture templates - Data generation utilities
Topic 28: Test Database Seeding & Cleanup¶
What will be covered: - Database Seeding - Seed reference data (tenants, policies) - Seed test scenarios (specific audit records) - Incremental seeding (only what's needed per test)
- Cleanup Strategies
- Transaction rollback (preferred)
- Delete in reverse FK order
- Truncate tables (integration tests)
- Recreate database (E2E tests)
- Idempotent Seeding
- Check if data exists before inserting
- Upsert patterns
- Deterministic IDs (same seed → same data)
Code Examples: - Database seeding scripts - Cleanup procedures - Idempotent seed implementation
Diagrams: - Seed and cleanup lifecycle - Data dependency graph
Deliverables: - Seeding utilities - Cleanup procedures - Data management scripts
CYCLE 15: Code Coverage & Quality Metrics (~3,000 lines)¶
Topic 29: Code Coverage Measurement¶
What will be covered: - Coverlet Coverage Collection
<!-- .runsettings -->
<RunConfiguration>
<ResultsDirectory>.\TestResults</ResultsDirectory>
</RunConfiguration>
<DataCollectionRunSettings>
<DataCollectors>
<DataCollector friendlyName="XPlat code coverage">
<Configuration>
<Format>cobertura,json,lcov,opencover</Format>
<Exclude>
[*.Tests]*
[*.AcceptanceTests]*
[*]*.Migrations.*
</Exclude>
<Include>
[ConnectSoft.Audit.*]*
</Include>
</Configuration>
</DataCollector>
</DataCollectors>
</DataCollectionRunSettings>
- ATP Coverage Targets
| Layer | Coverage Target | Rationale |
|---|---|---|
| Domain Model | ≥ 90% | Critical business logic |
| Application Services | ≥ 80% | Core workflows |
| Persistence Repositories | ≥ 70% | Data access patterns |
| Service Controllers | ≥ 70% | API surface |
| Message Consumers | ≥ 80% | Event processing |
| Validators | ≥ 95% | Input validation critical |
| Sagas | ≥ 85% | Complex workflows |
- Coverage Analysis
- ReportGenerator for HTML reports
- SonarQube integration
- Coverage trends over time
- Identify untested code paths
- Exclusions
- Generated code (migrations, entity maps)
- Test projects themselves
- Infrastructure setup code (Program.cs, Startup.cs)
- Third-party integrations (covered by integration tests)
Code Examples: - .runsettings configuration (complete) - Coverage report generation - SonarQube integration - Coverage badge generation
Diagrams: - Coverage measurement flow - Coverage trend dashboard
Deliverables: - Coverage configuration - Report generation setup - Trend analysis queries
Topic 30: Quality Metrics & Gates¶
What will be covered: - Quality Metrics - Test Metrics: - Total test count - Pass rate (should be 100%) - Execution time - Flaky test count - Code Quality Metrics: - Code coverage percentage - Cyclomatic complexity (per method) - Maintainability index - Code duplication percentage - Defect Metrics: - Defect density (defects per 1000 LOC) - Defect escape rate (bugs in production) - Mean time to detection (MTTD) - Mean time to resolution (MTTR)
- SonarQube Quality Gates
| Metric | Threshold | Action if Failed |
|---|---|---|
| Coverage on New Code | ≥ 80% | Block merge |
| Duplicated Lines (%) | ≤ 3% | Block merge |
| Maintainability Rating | A | Warning |
| Reliability Rating | A | Block merge |
| Security Rating | A | Block merge |
| Security Hotspots Reviewed | 100% | Block merge |
| Bugs | 0 | Block merge |
| Vulnerabilities | 0 | Block merge |
| Code Smells | ≤ 5 per file | Warning |
- DORA Metrics Alignment
- Deployment Frequency (enabled by fast tests)
- Lead Time for Changes (test execution time impact)
- Change Failure Rate (test effectiveness measure)
- Mean Time to Recovery (test-driven debugging)
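Change failure rate is the DORA metric most directly tied to test effectiveness: the fraction of deployments that cause an incident. The JavaScript sketch below shows the arithmetic (the record shape is illustrative; real data would come from the CD system's deployment and incident logs):

```javascript
// Fraction of deployments that caused a production incident.
function changeFailureRate(deployments) {
  if (deployments.length === 0) return 0;
  const failed = deployments.filter((d) => d.causedIncident).length;
  return failed / deployments.length;
}

const deployments = [
  { id: 1, causedIncident: false },
  { id: 2, causedIncident: true },
  { id: 3, causedIncident: false },
  { id: 4, causedIncident: false },
];
console.log(changeFailureRate(deployments)); // 0.25
```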
Code Examples: - SonarQube quality gate configuration - Metrics collection scripts - DORA metrics queries - Quality dashboard (Power BI, Grafana)
Diagrams: - Quality metrics dashboard - Quality gate enforcement flow - DORA metrics tracking
Deliverables: - Quality metrics catalog - SonarQube configuration - Dashboard templates
CYCLE 16: CI/CD Test Integration (~3,000 lines)¶
Topic 31: Azure Pipelines Test Stages¶
What will be covered: - Test Pipeline Structure
stages:
- stage: CI
jobs:
- job: Build_And_Test
steps:
- task: UseDotNet@2
- task: DotNetCoreCLI@2
displayName: 'Restore'
inputs:
command: restore
- task: DotNetCoreCLI@2
displayName: 'Build'
inputs:
command: build
arguments: '--configuration Release'
- task: DotNetCoreCLI@2
displayName: 'Run Unit Tests'
inputs:
command: test
projects: '**/*UnitTests.csproj'
arguments: '--configuration Release --no-build --collect:"XPlat Code Coverage"'
- task: DotNetCoreCLI@2
displayName: 'Run Integration Tests'
inputs:
command: test
projects: '**/*IntegrationTests.csproj'
arguments: '--configuration Release --no-build'
- task: DotNetCoreCLI@2
displayName: 'Run Acceptance Tests'
inputs:
command: test
projects: '**/*AcceptanceTests.csproj'
arguments: '--configuration Release --no-build'
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: 'Cobertura'
summaryFileLocation: '$(Agent.TempDirectory)/**/*coverage.cobertura.xml'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'VSTest'
testResultsFiles: '**/*.trx'
- Parallel Test Execution
- Run test projects in parallel
- Agent pool configuration
- Test result aggregation
- Test Result Publishing
- TRX format for Azure DevOps
- JUnit XML for other CI systems
- Test result trends dashboard
- Coverage Enforcement
- Fail build if coverage < threshold
- Report coverage delta (new code coverage)
- Coverage badge in README
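Failing the build below a coverage threshold amounts to reading the Cobertura `line-rate` attribute and comparing. A minimal JavaScript sketch (regex parse for brevity; a real gate script would use an XML parser, and the threshold value is illustrative):

```javascript
// Extract the overall line-rate from a Cobertura report and apply a threshold.
function coverageGate(coberturaXml, thresholdPct) {
  const match = coberturaXml.match(/line-rate="([\d.]+)"/);
  if (!match) throw new Error('line-rate not found');
  // Round to two decimals to avoid floating-point noise in the percentage.
  const pct = Math.round(parseFloat(match[1]) * 10000) / 100;
  return { pct, pass: pct >= thresholdPct };
}

const xml = '<coverage line-rate="0.8234" branch-rate="0.7101">';
console.log(coverageGate(xml, 80)); // { pct: 82.34, pass: true }
```

In the pipeline this runs after `PublishCodeCoverageResults@1`, exiting non-zero when `pass` is false so the stage fails.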
Code Examples: - Complete Azure Pipeline YAML (test stages) - Parallel test execution - Coverage enforcement script - Test result publishing
Diagrams: - CI/CD test pipeline - Parallel test execution - Coverage enforcement flow
Deliverables: - Pipeline templates - Test automation scripts - Coverage enforcement
Topic 32: Test Environment Management¶
What will be covered: - Environment Configuration - Dev: In-memory dependencies, fast tests - CI: Docker containers, comprehensive tests - Staging: Real Azure services, E2E tests
- Service Containers in Azure Pipelines
jobs:
- job: Test
  services:
    mssql:
      image: mcr.microsoft.com/mssql/server:2022-latest
      env:
        ACCEPT_EULA: "Y"
        SA_PASSWORD: $(SA_PASSWORD)
      ports:
        - 1433:1433
    redis:
      image: redis:7-alpine
      ports:
        - 6379:6379
  steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: test
        arguments: '--configuration Release --settings $(Build.SourcesDirectory)/test.runsettings'
- Test Data Isolation
- Separate database per test run
- Unique tenant IDs per test
- Parallel test execution (no conflicts)
Code Examples: - Environment-specific appsettings.json - Service container configuration - Test isolation patterns
Diagrams: - Test environment topology - Service container architecture
Deliverables: - Environment configuration guide - Service container setup - Isolation strategies
CYCLE 17: Test Maintenance & Flakiness (~2,500 lines)¶
Topic 33: Test Stability & Flakiness¶
What will be covered: - What is a Flaky Test? - Intermittent failures (passes sometimes, fails sometimes) - Common causes: Timing issues, shared state, external dependencies
- Detecting Flaky Tests
- Run tests multiple times (10x, 100x)
- Track pass/fail rates
- Identify tests with <100% pass rate
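Flakiness detection is an aggregation over repeated runs: group results per test and flag anything with a pass rate strictly between 0% and 100% (a test that always fails is broken, not flaky). A JavaScript sketch with illustrative run data (in CI this would be parsed from .trx result files):

```javascript
// Aggregate repeated run results and return the names of flaky tests.
function findFlaky(runs) {
  const byTest = new Map();
  for (const { test, passed } of runs) {
    const s = byTest.get(test) ?? { passed: 0, total: 0 };
    s.passed += passed ? 1 : 0;
    s.total += 1;
    byTest.set(test, s);
  }
  // Flaky = passed at least once AND failed at least once.
  return [...byTest].filter(([, s]) => s.passed > 0 && s.passed < s.total).map(([t]) => t);
}

const runs = [
  { test: 'Retention_Should_MarkEligible', passed: true },
  { test: 'Retention_Should_MarkEligible', passed: true },
  { test: 'Export_Should_Complete', passed: true },
  { test: 'Export_Should_Complete', passed: false }, // intermittent: flaky
];
console.log(findFlaky(runs)); // ['Export_Should_Complete']
```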
- Fixing Flaky Tests
- Timing Issues: Use deterministic time (FakeTimeProvider), avoid Thread.Sleep
- Shared State: Isolate test data, use unique IDs
- External Dependencies: Mock or use test doubles
- Race Conditions: Proper synchronization, await async operations
- Test Retry Policies
- Test Quarantine
- Mark flaky tests with [Ignore("Flaky - tracking in issue #123")]
- Fix flaky tests in separate PR
- Don't let flaky tests block CI
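A test retry policy, as sometimes applied to quarantined tests, reruns the test body up to N times and reports success if any attempt passes. The JavaScript sketch below shows the mechanism (synchronous for clarity; use sparingly, since retries hide real flakiness rather than fix it):

```javascript
// Rerun testFn up to `attempts` times; return the attempt number that succeeded,
// or rethrow the last failure if every attempt fails.
function withRetry(testFn, attempts = 3) {
  let lastError;
  for (let i = 1; i <= attempts; i += 1) {
    try {
      testFn(i);
      return i;
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Example: a test that fails twice with a transient error, then passes.
let calls = 0;
const succeededOn = withRetry(() => {
  calls += 1;
  if (calls < 3) throw new Error('transient');
});
console.log(succeededOn); // 3
```

Recording the attempt number (as here) matters: a retry policy that never reports which attempt passed hides exactly the signal the flaky-test detection above needs.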
Code Examples: - Flaky test detection script - Test retry implementation - Deterministic time provider - Test isolation patterns
Diagrams: - Flaky test detection flow - Test quarantine workflow
Deliverables: - Flaky test detection tools - Stabilization patterns - Quarantine procedures
Topic 34: Test Code Quality¶
What will be covered: - Test Code Standards - Clear test names (follow convention) - AAA pattern (Arrange, Act, Assert) - One assertion per test (or closely related assertions) - No logic in tests (no if/loops) - DRY principle (helper methods, builders)
- Test Smells
- Mystery Guest: Test depends on external data
- Resource Optimism: Assumes resource availability
- Test Code Duplication: Copy-paste tests
- Assertion Roulette: Multiple unrelated assertions
- Eager Test: Tests too much in one test
- Test Refactoring
- Extract helper methods
- Use test builders
- Share fixtures via [TestInitialize]
- Parameterize similar tests
Code Examples: - Good test examples (clear, focused) - Bad test examples (test smells) - Test refactoring examples - Helper method library
Diagrams: - Test smell catalog - Refactoring strategies
Deliverables: - Test code standards - Smell detection guide - Refactoring patterns
CYCLE 18: Best Practices & Anti-Patterns (~3,000 lines)¶
Topic 35: Testing Best Practices¶
What will be covered: - Unit Testing Best Practices - ✅ Test one thing per test method - ✅ Use AAA pattern (Arrange, Act, Assert) - ✅ Name tests descriptively (Should_When pattern) - ✅ Tests should be fast (< 1 second each) - ✅ Tests should be isolated (no dependencies) - ✅ Tests should be deterministic (same result every time) - ✅ Mock external dependencies - ✅ Use builders for complex test data - ✅ Test both happy path and error cases - ✅ Test boundary conditions (null, empty, max)
- Integration Testing Best Practices
- ✅ Use TestServer for ASP.NET Core tests
- ✅ Use Docker containers for database tests
- ✅ Rollback transactions (don't persist test data)
- ✅ Test with realistic data volumes
- ✅ Test error handling and retries
- Acceptance Testing Best Practices
- ✅ Write Gherkin scenarios with product/QA
- ✅ Keep scenarios focused (one feature per file)
- ✅ Use Background for common setup
- ✅ Use Scenario Outline for similar tests with different data
- ✅ Reuse step definitions across features
- Performance Testing Best Practices
- ✅ Test in staging environment (production parity)
- ✅ Use realistic load patterns
- ✅ Monitor system resources during tests
- ✅ Define clear SLO thresholds
- ✅ Run performance tests before production deployment
Code Examples: - Best practice examples (complete test suites) - Anti-pattern examples (what not to do)
Diagrams: - Best practices checklist - Anti-pattern catalog
Deliverables: - Best practices handbook - Anti-pattern guide - Code review checklist
Topic 36: ATP Testing Checklist & Summary¶
What will be covered:

- Pre-Commit Checklist
  - [ ] All new code has unit tests
  - [ ] Unit tests pass locally
  - [ ] Code coverage ≥ 70%
  - [ ] No compiler warnings
  - [ ] Linter rules pass
- Pre-Merge Checklist
  - [ ] All unit tests pass in CI
  - [ ] Integration tests pass
  - [ ] Architecture tests pass
  - [ ] Code coverage meets threshold
  - [ ] SonarQube quality gate passes
  - [ ] No security vulnerabilities (critical/high)
  - [ ] Peer review completed
- Pre-Deployment Checklist (Staging)
  - [ ] Acceptance tests pass
  - [ ] E2E tests pass
  - [ ] Contract tests pass
  - [ ] Performance tests meet SLOs
  - [ ] Chaos tests pass (resilience validated)
  - [ ] Security scan passes
- Pre-Production Checklist
  - [ ] All staging tests pass
  - [ ] Load testing in a production-like environment
  - [ ] DR failover tested
  - [ ] Runbooks validated
  - [ ] Compliance evidence generated
  - [ ] Change approval obtained
Code Examples: - Automated checklist validation - Gate enforcement scripts
Diagrams: - Testing checklist workflow - Quality gate progression
Deliverables: - Complete testing checklist - Automated validation - Quality gate summary
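As a taste of the "gate enforcement scripts" listed above, here is a minimal sketch of a coverage gate that fails the build when line coverage drops below the pre-commit threshold (70%, per the checklist). The JSON summary shape and file name are assumptions for illustration; a real pipeline would parse the Coverlet or SonarQube report format it actually produces.

```python
import json
import sys

# Pre-commit coverage threshold from the checklist above.
THRESHOLD = 70.0

def check_coverage(report: dict, threshold: float = THRESHOLD) -> bool:
    """Return True when line coverage meets the gate, else False.

    `report` is an assumed summary shape ({"covered_lines", "total_lines"}),
    not a real Coverlet schema.
    """
    covered = report["covered_lines"]
    total = report["total_lines"]
    percent = 100.0 * covered / total if total else 0.0
    print(f"line coverage: {percent:.1f}% (gate: {threshold:.0f}%)")
    return percent >= threshold

if __name__ == "__main__" and len(sys.argv) > 1:
    # Exit non-zero so the CI stage fails when the gate is not met.
    with open(sys.argv[1]) as f:
        sys.exit(0 if check_coverage(json.load(f)) else 1)
```

Wiring such a script into a pipeline stage turns the human checklist item into an automated, non-skippable quality gate.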
Summary of Deliverables¶
Across all 18 cycles, this documentation will provide:
- Strategy & Philosophy
  - Testing pyramid and philosophy
  - TDD and BDD approaches
  - Project structure and organization
  - Test frameworks and tooling
- Unit Testing
  - MSTest framework guide
  - Test doubles (mocks, stubs, fakes)
  - Domain model test suites
  - Validator and business rule tests
- Integration Testing
  - WebApplicationFactory patterns
  - TestServer setup
  - Database integration tests
  - Service replacement strategies
- Acceptance Testing (BDD)
  - Reqnroll/SpecFlow framework
  - Gherkin feature files (all ATP workflows)
  - Step definitions and hooks
  - Living documentation
- Architecture & Compliance
  - NetArchTest architectural rules
  - Compliance validation (GDPR, HIPAA, SOC 2)
  - Retention and legal hold testing
  - Regulatory evidence generation
- Contract Testing
  - REST API contracts (Pact, OpenAPI)
  - Message contracts (AsyncAPI)
  - Schema evolution compatibility
  - Consumer/producer validation
- Persistence Testing
  - NHibernate mapping tests (PersistenceSpecification)
  - Repository integration tests
  - Row-Level Security validation
  - Query and specification tests
- Messaging Testing
  - MassTransit test harness
  - Consumer and publisher tests
  - Saga state machine tests
  - Outbox and inbox pattern validation
- Security Testing
  - Authentication and authorization tests
  - Vulnerability scanning (OWASP, Trivy)
  - Penetration testing procedures
  - Cryptography validation
- Performance Testing
  - k6 load testing scripts
  - NBomber .NET performance tests
  - Stress, spike, and soak tests
  - SLO validation and monitoring
- Chaos & Resilience
  - Chaos testing scenarios
  - Fault injection tests
  - Retry and circuit breaker validation
  - Disaster recovery tests
- End-to-End Testing
  - Complete user journey tests
  - Multi-service workflows
  - Staging environment tests
  - Real dependency validation
- Test Infrastructure
  - Test data management (builders, fixtures)
  - Database seeding and cleanup
  - Code coverage measurement (Coverlet)
  - Quality metrics (SonarQube)
- CI/CD Integration
  - Azure Pipeline test stages
  - Parallel test execution
  - Service containers
  - Coverage and quality gates
- Maintenance & Quality
  - Flaky test detection and fixing
  - Test code quality standards
  - Testing checklists (pre-commit, pre-merge, pre-deploy)
  - Best practices and anti-patterns
Next Steps¶
- Review & Approval: Validate cycle plan with development and QA teams
- Cycle 1 Generation: Begin content generation for testing strategy overview
- Test Project Setup: Create all test project structures
- Test Templates: Develop test class and method templates
- CI Pipeline Integration: Configure test stages in Azure Pipelines
- Coverage Baseline: Measure current coverage and set improvement targets
Related Documentation¶
- Quality Gates: Quality gate thresholds and enforcement
- Azure Pipelines: CI/CD pipeline with test stages
- Architecture Overview: Architectural principles tested
- Data Architecture: Data layer testing considerations
- Persistence: Persistence layer testing patterns
- Messaging: Messaging layer testing strategies
- Zero-Trust Architecture: Security testing requirements
- Chaos Drills: Chaos engineering test scenarios
- Database Migrations: Migration testing
- Observability: Observability validation in tests
This documentation plan covers the complete testing strategy for ATP, from unit and integration testing to performance, security, chaos, and compliance testing, fully leveraging ConnectSoft testing patterns (MSTest, Reqnroll, NetArchTest, k6, Coverlet) and ensuring quality, reliability, and compliance at every stage.