Pragmatic Testing Strategy Guide

Python & PySide6: Effective Tests Without Overhead


Philosophy: High-Value, Low-Overhead Testing

"Perfect is the enemy of good. Test what matters, not what's easy to test."

Goals of this guide:

  • ✅ Tests that find real bugs (memory leaks, race conditions, performance)
  • ✅ High ROI (return on investment) per test written
  • ✅ Minimal mocking - prefer realistic integration tests
  • ✅ Automate the critical paths, not every getter

NOT the goal:

  • ❌ 100% code coverage as an end in itself
  • ❌ Tests for trivial getters/setters
  • ❌ Excessive mocking that leads to fragile tests
  • ❌ Tests that cost more maintenance than the code itself


Table of Contents

  1. What to Test? The 80/20 Approach
  2. Memory Leak Detection
  3. Thread-Safety Testing
  4. Signal/Slot Integration Tests
  5. Performance Testing
  6. Critical Path Testing
  7. Test Organization
  8. CI/CD Integration

What to Test? The 80/20 Approach

✅ Priority 1: MUST be tested

1. Business Logic Without GUI Dependencies

# models/calculator.py
from dataclasses import dataclass

@dataclass
class Item:
    price: float
    quantity: int

class Calculator:
    """Pure business logic - EASY to test."""

    def calculate_total(self, items: list[Item]) -> float:
        return sum(item.price * item.quantity for item in items)

    def apply_discount(self, total: float, discount_percent: float) -> float:
        if not 0 <= discount_percent <= 100:
            raise ValueError("Discount must be 0-100")
        return total * (1 - discount_percent / 100)

Test:

# tests/test_calculator.py
import pytest
from myapp.models.calculator import Calculator, Item

def test_calculate_total():
    """Test ohne jegliches Mocking - schnell und zuverlässig."""
    calc = Calculator()
    items = [
        Item(price=10.0, quantity=2),
        Item(price=5.0, quantity=3),
    ]
    assert calc.calculate_total(items) == 35.0

def test_apply_discount_validates_range():
    """Test edge cases - this finds real bugs."""
    calc = Calculator()

    with pytest.raises(ValueError):
        calc.apply_discount(100.0, -10)  # Negative

    with pytest.raises(ValueError):
        calc.apply_discount(100.0, 150)  # Over 100

2. Data Persistence & Serialization

# models/settings.py
import json
from pathlib import Path

class Settings:
    """Critical: data loss is unacceptable."""

    def __init__(self, version: str, preferences: dict):
        self.version = version
        self.preferences = preferences

    def save_to_file(self, path: Path):
        data = {
            'version': self.version,
            'preferences': self.preferences,
        }
        with open(path, 'w') as f:
            json.dump(data, f, indent=2)

    @classmethod
    def from_dict(cls, data: dict) -> 'Settings':
        return cls(version=data['version'], preferences=data['preferences'])

    @classmethod
    def load_from_file(cls, path: Path) -> 'Settings':
        with open(path, 'r') as f:
            data = json.load(f)
        return cls.from_dict(data)

Test:

def test_settings_roundtrip(tmp_path):
    """Kritischer Test: Daten dürfen nicht verloren gehen."""
    settings = Settings(version='1.0', preferences={'theme': 'dark'})

    file_path = tmp_path / "settings.json"
    settings.save_to_file(file_path)

    loaded = Settings.load_from_file(file_path)

    assert loaded.version == settings.version
    assert loaded.preferences == settings.preferences

def test_settings_handles_corrupted_file(tmp_path):
    """Edge Case: Kaputte Datei sollte nicht crashen."""
    file_path = tmp_path / "corrupted.json"
    file_path.write_text("{ invalid json")

    with pytest.raises(json.JSONDecodeError):
        Settings.load_from_file(file_path)

3. Algorithms & Calculations

def test_complex_calculation():
    """Komplexe Logik muss korrekt sein - IMMER testen."""
    result = calculate_project_cost(
        hours=100,
        rate=50,
        materials=[Material(cost=200), Material(cost=150)],
        tax_rate=0.19
    )

    expected = (100 * 50 + 350) * 1.19
    assert result == pytest.approx(expected)
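For the test above to be meaningful, calculate_project_cost must exist somewhere. A minimal sketch of what it might look like - the Material type and the flat tax-on-everything rule are assumptions for illustration, not actual project code:

```python
from dataclasses import dataclass

@dataclass
class Material:
    cost: float

def calculate_project_cost(hours: float, rate: float,
                           materials: list[Material],
                           tax_rate: float) -> float:
    """Labor plus material costs, taxed as a whole (assumed business rule)."""
    subtotal = hours * rate + sum(m.cost for m in materials)
    return subtotal * (1 + tax_rate)
```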

⚠️ Priority 2: SHOULD be tested

4. State Machines & Workflows

class DocumentState:
    """State transitions are error-prone."""

    def __init__(self):
        self._state = 'draft'

    @property
    def state(self) -> str:
        return self._state

    def publish(self):
        if self._state != 'draft':
            raise InvalidStateTransition()
        self._state = 'published'

Test:

def test_state_transitions():
    """State machines are bug-prone - test them!"""
    doc = DocumentState()

    assert doc.state == 'draft'

    doc.publish()
    assert doc.state == 'published'

    # Invalid transition
    with pytest.raises(InvalidStateTransition):
        doc.publish()  # Already published

5. Data Validation

def test_email_validation():
    """Test validation logic, but don't overdo it."""
    validator = EmailValidator()

    # Happy path
    assert validator.is_valid("user@example.com")

    # Important edge cases
    assert not validator.is_valid("invalid")
    assert not validator.is_valid("@example.com")
    assert not validator.is_valid("")
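EmailValidator is not defined in this guide; a deliberately simple regex-based sketch that satisfies the test above could look like this (real-world email validation is far messier than one regex - this is an assumption for illustration):

```python
import re

class EmailValidator:
    """Deliberately simple validator - a sketch, not RFC 5322."""

    # One non-space local part, '@', then a domain with at least one dot
    _PATTERN = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

    def is_valid(self, email: str) -> bool:
        return bool(self._PATTERN.match(email))
```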

❌ Priority 3: Do NOT test (or very low priority)

6. Trivial Getters/Setters

# ❌ Do NOT test - waste of time
class User:
    def __init__(self, name: str):
        self._name = name

    @property
    def name(self) -> str:
        return self._name  # Trivial, no bug risk

# def test_user_name():  # ← Superfluous!
#     user = User("Alice")
#     assert user.name == "Alice"

7. Framework/Library Code

# ❌ Do NOT test Qt's own functionality
def test_qpushbutton_clicked():  # ← Pointless!
    button = QPushButton()
    button.clicked.emit()  # Qt is already tested

8. GUI-Layout Code

# ❌ Do not test (except for critical custom widgets)
class MyWidget(QWidget):
    def __init__(self):
        super().__init__()
        layout = QVBoxLayout()
        layout.addWidget(QPushButton("Click"))  # Layout details are unimportant
        self.setLayout(layout)


Memory Leak Detection

Why this is critical for PySide6:

  • Qt's C++ objects plus Python's garbage collector make memory management complex
  • Signal/slot connections can cause leaks
  • Thread workers without proper cleanup leak resources

Memory Leak Test Pattern

# conftest.py
import gc
import pytest
from PySide6.QtCore import QObject

@pytest.fixture
def track_qobjects():
    """Tracked QObject creation/deletion."""
    gc.collect()
    before = _count_qobjects()

    yield

    gc.collect()
    after = _count_qobjects()

    leaked = after - before
    if leaked > 0:
        pytest.fail(f"Memory leak detected: {leaked} QObjects not cleaned up")

def _count_qobjects():
    """Count active QObjects."""
    count = 0
    for obj in gc.get_objects():
        try:
            if isinstance(obj, QObject):
                count += 1
        except ReferenceError:
            pass
    return count

Usage:

def test_worker_cleanup(track_qobjects, qtbot):
    """Verify workers are cleaned up properly."""

    # Create worker
    worker = DataWorker()
    thread = QThread()

    worker.moveToThread(thread)

    # Setup: run the work in the worker thread, then tear everything down
    thread.started.connect(worker.do_work)
    worker.finished.connect(thread.quit)
    worker.finished.connect(worker.deleteLater)
    thread.finished.connect(thread.deleteLater)

    # Run and block until the thread has finished
    with qtbot.waitSignal(thread.finished, timeout=5000):
        thread.start()

    # Drop references so garbage collection can run
    worker = None
    thread = None

    # track_qobjects fixture will check for leaks
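The same gc-based counting pattern works for any Python class, not just QObject. A Qt-free sketch that doubles as a sanity check for the fixture logic (Resource and count_instances are illustrative names, not part of the guide's codebase):

```python
import gc

class Resource:
    """Stand-in for any object whose lifetime we want to track."""
    pass

def count_instances(cls) -> int:
    """Count live instances of cls, mirroring _count_qobjects above."""
    gc.collect()
    return sum(1 for obj in gc.get_objects() if isinstance(obj, cls))

before = count_instances(Resource)
kept = Resource()      # this reference keeps the object alive
Resource()             # unreferenced: collected right away in CPython
after = count_instances(Resource)
assert after - before == 1  # only the referenced instance remains
```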

Signal Connection Leak Test

def test_signal_disconnection(track_qobjects):
    """Verify signals are disconnected when objects are destroyed."""

    emitter = SignalEmitter()
    receiver = SignalReceiver()

    # Connect
    emitter.signal.connect(receiver.slot)

    # Delete receiver
    receiver.deleteLater()
    QApplication.processEvents()

    # Emit should not crash
    emitter.signal.emit()

    # Force cleanup
    emitter = None
    receiver = None

Memory Usage Monitoring

import tracemalloc
import pytest

@pytest.fixture
def memory_profiler():
    """Profile memory usage of test."""
    tracemalloc.start()

    snapshot_before = tracemalloc.take_snapshot()

    yield

    snapshot_after = tracemalloc.take_snapshot()

    top_stats = snapshot_after.compare_to(snapshot_before, 'lineno')

    # Print top 10 memory allocations
    print("\n[ Top 10 Memory Allocations ]")
    for stat in top_stats[:10]:
        print(stat)

    tracemalloc.stop()

def test_large_data_processing(memory_profiler):
    """Verify no excessive memory allocation."""
    processor = DataProcessor()

    # Process large dataset
    processor.process(range(100000))

    # Memory profiler will show allocations

Thread-Safety Testing

Critical for PySide6:

  • QThread vs. Python threading
  • Signal/slot ConnectionType
  • Shared data access

Race Condition Detection

from threading import Thread
import pytest

def test_thread_safe_data_access():
    """Verify thread-safe access to shared data."""

    shared_data = ThreadSafeData()
    results = []
    errors = []

    def writer(thread_id):
        try:
            for i in range(100):
                shared_data.set_value(f"thread_{thread_id}", i)
        except Exception as e:
            errors.append(e)

    def reader():
        try:
            for _ in range(100):
                _ = shared_data.get_all_values()
        except Exception as e:
            errors.append(e)

    # Create multiple threads
    threads = [
        Thread(target=writer, args=(i,)) for i in range(5)
    ] + [
        Thread(target=reader) for _ in range(5)
    ]

    # Start all
    for t in threads:
        t.start()

    # Wait for completion
    for t in threads:
        t.join()

    # Verify no race conditions
    assert len(errors) == 0, f"Race conditions detected: {errors}"
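The ThreadSafeData class used above is not shown in this guide. A minimal lock-based sketch that would pass the test (names and the copy-on-read choice are assumptions):

```python
import threading

class ThreadSafeData:
    """Guards a dict with a lock so concurrent readers/writers don't race."""

    def __init__(self):
        self._lock = threading.Lock()
        self._values: dict[str, int] = {}

    def set_value(self, key: str, value: int) -> None:
        with self._lock:
            self._values[key] = value

    def get_all_values(self) -> dict[str, int]:
        with self._lock:
            # Return a copy so callers can't mutate shared state unlocked
            return dict(self._values)
```

Returning a copy in get_all_values is a deliberate trade-off: iteration over the result never races with writers, at the cost of an allocation per read.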

QThread Signal Safety Test

def test_signal_thread_safety(qtbot):
    """Verify signals work correctly across threads."""

    class Worker(QThread):
        result = Signal(int)

        def run(self):
            for i in range(100):
                self.result.emit(i)

    worker = Worker()
    received = []

    def on_result(value):
        received.append(value)

    # QueuedConnection ensures thread-safety
    worker.result.connect(
        on_result,
        Qt.ConnectionType.QueuedConnection
    )

    worker.start()
    worker.wait(5000)

    # Verify all signals received
    assert len(received) == 100
    assert received == list(range(100))

Deadlock Detection

import time
import pytest

def test_no_deadlock_in_blocking_connection():
    """Verify BlockingQueuedConnection doesn't deadlock."""

    class Emitter(QObject):
        signal = Signal()

    class Receiver(QObject):
        @Slot()
        def slot(self):
            time.sleep(0.1)  # Simulate work

    emitter = Emitter()
    receiver = Receiver()

    # Move receiver to different thread
    thread = QThread()
    receiver.moveToThread(thread)
    thread.start()

    # BlockingQueuedConnection
    emitter.signal.connect(
        receiver.slot,
        Qt.ConnectionType.BlockingQueuedConnection
    )

    # This should NOT deadlock (timeout = 2 seconds)
    start = time.time()
    emitter.signal.emit()
    duration = time.time() - start

    assert duration < 2.0, "Potential deadlock detected"

    thread.quit()
    thread.wait()

Signal/Slot Integration Tests

Pragmatic approach:

  • Test the signal/slot INTEGRATION, not the mechanics
  • Focus: does the data flow work?

Signal Spy Pattern (pytest-qt)

def test_data_flow_through_signals(qtbot):
    """Test kompletter Signal-Flow ohne Mocking."""

    # Real objects, no mocks
    model = DataModel()
    view = DataView()
    presenter = DataPresenter(model, view)

    # Spy on view updates
    with qtbot.waitSignal(view.display_updated, timeout=1000) as blocker:
        model.load_data("test_data")

    # Verify data arrived at view
    assert blocker.args[0] == "test_data"
    assert view.displayed_data is not None

Multi-Signal Chain Test

def test_signal_chain_integrity(qtbot):
    """Test dass Signal-Kette vollständig funktioniert."""

    # Setup pipeline: Loader → Processor → Display
    loader = DataLoader()
    processor = DataProcessor()
    display = DataDisplay()

    # Connect chain
    loader.data_loaded.connect(processor.process)
    processor.data_processed.connect(display.show)

    # Track completion
    with qtbot.waitSignal(display.data_shown, timeout=2000):
        loader.load("test.json")

    # Verify end result
    assert display.current_data is not None

Signal Emissions Count Test

def test_signal_not_emitted_excessively(qtbot):
    """Verify signals aren't spammed."""

    model = DataModel()
    count = [0]

    def count_emissions():
        count[0] += 1

    model.data_changed.connect(count_emissions)

    # Update 100 items
    model.update_items(range(100))

    # Should batch updates, not emit 100 times
    assert count[0] <= 10, f"Too many emissions: {count[0]}"
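One way update_items can satisfy this test is to apply all changes before notifying. A Qt-free sketch of the batching idea, with a plain callback list standing in for the data_changed signal (the class and method names are illustrative):

```python
class BatchingModel:
    """Collects all changes first, then notifies listeners exactly once."""

    def __init__(self):
        self._items = []
        self._listeners = []

    def on_data_changed(self, callback):
        """Register a listener (stands in for signal.connect)."""
        self._listeners.append(callback)

    def update_items(self, items):
        self._items = list(items)   # apply the whole batch first...
        for cb in self._listeners:  # ...then emit a single notification
            cb()

model = BatchingModel()
emissions = []
model.on_data_changed(lambda: emissions.append(1))
model.update_items(range(100))
assert len(emissions) == 1  # one notification for 100 updates
```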

Performance Testing

Pragmatic: don't measure every function, only the bottlenecks.

Performance Regression Test

import time
import pytest

@pytest.fixture
def performance_baseline():
    """Define acceptable performance baseline."""
    return {
        'load_large_file': 2.0,      # Max 2 seconds
        'process_1000_items': 0.5,   # Max 0.5 seconds
        'render_view': 0.1,          # Max 100ms
    }

def test_load_large_file_performance(performance_baseline):
    """Verify loading doesn't regress."""

    loader = DataLoader()

    start = time.perf_counter()
    loader.load("large_file.json")  # 10 MB file
    duration = time.perf_counter() - start

    baseline = performance_baseline['load_large_file']
    assert duration < baseline, f"Loading too slow: {duration:.2f}s (max {baseline}s)"

Memory Usage Benchmark

import psutil
import os

def test_memory_usage_acceptable():
    """Verify memory usage stays reasonable."""

    process = psutil.Process(os.getpid())

    # Baseline
    mem_before = process.memory_info().rss / 1024 / 1024  # MB

    # Operation
    processor = DataProcessor()
    processor.process_large_dataset(range(100000))

    # Check
    mem_after = process.memory_info().rss / 1024 / 1024  # MB
    mem_increase = mem_after - mem_before

    # Should not use more than 100 MB
    assert mem_increase < 100, f"Memory increase too high: {mem_increase:.1f} MB"

Pytest Benchmark Plugin

# pip install pytest-benchmark

def test_calculation_performance(benchmark):
    """Benchmark critical calculation."""

    calculator = ComplexCalculator()

    result = benchmark(calculator.compute, data=range(1000))

    # pytest-benchmark automatically:
    # - runs multiple iterations
    # - calculates statistics (min/max/mean/stddev)
    # - compares with previous runs (--benchmark-compare)
    # - can fail on regression (--benchmark-compare-fail)

Critical Path Testing

Concept: test the most common user workflows end to end.

Happy Path Integration Test

def test_complete_user_workflow(qtbot):
    """Test a typical user workflow without mocking."""

    # Setup real application
    app_controller = ApplicationController()
    main_window = MainWindow(app_controller)

    # 1. User opens a project
    with qtbot.waitSignal(app_controller.project_loaded):
        app_controller.open_project("test_project")

    assert main_window.current_project is not None

    # 2. User edits data
    main_window.edit_widget.set_value("new_value")

    # 3. User saves
    with qtbot.waitSignal(app_controller.project_saved):
        app_controller.save_project()

    # 4. Verify persistence
    app_controller.close_project()
    app_controller.open_project("test_project")

    assert main_window.edit_widget.get_value() == "new_value"

Error Path Test

def test_handle_corrupted_project(qtbot):
    """Test dass Fehler nicht zum Crash führen."""

    app_controller = ApplicationController()

    # Try to open corrupted project
    with qtbot.waitSignal(app_controller.error_occurred) as blocker:
        app_controller.open_project("corrupted_project")

    # Verify error was handled gracefully
    error_msg = blocker.args[0]
    assert "corrupted" in error_msg.lower()

    # App should still be usable
    app_controller.create_new_project()
    assert app_controller.current_project is not None

Test Organization

Structure by Value

tests/
├── critical/                   # MUST run on every commit
│   ├── test_data_integrity.py
│   ├── test_memory_leaks.py
│   └── test_critical_paths.py
├── integration/                # Runs before release
│   ├── test_workflows.py
│   ├── test_signal_chains.py
│   └── test_thread_safety.py
├── performance/                # Runs nightly
│   ├── test_benchmarks.py
│   └── test_load_times.py
└── unit/                       # Fast tests
    ├── models/
    │   └── test_calculator.py
    └── utils/
        └── test_validators.py

Pytest Markers

# conftest.py
def pytest_configure(config):
    config.addinivalue_line("markers", "critical: critical tests that must pass")
    config.addinivalue_line("markers", "slow: slow running tests")
    config.addinivalue_line("markers", "memory: memory leak tests")
    config.addinivalue_line("markers", "integration: integration tests")

# test_data_integrity.py
import pytest

@pytest.mark.critical
def test_data_not_lost_on_crash():
    """Kritischer Test: Daten dürfen nicht verloren gehen."""
    # ...

@pytest.mark.slow
@pytest.mark.memory
def test_no_memory_leak_after_1000_operations():
    """Langsamer Memory Test."""
    # ...

Running them:

# Only critical tests
pytest -m critical

# Everything except slow tests
pytest -m "not slow"

# Only memory tests
pytest -m memory

# Integration + critical
pytest -m "integration or critical"


CI/CD Integration

GitHub Actions Example

# .github/workflows/test.yml
name: Tests

on: [push, pull_request]

jobs:
  critical-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install -e ".[dev]"

      - name: Run critical tests
        run: |
          pytest -m critical --maxfail=1 -v

      - name: Check memory leaks
        run: |
          pytest -m memory -v

  integration-tests:
    runs-on: ubuntu-latest
    needs: critical-tests
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4

      - name: Install dependencies
        run: pip install -e ".[dev]"

      - name: Run integration tests
        run: |
          pytest -m integration -v

  performance-tests:
    runs-on: ubuntu-latest
    # Only on main branch
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4

      - name: Run benchmarks
        run: |
          pytest tests/performance/ --benchmark-only

      - name: Store benchmark results
        uses: actions/upload-artifact@v3
        with:
          name: benchmarks
          path: .benchmarks/

Pre-commit Hook

#!/bin/bash
# .git/hooks/pre-commit
# Run critical tests before commit

pytest -m critical --maxfail=1 -q

if [ $? -ne 0 ]; then
    echo "❌ Critical tests failed. Commit aborted."
    exit 1
fi

echo "✅ Critical tests passed."

Practical Test Examples

Test Suite for a PySide6 Model

# tests/models/test_project_model.py
import pytest
from pathlib import Path

class TestProjectModel:
    """Pragmatische Tests für ProjectModel."""

    @pytest.mark.critical
    def test_project_save_and_load(self, tmp_path):
        """Kritisch: Projekt darf nicht verloren gehen."""
        model = ProjectModel()
        model.name = "Test Project"
        model.add_item("Item 1")
        model.add_item("Item 2")

        file_path = tmp_path / "project.json"
        model.save(file_path)

        loaded = ProjectModel.load(file_path)

        assert loaded.name == "Test Project"
        assert len(loaded.items) == 2

    @pytest.mark.critical
    def test_project_handles_invalid_file(self, tmp_path):
        """Kritisch: Crash bei kaputten Dateien vermeiden."""
        file_path = tmp_path / "invalid.json"
        file_path.write_text("{ invalid json }")

        with pytest.raises(InvalidProjectFileError):
            ProjectModel.load(file_path)

    def test_add_duplicate_item_raises_error(self):
        """Business Rule: Keine Duplikate."""
        model = ProjectModel()
        model.add_item("Item 1")

        with pytest.raises(DuplicateItemError):
            model.add_item("Item 1")

    @pytest.mark.memory
    def test_large_project_memory_usage(self, track_qobjects):
        """Memory: Große Projekte sollen keinen Leak haben."""
        model = ProjectModel()

        # Add 10000 items
        for i in range(10000):
            model.add_item(f"Item {i}")

        # Clear
        model.clear()

        # track_qobjects will check for leaks

Test Suite for a Controller

# tests/controllers/test_app_controller.py
import pytest

class TestAppController:
    """Integration Tests für AppController."""

    @pytest.mark.integration
    def test_complete_workflow(self, qtbot, tmp_path):
        """Test vollständigen Workflow."""
        controller = AppController()

        # Create project
        project_path = tmp_path / "test.proj"
        with qtbot.waitSignal(controller.project_created):
            controller.create_project(project_path)

        # Add data
        controller.add_item("Test Item")

        # Save
        with qtbot.waitSignal(controller.project_saved):
            controller.save()

        # Close
        controller.close_project()

        # Reopen
        with qtbot.waitSignal(controller.project_loaded):
            controller.open_project(project_path)

        # Verify persistence
        assert controller.current_project is not None
        assert "Test Item" in controller.current_project.items

    @pytest.mark.critical
    def test_auto_save_on_crash(self, qtbot, tmp_path):
        """Kritisch: Auto-save bei Crash."""
        controller = AppController(auto_save_interval=100)

        project_path = tmp_path / "test.proj"
        controller.create_project(project_path)
        controller.add_item("Important Data")

        # Wait for auto-save
        qtbot.wait(200)

        # Simulate crash (no explicit save)
        controller = None

        # Verify data survived
        backup_path = project_path.with_suffix('.proj.backup')
        assert backup_path.exists()

        recovered = ProjectModel.load(backup_path)
        assert "Important Data" in recovered.items

Best Practices Summary

DO ✅

# ✅ Test Business Logic without GUI
def test_calculation():
    result = calculate_discount(100, 10)
    assert result == 90

# ✅ Test Data Integrity
def test_save_load_roundtrip():
    data = save_and_load(test_data)
    assert data == test_data

# ✅ Test Critical Paths End-to-End
def test_complete_user_workflow():
    # No mocking, real integration
    pass

# ✅ Test Memory Leaks
def test_no_memory_leak(track_qobjects):
    create_and_destroy_1000_widgets()
    # Fixture checks for leaks

# ✅ Test Thread Safety
def test_concurrent_access():
    threads = [Thread(target=access_data) for _ in range(10)]
    # Verify no race conditions

# ✅ Test Performance Regressions
def test_load_time_acceptable():
    duration = measure(load_large_file)
    assert duration < MAX_ACCEPTABLE

DON'T ❌

# ❌ Don't test trivial code
def test_getter():  # Waste of time
    obj.value = 5
    assert obj.value == 5

# ❌ Don't test framework code
def test_qpushbutton_works():  # Qt is tested
    button = QPushButton()
    button.click()

# ❌ Don't mock everything
def test_with_100_mocks():  # Fragile
    mock1 = Mock()
    mock2 = Mock()
    # ... 98 more mocks
    # Test tells you nothing about reality

# ❌ Don't test implementation details
def test_internal_private_method():  # Breaks on refactor
    obj._internal_method()

# ❌ Don't aim for 100% coverage blindly
def test_every_single_line():  # Low ROI
    # Testing for coverage metrics, not value

Test Pyramid (adapted)

         ┌─────────────┐
         │   Manual    │  ← Only critical workflows
         │   Testing   │
         └─────────────┘
       ┌──────────────┐
       │ Integration  │     ← Signal chains, workflows
       │    Tests     │        (20% of tests)
       └──────────────┘
     ┌──────────────────┐
     │   Critical Unit  │   ← Business logic, memory
     │      Tests       │      (60% of tests)
     └──────────────────┘
  ┌────────────────────────┐
  │ Infrastructure Tests   │  ← Setup/config tests
  │   (CI/CD, Fixtures)    │     (20% of tests)
  └────────────────────────┘

Tools & Setup

Essential Tools

# pyproject.toml
[project.optional-dependencies]
test = [
    "pytest>=7.0",
    "pytest-qt>=4.2.0",       # PySide6/PyQt testing
    "pytest-cov>=4.0",        # Coverage
    "pytest-benchmark>=4.0",  # Performance testing
    "pytest-xdist>=3.0",      # Parallel test execution
    "pytest-timeout>=2.0",    # Prevent hanging tests
]

pytest.ini Configuration

# pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*

# Markers
markers =
    critical: critical tests that must always pass
    slow: tests that take >1 second
    memory: memory leak detection tests
    integration: integration tests
    performance: performance benchmarks

# Coverage
addopts = 
    --strict-markers
    --tb=short
    --cov=src/myapp
    --cov-report=term-missing
    --cov-report=html
    -v

# Timeout
timeout = 300
timeout_method = thread

# Parallel execution
# addopts = -n auto  # Uncomment for parallel tests

Running Tests

# All tests
pytest

# Only critical
pytest -m critical

# Fast feedback loop
pytest -m "critical and not slow" -x

# With coverage
pytest --cov

# Parallel execution
pytest -n auto

# Watch mode (requires pytest-watch)
ptw -- -m critical

Conclusion: The Pragmatic Testing Mindset

Questions to ask yourself:

  1. Would this test find a real bug?
     • Yes → write it
     • No → skip it

  2. How likely is a bug here?
     • High (data persistence, threading) → test it
     • Low (getters/setters) → don't test it

  3. How expensive would the bug be in production?
     • Data loss → critical test
     • UI layout → manual testing is enough

  4. How often does this code change?
     • Stable (business logic) → test it
     • Frequently (UI tweaks) → test sparingly

  5. Can I automate this meaningfully?
     • Yes → automated test
     • No → manual test plan

Goal: 80% confidence with 20% effort 🎯


References

  • pytest-qt Documentation: https://pytest-qt.readthedocs.io/
  • pytest Documentation: https://docs.pytest.org/
  • Python Testing Best Practices: https://realpython.com/python-testing/
  • Qt Testing Guide: https://doc.qt.io/qt-6/testing.html