# Logo Detection Test Framework
A testing framework for evaluating logo detection accuracy using DETR (DEtection TRansformer) and CLIP (Contrastive Language-Image Pre-training) models.
## Overview
This project provides tools to:
- Detect logos in images using a fine-tuned DETR model
- Match detected logos against reference images using CLIP embeddings
- Evaluate detection accuracy with precision, recall, and F1 metrics
## Architecture
The system uses a two-stage pipeline:
1. **DETR** - Identifies potential logo regions (bounding boxes) in images
2. **CLIP** - Extracts feature embeddings for each detected region and compares against reference logos
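The second stage boils down to comparing embedding vectors. As a minimal sketch (using plain NumPy arrays in place of actual CLIP tensors; the function names and shapes here are illustrative, not the project's API):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(region_emb: np.ndarray,
               reference_embs: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the reference logo whose embedding is closest to a detected region."""
    scores = {name: cosine_similarity(region_emb, emb)
              for name, emb in reference_embs.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]
```

In the real pipeline, each DETR bounding box is cropped from the image, embedded with CLIP, and compared against the reference set in exactly this manner.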
## Installation
Requires Python 3.12+. Uses [uv](https://github.com/astral-sh/uv) for package management.
```bash
# Install dependencies
uv sync
# Or using pip
pip install -r requirements.txt
```
## Usage
### Prepare Test Data
The test framework requires the **LogoDet-3K** dataset. Download it and place it in the project directory:
```
logo_test/
├── LogoDet-3K/              # Dataset directory (required)
│   ├── Clothes/             # Category directories
│   │   ├── Adidas/          # Brand directories with images + XML annotations
│   │   ├── Nike/
│   │   └── ...
│   ├── Electronic/
│   ├── Food/
│   └── ...
```
The dataset should contain images with corresponding Pascal VOC format XML annotation files that define logo bounding boxes.
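Pascal VOC annotations use a standard XML layout (`object`/`bndbox` with `xmin`/`ymin`/`xmax`/`ymax`). A minimal sketch of extracting boxes from one annotation, using only the standard library (standard VOC tag names assumed, not verified against every LogoDet-3K file):

```python
import xml.etree.ElementTree as ET

def parse_voc_boxes(xml_text: str) -> list[tuple[str, int, int, int, int]]:
    """Extract (name, xmin, ymin, xmax, ymax) tuples from a Pascal VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes
```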
Then run the preparation script:
```bash
uv run python prepare_test_data.py
```
This script:
1. Scans `LogoDet-3K/` for images and XML annotation files
2. Extracts cropped logo regions using bounding box data → saves to `reference_logos/`
3. Copies full images → saves to `test_images/`
4. Creates `test_data_mapping.db` SQLite database with ground truth mappings
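The ground-truth mapping is ordinary SQLite, so it can be inspected with any SQLite client. As an illustration of the idea (the table and column names below are hypothetical; the actual schema of `test_data_mapping.db` may differ):

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ground_truth (
        test_image  TEXT,   -- filename under test_images/
        logo_name   TEXT,   -- brand name from the dataset
        is_positive INTEGER -- 1 if the logo appears in the image
    )
""")
conn.execute("INSERT INTO ground_truth VALUES (?, ?, ?)",
             ("img_0001.jpg", "Nike", 1))
conn.commit()

# Look up which logos a test image is expected to contain.
rows = conn.execute(
    "SELECT logo_name FROM ground_truth WHERE test_image = ? AND is_positive = 1",
    ("img_0001.jpg",),
).fetchall()
```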
### Run Detection Tests
```bash
# Basic test with default settings (margin-based matching)
uv run python test_logo_detection.py
# Test with more logos and custom threshold
uv run python test_logo_detection.py -n 20 --threshold 0.75
# Use multi-ref matching method
uv run python test_logo_detection.py --matching-method multi-ref \
    --refs-per-logo 5 --min-matching-refs 2
# Reproducible test with seed
uv run python test_logo_detection.py -n 50 --seed 42
```
### Key Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `-n, --num-logos` | 10 | Number of reference logos to sample |
| `-t, --threshold` | 0.7 | CLIP similarity threshold |
| `-d, --detr-threshold` | 0.5 | DETR detection confidence threshold |
| `--matching-method` | margin | Matching method: `simple`, `margin`, or `multi-ref` |
| `--margin` | 0.05 | Margin over second-best match (margin/multi-ref) |
| `--refs-per-logo` | 3 | Reference images per logo |
| `--min-matching-refs` | 1 | Min refs that must match (multi-ref only) |
| `--use-max-similarity` | False | Use max instead of mean similarity (multi-ref only) |
| `--positive-samples` | 5 | Positive test images per logo |
| `--negative-samples` | 20 | Negative test images per logo |
| `-s, --seed` | None | Random seed for reproducibility |
| `--output-file` | None | Append results summary to file (clean output) |
**Matching Methods:**
- `simple` - Returns all logos above threshold (baseline, most permissive)
- `margin` - Requires margin over second-best match (reduces false positives)
- `multi-ref` - Aggregates scores across multiple reference images per logo
See `--help` for all options.
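The three methods differ only in how a set of similarity scores is turned into a match decision. The following is an illustrative reimplementation of that logic, not the test script's actual code (function names and the score-dict shapes are assumptions):

```python
def match_simple(scores: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Simple: return every logo whose similarity clears the threshold."""
    return [name for name, s in scores.items() if s >= threshold]

def match_margin(scores: dict[str, float], threshold: float = 0.7,
                 margin: float = 0.05) -> list[str]:
    """Margin: accept the best logo only if it beats the runner-up by `margin`."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked or ranked[0][1] < threshold:
        return []
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        return []
    return [ranked[0][0]]

def match_multi_ref(ref_scores: dict[str, list[float]], threshold: float = 0.7,
                    min_matching_refs: int = 1, use_max: bool = False) -> list[str]:
    """Multi-ref: aggregate similarities across several references per logo."""
    matches = []
    for name, scores in ref_scores.items():
        agg = max(scores) if use_max else sum(scores) / len(scores)
        if agg >= threshold and sum(s >= threshold for s in scores) >= min_matching_refs:
            matches.append(name)
    return matches
```

The trade-off is visible directly: with two logos scoring 0.82 and 0.80, `simple` returns both, while `margin` (at the default 0.05) returns neither, trading recall for fewer false positives.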
### Run Comparison Tests
To compare all matching methods with consistent parameters:
```bash
./run_comparison_tests.sh
```
This runs all four matching configurations (simple, margin, multi-ref mean, multi-ref max) and saves clean results to `comparison_results.txt`.
## Project Structure
```
logo_test/
├── logo_detection_detr.py # Core detection library (DetectLogosDETR class)
├── test_logo_detection.py # Test script for accuracy evaluation
├── prepare_test_data.py # Script to prepare test database
├── run_comparison_tests.sh # Script to run all matching methods
├── test_data_mapping.db # SQLite database with ground truth
├── reference_logos/ # Reference logo images (not in git)
├── test_images/ # Test images (not in git)
├── LogoDet-3K/ # Source dataset (not in git)
├── logo_detection_detr_usage.md # API usage guide
└── logo_detection_test_methodology.md # Test methodology documentation
```
## Accuracy Improvement Techniques
The framework implements several techniques to improve detection accuracy:
1. **Non-Maximum Suppression (NMS)** - Removes overlapping duplicate detections
2. **Minimum Box Size Filtering** - Filters out noise from tiny detections
3. **Confidence Threshold Filtering** - Removes low-confidence detections
4. **Multiple Reference Images** - Uses multiple refs per logo for robust matching
5. **Margin-Based Matching** - Requires confidence margin over second-best match
6. **Multi-Ref Matching** - Aggregates similarity scores across references
7. **Embedding Caching** - Caches embeddings to avoid recomputation
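To make technique 1 concrete, here is a minimal greedy NMS sketch over `(x1, y1, x2, y2)` boxes; this illustrates the standard algorithm and is not the library's own implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring boxes, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```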
## Models
The framework uses:
- **DETR**: `Pravallika6/detr-finetuned-logo-detection_v2`
- **CLIP**: `openai/clip-vit-large-patch14`
Models are automatically downloaded from HuggingFace on first run and cached in `~/.cache/huggingface/`.
## Documentation
- [API Usage Guide](logo_detection_detr_usage.md) - How to use the DetectLogosDETR class
- [Test Methodology](logo_detection_test_methodology.md) - Detailed explanation of test framework and tuning
## License
MIT