test: fail loudly when EnrichmentModelTypeEnum gains an unclassified value

The current pattern - hardcoded include-lists in
ONNXModelRunner.has_variable_length_inputs and friends - silently
defaults any unknown model to "fixed-size, simple, single-threaded".
For has_variable_length_inputs that default re-introduces the ORT
mem-pattern leak when the new model is actually variable-length.

Adds test_every_enrichment_model_is_explicitly_classified, which
iterates EnrichmentModelTypeEnum and asserts every member is in one
of two explicit sets the test author maintains.  A new enum value
without classification fails CI with a message naming the classifier
to update.  Also documents the same constraint on the enum itself
with a TODO pointing at the longer-term fix (a co-located MODEL_TRAITS
registry covering all five classifiers, not just this one).

Verified by injecting a fake enum value: the test fails with the
expected pointer to the missing classifier.
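The fake-value injection can be sketched as below; since Python enums with members cannot be subclassed, one way is to rebuild a superset enum via the functional API. The enum members here are illustrative stand-ins, not Frigate's real set:

```python
from enum import Enum

# Stand-in for frigate.embeddings.types.EnrichmentModelTypeEnum.
class EnrichmentModelTypeEnum(str, Enum):
    arcface = "arcface"
    jina_v1 = "jina_v1"

# Rebuild the enum with one extra, deliberately unclassified member.
Extended = Enum(
    "EnrichmentModelTypeEnum",
    [(m.name, m.value) for m in EnrichmentModelTypeEnum] + [("fake", "fake")],
    type=str,
)

VARIABLE_LENGTH = {"jina_v1"}
FIXED_LENGTH = {"arcface"}
classified = VARIABLE_LENGTH | FIXED_LENGTH

# The guard test's assertIn would fail here, naming the unclassified value.
missing = [m.value for m in Extended if m.value not in classified]
```

Running the guard against `Extended` leaves exactly `["fake"]` unclassified, so the test fails with a message naming it.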

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
felalex 2026-05-03 22:24:00 -07:00
parent 62ad2b09f7
commit 6fa6c5a84d
2 changed files with 54 additions and 0 deletions


@@ -7,6 +7,17 @@ class EmbeddingTypeEnum(str, Enum):
class EnrichmentModelTypeEnum(str, Enum):
    # When adding a value, audit every classifier that switches on it:
    # - ONNXModelRunner.has_variable_length_inputs
    # - ONNXModelRunner.is_cpu_complex_model
    # - ONNXModelRunner.is_migraphx_complex_model
    # - ONNXModelRunner.is_concurrent_model
    # - CudaGraphRunner.is_model_supported
    # The default for omission is "fixed-size, simple, single-threaded" - which
    # silently re-introduces the ORT mem-pattern leak if the new model is
    # actually variable-length (Jina/PaddleOCR-class).
    # TODO: replace these scattered include-lists with a single MODEL_TRAITS
    # registry co-located with the enum so adding a value forces classification.
    arcface = "arcface"
    facenet = "facenet"
    jina_v1 = "jina_v1"
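The MODEL_TRAITS registry the TODO describes might look like the following sketch. The names (`ModelTraits`, `MODEL_TRAITS`) and the trait defaults are illustrative assumptions, not code from this commit; the point is that an import-time check forces every enum value to carry an explicit classification for all five classifiers at once:

```python
from dataclasses import dataclass
from enum import Enum

# Stand-in for the real EnrichmentModelTypeEnum.
class EnrichmentModelTypeEnum(str, Enum):
    arcface = "arcface"
    facenet = "facenet"
    jina_v1 = "jina_v1"

# One record per model; a False default is now an explicit, reviewable choice.
@dataclass(frozen=True)
class ModelTraits:
    variable_length: bool = False
    cpu_complex: bool = False
    migraphx_complex: bool = False
    concurrent: bool = False
    cuda_graph_supported: bool = False

MODEL_TRAITS: dict[EnrichmentModelTypeEnum, ModelTraits] = {
    EnrichmentModelTypeEnum.arcface: ModelTraits(),
    EnrichmentModelTypeEnum.facenet: ModelTraits(),
    EnrichmentModelTypeEnum.jina_v1: ModelTraits(variable_length=True),
}

# Fails at import time (not just in CI) if an enum value lacks an entry.
assert set(MODEL_TRAITS) == set(EnrichmentModelTypeEnum), (
    "Unclassified models: "
    f"{[m.value for m in EnrichmentModelTypeEnum if m not in MODEL_TRAITS]}"
)
```

Each classifier then reduces to a lookup, e.g. `MODEL_TRAITS[model].variable_length`, and the scattered include-lists disappear.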


@@ -141,6 +141,49 @@ class TestHasVariableLengthInputs(unittest.TestCase):
            )
        )

    def test_every_enrichment_model_is_explicitly_classified(self):
        """Every EnrichmentModelTypeEnum value must be deliberately classified.

        Adding a new model to the enum without updating has_variable_length_inputs
        silently defaults it to fixed-size (mem_pattern stays on), which
        re-introduces the ORT mmap-plan leak if the new model is actually
        variable-length. This test fails on any unclassified enum value so the
        author is forced to make a deliberate decision.

        TODO: replace this guard with a single MODEL_TRAITS registry co-located
        with EnrichmentModelTypeEnum so adding a value mechanically forces
        classification across every classifier (variable-length, cpu_complex,
        migraphx_complex, concurrent, cuda_graph_supported), not just this one.
        """
        from frigate.detectors.detection_runners import ONNXModelRunner
        from frigate.embeddings.types import EnrichmentModelTypeEnum

        VARIABLE_LENGTH = {
            EnrichmentModelTypeEnum.jina_v1,
            EnrichmentModelTypeEnum.jina_v2,
            EnrichmentModelTypeEnum.paddleocr,
        }
        FIXED_LENGTH = {
            EnrichmentModelTypeEnum.arcface,
            EnrichmentModelTypeEnum.facenet,
            EnrichmentModelTypeEnum.yolov9_license_plate,
        }
        classified = VARIABLE_LENGTH | FIXED_LENGTH

        for member in EnrichmentModelTypeEnum:
            self.assertIn(
                member,
                classified,
                f"{member.value} is not explicitly classified — audit "
                "ONNXModelRunner.has_variable_length_inputs (and the other "
                "classifiers listed in EnrichmentModelTypeEnum's docstring).",
            )
            self.assertEqual(
                ONNXModelRunner.has_variable_length_inputs(member.value),
                member in VARIABLE_LENGTH,
                f"{member.value}: classification disagrees with "
                "has_variable_length_inputs — update one or the other.",
            )


class TestComputeCudaMemLimit(unittest.TestCase):
    @staticmethod