Database Models¶
Welcome to the database models reference for Whombat! This page collects all the database models defined within the Whombat framework, organized to mirror the structure of soundevent.
The models within Whombat correspond closely to those in soundevent and are essentially a SQLAlchemy port. While the core concepts remain consistent, some minor differences do exist.
Data Descriptors¶
Users¶
whombat.models.User¶
Bases: Base
User Model.
Represents a user in the system.
This model stores information about a user, including their email, hashed password, username, full name, and status (active, superuser, verified).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| email | str | | required |
| hashed_password | str | | required |
| username | str | | required |
| id | UUID | | UUID('6a1f9e41-0cc3-4318-b2f4-f3ec7425eac4') |
| name | str \| None | | None |
| is_active | bool | | True |
| is_superuser | bool | | False |
| is_verified | bool | | False |
Notes
This class inherits from SQLAlchemyBaseUserTableUUID (provided by fastapi-users), which defines the id, email, hashed_password, is_active, and is_superuser attributes.
Important: Do not create instances of this class directly. Use the create_user function in the whombat.api.users module instead.
Attributes:
| Name | Type | Description |
|---|---|---|
| notes | list[ForwardRef(Note)] | |
| sound_event_annotation_tags | list[ForwardRef(SoundEventAnnotationTag)] | |
| recordings | list[ForwardRef(Recording)] | |
| recording_owner | list[ForwardRef(RecordingOwner)] | |
| user_runs | list[ForwardRef(UserRun)] | |
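The model can be queried with standard SQLAlchemy; a minimal sketch, assuming an async session like the one used elsewhere in the Whombat API:

```python
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from whombat.models import User


async def get_active_users(session: AsyncSession) -> list[User]:
    """Fetch all active users, ordered by username."""
    stmt = select(User).where(User.is_active.is_(True)).order_by(User.username)
    result = await session.execute(stmt)
    return list(result.scalars().all())
```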
Tags¶
whombat.models.Tag¶
Bases: Base
Tag Model.
Represents a tag with a key-value structure.
Tags are used to categorize and annotate various elements, such as audio clips or sound events. The key-value structure provides a flexible way to organize and manage tags, with the "key" acting as a category or namespace and the "value" representing the specific tag within that category.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| key | str | | required |
| value | str | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| recordings | list[ForwardRef(Recording)] | |
| recording_tags | list[ForwardRef(RecordingTag)] | |
| sound_event_annotations | list[ForwardRef(SoundEventAnnotation)] | |
| sound_event_annotation_tags | list[ForwardRef(SoundEventAnnotationTag)] | |
| clip_annotations | list[ForwardRef(ClipAnnotation)] | |
| clip_annotation_tags | list[ForwardRef(ClipAnnotationTag)] | |
| evaluation_set_tags | list[ForwardRef(EvaluationSetTag)] | |
| annotation_projects | list[ForwardRef(AnnotationProject)] | |
| annotation_project_tags | list[ForwardRef(AnnotationProjectTag)] | |
| sound_event_prediction_tags | list[ForwardRef(SoundEventPredictionTag)] | |
| clip_prediction_tags | list[ForwardRef(ClipPredictionTag)] | |
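To illustrate the key-value structure, a minimal sketch (the keys and values below are hypothetical examples, not a fixed vocabulary):

```python
from whombat.models import Tag

# "key" acts as a category or namespace; "value" is the tag within it.
species_tag = Tag(key="species", value="Myotis myotis")
quality_tag = Tag(key="call_quality", value="good")
```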
Features¶
whombat.models.FeatureName¶
Bases: Base
Feature Name Model.
Represents the name of a feature.
Features are numerical values associated with sound events, clips, or recordings, providing additional information about these objects. This model stores the unique names of those features.
Features can represent various aspects:
- Sound Events: Duration, bandwidth, or other characteristics extracted via deep learning models.
- Clips: Acoustic properties like signal-to-noise ratio or acoustic indices.
- Recordings: Contextual information like temperature, wind speed, or recorder height.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| recordings | list[ForwardRef(RecordingFeature)] | |
| clips | list[ForwardRef(ClipFeature)] | |
| sound_events | list[ForwardRef(SoundEventFeature)] | |
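A minimal sketch of defining feature names for the three kinds of objects listed above (the specific names are illustrative only):

```python
from whombat.models import FeatureName

# One row per distinct feature name; the numeric values themselves live in
# RecordingFeature, ClipFeature, and SoundEventFeature rows.
duration = FeatureName(name="duration")            # sound event characteristic
snr = FeatureName(name="signal_to_noise_ratio")    # clip acoustic property
temperature = FeatureName(name="temperature")      # recording context
```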
Notes¶
whombat.models.Note¶
Bases: Base
Note model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | | required |
| created_by_id | UUID | | required |
| is_issue | bool | | False |
| uuid | UUID | | UUID('adb2e817-734f-4b73-9e66-2dcddf2c92e5') |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| created_by | User | |
| recording | ForwardRef(Recording) \| None | |
| recording_note | ForwardRef(RecordingNote) \| None | |
| sound_event_annotation | ForwardRef(SoundEventAnnotation) \| None | |
| sound_event_annotation_note | ForwardRef(SoundEventAnnotationNote) \| None | |
| clip_annotation | ForwardRef(ClipAnnotation) \| None | |
| clip_annotation_note | ForwardRef(ClipAnnotationNote) \| None | |
Audio Content¶
Recordings¶
whombat.models.Recording¶
Bases: Base
Recording model for recording table.
This model represents the recording table in the database. It contains all the information about a recording.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('08e57db4-b95d-47e2-b282-db993a998523') |
| hash | str | | required |
| path | Path | | required |
| duration | float | | required |
| samplerate | int | | required |
| channels | int | | required |
| date | date \| None | | None |
| time | time \| None | | None |
| latitude | float \| None | | None |
| longitude | float \| None | | None |
| time_expansion | float | | 1.0 |
| rights | str \| None | | None |
| notes | list[Note] | | [] |
| tags | list[Tag] | | [] |
| features | list[ForwardRef(RecordingFeature)] | | [] |
| owners | list[User] | | [] |
| recording_notes | list[ForwardRef(RecordingNote)] | | [] |
| recording_tags | list[ForwardRef(RecordingTag)] | | [] |
| recording_owners | list[ForwardRef(RecordingOwner)] | | [] |
Notes
If the time expansion factor is not 1.0, the duration and samplerate are those of the original recording, not of the expanded recording.
The path of the recording is the path to the recording file relative to the base audio directory. We don't store the absolute path to the recording file in the database, as this may expose sensitive information, and it makes it easier to share datasets between users.
The hash of the recording is used to uniquely identify it. It is computed from the recording file and is used to check whether a recording has already been registered in the database. If the hash of a recording is already in the database, the recording is not registered again.
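A minimal sketch of the relative-path convention described above (the base audio directory here is purely illustrative):

```python
from pathlib import Path

base_audio_dir = Path("/data/audio")  # hypothetical configured base directory
absolute_path = base_audio_dir / "site_a" / "rec_001.wav"

# Only the path relative to the base audio directory is stored in `path`.
relative_path = absolute_path.relative_to(base_audio_dir)
print(relative_path)  # site_a/rec_001.wav
```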
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clips | list[ForwardRef(Clip)] | |
| recording_datasets | list[ForwardRef(DatasetRecording)] | |
whombat.models.RecordingTag¶
whombat.models.RecordingNote¶
whombat.models.RecordingFeature¶
Bases: Base
Recording Feature Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| recording_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| name | AssociationProxy[str] | |
| recording | Recording | |
| feature_name | FeatureName | |
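Because name is an association proxy onto the linked FeatureName, the feature's name can be read directly from the association row; a minimal sketch, assuming the object was loaded from the database:

```python
from whombat.models import RecordingFeature


def describe_feature(feature: RecordingFeature) -> str:
    """Format a loaded recording feature as "name = value"."""
    # `feature.name` proxies to `feature.feature_name.name`.
    return f"{feature.name} = {feature.value}"
```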
whombat.models.RecordingOwner¶
Datasets¶
whombat.models.Dataset¶
Bases: Base
Dataset model for dataset table.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('a5545a91-225e-4fb7-aac6-629ad2a1b0bb') |
| name | str | | required |
| description | str | | required |
| audio_dir | Path | | required |
Notes
The audio_dir attribute is the path to the audio directory of the dataset, i.e. the directory that contains all the recordings of the dataset. Only the path relative to the base audio directory is stored in the database. Note that we should NEVER store absolute paths in the database.
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| recordings | list[Recording] | |
| dataset_recordings | list[ForwardRef(DatasetRecording)] | |
whombat.models.DatasetRecording¶
Bases: Base
Dataset Recording Model.
A dataset recording is a link between a dataset and a recording. It contains the path to the recording within the dataset.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dataset_id | int | | required |
| recording_id | int | | required |
| path | Path | | required |
Notes
The dataset recording model implements a many-to-many relationship between the dataset and recording models, so a recording can be part of multiple datasets. This is useful when a recording is used in multiple studies or deployments: rather than duplicating the recording in the database, the same recording is simply linked to each dataset.
Attributes:
| Name | Type | Description |
|---|---|---|
| dataset | Dataset | |
| recording | Recording | |
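A minimal sketch of using this many-to-many link to list every dataset that contains a given recording (assuming an async session):

```python
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from whombat.models import Dataset, DatasetRecording


async def datasets_for_recording(
    session: AsyncSession,
    recording_id: int,
) -> list[Dataset]:
    """List all datasets that include the recording with the given id."""
    stmt = (
        select(Dataset)
        .join(DatasetRecording, DatasetRecording.dataset_id == Dataset.id)
        .where(DatasetRecording.recording_id == recording_id)
    )
    result = await session.execute(stmt)
    return list(result.scalars().all())
```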
Acoustic Objects¶
Sound Events¶
whombat.models.SoundEvent¶
Bases: Base
Sound Event model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('534e5f5e-203f-469c-8887-98e758df6e10') |
| recording_id | int | | required |
| geometry_type | str | | required |
| geometry | TimeStamp \| TimeInterval \| Point \| LineString \| Polygon \| BoundingBox \| MultiPoint \| MultiLineString \| MultiPolygon | | required |
Notes
The geometry attribute is stored as a JSON string in the database.
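As a rough sketch of what this serialization looks like, assuming soundevent's Pydantic geometry classes (here a BoundingBox whose coordinates are [start_time, low_freq, end_time, high_freq]):

```python
from soundevent import data

# Hypothetical bounding box covering 0.5-0.8 s and 20-40 kHz.
geometry = data.BoundingBox(coordinates=[0.5, 20_000, 0.8, 40_000])

# The geometry type is stored in `geometry_type`, and the geometry itself
# is serialized to a JSON string for storage.
geometry_type = geometry.type              # "BoundingBox"
geometry_json = geometry.model_dump_json()
```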
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| recording | Recording | |
| features | list[ForwardRef(SoundEventFeature)] | |
| sound_event_annotation | ForwardRef(SoundEventAnnotation) \| None | |
| sound_event_prediction | ForwardRef(SoundEventPrediction) \| None | |
whombat.models.SoundEventFeature¶
Bases: Base
Sound Event Feature model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sound_event_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| name | AssociationProxy[str] | |
| sound_event | SoundEvent | |
| feature_name | FeatureName | |
Clips¶
whombat.models.Clip¶
whombat.models.ClipFeature¶
Bases: Base
Clip Feature Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| clip_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| clip | Clip | |
Annotation¶
Sound Event Annotation¶
whombat.models.SoundEventAnnotation¶
Bases: Base
Sound Event Annotation model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('d5a49a1c-f517-4b24-a079-9eecbdd0e698') |
| clip_annotation_id | int | | required |
| created_by_id | int \| None | | required |
| sound_event_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotation | ForwardRef(ClipAnnotation) | |
| created_by | User \| None | |
| sound_event | SoundEvent | |
| tags | list[Tag] | |
| notes | list[Note] | |
| sound_event_annotation_notes | list[ForwardRef(SoundEventAnnotationNote)] | |
| sound_event_annotation_tags | list[ForwardRef(SoundEventAnnotationTag)] | |
whombat.models.SoundEventAnnotationTag¶
whombat.models.SoundEventAnnotationNote¶
Bases: Base
Sound Event Annotation Note Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sound_event_annotation_id | int | | required |
| note_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| sound_event_annotation | SoundEventAnnotation | |
| note | Note | |
Clip Annotation¶
whombat.models.ClipAnnotation¶
Bases: Base
Clip Annotation Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('53e872da-50b2-41ac-a34c-a1727a5ba040') |
| clip_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clip | Clip | |
| sound_events | list[SoundEventAnnotation] | |
| tags | list[Tag] | |
| notes | list[Note] | |
| clip_annotation_notes | list[ForwardRef(ClipAnnotationNote)] | |
| clip_annotation_tags | list[ForwardRef(ClipAnnotationTag)] | |
| annotation_task | ForwardRef(AnnotationTask) | |
| evaluation_sets | list[ForwardRef(EvaluationSet)] | |
| evaluation_set_annotations | list[ForwardRef(EvaluationSetAnnotation)] | |
whombat.models.ClipAnnotationTag¶
whombat.models.ClipAnnotationNote¶
Bases: Base
Clip Annotation Note Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| clip_annotation_id | int | | required |
| note_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotation | ClipAnnotation | |
| note | Note | |
Annotation Task¶
whombat.models.AnnotationTask¶
Bases: Base
Annotation Task model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| annotation_project_id | int | | required |
| clip_id | int | | required |
| clip_annotation_id | int | | required |
| uuid | UUID | | UUID('012641a8-ba13-4b8f-997c-ae534a87b1ad') |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| annotation_project | ForwardRef(AnnotationProject) | |
| clip | Clip | |
| clip_annotation | ClipAnnotation | |
| status_badges | list[ForwardRef(AnnotationStatusBadge)] | |
whombat.models.AnnotationStatusBadge¶
Bases: Base
Annotation status badge model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| annotation_task_id | int | | required |
| user_id | UUID \| None | | required |
| state | AnnotationState | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| annotation_task | AnnotationTask | |
| user | User \| None | |
Annotation Project¶
whombat.models.AnnotationProject¶
Bases: Base
Annotation Project model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('58c4131d-0de3-4c44-a51d-91c515da91ed') |
| name | str | | required |
| description | str | | required |
| annotation_instructions | str \| None | | None |
| tags | list[Tag] | | [] |
| annotation_tasks | list[ForwardRef(AnnotationTask)] | | [] |
| annotation_project_tags | list[ForwardRef(AnnotationProjectTag)] | | [] |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
whombat.models.AnnotationProjectTag¶
Bases: Base
Annotation Project Tag model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| annotation_project_id | int | | required |
| tag_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| annotation_project | AnnotationProject | |
| tag | Tag | |
Prediction¶
Sound Event Prediction¶
whombat.models.SoundEventPrediction¶
Bases: Base
Predicted Sound Event model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| id | int | | required |
| uuid | UUID | | UUID('5dd47e9f-3dcc-402a-8b25-f5529dba6d8f') |
| sound_event_id | int | | required |
| clip_prediction_id | int | | required |
| score | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| sound_event | SoundEvent | |
| clip_prediction | ForwardRef(ClipPrediction) | |
| tags | list[ForwardRef(SoundEventPredictionTag)] | |
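A minimal sketch of filtering predicted sound events by their confidence score (the threshold is arbitrary; assumes an async session):

```python
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from whombat.models import SoundEventPrediction


async def confident_predictions(
    session: AsyncSession,
    clip_prediction_id: int,
    threshold: float = 0.5,
) -> list[SoundEventPrediction]:
    """Return predictions for a clip whose score meets the threshold."""
    stmt = (
        select(SoundEventPrediction)
        .where(SoundEventPrediction.clip_prediction_id == clip_prediction_id)
        .where(SoundEventPrediction.score >= threshold)
        .order_by(SoundEventPrediction.score.desc())
    )
    result = await session.execute(stmt)
    return list(result.scalars().all())
```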
whombat.models.SoundEventPredictionTag¶
Clip Prediction¶
whombat.models.ClipPrediction¶
Bases: Base
Clip Prediction model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| id | int | | required |
| uuid | UUID | | UUID('40414c72-4027-4f37-953b-d3bd0af4edca') |
| clip_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| clip | Clip | |
| tags | list[ForwardRef(ClipPredictionTag)] | |
| sound_events | list[SoundEventPrediction] | |
whombat.models.ClipPredictionTag¶
Model Run¶
whombat.models.ModelRun¶
Bases: Base
Model Run Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('5c585eab-d947-42d6-9973-cba18e7b7612') |
| name | str | | required |
| version | str | | required |
| description | str | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_predictions | list[ClipPrediction] | |
| evaluations | list[Evaluation] | |
| model_run_predictions | list[ForwardRef(ModelRunPrediction)] | |
| model_run_evaluations | list[ForwardRef(ModelRunEvaluation)] | |
| evaluation_sets | list[ForwardRef(EvaluationSet)] | |
| evaluation_set_model_runs | list[ForwardRef(EvaluationSetModelRun)] | |
User Run¶
whombat.models.UserRun¶
Bases: Base
User Run model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('322223fc-120c-4d8a-99df-50aca8376465') |
| user_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| user | ForwardRef(User) | |
| clip_predictions | list[ClipPrediction] | |
| evaluations | list[Evaluation] | |
| user_run_predictions | list[ForwardRef(UserRunPrediction)] | |
| user_run_evaluations | list[ForwardRef(UserRunEvaluation)] | |
| evaluation_sets | list[ForwardRef(EvaluationSet)] | |
| evaluation_set_user_runs | list[ForwardRef(EvaluationSetUserRun)] | |
Evaluation¶
Sound Event Evaluation¶
whombat.models.SoundEventEvaluation¶
Bases: Base
Sound Event Evaluation.
Represents the evaluation of a predicted sound event against a ground truth annotation.
This class stores the results of comparing a predicted sound event (from a model or user) to a corresponding annotated sound event (ground truth). It includes various metrics and scores to quantify the accuracy of the prediction.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('0ff9a034-37a7-4192-ab02-bbb2a82a989a') |
| clip_evaluation_id | int | | required |
| source_id | int \| None | | required |
| target_id | int \| None | | required |
| affinity | float | | required |
| score | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| source | SoundEventPrediction \| None | |
| target | SoundEventAnnotation \| None | |
| metrics | list[ForwardRef(SoundEventEvaluationMetric)] | |
| clip_evaluation | ForwardRef(ClipEvaluation) | |
whombat.models.SoundEventEvaluationMetric¶
Bases: Base
Sound Event Evaluation Metric model.
Represents a specific metric used to evaluate a sound event prediction.
This class stores the value of a single evaluation metric (e.g., precision, recall, F1-score) calculated for a SoundEventEvaluation. It links the metric value to its name (stored in the FeatureName table) and the corresponding evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sound_event_evaluation_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| sound_event_evaluation | SoundEventEvaluation | |
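Since each metric row links a value to a name through the FeatureName table, the metrics of a loaded evaluation can be collapsed into a simple mapping; a minimal sketch:

```python
from whombat.models import SoundEventEvaluation


def metrics_as_dict(evaluation: SoundEventEvaluation) -> dict[str, float]:
    """Collect the metrics of a loaded evaluation as a {name: value} mapping."""
    # `metric.name` proxies to the linked FeatureName's name.
    return {metric.name: metric.value for metric in evaluation.metrics}
```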
Clip Evaluation¶
whombat.models.ClipEvaluation¶
Bases: Base
Clip Evaluation Model.
Represents the evaluation of a clip-level prediction against ground truth.
This class compares a prediction made on an audio clip to the corresponding ground truth annotation for that clip. It considers both clip-level tags and sound event predictions within the clip, providing an overall score and detailed metrics for the evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('25d2a368-6f11-4606-9835-9aa99efb22cb') |
| evaluation_id | int | | required |
| clip_annotation_id | int | | required |
| clip_prediction_id | int | | required |
| score | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotation | ClipAnnotation | |
| clip_prediction | ClipPrediction | |
| sound_event_evaluations | list[SoundEventEvaluation] | |
| metrics | list[ForwardRef(ClipEvaluationMetric)] | |
| evaluation | ForwardRef(Evaluation) | |
whombat.models.ClipEvaluationMetric¶
Bases: Base
Clip Evaluation Metric.
Represents a specific metric used to evaluate a clip-level prediction.
This class stores the value of a single evaluation metric (e.g., accuracy, precision, recall) calculated for a ClipEvaluation. It links the metric value to its name (stored in the FeatureName table) and the corresponding clip evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| clip_evaluation_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| clip_evaluation | ClipEvaluation | |
Evaluation¶
whombat.models.Evaluation¶
Bases: Base
Evaluation.
Represents a complete evaluation of a model's predictions.
This class stores high-level information about the evaluation of a set of predictions compared to ground truth annotations. It includes an overall score, aggregated metrics, and a breakdown of individual clip evaluations. This provides a comprehensive overview of the model's performance on a specific task (e.g., sound event detection).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('88774c9d-b775-4c19-bd3a-692a4796f944') |
| task | str | | required |
| score | float | | 0 |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| metrics | list[ForwardRef(EvaluationMetric)] | |
| clip_evaluations | list[ClipEvaluation] | |
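Given the overall score and the per-clip breakdown described above, a loaded Evaluation can be summarized in a few lines; a minimal sketch:

```python
from statistics import mean

from whombat.models import Evaluation


def summarize(evaluation: Evaluation) -> dict[str, float]:
    """Report the overall score alongside the mean per-clip score."""
    clip_scores = [clip_eval.score for clip_eval in evaluation.clip_evaluations]
    return {
        "overall_score": evaluation.score,
        "mean_clip_score": mean(clip_scores) if clip_scores else 0.0,
        "num_clips": float(len(clip_scores)),
    }
```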
whombat.models.EvaluationMetric¶
Bases: Base
Evaluation Metric.
Represents a specific metric associated with an overall evaluation.
This class stores the value of an evaluation metric (e.g., overall accuracy, macro F1-score) calculated for an Evaluation. It links the metric value to its name (from the FeatureName table) and the corresponding evaluation, providing insights into the model's performance on a broader level.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| evaluation_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| evaluation | Evaluation | |
Evaluation Set¶
whombat.models.EvaluationSet¶
Bases: Base
Evaluation Set Model.
Represents a collection of data and settings for evaluating model predictions.
An EvaluationSet defines the parameters and data required for a specific evaluation task. It includes:
- Target Tags: The list of sound tags that are the focus of the evaluation.
- Prediction Task: The type of prediction being evaluated (e.g., sound event detection).
- Ground Truth Examples: A set of clip annotations serving as the ground truth for comparison.
This allows for structured and standardized evaluation of different models and prediction types.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('efd5b5eb-f004-4ea5-bc9d-86ef8c03145b') |
| name | str | | required |
| description | str | | required |
| task | str | | required |
| tags | list[Tag] | | [] |
| model_runs | list[ModelRun] | | [] |
| user_runs | list[UserRun] | | [] |
| evaluation_set_annotations | list[ForwardRef(EvaluationSetAnnotation)] | | [] |
| evaluation_set_tags | list[ForwardRef(EvaluationSetTag)] | | [] |
| evaluation_set_model_runs | list[ForwardRef(EvaluationSetModelRun)] | | [] |
| evaluation_set_user_runs | list[ForwardRef(EvaluationSetUserRun)] | | [] |
Attributes:
| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotations | list[ForwardRef(ClipAnnotation)] | |
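A minimal sketch of inspecting the pieces of an evaluation set described above (target tags, prediction task, and ground-truth clip annotations), assuming the object was loaded with its relationships:

```python
from whombat.models import EvaluationSet


def describe_evaluation_set(evaluation_set: EvaluationSet) -> str:
    """Summarize a loaded evaluation set in a single line."""
    target_tags = [f"{tag.key}:{tag.value}" for tag in evaluation_set.tags]
    return (
        f"{evaluation_set.name} ({evaluation_set.task}): "
        f"{len(evaluation_set.clip_annotations)} ground-truth clips, "
        f"targets: {', '.join(target_tags) or 'none'}"
    )
```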
whombat.models.EvaluationSetTag¶
Bases: Base
Evaluation Set Tag model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| evaluation_set_id | int | | required |
| tag_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| evaluation_set | EvaluationSet | |
| tag | Tag | |
whombat.models.EvaluationSetAnnotation¶
Bases: Base
Evaluation Set Annotation Model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| evaluation_set_id | int | | required |
| clip_annotation_id | int | | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| evaluation_set | EvaluationSet | |
| clip_annotation | ClipAnnotation | |