Database Models¶
Welcome to the database models reference for Whombat! This page collects all the database models defined within the Whombat framework, organized to mirror the structure outlined in soundevent.
The Whombat models correspond closely to those in soundevent and are essentially a SQLAlchemy port. While the core concepts remain consistent, some minor differences do exist.
Data Descriptors¶
Users¶
whombat.models.User¶
Bases: Base
User Model.
Represents a user in the system.
This model stores information about a user, including their email, hashed password, username, full name, and status (active, superuser, verified).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| email | str | | required |
| hashed_password | str | | required |
| username | str | | required |
| id | UUID | | UUID('5b07c844-e40e-432a-b45b-01925895a8c9') |
| name | str \| None | | None |
| is_active | bool | | True |
| is_superuser | bool | | False |
| is_verified | bool | | False |
Notes
This class inherits from SQLAlchemyBaseUserTableUUID (provided by fastapi-users), which defines the id, email, hashed_password, is_active, and is_superuser attributes.
Important: Do not create instances of this class directly. Use the create_user function in the whombat.api.users module instead.
Attributes:

| Name | Type | Description |
|---|---|---|
| notes | list[Note] | |
| sound_event_annotation_tags | list[SoundEventAnnotationTag] | |
| recordings | list[Recording] | |
| recording_owner | list[RecordingOwner] | |
| user_runs | list[UserRun] | |
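Following the note above, user creation should go through the API layer rather than the model class. The sketch below is a hypothetical illustration: create_user does live in whombat.api.users, but its exact signature (and whether it hashes the password for you) should be checked against that module.

```python
# Hypothetical sketch, not the canonical call: assumes create_user accepts an
# async SQLAlchemy session plus the user fields as keyword arguments.
from sqlalchemy.ext.asyncio import AsyncSession

from whombat.api.users import create_user


async def register_user(session: AsyncSession):
    user = await create_user(
        session,
        username="ada",
        password="a-strong-password",  # assumed to be hashed internally
        email="ada@example.com",
    )
    print(user.id, user.is_active)
```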
Tags¶
whombat.models.Tag¶
Bases: Base
Tag Model.
Represents a tag with a key-value structure.
Tags are used to categorize and annotate various elements, such as audio clips or sound events. The key-value structure provides a flexible way to organize and manage tags, with the "key" acting as a category or namespace and the "value" representing the specific tag within that category.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | str | | required |
| value | str | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| recordings | list[Recording] | |
| recording_tags | list[RecordingTag] | |
| sound_event_annotations | list[SoundEventAnnotation] | |
| sound_event_annotation_tags | list[SoundEventAnnotationTag] | |
| clip_annotations | list[ClipAnnotation] | |
| clip_annotation_tags | list[ClipAnnotationTag] | |
| evaluation_set_tags | list[EvaluationSetTag] | |
| annotation_projects | list[AnnotationProject] | |
| annotation_project_tags | list[AnnotationProjectTag] | |
| sound_event_prediction_tags | list[SoundEventPredictionTag] | |
| clip_prediction_tags | list[ClipPredictionTag] | |
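As a minimal sketch of the key-value structure described above, the snippet below creates a tag in a "species" namespace and queries it back with plain SQLAlchemy. It assumes an AsyncSession already connected to the Whombat database and bypasses the higher-level Whombat API.

```python
# Illustrative sketch only; assumes an AsyncSession bound to the Whombat database.
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from whombat import models


async def add_and_list_species_tags(session: AsyncSession) -> list[models.Tag]:
    # "species" is the key (namespace); the value is the specific tag within it.
    session.add(models.Tag(key="species", value="Myotis myotis"))
    await session.commit()

    result = await session.execute(
        select(models.Tag).where(models.Tag.key == "species")
    )
    return list(result.scalars())
```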
Features¶
whombat.models.FeatureName¶
Bases: Base
Feature Name Model.
Represents the name of a feature.
Features are numerical values associated with sound events, clips, or recordings, providing additional information about these objects. This model stores the unique names of those features.
Features can represent various aspects:
- Sound Events: Duration, bandwidth, or other characteristics extracted via deep learning models.
- Clips: Acoustic properties like signal-to-noise ratio or acoustic indices.
- Recordings: Contextual information like temperature, wind speed, or recorder height.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| recordings | list[RecordingFeature] | |
| clips | list[ClipFeature] | |
| sound_events | list[SoundEventFeature] | |
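To make the relationship concrete, here is a hedged sketch that registers a feature name and attaches a value to a recording through RecordingFeature. It assumes an AsyncSession and an already-persisted recording, and bypasses the Whombat API layer.

```python
# Illustrative sketch only; assumes an AsyncSession and a persisted Recording.
from sqlalchemy.ext.asyncio import AsyncSession

from whombat import models


async def add_temperature(session: AsyncSession, recording: models.Recording):
    # The feature name is stored once and reused across objects.
    name = models.FeatureName(name="temperature")
    session.add(name)
    await session.flush()  # assign name.id before it is used as a foreign key

    session.add(
        models.RecordingFeature(
            recording_id=recording.id,
            feature_name_id=name.id,
            value=21.5,  # contextual value, e.g. degrees Celsius
        )
    )
    await session.commit()
```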
Notes¶
whombat.models.Note¶
Bases: Base
Note model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | | required |
| created_by_id | UUID | | required |
| is_issue | bool | | False |
| uuid | UUID | | UUID('0adb47bc-cc8b-42cc-bd78-57cf1593599c') |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| created_by | User | |
| recording | Recording \| None | |
| recording_note | RecordingNote \| None | |
| sound_event_annotation | SoundEventAnnotation \| None | |
| sound_event_annotation_note | SoundEventAnnotationNote \| None | |
| clip_annotation | ClipAnnotation \| None | |
| clip_annotation_note | ClipAnnotationNote \| None | |
Audio Content¶
Recordings¶
whombat.models.Recording¶
Bases: Base
Recording model for recording table.
This model represents the recording table in the database. It contains all the information about a recording.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('a6b607cd-1ec8-4c07-a335-5be439a83759') |
| hash | str | | required |
| path | Path | | required |
| duration | float | | required |
| samplerate | int | | required |
| channels | int | | required |
| date | date \| None | | None |
| time | time \| None | | None |
| latitude | float \| None | | None |
| longitude | float \| None | | None |
| time_expansion | float | | 1.0 |
| rights | str \| None | | None |
| notes | list[Note] | | [] |
| tags | list[Tag] | | [] |
| features | list[RecordingFeature] | | [] |
| owners | list[User] | | [] |
| recording_notes | list[RecordingNote] | | [] |
| recording_tags | list[RecordingTag] | | [] |
| recording_owners | list[RecordingOwner] | | [] |
Notes
If the time expansion factor is not 1.0, the duration and samplerate are those of the original recording, not the expanded recording.
The path is stored relative to the base audio directory. We don't store the absolute path to the recording file in the database, as this may expose sensitive information, and it makes it easier to share datasets between users.
The hash of the recording is used to uniquely identify it. It is computed from the recording file and is used to check whether a recording has already been registered in the database. If the hash of a recording is already in the database, the recording is not registered again.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clips | list[Clip] | |
| recording_datasets | list[DatasetRecording] | |
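The notes above have two practical consequences: duplicates can be detected by hash, and the stored path must be resolved against the base audio directory. A minimal sketch, assuming an AsyncSession and a hash value produced by Whombat's own hashing of the file:

```python
# Sketch: check whether a recording with a given hash is already registered,
# and resolve its stored (relative) path against a base audio directory.
from pathlib import Path

from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from whombat import models


async def find_recording(
    session: AsyncSession, file_hash: str, audio_dir: Path
) -> Path | None:
    result = await session.execute(
        select(models.Recording).where(models.Recording.hash == file_hash)
    )
    recording = result.scalar_one_or_none()
    if recording is None:
        return None  # not registered yet, safe to add
    # Paths are stored relative to the base audio directory.
    return audio_dir / recording.path
```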
whombat.models.RecordingTag¶
whombat.models.RecordingNote¶
whombat.models.RecordingFeature¶
Bases: Base
Recording Feature Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| recording_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| name | AssociationProxy[str] | |
| recording | Recording | |
| feature_name | FeatureName | |
whombat.models.RecordingOwner¶
Datasets¶
whombat.models.Dataset¶
Bases: Base
Dataset model for dataset table.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('b141d8ee-313a-4845-85e2-a9bf502f3c11') |
| name | str | | required |
| description | str | | required |
| audio_dir | Path | | required |
Notes
The audio_dir attribute is the path to the audio directory of the dataset. This is the directory that contains all the recordings of the dataset. Only the relative path to the base audio directory is stored in the database. Note that we should NEVER store absolute paths in the database.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| recordings | list[Recording] | |
| dataset_recordings | list[DatasetRecording] | |
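Because audio_dir is stored relative to the base audio directory, the absolute location has to be reconstructed at runtime. A tiny sketch, assuming the base audio directory comes from your Whombat configuration:

```python
# Sketch: resolve a dataset's audio directory against the base audio directory.
from pathlib import Path

from whombat import models


def resolve_dataset_dir(dataset: models.Dataset, base_audio_dir: Path) -> Path:
    # Never write this absolute path back to the database.
    return base_audio_dir / dataset.audio_dir
```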
whombat.models.DatasetRecording¶
Bases: Base
Dataset Recording Model.
A dataset recording is a link between a dataset and a recording. It contains the path to the recording within the dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset_id | int | | required |
| recording_id | int | | required |
| path | Path | | required |
Notes
The dataset recording model is a many-to-many association between the dataset and recording models, which means a recording can be part of multiple datasets. This is useful when a recording is used in multiple studies or deployments: rather than duplicating the recording in the database, the same recording is simply linked to each dataset through this association.
Attributes:

| Name | Type | Description |
|---|---|---|
| dataset | Dataset | |
| recording | Recording | |
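Because the association is many-to-many, one recording can appear in several datasets, each with its own dataset-local path. A hedged sketch of a query over the association table, assuming an AsyncSession:

```python
# Sketch: list every dataset a recording belongs to, together with the
# recording's path inside each dataset.
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from whombat import models


async def datasets_for_recording(
    session: AsyncSession, recording_id: int
) -> list[tuple[str, str]]:
    result = await session.execute(
        select(models.Dataset.name, models.DatasetRecording.path)
        .join(
            models.DatasetRecording,
            models.DatasetRecording.dataset_id == models.Dataset.id,
        )
        .where(models.DatasetRecording.recording_id == recording_id)
    )
    return [(name, str(path)) for name, path in result.all()]
```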
Acoustic Objects¶
Sound Events¶
whombat.models.SoundEvent¶
Bases: Base
Sound Event model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('0147aad2-0656-4dab-aff1-50015b05e5a1') |
| recording_id | int | | required |
| geometry_type | str | | required |
| geometry | TimeStamp \| TimeInterval \| Point \| LineString \| Polygon \| BoundingBox \| MultiPoint \| MultiLineString \| MultiPolygon | | required |
Notes
The geometry attribute is stored as a JSON string in the database.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| recording | Recording | |
| features | list[SoundEventFeature] | |
| sound_event_annotation | SoundEventAnnotation \| None | |
| sound_event_prediction | SoundEventPrediction \| None | |
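Since the geometry column holds a JSON string, storing and reading a geometry implies a (de)serialization round trip. A hedged sketch, assuming the geometry classes from soundevent are Pydantic models (Whombat's own serialization helpers may differ):

```python
# Hedged sketch of the JSON round trip implied by the note above; Whombat's
# actual (de)serialization code may differ.
from soundevent import data


def bounding_box_to_json(geometry: data.BoundingBox) -> str:
    # e.g. data.BoundingBox(coordinates=[0.1, 2000.0, 0.4, 6000.0])
    return geometry.model_dump_json()


def bounding_box_from_json(payload: str) -> data.BoundingBox:
    return data.BoundingBox.model_validate_json(payload)
```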
whombat.models.SoundEventFeature¶
Bases: Base
Sound Event Feature model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sound_event_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| name | AssociationProxy[str] | |
| sound_event | SoundEvent | |
| feature_name | FeatureName | |
Clips¶
whombat.models.Clip¶
whombat.models.ClipFeature¶
Bases: Base
Clip Feature Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| clip_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| clip | Clip | |
Annotation¶
Sound Event Annotation¶
whombat.models.SoundEventAnnotation¶
Bases: Base
Sound Event Annotation model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('43c22c2d-d24f-418a-88c0-54571cc5b9f5') |
| clip_annotation_id | int | | required |
| created_by_id | int \| None | | required |
| sound_event_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotation | ClipAnnotation | |
| created_by | User \| None | |
| sound_event | SoundEvent | |
| tags | list[Tag] | |
| notes | list[Note] | |
| sound_event_annotation_notes | list[SoundEventAnnotationNote] | |
| sound_event_annotation_tags | list[SoundEventAnnotationTag] | |
whombat.models.SoundEventAnnotationTag¶
whombat.models.SoundEventAnnotationNote¶
Bases: Base
Sound Event Annotation Note Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sound_event_annotation_id | int | | required |
| note_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| sound_event_annotation | SoundEventAnnotation | |
| note | Note | |
Clip Annotation¶
whombat.models.ClipAnnotation¶
Bases: Base
Clip Annotation Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('39705910-ed30-4114-9ace-95f602d298ee') |
| clip_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clip | Clip | |
| sound_events | list[SoundEventAnnotation] | |
| tags | list[Tag] | |
| notes | list[Note] | |
| clip_annotation_notes | list[ClipAnnotationNote] | |
| clip_annotation_tags | list[ClipAnnotationTag] | |
| annotation_task | AnnotationTask | |
| evaluation_sets | list[EvaluationSet] | |
| evaluation_set_annotations | list[EvaluationSetAnnotation] | |
whombat.models.ClipAnnotationTag¶
whombat.models.ClipAnnotationNote¶
Bases: Base
Clip Annotation Note Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| clip_annotation_id | int | | required |
| note_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotation | ClipAnnotation | |
| note | Note | |
Annotation Task¶
whombat.models.AnnotationTask¶
Bases: Base
Annotation Task model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| annotation_project_id | int | | required |
| clip_id | int | | required |
| clip_annotation_id | int | | required |
| uuid | UUID | | UUID('7fc28c70-fd6f-4e6e-982d-1ce1ef2ccb36') |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| annotation_project | AnnotationProject | |
| clip | Clip | |
| clip_annotation | ClipAnnotation | |
| status_badges | list[AnnotationStatusBadge] | |
whombat.models.AnnotationStatusBadge¶
Bases: Base
Annotation status badge model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| annotation_task_id | int | | required |
| user_id | UUID \| None | | required |
| state | AnnotationState | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| annotation_task | AnnotationTask | |
| user | User \| None | |
Annotation Project¶
whombat.models.AnnotationProject¶
Bases: Base
Annotation Project model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('09de118c-98fe-4b06-ace7-1dea0a05d7f3') |
| name | str | | required |
| description | str | | required |
| annotation_instructions | str \| None | | None |
| tags | list[Tag] | | [] |
| annotation_tasks | list[AnnotationTask] | | [] |
| annotation_project_tags | list[AnnotationProjectTag] | | [] |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
whombat.models.AnnotationProjectTag¶
Bases: Base
Annotation Project Tag model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| annotation_project_id | int | | required |
| tag_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| annotation_project | AnnotationProject | |
| tag | Tag | |
Prediction¶
Sound Event Prediction¶
whombat.models.SoundEventPrediction¶
Bases: Base
Predicted Sound Event model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| id | int | | required |
| uuid | UUID | | UUID('58c402e3-b22b-4efb-b7e7-a1dd97a67d81') |
| sound_event_id | int | | required |
| clip_prediction_id | int | | required |
| score | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| sound_event | SoundEvent | |
| clip_prediction | ClipPrediction | |
| tags | list[SoundEventPredictionTag] | |
whombat.models.SoundEventPredictionTag¶
Clip Prediction¶
whombat.models.ClipPrediction¶
Bases: Base
Clip Prediction model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| id | int | | required |
| uuid | UUID | | UUID('605c74a6-c4bc-4ff4-943e-5e32c52835b7') |
| clip_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| clip | Clip | |
| tags | list[ClipPredictionTag] | |
| sound_events | list[SoundEventPrediction] | |
whombat.models.ClipPredictionTag¶
Model Run¶
whombat.models.ModelRun¶
Bases: Base
Model Run Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('ef80e93e-958e-455a-8cf8-21d773b00464') |
| name | str | | required |
| version | str | | required |
| description | str | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_predictions | list[ClipPrediction] | |
| evaluations | list[Evaluation] | |
| model_run_predictions | list[ModelRunPrediction] | |
| model_run_evaluations | list[ModelRunEvaluation] | |
| evaluation_sets | list[EvaluationSet] | |
| evaluation_set_model_runs | list[EvaluationSetModelRun] | |
User Run¶
whombat.models.UserRun¶
Bases: Base
User Run model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('303233ae-809c-4dc7-a44b-fd4535d37a61') |
| user_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| user | User | |
| clip_predictions | list[ClipPrediction] | |
| evaluations | list[Evaluation] | |
| user_run_predictions | list[UserRunPrediction] | |
| user_run_evaluations | list[UserRunEvaluation] | |
| evaluation_sets | list[EvaluationSet] | |
| evaluation_set_user_runs | list[EvaluationSetUserRun] | |
Evaluation¶
Sound Event Evaluation¶
whombat.models.SoundEventEvaluation¶
Bases: Base
Sound Event Evaluation.
Represents the evaluation of a predicted sound event against a ground truth annotation.
This class stores the results of comparing a predicted sound event (from a model or user) to a corresponding annotated sound event (ground truth). It includes various metrics and scores to quantify the accuracy of the prediction.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('92d489f7-6ffb-4132-92c4-54c3c2a49580') |
| clip_evaluation_id | int | | required |
| source_id | int \| None | | required |
| target_id | int \| None | | required |
| affinity | float | | required |
| score | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| source | SoundEventPrediction \| None | |
| target | SoundEventAnnotation \| None | |
| metrics | list[SoundEventEvaluationMetric] | |
| clip_evaluation | ClipEvaluation | |
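The affinity column quantifies how well the predicted geometry overlaps the ground-truth geometry. Purely as an illustration of the idea (this is not the affinity computation Whombat or soundevent actually use), a temporal intersection-over-union between two time intervals looks like this:

```python
# Purely illustrative: one possible affinity measure (temporal IoU) between a
# predicted and a ground-truth time interval, each given as (start, end) in seconds.
def temporal_iou(pred: tuple[float, float], truth: tuple[float, float]) -> float:
    start = max(pred[0], truth[0])
    end = min(pred[1], truth[1])
    intersection = max(0.0, end - start)
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - intersection
    return intersection / union if union > 0 else 0.0


# Example: a prediction covering 1.0-2.0 s against ground truth 1.5-2.5 s
# yields an affinity of 0.5 / 1.5 ≈ 0.33.
```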
whombat.models.SoundEventEvaluationMetric¶
Bases: Base
Sound Event Evaluation Metric model.
Represents a specific metric used to evaluate a sound event prediction.
This class stores the value of a single evaluation metric (e.g., precision, recall, F1-score) calculated for a SoundEventEvaluation. It links the metric value to its name (stored in the FeatureName table) and the corresponding evaluation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sound_event_evaluation_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| sound_event_evaluation | SoundEventEvaluation | |
Clip Evaluation¶
whombat.models.ClipEvaluation¶
Bases: Base
Clip Evaluation Model.
Represents the evaluation of a clip-level prediction against ground truth.
This class compares a prediction made on an audio clip to the corresponding ground truth annotation for that clip. It considers both clip-level tags and sound event predictions within the clip, providing an overall score and detailed metrics for the evaluation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('01c6cd27-66e8-402f-9ec1-59ad4b2686a6') |
| evaluation_id | int | | required |
| clip_annotation_id | int | | required |
| clip_prediction_id | int | | required |
| score | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotation | ClipAnnotation | |
| clip_prediction | ClipPrediction | |
| sound_event_evaluations | list[SoundEventEvaluation] | |
| metrics | list[ClipEvaluationMetric] | |
| evaluation | Evaluation | |
whombat.models.ClipEvaluationMetric¶
Bases: Base
Clip Evaluation Metric.
Represents a specific metric used to evaluate a clip-level prediction.
This class stores the value of a single evaluation metric (e.g., accuracy, precision, recall) calculated for a ClipEvaluation. It links the metric value to its name (stored in the FeatureName table) and the corresponding clip evaluation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| clip_evaluation_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| clip_evaluation | ClipEvaluation | |
Evaluation¶
whombat.models.Evaluation¶
Bases: Base
Evaluation.
Represents a complete evaluation of a model's predictions.
This class stores high-level information about the evaluation of a set of predictions compared to ground truth annotations. It includes an overall score, aggregated metrics, and a breakdown of individual clip evaluations. This provides a comprehensive overview of the model's performance on a specific task (e.g., sound event detection).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('ddad8928-b8c1-468d-913c-a94368284ea9') |
| task | str | | required |
| score | float | | 0 |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| metrics | list[EvaluationMetric] | |
| clip_evaluations | list[ClipEvaluation] | |
whombat.models.EvaluationMetric¶
Bases: Base
Evaluation Metric.
Represents a specific metric associated with an overall evaluation.
This class stores the value of an evaluation metric (e.g., overall accuracy, macro F1-score) calculated for an Evaluation. It links the metric value to its name (from the FeatureName table) and the corresponding evaluation, providing insights into the model's performance on a broader level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| evaluation_id | int | | required |
| feature_name_id | int | | required |
| value | float | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| name | AssociationProxy[str] | |
| feature_name | FeatureName | |
| evaluation | Evaluation | |
Evaluation Set¶
whombat.models.EvaluationSet¶
Bases: Base
Evaluation Set Model.
Represents a collection of data and settings for evaluating model predictions.
An EvaluationSet defines the parameters and data required for a specific evaluation task. It includes:
- Target Tags: The list of sound tags that are the focus of the evaluation.
- Prediction Task: The type of prediction being evaluated (e.g., sound event detection).
- Ground Truth Examples: A set of clip annotations serving as the ground truth for comparison.
This allows for structured and standardized evaluation of different models and prediction types.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uuid | UUID | | UUID('53e52bc2-a2e4-41ed-afd7-5b9ba7064871') |
| name | str | | required |
| description | str | | required |
| task | str | | required |
| tags | list[Tag] | | [] |
| model_runs | list[ModelRun] | | [] |
| user_runs | list[UserRun] | | [] |
| evaluation_set_annotations | list[EvaluationSetAnnotation] | | [] |
| evaluation_set_tags | list[EvaluationSetTag] | | [] |
| evaluation_set_model_runs | list[EvaluationSetModelRun] | | [] |
| evaluation_set_user_runs | list[EvaluationSetUserRun] | | [] |

Attributes:

| Name | Type | Description |
|---|---|---|
| id | int | |
| clip_annotations | list[ClipAnnotation] | |
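As a sketch of how the pieces listed above fit together, the snippet below assembles an evaluation set with its target tags. It assumes an AsyncSession and existing Tag rows, and the task string is only a guess at the values Whombat accepts.

```python
# Sketch: create an evaluation set and attach its target tags.
from sqlalchemy.ext.asyncio import AsyncSession

from whombat import models


async def make_evaluation_set(
    session: AsyncSession, tags: list[models.Tag]
) -> models.EvaluationSet:
    evaluation_set = models.EvaluationSet(
        name="nightly-bats",
        description="Ground truth clips for bat call detection.",
        task="sound_event_detection",  # assumed value; check Whombat's task names
        tags=tags,  # the target tags the evaluation focuses on
    )
    session.add(evaluation_set)
    await session.commit()
    return evaluation_set
```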
whombat.models.EvaluationSetTag¶
Bases: Base
Evaluation Set Tag model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| evaluation_set_id | int | | required |
| tag_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| evaluation_set | EvaluationSet | |
| tag | Tag | |
whombat.models.EvaluationSetAnnotation¶
Bases: Base
Evaluation Set Annotation Model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| evaluation_set_id | int | | required |
| clip_annotation_id | int | | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| evaluation_set | EvaluationSet | |
| clip_annotation | ClipAnnotation | |