
# tensorflow

Warning

Additional dependencies: to use the TensorFlow models, you will need to install extra dependencies. These are optional, as they can be heavy and may not be needed in all use cases. To install them, run:

`pip install "audioclass[tensorflow]"`

## audioclass.models.tensorflow

Module for defining TensorFlow-based audio classification models.

This module provides classes and functions for creating and using TensorFlow models for audio classification tasks. It includes a `TensorflowModel` class that wraps a TensorFlow callable and a `Signature` dataclass that defines the model's input and output specifications.

Classes:

| Name | Description |
| --- | --- |
| `Signature` | Defines the input and output signature of a TensorFlow model. |
| `TensorflowModel` | A wrapper class for TensorFlow audio classification models. |

### Classes

#### Signature(input_name, classification_name, feature_name, input_length, input_dtype=np.float32) *dataclass*

Defines the input and output signature of a TensorFlow model.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `classification_name` | `str` | The name of the output tensor containing classification probabilities. |
| `feature_name` | `str` | The name of the output tensor containing extracted features. |
| `input_dtype` | `DTypeLike` | The data type of the input tensor. Defaults to `np.float32`. |
| `input_length` | `int` | The number of samples expected in the input tensor. |
| `input_name` | `str` | The name of the input tensor. |

##### Attributes

`classification_name: str` *instance-attribute*

The name of the output tensor containing classification probabilities.

`feature_name: str` *instance-attribute*

The name of the output tensor containing extracted features.

`input_dtype: DTypeLike = np.float32` *class-attribute, instance-attribute*

The data type of the input tensor. Defaults to `np.float32`.

`input_length: int` *instance-attribute*

The number of samples expected in the input tensor.

`input_name: str` *instance-attribute*

The name of the input tensor.
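
The attribute values above must match the tensors exposed by your particular model. As a hedged illustration of one way to discover them for a TF2 SavedModel (the path and signature key below are placeholders, and nothing here is specific to audioclass):

```python
import tensorflow as tf

# Placeholder path; point this at your own exported model.
loaded = tf.saved_model.load("path/to/saved_model")
fn = loaded.signatures["serving_default"]

# Input specs -> candidates for input_name, input_length and input_dtype.
print(fn.structured_input_signature)

# Output specs -> candidates for classification_name and feature_name.
print(fn.structured_outputs)
```

The `saved_model_cli show --dir <path> --all` command that ships with TensorFlow prints the same information from the shell.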

#### TensorflowModel(callable, signature, tags, confidence_threshold, samplerate, name, logits=True, batch_size=8)

Bases: `ClipClassificationModel`

A wrapper class for TensorFlow audio classification models.

This class provides a standardized interface for interacting with TensorFlow models, allowing them to be used seamlessly with the audioclass library.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `callable` | `Callable` | The TensorFlow callable representing the model. | *required* |
| `signature` | `Signature` | The input and output signature of the model. | *required* |
| `tags` | `List[Tag]` | The list of tags that the model can predict. | *required* |
| `confidence_threshold` | `float` | The minimum confidence threshold for assigning a tag to a clip. | *required* |
| `samplerate` | `int` | The sample rate of the audio data expected by the model (in Hz). | *required* |
| `name` | `str` | The name of the model. | *required* |
| `logits` | `bool` | Whether the model outputs logits (`True`) or probabilities (`False`). Defaults to `True`. | `True` |
| `batch_size` | `int` | The maximum number of frames to process in each batch. Defaults to 8. | `8` |
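
The parameters above map directly onto a constructor call. The sketch below is illustrative only: the model path, tensor names, clip length, and sample rate are assumptions, and `my_tags` stands in for a `List[Tag]` you have prepared to match the model's output classes.

```python
import numpy as np
import tensorflow as tf

from audioclass.models.tensorflow import Signature, TensorflowModel

# Use a SavedModel's serving function as the callable (placeholder path).
call = tf.saved_model.load("path/to/saved_model").signatures["serving_default"]

signature = Signature(
    input_name="waveform",        # hypothetical tensor names
    classification_name="scores",
    feature_name="embedding",
    input_length=144000,          # e.g. 3 s of audio at 48 kHz
    input_dtype=np.float32,
)

model = TensorflowModel(
    callable=call,
    signature=signature,
    tags=my_tags,                 # placeholder: one Tag per model output class
    confidence_threshold=0.1,
    samplerate=48000,
    name="my-tf-model",
    logits=True,                  # set to False if the model already outputs probabilities
    batch_size=8,
)
```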

Methods:

| Name | Description |
| --- | --- |
| `process_array` | Process a single audio array and return the model output. |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `batch_size` | | |
| `callable` | `Callable` | The TensorFlow callable representing the model. |
| `confidence_threshold` | | |
| `input_samples` | | |
| `logits` | | |
| `name` | | |
| `num_classes` | | |
| `samplerate` | | |
| `signature` | `Signature` | The input and output signature of the model. |
| `tags` | | |

##### Attributes

`batch_size = batch_size` *instance-attribute*

`callable: Callable = callable` *instance-attribute*

The TensorFlow callable representing the model.

`confidence_threshold = confidence_threshold` *instance-attribute*

`input_samples = signature.input_length` *instance-attribute*

`logits = logits` *instance-attribute*

`name = name` *instance-attribute*

`num_classes = len(tags)` *instance-attribute*

`samplerate = samplerate` *instance-attribute*

`signature: Signature = signature` *instance-attribute*

The input and output signature of the model.

`tags = tags` *instance-attribute*

##### Functions

###### process_array(array)

Process a single audio array and return the model output.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `array` | `ndarray` | The audio array to be processed, with shape `(num_frames, input_samples)`. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `ModelOutput` | A `ModelOutput` object containing the class probabilities and extracted features. |

Note

This is a low-level method that requires manual batching of the input audio array. If you prefer a higher-level interface that handles batching automatically, consider using `process_file`, `process_recording`, or `process_clip` instead.

Be aware that passing an array with a large batch size may exceed available device memory and cause the process to crash.
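
As an illustration of the manual batching this method expects, here is a small helper (not part of audioclass) that splits a long mono recording into fixed-length frames and feeds them to `process_array` one batch at a time. It relies only on the `input_samples`, `batch_size`, and `process_array` members documented on this page.

```python
import numpy as np


def classify_in_batches(model, audio: np.ndarray) -> list:
    """Run a long mono recording through ``model.process_array`` in batches.

    ``model`` is assumed to be a ``TensorflowModel`` (or anything exposing
    ``input_samples``, ``batch_size`` and ``process_array``). Returns one
    ``ModelOutput`` per processed batch.
    """
    frame_len = model.input_samples

    # Zero-pad the tail so the audio divides evenly into whole frames.
    num_frames = -(-len(audio) // frame_len)  # ceiling division
    padded = np.zeros(num_frames * frame_len, dtype=np.float32)
    padded[: len(audio)] = audio
    frames = padded.reshape(num_frames, frame_len)

    # Keep each call at or below the model's batch size to avoid exhausting
    # device memory (see the note above).
    outputs = []
    for start in range(0, num_frames, model.batch_size):
        outputs.append(model.process_array(frames[start : start + model.batch_size]))
    return outputs
```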

### Functions

#### process_array(call, signature, array, validate_signature=False, logits=True)

Process an array with a TensorFlow model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `call` | `Callable` | The TensorFlow callable representing the model. | *required* |
| `signature` | `Signature` | The input and output signature of the model. | *required* |
| `array` | `ndarray` | The audio array to be processed, with shape `(num_frames, input_samples)` or `(input_samples,)`. | *required* |
| `validate_signature` | `bool` | Whether to validate the model signature. Defaults to `False`. | `False` |
| `logits` | `bool` | Whether the model outputs logits (`True`) or probabilities (`False`). Defaults to `True`. | `True` |

Returns:

| Type | Description |
| --- | --- |
| `ModelOutput` | A `ModelOutput` object containing the class probabilities and extracted features. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the input array has the wrong shape or if the model signature is invalid. |
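
For reference, a hedged sketch of calling this function directly; the model path and tensor names are placeholders, and a single clip is passed as a 1-D array of `input_length` samples.

```python
import numpy as np
import tensorflow as tf

from audioclass.models.tensorflow import Signature, process_array

# Placeholder path and tensor names; substitute the values for your model.
call = tf.saved_model.load("path/to/saved_model").signatures["serving_default"]
signature = Signature(
    input_name="waveform",
    classification_name="scores",
    feature_name="embedding",
    input_length=144000,
)

# A single clip as a 1-D array; a batch would be shaped (num_frames, input_length).
clip = np.zeros(signature.input_length, dtype=np.float32)

# validate_signature=True asks the function to check ``call`` against
# ``signature`` and raise ValueError on a mismatch.
output = process_array(call, signature, clip, validate_signature=True, logits=True)
```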