PrognosAIs.Model package

Subpackages

Submodules

PrognosAIs.Model.Callbacks module

class PrognosAIs.Model.Callbacks.ConcordanceIndex(validation_generator)[source]

Bases: tensorflow.python.keras.callbacks.Callback

A custom callback function to evaluate the concordance index on the whole validation set

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.

class PrognosAIs.Model.Callbacks.Timer[source]

Bases: tensorflow.python.keras.callbacks.Callback

A custom callback function to evaluate the elapsed time of training

on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.

PrognosAIs.Model.Callbacks.calculate_concordance_index(y_true, y_pred)[source]

This function determines the concordance index for two numpy arrays.

y_true contains a label indicating whether an event occurred, together with the time to event (or the time to right censoring if no event occurred).

y_pred is beta*x in the Cox model.
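
A minimal, illustrative computation of this index (not the package's implementation), assuming y_true is an (n, 2) array with the event indicator in the first column and the time in the second, and y_pred holds the predicted risk scores:

```python
import numpy as np

def naive_concordance_index(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # y_true[:, 0]: event indicator (1 = event, 0 = right censored)
    # y_true[:, 1]: time to event, or follow-up time if censored
    # y_pred: predicted risk scores (beta * x in the Cox model)
    events, times = y_true[:, 0].astype(bool), y_true[:, 1]
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        if not events[i]:
            continue  # a pair is only comparable if the earlier sample had an event
        for j in range(len(times)):
            if times[i] < times[j]:
                comparable += 1
                if y_pred[i] > y_pred[j]:
                    concordant += 1.0  # higher predicted risk for the earlier event
                elif y_pred[i] == y_pred[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable if comparable else 0.0
```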

PrognosAIs.Model.Evaluators module

class PrognosAIs.Model.Evaluators.Evaluator(model_file, data_folder, config_file, output_folder)[source]

Bases: object

_combine_config_and_model_metrics(model_metrics: dict, config_metrics: dict) → dict[source]

Combine the metrics specified in the model and those specified in the config.

Parameters
  • model_metrics (dict) – Metrics as defined by the model

  • config_metrics (dict) – Metrics defined in the config

Returns

dict – Combined metrics

static _fake_fit(model: tensorflow.python.keras.engine.training.Model) → tensorflow.python.keras.engine.training.Model[source]

Fit the model on fake data to properly initialize the model.

Parameters

model (tf.keras.Model) – Model to initialize

Returns

tf.keras.Model – Initialized model.

_format_predictions(predictions: Union[list, numpy.ndarray]) → dict[source]

Format the predictions to match them with the output names

Parameters

predictions (Union[list, np.ndarray]) – The predictions from the model

Raises

ValueError – If the predictions do not match with the expected output names

Returns

dict – Output predictions matched with the output names

_init_data_generators(labels_only: bool) → dict[source]

Initialize data generators for all sample folders.

Parameters

labels_only (bool) – Whether to only load labels

Returns

dict – Initialized data generators

static _load_model(model_file: str, custom_objects: dict) → Tuple[tensorflow.python.keras.engine.training.Model, ValueError][source]

Try to load a model; if loading fails, parse the error.

Parameters
  • model_file (str) – Location of the model file

  • custom_objects (dict) – Potential custom objects to use during model loading

Returns

Tuple[tf.keras.Model, ValueError] – The model if successfully loaded, otherwise the error

static combine_predictions(predictions: numpy.ndarray, are_one_hot: bool, label_combination_type: str) → numpy.ndarray[source]
evaluate()[source]
evaluate_metrics() → dict[source]

Evaluate all metrics for all samples

Returns

dict – The evaluated metrics

evaluate_metrics_from_predictions(predictions: dict, real_labels: dict) → dict[source]

Evaluate the metrics based on the model predictions

Parameters
  • predictions (dict) – Predictions obtained from the model

  • real_labels (dict) – The true labels of the samples for the different outputs

Returns

dict – The different evaluated metrics

evaluate_sample_metrics() → dict[source]

Evaluate the metrics based on a full sample instead of on individual batches

Returns

dict – The evaluated metrics

get_image_output_labels() → dict[source]

Determine for each output whether the label is not a simple class label but actually an image.

Returns

dict – Output labels that are image outputs

get_real_labels() → dict[source]
get_real_labels_of_sample_subset(subset_name: str) → dict[source]

Get the real labels of all samples from a subset.

Parameters

subset_name (str) – Name of subset to get labels for

Returns

dict – Real labels for each dataset and output

get_sample_labels_from_patch_labels()[source]
get_sample_predictions_from_patch_predictions()[source]
get_sample_result_from_patch_results(patch_results)[source]
get_to_evaluate_metrics() → dict[source]

Get the metrics functions which should be evaluated.

Returns

dict – Metric function to be evaluated for the different outputs

image_array_to_sitk(image_array: numpy.ndarray, input_name: str) → SimpleITK.SimpleITK.Image[source]
init_data_generators() → dict[source]

Initialize the data generators.

Returns

dict – DataGenerator for each subfolder of samples

classmethod init_from_sys_args(args_in)[source]
init_label_generators() → dict[source]

Initialize the data generators which only give labels.

Returns

dict – DataGenerator for each subfolder of samples

init_model_parameters() → None[source]

Initialize the parameters from the model.

static load_model(model_file: str, custom_module: module = None) → tensorflow.python.keras.engine.training.Model[source]

Load the model, including potential custom losses.

Parameters
  • model_file (str) – Location of the model file

  • custom_module (ModuleType) – Custom module from which to load losses or metrics

Raises

error – If the model could not be loaded and the problem is not due to a missing loss or metric function.

Returns

tf.keras.Model – The loaded model
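
A hedged usage sketch; the model path is a placeholder, and passing the package's own Losses module as custom_module is an assumption about how custom losses and metrics get resolved:

```python
from PrognosAIs.Model import Losses
from PrognosAIs.Model.Evaluators import Evaluator

# "trained_model.h5" is a hypothetical path to a saved Keras model whose graph
# references custom PrognosAIs losses or metrics.
model = Evaluator.load_model("trained_model.h5", custom_module=Losses)
```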

make_dataframe(sample_names, predictions, labels) → pandas.core.frame.DataFrame[source]
make_metric_dataframe(metrics: dict) → pandas.core.frame.DataFrame[source]
static one_hot_labels_to_flat_labels(labels: numpy.ndarray) → numpy.ndarray[source]
patches_to_sample_image(datagenerator: PrognosAIs.IO.DataGenerator.HDF5Generator, filenames: list, output_name: str, predictions: numpy.ndarray, labels_are_one_hot: bool, label_combination_type: str) → numpy.ndarray[source]
predict() → dict[source]

Get predictions from the model

Returns

dict – Predictions for the different outputs of the model for all samples

write_image_predictions_to_files(sample_names, predictions, labels_one_hot) → None[source]
write_metrics_to_file() → None[source]
write_predictions_to_file() → None[source]
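
A hedged end-to-end sketch of running an Evaluator; all paths are hypothetical and the constructor arguments follow the class signature above:

```python
from PrognosAIs.Model.Evaluators import Evaluator

evaluator = Evaluator(
    model_file="results/model.h5",        # placeholder paths
    data_folder="samples/",
    config_file="config.yaml",
    output_folder="results/evaluation/",
)
# evaluate() is undocumented above; presumably it ties together the prediction,
# metric-evaluation, and file-writing steps listed in this class.
evaluator.evaluate()
```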

PrognosAIs.Model.Losses module

class PrognosAIs.Model.Losses.CoxLoss(**kwargs)[source]

Bases: tensorflow.python.keras.losses.Loss

Cox loss as defined in https://arxiv.org/pdf/1606.00931.pdf.

call(y_true: tensorflow.python.framework.ops.Tensor, y_pred: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor[source]

Calculate the Cox loss.

Parameters
  • y_true (tf.Tensor) – Tensor of shape (batch_size, 2), with the first index containing whether an event occurred for each sample, and the second index containing the time to event, or the follow-up time if no event occurred

  • y_pred (tf.Tensor) – The \(\hat{h}_\sigma\) as predicted by the network

Returns

tf.Tensor – The Cox loss for each sample in the batch
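
An illustrative call, with the y_true layout taken from the docstring above (event indicator in the first column, time in the second):

```python
import tensorflow as tf
from PrognosAIs.Model.Losses import CoxLoss

y_true = tf.constant([[1.0, 12.0],   # event observed at t = 12
                      [0.0, 30.0],   # censored at t = 30
                      [1.0,  7.0]])  # event observed at t = 7
y_pred = tf.constant([[0.8], [-0.2], [1.5]])  # predicted hazards (beta * x)

loss_fn = CoxLoss()
batch_loss = loss_fn(y_true, y_pred)            # __call__ reduces the per-sample losses
per_sample_loss = loss_fn.call(y_true, y_pred)  # per-sample values, as documented above
```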

get_config() → dict[source]

Get the configuration of the loss.

Returns

dict – configuration of the loss

class PrognosAIs.Model.Losses.DICE_loss(name: str = 'dice_loss', weighted: bool = False, foreground_only: bool = False, **kwargs)[source]

Bases: tensorflow.python.keras.losses.Loss

Loss class for the Sørensen–Dice coefficient.

call(y_true: tensorflow.python.framework.ops.Tensor, y_pred: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor[source]

Calculate the DICE loss.

This functions calculates the DICE loss defined as:

\[1 - 2 * \frac{|A \cap B|}{|A| + |B|}\]

When neither A nor B contains any positive labels, the loss returns 0 by default. The loss works for both one-hot predicted labels and binary labels.

Parameters
  • y_true (tf.Tensor) – The ground truth labels, shape: (batch_size, N_1, N_2 … N_d) where N_d is the number of channels (can be 1). For a 3D tensor with 1 channel (binary class) and batch size of 1 it will have a shape of (1, N_1, N_2, N_3, 1)

  • y_pred (tf.Tensor) – The predicted labels. shape: (batch_size, N_1, N_2 … N_d) where N_d is the number of channels. When a binary prediction is done (last activation function is sigmoid), N_d = 1. When one-hot prediction are done (last activation function is softmax) N_d = number of classes

Returns

tf.Tensor – Tensor of length batch_size with the DICE loss for each sample
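
A small binary-segmentation example with batch_size 1 and a single channel (sigmoid outputs), following the shape convention described above:

```python
import tensorflow as tf
from PrognosAIs.Model.Losses import DICE_loss

y_true = tf.constant([[[[1.0], [0.0]],
                       [[1.0], [1.0]]]])   # shape (1, 2, 2, 1)
y_pred = tf.constant([[[[0.9], [0.1]],
                       [[0.8], [0.4]]]])   # sigmoid predictions, same shape

dice = DICE_loss()
loss_value = dice(y_true, y_pred)  # 1 - 2|A ∩ B| / (|A| + |B|), reduced over the batch
```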

get_config() → dict[source]

Get the configuration of the loss.

Returns

dict – configuration of the loss

class PrognosAIs.Model.Losses.MaskedCategoricalCrossentropy(name: str = 'masked_categorical_crossentropy', class_weight: dict = None, mask_value: int = -1, **kwargs)[source]

Bases: tensorflow.python.keras.losses.CategoricalCrossentropy

Categorical crossentropy loss that takes missing values into account.

__call__(y_true: tensorflow.python.framework.ops.Tensor, y_pred: tensorflow.python.framework.ops.Tensor, sample_weight: tensorflow.python.framework.ops.Tensor = None) → tensorflow.python.framework.ops.Tensor[source]

Obtain the total masked categorical crossentropy loss for the batch.

Parameters
  • y_true (tf.Tensor) – Ground-truth labels, one-hot encoded (batch_size, N_1, N_2, …. N_d) tensor, with N_d the number of outputs

  • y_pred (tf.Tensor) – Predictions one-hot encoded, for example from softmax, (batch_size, N_1, N_2, …. N_d) tensor, with N_d the number of outputs

  • sample_weight (tf.Tensor) – Sample weight for each individual label, used in the reduction of per-sample losses to the overall batch loss

Returns

tf.Tensor – The total masked categorical crossentropy loss, a scalar tensor with rank 0

__init__(name: str = 'masked_categorical_crossentropy', class_weight: dict = None, mask_value: int = -1, **kwargs) → None[source]

Categorical crossentropy loss that takes missing values into account.

For samples with masked values a crossentropy of 0 is used; for all other samples the standard categorical crossentropy loss is calculated.

Parameters
  • name (str) – Optional name for the op

  • class_weight (dict) – Weights for each class

  • mask_value (int) – The value that indicates that a sample is missing

  • **kwargs – Arguments to pass to the default CategoricalCrossentropy loss
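
A hedged sketch of the masking behaviour; how a missing label is encoded is an assumption here (the one-hot slots of the second sample are filled with the default mask value of -1), while the tensor shapes follow the call() documentation below:

```python
import tensorflow as tf
from PrognosAIs.Model.Losses import MaskedCategoricalCrossentropy

y_true = tf.constant([[0.0, 1.0, 0.0],
                      [-1.0, -1.0, -1.0],   # assumed encoding of a missing label
                      [1.0, 0.0, 0.0]])
y_pred = tf.constant([[0.1, 0.8, 0.1],
                      [0.3, 0.3, 0.4],
                      [0.7, 0.2, 0.1]])

loss_fn = MaskedCategoricalCrossentropy()   # mask_value defaults to -1
batch_loss = loss_fn(y_true, y_pred)        # the masked sample contributes zero loss
```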

call(y_true: tensorflow.python.framework.ops.Tensor, y_pred: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor[source]

Obtain the masked categorical crossentropy loss for each sample.

Parameters
  • y_true (tf.Tensor) – Ground-truth labels, one-hot encoded (batch_size, N_1, N_2, …. N_d) tensor, with N_d the number of outputs

  • y_pred (tf.Tensor) – Predictions one-hot encoded, for example from softmax, (batch_size, N_1, N_2, …. N_d) tensor, with N_d the number of outputs

Returns

tf.Tensor – The masked categorical crossentropy loss for each sample; has rank one less than the input tensors

get_config() → dict[source]

Get the configuration of the loss.

Returns

dict – Configuration parameters of the loss

is_unmasked_sample(y_true: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor[source]

Get whether the samples are unmasked (i.e. have real label data).

Parameters

y_true (tf.Tensor) – Tensor of the true labels

Returns

tf.Tensor – Tensor of 0s and 1s indicating whether that sample is unmasked.

PrognosAIs.Model.Metrics module

class PrognosAIs.Model.Metrics.ConcordanceIndex(name='ConcordanceIndex', **kwargs)[source]

Bases: tensorflow.python.keras.metrics.Metric

result()[source]

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  1. Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.

  2. You don’t need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Parameters
  • *args

  • **kwargs – A mini-batch of inputs to the Metric.

class PrognosAIs.Model.Metrics.DICE(name='dice_coefficient', foreground_only=True, **kwargs)[source]

Bases: tensorflow.python.keras.metrics.Metric

get_config()[source]

Returns the serializable config of the metric.

result()[source]

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  1. Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.

  2. You don’t need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Parameters
  • *args

  • **kwargs – A mini-batch of inputs to the Metric.

class PrognosAIs.Model.Metrics.MaskedAUC(name='MaskedAUC', mask_value=-1, **kwargs)[source]

Bases: tensorflow.python.keras.metrics.AUC

get_config()[source]

Returns the serializable config of the metric.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates confusion matrix statistics.

Parameters
  • y_true – The ground truth values.

  • y_pred – The predicted values.

  • sample_weight – Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true.

Returns

Update op.

class PrognosAIs.Model.Metrics.MaskedCategoricalAccuracy(name='MaskedCategoricalAccuracy', mask_value=-1, **kwargs)[source]

Bases: tensorflow.python.keras.metrics.CategoricalAccuracy

get_config()[source]

Returns the serializable config of the metric.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates metric statistics.

y_true and y_pred should have the same shape.

Parameters
  • y_true – Ground truth values. shape = [batch_size, d0, .. dN].

  • y_pred – The predicted values. shape = [batch_size, d0, .. dN].

  • sample_weight – Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)).

Returns

Update op.
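
A hedged compile-time sketch showing how a masked metric is typically paired with the masked loss, so that samples carrying the mask value influence neither the loss nor the reported accuracy; the model itself is a hypothetical three-class classifier:

```python
import tensorflow as tf
from PrognosAIs.Model.Losses import MaskedCategoricalCrossentropy
from PrognosAIs.Model.Metrics import MaskedCategoricalAccuracy

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(3, activation="softmax", input_shape=(16,))]
)
model.compile(
    optimizer="adam",
    loss=MaskedCategoricalCrossentropy(),
    metrics=[MaskedCategoricalAccuracy()],
)
```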

class PrognosAIs.Model.Metrics.MaskedSensitivity(name='masked_sensitivity', mask_value=-1, **kwargs)[source]

Bases: tensorflow.python.keras.metrics.Metric

reset_states()[source]

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.

result()[source]

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  1. Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.

  2. You don’t need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Parameters
  • *args

  • **kwargs – A mini-batch of inputs to the Metric.

class PrognosAIs.Model.Metrics.MaskedSpecificity(name='masked_specificity', mask_value=-1, **kwargs)[source]

Bases: tensorflow.python.keras.metrics.Metric

reset_states()[source]

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.

result()[source]

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  1. Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.

  2. You don’t need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Parameters
  • *args

  • **kwargs – A mini-batch of inputs to the Metric.

class PrognosAIs.Model.Metrics.Sensitivity(name='Sensitivity_custom', **kwargs)[source]

Bases: tensorflow.python.keras.metrics.Metric

reset_states()[source]

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.

result()[source]

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  1. Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.

  2. You don’t need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Parameters
  • *args

  • **kwargs – A mini-batch of inputs to the Metric.

class PrognosAIs.Model.Metrics.Specificity(name='Specificity_custom', **kwargs)[source]

Bases: tensorflow.python.keras.metrics.Metric

reset_states()[source]

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.

result()[source]

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  1. Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.

  2. You don’t need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Parameters
  • *args

  • **kwargs – A mini-batch of inputs to the Metric.

PrognosAIs.Model.Metrics.concordance_index(y_true, y_pred)[source]

This function determines the concordance index given two TensorFlow tensors.

y_true contains a label indicating whether an event occurred, together with the time to event (or the time to right censoring if no event occurred).

y_pred is beta*x in the Cox model.

PrognosAIs.Model.Parsers module

class PrognosAIs.Model.Parsers.CallbackParser(callback_settings: dict, root_path: str = None, module_paths=None, save_name=None)[source]

Bases: PrognosAIs.Model.Parsers.StandardParser

__init__(callback_settings: dict, root_path: str = None, module_paths=None, save_name=None)[source]

Parse callback settings to actual callbacks

Parameters

callback_settings – Settings for the callbacks

Returns

None

get_callbacks()[source]
replace_root_path(settings, root_path)[source]
class PrognosAIs.Model.Parsers.LossParser(loss_settings: dict, class_weights: dict = None, module_paths=None)[source]

Bases: PrognosAIs.Model.Parsers.StandardParser

__init__(loss_settings: dict, class_weights: dict = None, module_paths=None)[source]

Parse loss settings to actual losses

Parameters

loss_settings – Settings for the losses

Returns

None

get_losses()[source]
class PrognosAIs.Model.Parsers.MetricParser(metric_settings: dict, label_names: list = None, module_paths=None)[source]

Bases: PrognosAIs.Model.Parsers.StandardParser

__init__(metric_settings: dict, label_names: list = None, module_paths=None) → None[source]

Parse metrics settings to actual metrics

Parameters

metric_settings – Settings for the metrics

convert_metrics_list_to_dict(metrics: list) → dict[source]
get_metrics()[source]
class PrognosAIs.Model.Parsers.OptimizerParser(optimizer_settings: dict, module_paths=None)[source]

Bases: PrognosAIs.Model.Parsers.StandardParser

__init__(optimizer_settings: dict, module_paths=None) → None[source]

Interfacing class to easily get a tf.keras.optimizers optimizer

Parameters

optimizer_settings – Arguments to be passed to the optimizer

Returns

None

get_optimizer()[source]
class PrognosAIs.Model.Parsers.StandardParser(config: dict, module_paths: list)[source]

Bases: object

get_class(class_name)[source]
parse_settings()[source]

PrognosAIs.Model.Trainer module

class PrognosAIs.Model.Trainer.Trainer(config: PrognosAIs.IO.ConfigLoader.ConfigLoader, sample_folder: str, output_folder: str, tmp_data_folder: Optional[str] = None, save_name: Optional[str] = None)[source]

Bases: object

Trainer to be used for training a model.

__init__(config: PrognosAIs.IO.ConfigLoader.ConfigLoader, sample_folder: str, output_folder: str, tmp_data_folder: Optional[str] = None, save_name: Optional[str] = None) → None[source]

Trainer to be used for training a model.

Parameters
  • config (ConfigLoader.ConfigLoader) – Config to be used

  • sample_folder (str) – Folder containing the train and validation samples

  • output_folder (str) – Folder to put the resulting model

  • tmp_data_folder (str) – Folder to copy samples to and load from. Defaults to None.

  • save_name (str) – Specify a name to save the model as instead of using an automatically generated one. Defaults to None.
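
A hedged construction sketch; the ConfigLoader argument (a path to a config file) is an assumption, while the Trainer arguments follow the signature documented above:

```python
from PrognosAIs.IO.ConfigLoader import ConfigLoader
from PrognosAIs.Model.Trainer import Trainer

config = ConfigLoader("config.yaml")      # assumed ConfigLoader usage
trainer = Trainer(
    config=config,
    sample_folder="samples/",             # expected to contain train and validation samples
    output_folder="results/",
    tmp_data_folder=None,
    save_name="my_model",
)
model_path = trainer.train_model()        # returns the location of the saved model
```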

static _get_architecture_name(model_name: str, input_dimensionality: dict) → Tuple[str, str][source]

Get the full architecture name from the model name and input dimensionality.

Parameters
  • model_name (str) – Name of the model

  • input_dimensionality (dict) – Dimensionality of the different inputs

Returns

Tuple[str, str] – Class name of the architecture and the full architecture name

_setup_model() → tensorflow.python.keras.engine.training.Model[source]

Get the model architecture from the architecture name (not yet compiled).

Raises

ValueError – If architecture is not known

Returns

tf.keras.Model – The loaded architecture

get_distribution_strategy() → tensorflow.python.distribute.distribute_lib.Strategy[source]

Get the appropriate distribution strategy.

A strategy is returned that can distribute training over multiple SLURM nodes, over multiple GPUs, on a single GPU, or on a single CPU (in that order of preference).

Returns

tf.distribute.Strategy – The distribution strategy to be used in training.
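
The returned strategy is used in the standard TensorFlow way, as a scope inside which the model and its variables are created; a generic (non-PrognosAIs) sketch:

```python
import tensorflow as tf

# Stand-in for trainer.get_distribution_strategy(); the usage pattern is identical.
strategy = tf.distribute.get_strategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
```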

classmethod init_from_sys_args(args_in: list) → PrognosAIs.Model.Trainer.Trainer[source]

Initialize a Trainer object from the command line.

Parameters

args_in (list) – Arguments to parse to the trainer

Returns

Trainer – The trainer object

load_class_weights() → Union[None, dict][source]

Load the class weights from the class weight file.

Returns

Union[None, dict] – Class weights if requested and the class weight file exists, otherwise None.

property model

Model to be used in training.

Returns

tf.keras.Model – The model

move_data_to_temporary_folder(data_folder: str) → str[source]

Move the data to a temporary directory before loading.

Parameters

data_folder (str) – The original data folder

Returns

str – Folder to which the data has been moved

set_precision_strategy(float_policy_setting: Union[str, bool]) → None[source]

Set the appropriate precision strategy for GPUs.

If the GPUs support it, a mixed float16 precision policy is used (see tf.keras.mixed_precision for more information), which reduces the memory overhead of training while keeping numerically sensitive computations in float32. If the GPUs don't support mixed precision, a plain float16 policy is tried instead. If that doesn't work either, the normal policy is used. If you get NaN values for the loss or the loss doesn't converge, it might be because of the policy; try running the model without a policy setting.

Parameters

float_policy_setting (Union[str, bool]) – Which policy to select. If set to PrognosAIs.Constants.AUTO, we automatically determine what can be done; “mixed” only considers a mixed precision policy; “float16” only considers a float16 policy. Set to False to not use a policy.
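
A rough illustration of the underlying TensorFlow mechanism (the exact API location varies with the TensorFlow version; recent versions expose it as shown):

```python
import tensorflow as tf

# Enable mixed precision: compute mostly in float16, keep variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# If the loss becomes NaN or fails to converge, fall back to the default policy.
tf.keras.mixed_precision.set_global_policy("float32")
```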

static set_tf_config(cluster_resolver: tensorflow.python.distribute.cluster_resolver.cluster_resolver.ClusterResolver, environment: Optional[str] = None) → None[source]

Set the TF_CONFIG env variable from the given cluster resolver.

From https://github.com/tensorflow/tensorflow/issues/37693

Parameters
  • cluster_resolver (tf.distribute.cluster_resolver.ClusterResolver) – cluster resolver to use.

  • environment (str) – Environment to set in TF_CONFIG. Defaults to None.

setup_callbacks() → list[source]

Set up callbacks to be used during training.

Returns

list – the callbacks

setup_data_generator(sample_folder: str) → PrognosAIs.IO.DataGenerator.HDF5Generator[source]

Set up a data generator for a folder containing training samples.

Parameters

sample_folder (str) – The path to the folder containing the sample files.

Raises

ValueError – If the sample folder does not exist or does not contain any samples.

Returns

DataGenerator.HDF5Generator – Data generator for the samples in the sample folder.

setup_model() → tensorflow.python.keras.engine.training.Model[source]

Set up the model to be used during training.

Returns

tf.keras.Model – The compiled model to be trained.

property train_data_generator

The train data generator to be used in training.

Returns

DataGenerator.HDF5Generator – The train data generator

train_model() → str[source]

Train the model.

Returns

str – The location where the model has been saved

property validation_data_generator

The validation data generator to be used in training.

Returns

DataGenerator.HDF5Generator – The validation data generator

Module contents