PrognosAIs.Model.Architectures package

Submodules

PrognosAIs.Model.Architectures.AlexNet module

class PrognosAIs.Model.Architectures.AlexNet.AlexNet_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

create_model()[source]

Create the model.

padding_type = 'valid'
class PrognosAIs.Model.Architectures.AlexNet.AlexNet_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

create_model()[source]

Create the model.

padding_type = 'valid'
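
All classification architectures in this package share the constructor shown above. The following is a minimal usage sketch; the exact structure of input_shapes (assumed here: input name -> shape tuple) and output_info (assumed here: output name -> number of classes) is an illustrative assumption and is not confirmed by this reference.

    # Hypothetical usage sketch of AlexNet_2D; the dictionary layouts below are
    # assumptions, not taken from this reference.
    from PrognosAIs.Model.Architectures.AlexNet import AlexNet_2D

    architecture = AlexNet_2D(
        input_shapes={"image": (227, 227, 3)},  # assumed: input name -> shape tuple
        output_info={"label": 2},               # assumed: output name -> number of classes
        input_data_type="float32",
        output_data_type="float32",
        model_config={},
    )
    model = architecture.create_model()  # assumed to return a tf.keras Model
    model.summary()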

PrognosAIs.Model.Architectures.Architecture module

class PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.NetworkArchitecture

static make_outputs(output_info: dict, output_data_type: str, activation_type: str = 'softmax', squeeze_outputs: bool = True) → dict[source]

Create the output layers of the network.

class PrognosAIs.Model.Architectures.Architecture.NetworkArchitecture(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config: dict = {})[source]

Bases: abc.ABC

static check_minimum_input_size(input_layer: tensorflow.python.keras.engine.input_layer.Input, minimum_input_size: numpy.ndarray)[source]
abstract create_model()[source]

Create the model; concrete architectures must implement this method.

static get_corrected_stride_size(layer: tensorflow.keras.layers.Layer, stride_size: list, conv_size: list)[source]

Ensure that the stride is never larger than the actual input, so that any network keeps working independent of the input size (see the sketch after this class listing).

make_dropout_layer(layer)[source]
make_inputs(input_shapes: dict, input_dtype: str, squeeze_inputs: bool = True) → Union[dict, tensorflow.python.keras.engine.input_layer.Input][source]
abstract make_outputs(output_info: dict, output_data_type: str) → tensorflow.keras.layers.Layer[source]

Create the output layers of the network.
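
The stride correction performed by get_corrected_stride_size can be illustrated with a small, library-independent sketch: each requested stride is clipped to the available input size, so a layer still produces output for very small inputs. This shows only the idea, not the package's implementation.

    # Illustrative sketch of stride correction (not the library's implementation).
    def corrected_stride_size(input_size, stride_size):
        # Clip each stride to the size of the corresponding input dimension.
        return [min(stride, size) for stride, size in zip(stride_size, input_size)]

    print(corrected_stride_size(input_size=[4, 4, 2], stride_size=[2, 2, 4]))
    # -> [2, 2, 2]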

PrognosAIs.Model.Architectures.DDSNet module

class PrognosAIs.Model.Architectures.DDSNet.DDSNet(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

get_DDS_block(layer, N_filters)[source]
init_dimensionality(N_dimension)[source]
class PrognosAIs.Model.Architectures.DDSNet.DDSNet_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DDSNet.DDSNet

create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.DDSNet.DDSNet_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DDSNet.DDSNet

create_model()[source]

Create the model.

dims = 3

PrognosAIs.Model.Architectures.DenseNet module

class PrognosAIs.Model.Architectures.DenseNet.DenseNet(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

get_dense_block(layer, N_filters, N_conv_layers)[source]
get_dense_stem(layer, N_filters)[source]
get_transition_block(layer, N_filters, theta)[source]
init_dimensionality(N_dimension)[source]
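
The class constants listed for each subclass below follow the standard DenseNet formulation: every convolution inside a dense block adds GROWTH_RATE feature maps, and a transition block compresses the channel count by the factor THETA. The sketch below only illustrates this standard arithmetic and is not taken from the package's code.

    # Standard DenseNet channel bookkeeping (illustrative, not the package's code).
    GROWTH_RATE = 32      # feature maps added by each conv layer in a dense block
    INITIAL_FILTERS = 64  # filters produced by the stem
    THETA = 0.5           # compression factor of a transition block

    def channels_after_dense_block(channels_in, n_conv_layers, growth_rate=GROWTH_RATE):
        # Each conv layer concatenates growth_rate new feature maps.
        return channels_in + n_conv_layers * growth_rate

    def channels_after_transition(channels_in, theta=THETA):
        # A transition block keeps only the fraction theta of the channels.
        return int(theta * channels_in)

    # DenseNet-121: first dense block (6 conv layers) followed by a transition.
    channels = channels_after_dense_block(INITIAL_FILTERS, 6)  # 64 + 6 * 32 = 256
    channels = channels_after_transition(channels)             # 0.5 * 256 = 128
    print(channels)  # 128
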
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_121_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_121_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 3
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_169_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_169_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 3
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_201_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_201_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 3
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_264_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.DenseNet.DenseNet_264_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.DenseNet.DenseNet

GROWTH_RATE = 32
INITIAL_FILTERS = 64
THETA = 0.5
create_model()[source]

Create the model.

dims = 3

PrognosAIs.Model.Architectures.InceptionNet module

class PrognosAIs.Model.Architectures.InceptionNet.InceptionNet_InceptionResNetV2_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.InceptionNet.InceptionResNet

create_model()[source]

Create the model.

class PrognosAIs.Model.Architectures.InceptionNet.InceptionNet_InceptionResNetV2_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.InceptionNet.InceptionResNet

create_model()[source]

Create the model.

class PrognosAIs.Model.Architectures.InceptionNet.InceptionResNet(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

get_inception_resnet_A(layer)[source]
get_inception_resnet_B(layer)[source]
get_inception_resnet_C(layer)[source]
get_inception_resnet_reduction_A(layer)[source]
get_inception_resnet_reduction_B(layer)[source]
get_inception_stem(layer)[source]
init_dimensionality(N_dimension)[source]
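
The get_inception_resnet_* helpers correspond to the block types of the Inception-ResNet-v2 architecture. Below is a generic 2D sketch of an Inception-ResNet-A style block as a rough illustration of what such a block computes; the filter counts and residual scaling are typical values from the original architecture, not the package's own settings.

    # Generic Inception-ResNet-A style block (illustrative sketch; not the
    # package's get_inception_resnet_A implementation).
    from tensorflow.keras import layers

    def inception_resnet_a_block(x, scale=0.17):
        # Three parallel branches of increasing depth.
        b0 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
        b1 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
        b1 = layers.Conv2D(32, 3, padding="same", activation="relu")(b1)
        b2 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
        b2 = layers.Conv2D(48, 3, padding="same", activation="relu")(b2)
        b2 = layers.Conv2D(64, 3, padding="same", activation="relu")(b2)
        mixed = layers.Concatenate()([b0, b1, b2])
        # Linear 1x1 conv back to the input channel count so the residual add is valid.
        up = layers.Conv2D(x.shape[-1], 1, padding="same", activation=None)(mixed)
        scaled = layers.Lambda(lambda t: t * scale)(up)
        return layers.Activation("relu")(layers.Add()([x, scaled]))

    inputs = layers.Input(shape=(35, 35, 256))
    outputs = inception_resnet_a_block(inputs)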

PrognosAIs.Model.Architectures.ResNet module

class PrognosAIs.Model.Architectures.ResNet.ResNet(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

get_residual_conv_block(layer: tensorflow.keras.layers.Layer, N_filters: int, kernel_size: list)[source]
get_residual_identity_block(layer, N_filters, kernel_size)[source]
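
Both block helpers above follow the usual ResNet pattern: an identity block adds the unchanged input back onto the convolution path, while a conv block also projects the shortcut when the spatial size or filter count changes. A generic 2D identity block is sketched below; it illustrates the pattern only and is not the package's get_residual_identity_block.

    # Generic ResNet identity block (illustrative sketch; not the package's code).
    from tensorflow.keras import layers

    def residual_identity_block(x, n_filters, kernel_size=3):
        shortcut = x
        y = layers.Conv2D(n_filters, kernel_size, padding="same")(x)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(n_filters, kernel_size, padding="same")(y)
        y = layers.BatchNormalization()(y)
        # The shortcut is added unchanged, so n_filters must match the input channels.
        y = layers.Add()([y, shortcut])
        return layers.Activation("relu")(y)

    inputs = layers.Input(shape=(56, 56, 64))
    outputs = residual_identity_block(inputs, n_filters=64)
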
class PrognosAIs.Model.Architectures.ResNet.ResNet_18_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.ResNet.ResNet

create_model()[source]

Create the model.

class PrognosAIs.Model.Architectures.ResNet.ResNet_18_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.ResNet.ResNet

create_model()[source]

Create the model.

class PrognosAIs.Model.Architectures.ResNet.ResNet_18_multioutput_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.ResNet.ResNet

create_model()[source]

Create the model.

class PrognosAIs.Model.Architectures.ResNet.ResNet_34_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.ResNet.ResNet

create_model()[source]

Create the model.

class PrognosAIs.Model.Architectures.ResNet.ResNet_34_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.ResNet.ResNet

create_model()[source]

Create the model.

PrognosAIs.Model.Architectures.UNet module

class PrognosAIs.Model.Architectures.UNet.UNet_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config: dict = {})[source]

Bases: PrognosAIs.Model.Architectures.UNet.Unet

create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.UNet.UNet_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config: dict = {})[source]

Bases: PrognosAIs.Model.Architectures.UNet.Unet

create_model()[source]

Create the model.

dims = 3
class PrognosAIs.Model.Architectures.UNet.Unet(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config: dict = {})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.NetworkArchitecture

get_conv_block(layer, N_filters, kernel_size=3, activation='relu', kernel_regularizer=None)[source]
get_cropping_block(conv_layer, upsampling_layer)[source]
get_depth()[source]
get_number_of_filters()[source]
get_padding_block(layer)[source]
get_pool_block(layer)[source]
get_upsampling_block(layer, N_filters, activation='relu', kernel_regularizer=None)[source]
init_dimensionality(N_dimension)[source]
make_outputs(output_info: dict, output_data_type: str, activation_type: str = 'softmax')[source]

Create the output layers of the network.
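
Unlike the classification architectures, the U-Net classes derive directly from NetworkArchitecture and build softmax outputs through their own make_outputs. The following is a minimal usage sketch; as before, the dictionary layouts are assumptions for illustration only.

    # Hypothetical usage sketch of UNet_3D for segmentation; the dictionary
    # layouts below are assumptions, not taken from this reference.
    from PrognosAIs.Model.Architectures.UNet import UNet_3D

    architecture = UNet_3D(
        input_shapes={"MRI": (128, 128, 128, 1)},  # assumed: input name -> shape tuple
        output_info={"mask": 2},                   # assumed: output name -> number of classes
        model_config={},                           # defaults; specific keys are undocumented here
    )
    model = architecture.create_model()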

PrognosAIs.Model.Architectures.VGG module

class PrognosAIs.Model.Architectures.VGG.VGG(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.Architecture.ClassificationNetworkArchitecture

get_VGG_block(layer, N_filters, N_conv_layer)[source]
init_dimensionality(N_dimension)[source]
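
get_VGG_block follows the familiar VGG pattern of stacking several same-size convolutions before a pooling step. The sketch below shows that pattern generically in 2D; it is not the package's implementation.

    # Generic VGG block (illustrative sketch; not the package's get_VGG_block).
    from tensorflow.keras import layers

    def vgg_block(x, n_filters, n_conv_layers):
        # Several 3x3 convolutions with the same filter count...
        for _ in range(n_conv_layers):
            x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
        # ...followed by a single downsampling max-pool.
        return layers.MaxPooling2D(pool_size=2)(x)

    inputs = layers.Input(shape=(224, 224, 3))
    outputs = vgg_block(inputs, n_filters=64, n_conv_layers=2)
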
class PrognosAIs.Model.Architectures.VGG.VGG_16_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.VGG.VGG

create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.VGG.VGG_16_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.VGG.VGG

create_model()[source]

Create the model.

dims = 3
class PrognosAIs.Model.Architectures.VGG.VGG_19_2D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.VGG.VGG

create_model()[source]

Create the model.

dims = 2
class PrognosAIs.Model.Architectures.VGG.VGG_19_3D(input_shapes: dict, output_info: dict, input_data_type='float32', output_data_type='float32', model_config={})[source]

Bases: PrognosAIs.Model.Architectures.VGG.VGG

create_model()[source]

Create the model.

dims = 3

Module contents