learnergy.models.deep

Deep-based models.

A package containing deep-based models (networks) for all common learnergy modules.

class learnergy.models.deep.ConvDBN(model: Optional[str] = 'bernoulli', visible_shape: Optional[Tuple[int, int]] = (28, 28), filter_shape: Optional[Tuple[Tuple[int, int], ...]] = ((7, 7),), n_filters: Optional[Tuple[int, ...]] = (16,), n_channels: Optional[int] = 1, steps: Optional[Tuple[int, ...]] = (1,), learning_rate: Optional[Tuple[float, ...]] = (0.0001,), momentum: Optional[Tuple[float, ...]] = (0.0,), decay: Optional[Tuple[float, ...]] = (0.0,), maxpooling: Optional[Tuple[bool, ...]] = (False, False), pooling_kernel: Optional[Tuple[int, ...]] = (2, 2), use_gpu: Optional[bool] = False)

Bases: learnergy.core.Model

A ConvDBN class provides the basic implementation for Convolutional DBNs.

References

H. Lee, et al. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Proceedings of the 26th annual international conference on machine learning (2009).

__init__(self, model: Optional[str] = 'bernoulli', visible_shape: Optional[Tuple[int, int]] = (28, 28), filter_shape: Optional[Tuple[Tuple[int, int], ...]] = ((7, 7),), n_filters: Optional[Tuple[int, ...]] = (16,), n_channels: Optional[int] = 1, steps: Optional[Tuple[int, ...]] = (1,), learning_rate: Optional[Tuple[float, ...]] = (0.0001,), momentum: Optional[Tuple[float, ...]] = (0.0,), decay: Optional[Tuple[float, ...]] = (0.0,), maxpooling: Optional[Tuple[bool, ...]] = (False, False), pooling_kernel: Optional[Tuple[int, ...]] = (2, 2), use_gpu: Optional[bool] = False)

Initialization method.

Parameters
  • model – Indicates which type of ConvRBM should be used to compose the ConvDBN.

  • visible_shape – Shape of visible units.

  • filter_shape – Shape of filters per layer.

  • n_filters – Number of filters per layer.

  • n_channels – Number of channels.

  • steps – Number of Gibbs’ sampling steps per layer.

  • learning_rate – Learning rate per layer.

  • momentum – Momentum parameter per layer.

  • decay – Weight decay used for penalization per layer.

  • maxpooling – Whether MaxPooling2D should be used or not.

  • pooling_kernel – The kernel size of each square-sized MaxPooling layer (when maxpooling=True).

  • use_gpu – Whether GPU should be used or not.

property decay(self)

Weight decay per layer.

property filter_shape(self)

Shape of filters.

fit(self, dataset: Union[torch.utils.data.Dataset, learnergy.core.Dataset], batch_size: Optional[int] = 128, epochs: Optional[Tuple[int, ...]] = (10, 10))

Fits a new ConvDBN model.

Parameters
  • dataset – A Dataset object containing the training data.

  • batch_size – Amount of samples per batch.

  • epochs – Number of training epochs per layer.

Returns

MSE (mean squared error) from the training step.

Return type

(float)

forward(self, x: torch.Tensor)

Performs a forward pass over the data.

Parameters

x – An input tensor for computing the forward pass.

Returns

A tensor containing the ConvDBN’s outputs.

Return type

(torch.Tensor)

property lr(self)

Learning rate per layer.

property maxpooling(self)

Usage of MaxPooling.

property models(self)

List of models (RBMs).

property momentum(self)

Momentum parameter per layer.

property n_channels(self)

Number of channels.

property n_filters(self)

Number of filters.

property n_layers(self)

Number of layers.

reconstruct(self, dataset: torch.utils.data.Dataset)

Reconstructs batches of new samples.

Parameters

dataset (torch.utils.data.Dataset) – A Dataset object containing the training data.

Returns

Reconstruction error and visible probabilities, i.e., P(v|h).

Return type

(Tuple[float, torch.Tensor])

property steps(self)

Number of Gibbs’ sampling steps per layer.

property visible_shape(self)

Shape of visible units.

class learnergy.models.deep.DBN(model: Optional[Tuple[str, ...]] = ('gaussian',), n_visible: Optional[int] = 128, n_hidden: Optional[Tuple[int, ...]] = (128,), steps: Optional[Tuple[int, ...]] = (1,), learning_rate: Optional[Tuple[float, ...]] = (0.1,), momentum: Optional[Tuple[float, ...]] = (0.0,), decay: Optional[Tuple[float, ...]] = (0.0,), temperature: Optional[Tuple[float, ...]] = (1.0,), use_gpu: Optional[bool] = False, normalize: Optional[bool] = True, input_normalize: Optional[bool] = True)

Bases: learnergy.core.Model

A DBN class provides the basic implementation for Deep Belief Networks.

References

G. Hinton, S. Osindero, Y. Teh. A fast learning algorithm for deep belief nets. Neural computation (2006).

property T(self)

Temperature factor per layer.

__init__(self, model: Optional[Tuple[str, ...]] = ('gaussian',), n_visible: Optional[int] = 128, n_hidden: Optional[Tuple[int, ...]] = (128,), steps: Optional[Tuple[int, ...]] = (1,), learning_rate: Optional[Tuple[float, ...]] = (0.1,), momentum: Optional[Tuple[float, ...]] = (0.0,), decay: Optional[Tuple[float, ...]] = (0.0,), temperature: Optional[Tuple[float, ...]] = (1.0,), use_gpu: Optional[bool] = False, normalize: Optional[bool] = True, input_normalize: Optional[bool] = True)

Initialization method.

Parameters
  • model – Indicates which type of RBM should be used to compose the DBN.

  • n_visible – Amount of visible units.

  • n_hidden – Amount of hidden units per layer.

  • steps – Number of Gibbs’ sampling steps per layer.

  • learning_rate – Learning rate per layer.

  • momentum – Momentum parameter per layer.

  • decay – Weight decay used for penalization per layer.

  • temperature – Temperature factor per layer.

  • use_gpu – Whether GPU should be used or not.

  • normalize – Whether the data should be normalized or not.

  • input_normalize – Whether the input should be normalized or not.

property decay(self)

Weight decay per layer.

fit(self, dataset: Union[torch.utils.data.Dataset, learnergy.core.Dataset], batch_size: Optional[int] = 128, epochs: Optional[Tuple[int, ...]] = (10,))

Fits a new DBN model.

Parameters
  • dataset – A Dataset object containing the training data.

  • batch_size – Amount of samples per batch.

  • epochs – Number of training epochs per layer.

Returns

MSE (mean squared error) and log pseudo-likelihood from the training step.

Return type

(Tuple[float, float])

forward(self, x: torch.Tensor)

Performs a forward pass over the data.

Parameters

x – An input tensor for computing the forward pass.

Returns

A tensor containing the DBN’s outputs.

Return type

(torch.Tensor)

property lr(self)

Learning rate per layer.

property models(self)

List of models (RBMs).

property momentum(self)

Momentum parameter per layer.

property n_hidden(self)

Number of hidden units per layer.

property n_layers(self)

Number of layers.

property n_visible(self)

Number of visible units.

reconstruct(self, dataset: torch.utils.data.Dataset)

Reconstructs batches of new samples.

Parameters

dataset (torch.utils.data.Dataset) – A Dataset object containing the training data.

Returns

Reconstruction error and visible probabilities, i.e., P(v|h).

Return type

(Tuple[float, torch.Tensor])

property steps(self)

Number of Gibbs’ sampling steps per layer.

class learnergy.models.deep.ResidualDBN(model: Optional[str] = 'bernoulli', n_visible: Optional[int] = 128, n_hidden: Optional[Tuple[int, ...]] = (128,), steps: Optional[Tuple[int, ...]] = (1,), learning_rate: Optional[Tuple[float, ...]] = (0.1,), momentum: Optional[Tuple[float, ...]] = (0.0,), decay: Optional[Tuple[float, ...]] = (0.0,), temperature: Optional[Tuple[float, ...]] = (1.0,), zetta1: Optional[float] = 1.0, zetta2: Optional[float] = 1.0, use_gpu: Optional[bool] = False)

Bases: learnergy.models.deep.DBN

A ResidualDBN class provides the basic implementation for Residual-based Deep Belief Networks.

References

M. Roder, et al. A Layer-Wise Information Reinforcement Approach to Improve Learning in Deep Belief Networks. International Conference on Artificial Intelligence and Soft Computing (2020).

__init__(self, model: Optional[str] = 'bernoulli', n_visible: Optional[int] = 128, n_hidden: Optional[Tuple[int, ...]] = (128,), steps: Optional[Tuple[int, ...]] = (1,), learning_rate: Optional[Tuple[float, ...]] = (0.1,), momentum: Optional[Tuple[float, ...]] = (0.0,), decay: Optional[Tuple[float, ...]] = (0.0,), temperature: Optional[Tuple[float, ...]] = (1.0,), zetta1: Optional[float] = 1.0, zetta2: Optional[float] = 1.0, use_gpu: Optional[bool] = False)

Initialization method.

Parameters
  • model (str) – Indicates which type of RBM should be used to compose the ResidualDBN.

  • n_visible (int) – Amount of visible units.

  • n_hidden (tuple) – Amount of hidden units per layer.

  • steps (tuple) – Number of Gibbs’ sampling steps per layer.

  • learning_rate (tuple) – Learning rate per layer.

  • momentum (tuple) – Momentum parameter per layer.

  • decay (tuple) – Weight decay used for penalization per layer.

  • temperature (tuple) – Temperature factor per layer.

  • zetta1 (float) – Penalization factor for original learning.

  • zetta2 (float) – Penalization factor for residual learning.

  • use_gpu (boolean) – Whether GPU should be used or not.

calculate_residual(self, pre_activations: torch.Tensor)

Calculates the residual learning over input.

Parameters

pre_activations (torch.Tensor) – Pre-activations to be used.

Returns

The residual learning based on input pre-activations.

Return type

(torch.Tensor)

fit(self, dataset: Union[torch.utils.data.Dataset, learnergy.core.Dataset], batch_size: Optional[int] = 128, epochs: Optional[Tuple[int, ...]] = (10,))

Fits a new ResidualDBN model.

Parameters
  • dataset – A Dataset object containing the training data.

  • batch_size – Amount of samples per batch.

  • epochs – Number of training epochs per layer.

Returns

MSE (mean squared error) and log pseudo-likelihood from the training step.

Return type

(Tuple[float, float])

forward(self, x: torch.Tensor)

Rewrites the forward pass for classification purposes.

Parameters

x – An input tensor for computing the forward pass.

Returns

A tensor containing the ResidualDBN’s outputs.

Return type

(torch.Tensor)

property zetta1(self)

Penalization factor for original learning.

property zetta2(self)

Penalization factor for residual learning.