nn.Module functions and definitions

torch.nn.Module

A base class for all PyTorch neural network modules.
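
A minimal sketch of subclassing nn.Module (the layer sizes here are illustrative):

import torch
import torch.nn as nn

class TinyNet(nn.Module):  # hypothetical example network
    def __init__(self):
        super().__init__()           # required before adding submodules
        self.fc1 = nn.Linear(4, 8)   # submodules assigned as attributes are registered automatically
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))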

Loss Function

A loss function measures the difference between predicted and actual values, quantifying how well the model is performing on a specific task.

Optimizer

An optimizer updates the model's parameters during training, using the gradients of the loss with respect to those parameters, to minimize the error and improve model performance.
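
A minimal sketch of one training step, with a placeholder model and a dummy batch, showing how the optimizer, loss, and backpropagation fit together:

import torch
import torch.nn as nn

model = nn.Linear(4, 1)                          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x, y = torch.randn(16, 4), torch.randn(16, 1)    # dummy batch
optimizer.zero_grad()           # clear gradients from the previous step
loss = criterion(model(x), y)   # forward pass and loss
loss.backward()                 # backpropagation fills each param.grad
optimizer.step()                # update parameters using the gradients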

nn.Module.apply(fn)

Applies a function recursively to every submodule (as returned by .children()) as well as the module itself; commonly used for initializing weights.
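
A sketch of the weight-initialization use case, with an illustrative initializer function:

import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Linear):           # only touch Linear submodules
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
net.apply(init_weights)   # fn is called on every submodule, then on net itself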

Backpropagation

Backpropagation is a process of computing gradients of the loss function with respect to the parameters using the chain rule of calculus, enabling efficient optimization in neural networks.
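
A minimal illustration on a single scalar; the numbers are arbitrary:

import torch

w = torch.tensor(1.0, requires_grad=True)
loss = (3.0 * w - 6.0) ** 2   # loss = (3w - 6)^2
loss.backward()               # chain rule: dloss/dw = 2*(3w - 6)*3
print(w.grad)                 # tensor(-18.) at w = 1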

torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Adam optimizer. It combines adaptive learning rates for each parameter with momentum-based updates.

torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)

RMSprop optimizer. It uses moving averages of squared gradients to scale the learning rate.
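
Constructing either optimizer for an existing model (the model here is a placeholder; the keyword values shown are the defaults):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)   # placeholder model
adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
rmsprop = torch.optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99)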

nn.Module.add_module(name, module)

Adds a child module to the current module.

nn.Module.__call__(input)

Invoked when the module is called as a function; it runs any registered hooks and then dispatches to the module's 'forward' method.

nn.Module.double()

Casts all floating-point parameters and buffers of the module to float64 (double).

nn.Module.float()

Casts all floating-point parameters and buffers of the module to float32 (float).

nn.Module.half()

Casts all floating-point parameters and buffers of the module to half-precision floating point (float16).
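
A short sketch of the three casting methods on a placeholder module:

import torch.nn as nn

model = nn.Linear(4, 2)   # parameters start as float32
model.double()            # parameters and buffers become torch.float64
model.float()             # back to torch.float32, the default
model.half()              # torch.float16, commonly used on GPUs; inputs must be cast to match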

torch.nn.CrossEntropyLoss()

Cross-entropy loss function, commonly used for multi-class classification. It expects raw logits and applies LogSoftmax internally, so no Softmax layer is needed before it.
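
A sketch with random logits for 3 samples and 5 classes:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(3, 5)           # raw, unnormalized scores
targets = torch.tensor([1, 0, 4])    # class indices
loss = criterion(logits, targets)    # LogSoftmax is applied internally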

nn.Module.forward(input)

Defines the computation performed at every call to the module.

Gradient Descent

Gradient Descent is an optimization algorithm that iteratively updates the parameters in the direction of steepest descent of the loss function to minimize the error.
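
A minimal sketch of the update rule theta <- theta - lr * grad, minimizing (w - 5)^2 by hand:

import torch

w = torch.tensor(1.0, requires_grad=True)
lr = 0.1
for _ in range(3):
    loss = (w - 5.0) ** 2
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad   # step along the negative gradient
    w.grad.zero_()         # clear the gradient for the next iteration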

nn.Module.parameters()

Returns an iterator over module parameters.

nn.Module.__init__()

Initializes the module's internal state; subclasses must call super().__init__() before assigning submodules or parameters.

nn.Module.state_dict()

Returns the module's state dictionary, containing all learnable parameters and registered buffers.

nn.Module.load_state_dict(state_dict)

Loads the state dictionary into the module.
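
A save/load round trip with a placeholder model and an assumed file path:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")        # parameters and buffers

restored = nn.Linear(4, 2)                        # same architecture
restored.load_state_dict(torch.load("model.pt"))  # copy the saved state in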

torch.nn.MSELoss()

Mean Squared Error (MSE) loss function. It computes the mean squared difference between predicted and target values.
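
A small worked example; the values are arbitrary:

import torch
import torch.nn as nn

criterion = nn.MSELoss()
pred = torch.tensor([2.0, 4.0])
target = torch.tensor([3.0, 5.0])
loss = criterion(pred, target)   # ((2-3)^2 + (4-5)^2) / 2 = 1.0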

nn.Module.to(device)

Moves the module to the specified device (e.g., 'cuda' for GPU or 'cpu' for CPU).
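
A typical device-selection sketch:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)   # inputs must be on the same device
y = model(x)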

nn.Module.share_memory()

Moves the module's parameters to shared memory, enabling sharing between processes.

torch.nn.NLLLoss()

Negative log-likelihood (NLL) loss function. It expects log-probabilities as input (typically produced by LogSoftmax); LogSoftmax followed by NLLLoss is equivalent to CrossEntropyLoss.
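
A sketch showing the equivalence with CrossEntropyLoss on random logits:

import torch
import torch.nn as nn

logits = torch.randn(3, 5)
targets = torch.tensor([1, 0, 4])

log_probs = nn.LogSoftmax(dim=1)(logits)
nll = nn.NLLLoss()(log_probs, targets)        # expects log-probabilities
ce = nn.CrossEntropyLoss()(logits, targets)   # same result from raw logits
assert torch.allclose(nll, ce)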

Parameters

Parameters are the learnable weights and biases of a neural network that get updated during training to optimize the model.

nn.Module.register_full_backward_hook(fn)

Registers a backward hook on the module, called each time gradients with respect to the module's inputs are computed.

nn.Module.register_forward_hook(fn)

Registers a forward hook on the module, called after every forward pass with the module, its inputs, and its output.
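
A sketch of a forward hook that logs output shapes; the hook name is illustrative:

import torch
import torch.nn as nn

def log_shape(module, inputs, output):
    # forward hooks receive the module, its inputs, and its output
    print(type(module).__name__, tuple(output.shape))

model = nn.Linear(4, 2)
handle = model.register_forward_hook(log_shape)
model(torch.randn(1, 4))   # prints: Linear (1, 2)
handle.remove()            # detach the hook when it is no longer needed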

nn.Module.register_parameter(name, parameter)

Registers a parameter with the module, making it available in the module's parameters.

nn.Module.register_buffer(name, tensor)

Registers a tensor as a persistent buffer, which is not a parameter but should be part of the module's state.
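
A sketch of a hypothetical module using both registration methods:

import torch
import torch.nn as nn

class Standardize(nn.Module):   # illustrative module
    def __init__(self):
        super().__init__()
        # learnable: appears in parameters() and is updated by the optimizer
        self.register_parameter("scale", nn.Parameter(torch.ones(1)))
        # buffer: saved in state_dict() but not trained
        self.register_buffer("running_mean", torch.zeros(1))

    def forward(self, x):
        return (x - self.running_mean) * self.scale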

nn.Module.named_modules()

Returns an iterator over all modules in the module hierarchy, yielding both the name of the module and the module itself.

nn.Module.modules()

Returns an iterator over all modules in the module hierarchy.

nn.Module.named_children()

Returns an iterator over immediate children modules, yielding both the name of the module and the module itself.

nn.Module.children()

Returns an iterator over immediate children modules.
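
A sketch contrasting the children and module iterators on a small Sequential:

import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for name, child in net.named_children():   # immediate children only
    print(name, type(child).__name__)      # 0 Linear, 1 ReLU, 2 Linear
for name, module in net.named_modules():   # full hierarchy, net itself first
    print(name or "<root>", type(module).__name__)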

Trainable parameters

PyTorch's nn.Module has no built-in trainable_parameters() method; the parameters that require gradients are obtained by filtering parameters() on requires_grad, as in the sketch below.
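
A sketch of the usual filtering idiom with a placeholder model:

import torch.nn as nn

model = nn.Linear(4, 2)
trainable = [p for p in model.parameters() if p.requires_grad]
n = sum(p.numel() for p in trainable)   # number of trainable scalar weights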

nn.Module.named_parameters()

Returns an iterator over module parameters, yielding both the name of the parameter and the parameter itself.

nn.Module.requires_grad_(requires_grad=True)

Sets the 'requires_grad' attribute of the module's parameters.
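
A common use is freezing part of a model for fine-tuning; the layers here are placeholders:

import torch.nn as nn

backbone = nn.Linear(4, 8)      # stands in for a pretrained feature extractor
head = nn.Linear(8, 2)
backbone.requires_grad_(False)  # frozen: its parameters receive no gradients
# pass only head.parameters() to the optimizer when fine-tuning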

nn.Module.zero_grad()

Resets the gradients of all parameters; in recent PyTorch versions they are set to None by default (set_to_none=True) rather than zeroed.

nn.Module.eval()

Sets the module in evaluation mode, changing the behavior of layers such as Dropout and BatchNorm; it does not itself disable gradient computation (use torch.no_grad() for that).

nn.Module.train()

Sets the module in training mode, enabling training-time behavior in layers such as Dropout and BatchNorm.
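
A sketch showing both modes around a Dropout layer:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.Linear(4, 2))
model.train()             # dropout is active while training
model.eval()              # dropout becomes a no-op for inference
with torch.no_grad():     # gradient tracking must be disabled separately
    y = model(torch.randn(1, 4))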

torch.optim.SGD(params, lr=0.01, momentum=0, dampening=0, weight_decay=0, nesterov=False)

Stochastic Gradient Descent (SGD) optimizer. It updates the parameters based on gradients computed during backpropagation.
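
Constructing the optimizer for a placeholder model (nesterov requires a nonzero momentum):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)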

