modules.losses
The losses module contains implementations of losses commonly used in vision-and-language tasks. You can register a custom loss so that Pythia detects it, as in the following example:
```python
from pythia.common.registry import registry
from torch import nn

@registry.register_loss("custom")
class CustomLoss(nn.Module):
    ...
```
Then, in your model's config, you can use this loss via the losses attribute:

```yaml
model_attributes:
  some_model:
    losses:
    - type: custom
      params: {}
```
class pythia.modules.losses.AttentionSupervisionLoss

    Loss for attention supervision. Use it when you want to drive the model's attention toward particular target values.

    forward(sample_list, model_output)
        Calculates and returns the attention supervision loss.
        Parameters:
            - sample_list (SampleList) – SampleList containing targets attribute.
            - model_output (Dict) – Model output containing scores attribute.
        Returns: Float value for loss.
        Return type: torch.FloatTensor
class pythia.modules.losses.BinaryCrossEntropyLoss

    forward(sample_list, model_output)
        Calculates and returns the binary cross entropy.
        Parameters:
            - sample_list (SampleList) – SampleList containing targets attribute.
            - model_output (Dict) – Model output containing scores attribute.
        Returns: Float value for loss.
        Return type: torch.FloatTensor
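The underlying computation presumably mirrors torch's binary cross entropy between the scores and targets attributes. A minimal standalone sketch of that computation (the tensor shapes are illustrative assumptions, not part of the Pythia API):

```python
import torch
import torch.nn.functional as F

# Assumed shapes: batch x num_classes; scores are probabilities in [0, 1],
# targets are soft labels in [0, 1].
scores = torch.sigmoid(torch.randn(4, 10))
targets = torch.rand(4, 10)
loss = F.binary_cross_entropy(scores, targets, reduction="mean")
```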
class pythia.modules.losses.CaptionCrossEntropyLoss

    forward(sample_list, model_output)
        Calculates and returns the cross entropy loss for captions.
        Parameters:
            - sample_list (SampleList) – SampleList containing targets attribute.
            - model_output (Dict) – Model output containing scores attribute.
        Returns: Float value for loss.
        Return type: torch.FloatTensor
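For captions, cross entropy is presumably computed per time step over the vocabulary. A standalone sketch of that computation (the shapes and the flattening are assumptions; the actual implementation may additionally pack padded sequences to ignore padding tokens):

```python
import torch
import torch.nn.functional as F

# Assumed shapes: batch x seq_len x vocab_size logits, batch x seq_len token ids.
vocab_size = 100
scores = torch.randn(2, 5, vocab_size)
targets = torch.randint(0, vocab_size, (2, 5))
# Flatten time steps into the batch dimension and apply standard cross entropy.
loss = F.cross_entropy(scores.reshape(-1, vocab_size), targets.reshape(-1))
```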
class pythia.modules.losses.CombinedLoss(weight_softmax)

    forward(sample_list, model_output)
        Defines the computation performed at every call. Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
class pythia.modules.losses.LogitBinaryCrossEntropy

    Returns binary cross entropy for logits.

    Attention: Key: logit_bce

    forward(sample_list, model_output)
        Calculates and returns the binary cross entropy for logits.
        Parameters:
            - sample_list (SampleList) – SampleList containing targets attribute.
            - model_output (Dict) – Model output containing scores attribute.
        Returns: Float value for loss.
        Return type: torch.FloatTensor
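Working on logits means the sigmoid is folded into the loss for numerical stability; the computation presumably corresponds to torch's BCE-with-logits. A standalone sketch (shapes are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

scores = torch.randn(4, 10)   # raw logits, not probabilities
targets = torch.rand(4, 10)   # soft targets in [0, 1]
loss = F.binary_cross_entropy_with_logits(scores, targets, reduction="mean")
```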
class pythia.modules.losses.Losses(loss_list)

    Losses acts as an abstraction for instantiating and calculating losses. BaseModel instantiates this class based on the losses attribute in the model's configuration (model_attributes). loss_list needs to be a list with one entry per loss, each containing type and params attributes.

    Parameters: loss_list (List[ConfigNode]) – List of loss configurations, each with a type and optional params.

    Example:

    ```python
    # losses:
    # - type: logit_bce  # can also contain `params` to specify that loss's init params
    # - type: combined
    config = [{"type": "logit_bce"}, {"type": "combined"}]
    losses = Losses(config)
    ```

    Note: Since Losses is instantiated in BaseModel, a normal end user mostly doesn't need to use this class.

    losses
        List containing instantiations of each loss passed in the config.

    forward(sample_list, model_output, *args, **kwargs)
        Takes the original SampleList returned from the DataLoader and the model_output returned from the model, and returns a Dict containing the value for each of the losses in losses.
        Parameters:
            - sample_list (SampleList) – SampleList given by the dataloader.
            - model_output (Dict) – Dict returned from the model as output.
        Returns: Dictionary containing the loss value for each loss.
        Return type: Dict
class pythia.modules.losses.MultiLoss(params)

    A loss for combining multiple losses with weights.

    Parameters: params (List[Dict]) – A list containing parameters for each loss and its weight.

    Example:

    ```yaml
    # MultiLoss works with a config like the one below, where each loss's
    # params and weight are defined
    losses:
    - type: multi
      params:
      - type: logit_bce
        weight: 0.3
        params: {}
      - type: attention_supervision
        weight: 0.7
        params: {}
    ```

    forward(sample_list, model_output, *args, **kwargs)
        Calculates and returns the multi loss.
        Parameters:
            - sample_list (SampleList) – SampleList containing attentions attribute.
            - model_output (Dict) – Model output containing attention_supervision attribute.
        Returns: Float value for loss.
        Return type: torch.FloatTensor
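The combination reduces to a weighted sum of the individual loss values. A minimal sketch using the weights from the example config above (the per-loss values here are hypothetical placeholders):

```python
import torch

# Hypothetical values for each child loss, and the weights from the config
values = {"logit_bce": torch.tensor(0.5), "attention_supervision": torch.tensor(2.0)}
weights = {"logit_bce": 0.3, "attention_supervision": 0.7}
# Weighted sum: 0.3 * 0.5 + 0.7 * 2.0
total = sum(weights[name] * value for name, value in values.items())
```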
class pythia.modules.losses.NLLLoss

    Negative log likelihood loss.

    forward(sample_list, model_output)
        Calculates and returns the negative log likelihood.
        Parameters:
            - sample_list (SampleList) – SampleList containing targets attribute.
            - model_output (Dict) – Model output containing scores attribute.
        Returns: Float value for loss.
        Return type: torch.FloatTensor
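The negative log likelihood over class indices presumably corresponds to torch's NLL loss applied to log-probabilities. A standalone sketch (shapes and the log-softmax step are assumptions):

```python
import torch
import torch.nn.functional as F

scores = torch.randn(4, 10)           # assumed batch x num_classes logits
log_probs = F.log_softmax(scores, dim=1)
targets = torch.tensor([1, 0, 3, 9])  # class indices per sample
loss = F.nll_loss(log_probs, targets)
```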
class pythia.modules.losses.PythiaLoss(params={})

    Internal Pythia helper and wrapper class for all Loss classes. It makes sure that the value returned from a Loss class is a dict and that its keys contain the proper dataset type, so that it is easy to tell which value is the val loss and which is the train loss. For example, it will return {"val/logit_bce": 27.4} when logit_bce is used and the SampleList comes from the val set.

    Parameters: params (Dict) – Configuration of the loss to wrap, containing its type and optional params.

    Note: Since PythiaLoss is used by the Losses class, the end user doesn't need to worry about it.

    forward(sample_list, model_output, *args, **kwargs)
        Defines the computation performed at every call. Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
class pythia.modules.losses.SoftmaxKlDivLoss

    forward(sample_list, model_output)
        Defines the computation performed at every call. Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
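This class is undocumented; its name suggests a KL divergence between the softmax of the model's scores and a target distribution. A standalone sketch of that computation (the shapes, the normalization of targets, and the reduction are all assumptions):

```python
import torch
import torch.nn.functional as F

scores = torch.randn(3, 7)   # assumed model logits
targets = torch.rand(3, 7)
# Normalize targets into a valid probability distribution per row
target_probs = targets / targets.sum(dim=1, keepdim=True)
# kl_div expects log-probabilities as input and probabilities as target
loss = F.kl_div(F.log_softmax(scores, dim=1), target_probs, reduction="batchmean")
```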
class pythia.modules.losses.WeightedSoftmaxLoss

    forward(sample_list, model_output)
        Defines the computation performed at every call. Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
class pythia.modules.losses.WrongLoss

    forward(sample_list, model_output)
        Defines the computation performed at every call. Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.