🛠︎ Interface change
:bug: Bug fix
:new: New feature
Distribution API
- 🛠︎ Changed the argument order of `__init__` in the exponential distribution families and made the distribution parameters explicit (#90) (see the first sketch after this list)
- 🛠︎ Removed the sampling option from `DistributionBase.set_dist` (relaxed distribution families still have a sampling option) (#108)
- 🛠︎ `TransformedDistribution` is now a joint probability distribution over two variables, the input and output of a flow; the `return_all` option of its previous `sample` method has been removed (#115)
- 🛠︎ Renamed the `return_all` option of `InverseTransformedDistribution.sample` to `return_hidden` (#115)
- 🛠︎ Added a `return_all` option to `TransformedDistribution.sample` and `InverseTransformedDistribution.sample` that also returns random variables not involved in the transformation (#115) (see the second sketch after this list)
- 🛠︎ Renamed the `var` argument of `TransformedDistribution.__init__` to `flow_output_var`
- :new: Added the `Distribution.has_reparam` property (#93)
- :new: Added a `return_all` option to `MixtureModel.sample` (#115)
- :bug: Fixed a bug where parameters of a basic `Distribution` were unintentionally overwritten by `torch.load` (#113)
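A minimal sketch of the post-change constructor and the new `has_reparam` property; the keyword names shown here (`var`, `loc`, `scale`) are assumptions inferred from the items above, not a quote of the documented signature:

```python
# Hypothetical sketch: explicit distribution parameters (#90) and the
# Distribution.has_reparam property (#93). Keyword names are assumptions.
import torch
from pixyz.distributions import Normal

# Distribution parameters such as loc/scale are now explicit arguments.
p = Normal(var=["x"], loc=torch.zeros(10), scale=torch.ones(10))

# New property reporting whether reparameterized sampling is available.
print(p.has_reparam)

# sample() returns a dict mapping variable names to realized values.
x = p.sample()["x"]
print(x.shape)
```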
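And a sketch of the reworked `TransformedDistribution` sampling; the `PlanarFlow` construction and the exact argument forms are assumptions:

```python
# Hypothetical sketch: TransformedDistribution as a joint distribution over
# a flow's input and output, with flow_output_var and return_all (#115).
import torch
from pixyz.distributions import Normal, TransformedDistribution
from pixyz.flows import PlanarFlow

prior = Normal(var=["z"], loc=torch.zeros(2), scale=torch.ones(2))
flow = PlanarFlow(2)  # assumed flow; any flow over 2 features would do

# `var` was renamed to `flow_output_var`: "z" is the flow input, "x" its output.
p = TransformedDistribution(prior, flow, flow_output_var=["x"])

# With return_all=True, random variables not involved are also returned.
print(p.sample(return_all=True))
```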
Loss API
- 🛠︎ Removed `StochasticReconstructionLoss` (this loss now needs to be configured explicitly) (#103)
- 🛠︎ Changed the base class of `Loss` to `torch.nn.Module` (#100)
- 🛠︎ Renamed `Loss.train` to `Loss.loss_train` and `Loss.test` to `Loss.loss_test` (#100)
- 🛠︎ Renamed `Loss._get_eval` to `Loss.forward` (#95)
- 🛠︎ Added an `Entropy` method whose option switches the entropy estimator between `AnalyticalEntropy` and `MonteCarlo` (the `Entropy` class is deprecated) (#89)
- 🛠︎ Added a `CrossEntropy` method that can be switched in the same way (the `CrossEntropy` class is deprecated) (#89)
- 🛠︎ Added a `KullbackLeibler` method that can be switched in the same way (the `KullbackLeibler` class is deprecated) (#89) (see the sketch after this list)
- 🛠︎ Deprecated the `ELBO` class and replaced it with an `ELBO` method that returns a `Loss` instance (#89)
- :new: Added the `Loss.detach` method (#93)
- :new: Added `MinLoss` and `MaxLoss` (#95)
- :new: Added the alternative loss `REINFORCE` for deriving policy gradients (#93)
- :new: `Loss` now supports `DataParallel` (#100)
- :new: Separated some features of the `Loss` class into the `Divergence` class (#95)
- :new: Added a `return_all` option to `Loss.eval` (when `return_dict=True`, you can also choose whether unrelated random variables are returned) (#115)
- :bug: Fixed a bug in `IterativeLoss` where, at each step, the past value was conditioned on as if it were the future value (#115)
- :bug: Placed the parameter tensor of `ValueLoss` on the `nn.Module` device (#100)
- :bug: Fixed incorrect argument checking in `WassersteinDistance` and `MMDLoss` (#103)
- :bug: Fixed a bug in variable checking during `Loss` initialization (#107)
- :bug: Fixed a bug in `IterativeLoss` (#107)
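A minimal sketch of the method-style loss API; the toy distributions and the bare `eval()` call are assumptions for illustration, not an excerpt from the library's examples:

```python
# Hypothetical sketch: method-style KullbackLeibler returning a Loss
# instance (#89), which now subclasses torch.nn.Module (#100).
import torch
from pixyz.distributions import Normal
from pixyz.losses import KullbackLeibler

q = Normal(var=["z"], loc=torch.zeros(2), scale=torch.ones(2), name="q")
p = Normal(var=["z"], loc=torch.ones(2), scale=torch.ones(2), name="p")

# Returns a Loss instance rather than a class to configure; the option
# switches between analytical and Monte Carlo estimation where available.
kl = KullbackLeibler(q, p)

# eval() computes the value; with return_dict=True, the return_all option
# also returns the sampled random variables (#115).
print(kl.eval())
```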
Other
- :new: Added the `utils.lru_cache_for_sample_dict` decorator, which enables memoization for a function that takes a dictionary of random variables and their realized values (#109) (see the sketch after this list)
- :new: Renamed `examples/vae_model` to `vae_with_vae_class`
- :new: Added some exception messages (#103)
- :bug: Fixed the Jacobian calculation in `flow/Preprocess` (#107)
- :bug: Fixed a bug where the README formulas were not shown in some browsers
- :bug: Alternative text is now displayed when a README formula cannot be rendered (#117)
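A sketch of the new memoization decorator; the decorator-factory form with a `maxsize` argument and the wrapped function below are assumptions in the spirit of `functools.lru_cache`:

```python
# Hypothetical sketch of utils.lru_cache_for_sample_dict (#109); the
# maxsize argument and the cached function below are assumptions.
import torch
from pixyz.utils import lru_cache_for_sample_dict

@lru_cache_for_sample_dict(maxsize=2)
def summarize(sample_dict):
    # Evaluated once per distinct dict of random variables and their
    # realized values; identical calls are served from the cache.
    return {name: value.sum() for name, value in sample_dict.items()}

x = torch.ones(3)
print(summarize({"x": x}))  # computed
print(summarize({"x": x}))  # served from the cache
```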