d2l

Latest version: v1.0.3


0.17.6

Add PaddlePaddle implementation for the `d2l` library (compatible with classic.d2l.ai).
See the [PR in d2l-zh](https://github.com/d2l-ai/d2l-zh/pull/1198).

0.17.5

This release fixes issues with installing the d2l package and running d2l notebooks on Google Colab with Python 3.7, and updates PyTorch and TensorFlow to their latest versions.

More concretely, this release includes the following upgrades/fixes:

* Update TensorFlow==2.8.0 (2055)
* Update PyTorch: torch==1.11.0 & torchvision==0.12.0 (2063)
* Rollback NumPy==1.21.5 & Support Python>=3.7 (2066)
* Fix MXNet plots and NumPy auto-coercion; unpin the matplotlib==3.4 dependency (2078)
* Fix the broken download link for the MovieLens dataset (2074)
* Fix the IPython deprecation warning for `set_matplotlib_formats` (2065)
* Fix the DenseNet PyTorch implementation using `nn.AdaptiveAvgPool2d` (f6b1dd0053a5caeb8a53c81f97eb929c27fb868e)
* Fix the hot dog class index in the Fine-Tuning section: in ImageNet it is number [934](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a#file-imagenet1000_clsidx_to_labels-txt-L935), not 713 (2009)
* Use `reduction='none'` in the PyTorch loss for `train_epoch_ch3` (2007)
* Fix the argument `test_feature` -> `test_features` of `train_and_pred` in the Kaggle house price section (1982)
* Fix `TypeError: can't convert CUDA tensor to numpy` by explicitly moving torch tensors to the CPU before plotting (1966)
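The last two PyTorch fixes above share a theme: keeping per-example loss values and handling device placement explicitly. A minimal sketch (names and data are illustrative, not the book's code):

```python
import torch
from torch import nn

# reduction='none' keeps one loss value per example instead of a
# pre-averaged scalar, so the training loop can aggregate explicitly.
loss = nn.CrossEntropyLoss(reduction='none')
logits = torch.randn(4, 10)
labels = torch.tensor([1, 3, 0, 7])
l = loss(logits, labels)   # shape (4,), one value per example
total = l.sum()            # aggregate explicitly in the training loop

# A CUDA tensor cannot be converted to NumPy directly; detach it and
# move it to the CPU first (a no-op on CPU, required on GPU).
values = l.detach().cpu().numpy()
```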

0.17.1

This release supports running the book with [SageMaker Studio Lab](https://studiolab.sagemaker.aws/import/github/d2l-ai/d2l-pytorch-sagemaker-studio-lab/blob/main/GettingStarted-D2L.ipynb) for free and introduces several fixes:

* Fix data synchronization for multi-GPU training in PyTorch (https://github.com/d2l-ai/d2l-en/pull/1978)
* Fix token sampling in BERT datasets (https://github.com/d2l-ai/d2l-en/pull/1979/)
* Fix semantic segmentation normalization in PyTorch (https://github.com/d2l-ai/d2l-en/pull/1980/)
* Fix mean square loss calculation in PyTorch and TensorFlow (https://github.com/d2l-ai/d2l-en/pull/1984)
* Fix broken paragraphs (https://github.com/d2l-ai/d2l-en/commit/8e0fe4ba54b6e2a0aa0f15f58a1e81f7fef1cdd7)
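The mean squared loss fix above concerns how the loss is reduced. A generic illustration (not the book's exact code) of PyTorch's default mean reduction versus a summed loss:

```python
import torch
from torch import nn

# Illustrative only: nn.MSELoss averages the squared errors over all
# elements by default, which is easy to confuse with a summed loss.
pred = torch.tensor([1.0, 2.0, 4.0])
target = torch.tensor([1.0, 2.0, 2.0])
mean_loss = nn.MSELoss()(pred, target)                # (0 + 0 + 4) / 3
sum_loss = nn.MSELoss(reduction='sum')(pred, target)  # 0 + 0 + 4
```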

0.17.0

*Dive into Deep Learning* is now available on [arXiv](https://arxiv.org/abs/2106.11342)!

Framework Adaptation

We have added TensorFlow implementations up to Chapter 11 (Optimization Algorithms).

0.16.0

Brand-New Attention Chapter

We have added the brand-new Chapter: Attention Mechanisms:

* Attention Cues
  * Attention Cues in Biology
  * Queries, Keys, and Values
  * Visualization of Attention
* Attention Pooling: Nadaraya-Watson Kernel Regression
  * Generating the Dataset
  * Average Pooling
  * Nonparametric Attention Pooling
  * Parametric Attention Pooling
* Attention Scoring Functions
  * Masked Softmax Operation
  * Additive Attention
  * Scaled Dot-Product Attention
* Bahdanau Attention
  * Model
  * Defining the Decoder with Attention
  * Training
* Multi-Head Attention
  * Model
  * Implementation
* Self-Attention and Positional Encoding
  * Self-Attention
  * Comparing CNNs, RNNs, and Self-Attention
  * Positional Encoding
* Transformer
  * Model
  * Positionwise Feed-Forward Networks
  * Residual Connection and Layer Normalization
  * Encoder
  * Decoder
  * Training

PyTorch Adaptation Completed

We have completed PyTorch implementations for Vol. 1 (Chapters 1--15).

0.15.0

Framework Adaptation

We have added PyTorch implementations up to Chapter 11 (Optimization Algorithms). Chapters 1--7 and Chapter 11 have also been adapted to TensorFlow.
