d2l

Latest version: v1.0.3


0.14.0

Highlights

We have added both PyTorch and TensorFlow implementations up to Chapter 7 (Modern CNNs).

Improvements

- We updated the text to be framework neutral; for example, we now refer to `ndarray` as tensor.
- Readers can click the tabs in the HTML version to switch between frameworks; both the Colab button and the discussion thread change accordingly.
- We changed the release process: d2l.ai now hosts the latest release (i.e., the release branch) instead of the contents of the master branch. We also unified the version numbers of the text and the `d2l` package, which is why the version jumped from v0.8 to v0.14.0.
- The notebook zip contains three folders, `mxnet`, `pytorch`, and `tensorflow` (though we only build the PDF for MXNet so far).
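
For readers who use the `d2l` Python package directly (v0.14.0 or later), the unified package exposes one module per framework. Below is a minimal sketch, assuming the package is installed via pip; the tensor check at the end is purely illustrative:

```python
# The unified `d2l` package ships one module per framework, mirroring the
# `mxnet`, `pytorch`, and `tensorflow` folders in the notebook zip.
from d2l import torch as d2l          # PyTorch flavor
# from d2l import mxnet as d2l        # MXNet flavor
# from d2l import tensorflow as d2l   # TensorFlow flavor

import torch

# What the text used to call an `ndarray` is now simply a tensor.
x = torch.arange(4, dtype=torch.float32)
print(x, x.sum())
```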

0.8.0

Highlights

D2L is now runnable on [Amazon SageMaker](https://d2l.ai/chapter_appendix-tools-for-deep-learning/sagemaker.html) and [Google Colab](https://d2l.ai/chapter_appendix-tools-for-deep-learning/colab.html).

New Contents

The following chapters are re-organized:

* Natural Language Processing: Pretraining
* Natural Language Processing: Applications

The following sections are added:

* Subword Embedding (Byte-pair encoding; see the sketch after this list)
* Bidirectional Encoder Representations from Transformers (BERT)
* The Dataset for Pretraining BERT
* Pretraining BERT
* Natural Language Inference and the Dataset
* Natural Language Inference: Using Attention
* Fine-Tuning BERT for Sequence-Level and Token-Level Applications
* Natural Language Inference: Fine-Tuning BERT
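
To give a flavor of the byte-pair encoding procedure covered in the new Subword Embedding section, here is a minimal, self-contained sketch of the merge loop; the toy vocabulary and helper names below are illustrative and are not taken from the book's code:

```python
import collections

def get_pair_counts(vocab):
    """Count adjacent symbol pairs over a vocabulary of space-separated symbol strings."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge the chosen symbol pair into a single symbol everywhere it occurs."""
    old, new = ' '.join(pair), ''.join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Toy vocabulary: each key is a word spelled out as symbols; values are word frequencies.
vocab = {'l o w _': 5, 'l o w e r _': 2, 'n e w e s t _': 6, 'w i d e s t _': 3}
for _ in range(3):                      # perform three merge steps
    pairs = get_pair_counts(vocab)
    best = max(pairs, key=pairs.get)    # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
print(vocab)                            # frequent subwords such as 'est' emerge
```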

Improvements

There have been many light revisions and improvements throughout the book.

0.7.0

Highlights

* D2L is now based on MXNet's NumPy interface; all code samples have been rewritten accordingly.
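
As a rough illustration of what the rewritten code samples look like under the NumPy interface (a minimal sketch assuming MXNet is installed; the toy arrays are not from the book):

```python
from mxnet import np, npx

npx.set_np()  # switch MXNet to NumPy-compatible array behavior

# Tensors are created and manipulated much like NumPy arrays.
x = np.arange(12).reshape(3, 4)
y = np.ones((3, 4))
print((x + y).sum())
```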

New Contents

* Recommender Systems
  * Overview of Recommender Systems
  * The MovieLens Dataset
  * Matrix Factorization
  * AutoRec: Rating Prediction with Autoencoders
  * Personalized Ranking for Recommender Systems
  * Neural Collaborative Filtering for Personalized Ranking
  * Sequence-Aware Recommender Systems
  * Feature-Rich Recommender Systems
  * Factorization Machines
  * Deep Factorization Machines

* Appendix: Mathematics for Deep Learning
  * Geometry and Linear Algebraic Operations
  * Eigendecompositions
  * Single Variable Calculus
  * Multivariable Calculus
  * Integral Calculus
  * Random Variables
  * Maximum Likelihood
  * Distributions
  * Naive Bayes
  * Statistics
  * Information Theory

* Attention Mechanisms
  * Attention Mechanism
  * Sequence to Sequence with Attention Mechanism
  * Transformer

* Generative Adversarial Networks
  * Generative Adversarial Networks
  * Deep Convolutional Generative Adversarial Networks

* Preliminaries
  * Data Preprocessing
  * Calculus


Improvements

* The Preliminaries chapter has been improved.
* More theoretical analysis has been added to the Optimization chapter.


Preview Version

Hard copies of a D2L preview version based on this release (excluding the Recommender Systems and Generative Adversarial Networks chapters) were distributed at AWS re:Invent 2019 and NeurIPS 2019.

0.6.0

Change of Contents
=================

We heavily revised the following chapters, especially while teaching [STAT 157 at Berkeley](https://courses.d2l.ai/berkeley-stat-157/index.html).

* Preface
* Installation
* Introduction
* The Preliminaries: A Crashcourse
* Linear Neural Networks
* Multilayer Perceptrons
* Recurrent Neural Networks


The Community Is Translating D2L into Korean and Japanese
=================

[d2l-ko in Korean](https://github.com/d2l-ai/d2l-ko) (website: ko.d2l.ai) joins d2l.ai! Thanks to [Muhyun Kim](https://github.com/muhyun), [Kyoungsu Lee](https://github.com/yikster), [Ji hye Seo](https://github.com/jihys), [Jiyang Kang](https://github.com/jamiekang), and [many other contributors](https://github.com/d2l-ai/d2l-ko/graphs/contributors)!

[d2l-ja in Japanese](https://github.com/d2l-ai/d2l-ja) (website: ja.d2l.ai) joins d2l.ai! Thanks to [Masaki Samejima](https://github.com/harusametime)!


Thanks to Our Contributors
=================
alxnorden, avinashingit, bowen0701, brettkoonce, Chaitanya Prakash Bapat, cryptonaut, Davide Fiocco, edgarroman, gkutiel, John Mitro, Liang Pu, Rahul Agarwal, mohamed-ali, mstewart141, Mike Müller, NRauschmayr, Prakhar Srivastav, sad-, sfermigier, Sheng Zha, sundeepteki, topecongiro, tpdi, vermicelli, Vishaal Kapoor, vishwesh5, YaYaB, Yuhong Chen, Evgeniy Smirnov, lgov, Simon Corston-Oliver, IgorDzreyev, trungha-ngx, pmuens, alukovenko, senorcinco, vfdev-5, dsweet, Mohammad Mahdi Rahimi, Abhishek Gupta, uwsd, DomKM, Lisa Oakley, vfdev-5, bowen0701, arush15june, prasanth5reddy.

0.5.0

Contents
=================

* Translated contents from https://github.com/d2l-ai/d2l-zh, including the following chapters:
  * Introduction
  * A Taste of Deep Learning
  * Deep Learning Basics
  * Deep Learning Computation
  * Convolutional Neural Networks
  * Recurrent Neural Networks
  * Optimization Algorithms
  * Computational Performance
  * Computer Vision
  * Natural Language Processing
  * Appendix

* Added new contents in the following chapters:
  * Introduction
  * A Taste of Deep Learning
  * Deep Learning Basics
  * Deep Learning Computation
  * Convolutional Neural Networks


Style
=================
* Improved HTML styles
* Improved PDF styles

Chinese Version
=================

The contents of this release are translated from the Chinese version, [d2l-zh](https://github.com/d2l-ai/d2l-zh) (website: zh.d2l.ai).