NVTabular

Latest version: v23.8.0

0.7.1

Improvements

- Add LogOp support for list features [1153](https://github.com/NVIDIA-Merlin/NVTabular/issues/1153)
- Add Normalize operator support for list features [1154](https://github.com/NVIDIA-Merlin/NVTabular/issues/1154)
- Add DataLoader.epochs() method and Dataset.to_iter(epochs=) argument [1147](https://github.com/NVIDIA-Merlin/NVTabular/pull/1147)
- Add ValueCount operator for recording the min and max list lengths of multihot columns [1171](https://github.com/NVIDIA-Merlin/NVTabular/pull/1171) (see the sketch below)
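
A minimal sketch of how the list-feature additions above might be combined in a workflow. The column names and file paths are hypothetical, and chaining ValueCount with the `>>` operator is assumed to follow the same pattern as the other ops:

```python
import nvtabular as nvt
from nvtabular import ops

# LogOp and Normalize now accept list (multihot) continuous features;
# ValueCount records the min/max list lengths as column statistics.
cont_lists = ["purchase_amounts"] >> ops.LogOp() >> ops.Normalize()
cat_lists = ["item_ids"] >> ops.Categorify() >> ops.ValueCount()

workflow = nvt.Workflow(cont_lists + cat_lists)

dataset = nvt.Dataset("train/*.parquet")
workflow.fit(dataset)
workflow.transform(dataset).to_parquet("processed/")

# The new epochs= argument iterates a dataset multiple times in one pass.
for gdf in nvt.Dataset("processed/*.parquet").to_iter(epochs=2):
    pass  # feed each chunk to a training loop here
```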

Bug Fixes

- Fix Criteo inference [1198](https://github.com/NVIDIA-Merlin/NVTabular/issues/1198)
- Fix performance regressions in Criteo benchmark [1222](https://github.com/NVIDIA-Merlin/NVTabular/issues/1222)
- Fix error in JoinGroupby op [1167](https://github.com/NVIDIA-Merlin/NVTabular/issues/1167)
- Fix Filter/JoinExternal key error [1143](https://github.com/NVIDIA-Merlin/NVTabular/issues/1143)
- Fix LambdaOp transforming dependency values [1185](https://github.com/NVIDIA-Merlin/NVTabular/issues/1185)
- Fix reading parquet files with list columns from GCS [1155](https://github.com/NVIDIA-Merlin/NVTabular/issues/1155)
- Fix TargetEncoding with dependencies as the target [1165](https://github.com/NVIDIA-Merlin/NVTabular/issues/1165)
- Fix Categorify op to calculate unique count stats for Nulls [1159](https://github.com/NVIDIA-Merlin/NVTabular/issues/1159)

0.7.0

Improvements

- Add column tagging API [943](https://github.com/NVIDIA/NVTabular/issues/943)
- Export dataset schema when writing out datasets [948](https://github.com/NVIDIA/NVTabular/issues/948)
- Make dataloaders aware of schema [947](https://github.com/NVIDIA/NVTabular/issues/947)
- Standardize a Workflow's representation of its output columns [372](https://github.com/NVIDIA/NVTabular/issues/372)
- Add multi-gpu training example using PyTorch Distributed [775](https://github.com/NVIDIA/NVTabular/issues/775)
- Speed up reading Parquet files from remote storage like GCS or S3 [1119](https://github.com/NVIDIA/NVTabular/pull/1119)
- Add utility to convert TFRecord datasets to Parquet [1085](https://github.com/NVIDIA/NVTabular/pull/1085)
- Add multihot support for PyTorch inference [719](https://github.com/NVIDIA/NVTabular/issues/719)
- Add options to reserve categorical indices in the Categorify() op [1074](https://github.com/NVIDIA/NVTabular/issues/1074) (see the sketch after this list)
- Update notebooks to work with CPU only systems [960](https://github.com/NVIDIA/NVTabular/issues/960)
- Save output from Categorify op in a single table for HugeCTR [946](https://github.com/NVIDIA/NVTabular/issues/946)
- Add a keyset file for HugeCTR integration [1049](https://github.com/NVIDIA/NVTabular/issues/1049)
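
A brief sketch of the reserved-index option mentioned above. The `start_index` keyword name is an assumption based on the issue description, and the column names are hypothetical:

```python
import nvtabular as nvt
from nvtabular import ops

# Reserve ids 0-15 (e.g. for padding or out-of-vocabulary buckets);
# encoded category ids then start at 16.
cats = ["user_id", "item_id"] >> ops.Categorify(start_index=16)

workflow = nvt.Workflow(cats)
train = nvt.Dataset("train/*.parquet")
workflow.fit(train)

# Per the schema-export change above, the dataset schema is written out
# alongside the transformed Parquet data.
workflow.transform(train).to_parquet("processed/")
```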

Bug Fixes

- Fix category counts written out by the Categorify op [1128](https://github.com/NVIDIA/NVTabular/issues/1128)
- Fix HugeCTR inference example [1130](https://github.com/NVIDIA/NVTabular/pull/1130)
- Fix make_feature_column_workflow bug in Categorify if features have vocabularies of varying size. [1062](https://github.com/NVIDIA/NVTabular/issues/1062)
- Fix TargetEncoding op on CPU only systems [976](https://github.com/NVIDIA/NVTabular/issues/976)
- Fix writing empty partitions to Parquet files [1097](https://github.com/NVIDIA/NVTabular/issues/1097)

0.6.1

Bug Fixes

- Fix installing package via pip [1030](https://github.com/NVIDIA/NVTabular/pull/1030)
- Fix inference with groupby operator [1019](https://github.com/NVIDIA/NVTabular/issues/1019)
- Install tqdm with conda package [1030](https://github.com/NVIDIA/NVTabular/pull/1030)
- Fix workflow output_dtypes with empty partitions [1028](https://github.com/NVIDIA/NVTabular/pull/1028)

0.6.0

Improvements

- Add CPU support [534](https://github.com/NVIDIA/NVTabular/issues/534)
- Speed up inference on Triton Inference Server [744](https://github.com/NVIDIA/NVTabular/issues/744)
- Add support for session based recommenders [355](https://github.com/NVIDIA/NVTabular/issues/355)
- Add PyTorch Dataloader support for Sparse Tensors [500](https://github.com/NVIDIA/NVTabular/issues/500)
- Add ListSlice operator for truncating list columns [734](https://github.com/NVIDIA/NVTabular/issues/734) (see the sketch after this list)
- Sort categorical ids by frequency [799](https://github.com/NVIDIA/NVTabular/issues/799)
- Add ability to select a subset of a ColumnGroup [809](https://github.com/NVIDIA/NVTabular/issues/809)
- Add option to use Rename op to give a single column a new fixed name [825](https://github.com/NVIDIA/NVTabular/issues/824)
- Add a 'map' function to KerasSequenceLoader, which enables sample weights [667](https://github.com/NVIDIA/NVTabular/issues/667)
- Add JoinExternal option on nvt.Dataset in addition to cudf [370](https://github.com/NVIDIA/NVTabular/issues/370)
- Allow passing ColumnGroup to get_embedding_sizes [732](https://github.com/NVIDIA/NVTabular/issues/732)
- Add ability to name LambdaOp and provide a better default name in graph visualizations [860](https://github.com/NVIDIA/NVTabular/issues/860)
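
A short sketch illustrating two of the 0.6.0 additions above: ListSlice truncation of a list column and the embedding-size lookup. Column names are hypothetical, and the negative-index slice semantics (keep the last N elements) are an assumption:

```python
import nvtabular as nvt
from nvtabular import ops
from nvtabular.ops import get_embedding_sizes

# Encode the session's item list, then keep only the last 20 interactions.
sessions = ["item_id_list"] >> ops.Categorify() >> ops.ListSlice(-20)

workflow = nvt.Workflow(sessions)
workflow.fit(nvt.Dataset("sessions/*.parquet"))

# (cardinality, embedding dimension) pairs inferred from the fitted Categorify stats.
print(get_embedding_sizes(workflow))
```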

Bug Fixes

- Fix make_feature_column_workflow for Categorical columns [763](https://github.com/NVIDIA/NVTabular/issues/763)
- Fix Categorify output dtypes for list columns [963](https://github.com/NVIDIA/NVTabular/issues/963)
- Fix inference for Outbrain example [669](https://github.com/NVIDIA/NVTabular/issues/669)
- Fix dask metadata after calling workflow.to_ddf() [852](https://github.com/NVIDIA/NVTabular/issues/852)
- Fix out of memory errors [896](https://github.com/NVIDIA/NVTabular/issues/896), [#971](https://github.com/NVIDIA/NVTabular/pull/971)
- Fix normalize output when stdev is zero [993](https://github.com/NVIDIA/NVTabular/pull/993)
- Fix using UCX with a dask cluster on Merlin containers [872](https://github.com/NVIDIA/NVTabular/pull/872)

0.5.3

Bug Fixes

- Fix Shuffling in Torch DataLoader [818](https://github.com/NVIDIA/NVTabular/pull/818)
- Fix "Unsupported type_id conversion" in triton inference for string columns [813](https://github.com/NVIDIA/NVTabular/issues/813)
- Fix HugeCTR inference backend [Merlin#8](https://github.com/NVIDIA-Merlin/Merlin/pull/8)

0.5.1

Improvements

- Update dependencies to use cudf 0.19
- Remove conda from Docker containers, leading to much smaller container sizes
- Add CUDA 11.2 support
- Add FastAI v2.3 support

Bug Fixes

- Fix NVTabular preprocessing with HugeCTR inference
