## What's Changed
* Correct Typo in SparseAutoModelForCausalLM docstring by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/56
* Disable Default Bitmask Compression by Satrat in https://github.com/vllm-project/llm-compressor/pull/60
* TRL Example fix by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/59
* Fix typo by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/63
* Correct typo by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/61
* Correct import in README.md by zzc0430 in https://github.com/vllm-project/llm-compressor/pull/66
* Fix for issue #43 -- StarCoder model by horheynm in https://github.com/vllm-project/llm-compressor/pull/71
* Update README.md by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/74
* Layer by Layer Sequential GPTQ Updates by Satrat in https://github.com/vllm-project/llm-compressor/pull/47
* [ Docs ] Update main readme by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/77
* [ Docs ] `gemma2` examples by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/78
* [ Docs ] Update `FP8` example to use dynamic per token by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/75
* [ Docs ] Overhaul `accelerate` user guide by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/76
* Support `kv_cache_scheme` for quantizing KV Cache by mgoin in https://github.com/vllm-project/llm-compressor/pull/88
* Propagate `trust_remote_code` Argument by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/90
* Fix for issue #81 by horheynm in https://github.com/vllm-project/llm-compressor/pull/84
* Fix for issue #83 by horheynm in https://github.com/vllm-project/llm-compressor/pull/85
* [ Docs ] Big Model Example by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/99
* Enable obcq/finetune integration tests with `commit` cadence by dsikka in https://github.com/vllm-project/llm-compressor/pull/101
* Metric logging on GPTQ path by horheynm in https://github.com/vllm-project/llm-compressor/pull/65
* Update test config files by dsikka in https://github.com/vllm-project/llm-compressor/pull/97
* remove workflows + update runners by dsikka in https://github.com/vllm-project/llm-compressor/pull/103
* Metrics by horheynm in https://github.com/vllm-project/llm-compressor/pull/104
* Add debug by horheynm in https://github.com/vllm-project/llm-compressor/pull/108
* Add FP8 KV Cache quant example by mgoin in https://github.com/vllm-project/llm-compressor/pull/113
* Add vLLM e2e tests by dsikka in https://github.com/vllm-project/llm-compressor/pull/117
* Fix style, fix noqa by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/123
* GPTQ Algorithm Cleanup by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/120
* GPTQ Activation Ordering by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/94
* Demote recipe string initialization to debug and make more descriptive by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/116
* compressed-tensors main dependency for base-tests by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/125
* Set `ready` label for transformer tests; add message reminder on PR opened by dsikka in https://github.com/vllm-project/llm-compressor/pull/126
* Fix markdown check test by dsikka in https://github.com/vllm-project/llm-compressor/pull/127
* Naive Run Compressed Pt. 2 by Satrat in https://github.com/vllm-project/llm-compressor/pull/62
* Fix transformer test conditions by dsikka in https://github.com/vllm-project/llm-compressor/pull/131
* Run Compressed Tests by Satrat in https://github.com/vllm-project/llm-compressor/pull/132
* Correct typo by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/124
* Activation Ordering Strategies by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/121
* Fix README Issue by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/139
* update by dsikka in https://github.com/vllm-project/llm-compressor/pull/143
* Update finetune and oneshot tests by dsikka in https://github.com/vllm-project/llm-compressor/pull/114
* Validate Recipe Parsing Output by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/100
* Fix build error for nightly by dhuangnm in https://github.com/vllm-project/llm-compressor/pull/145
* Fix recipe nested in configs by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/140
* MoE example with warning by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/87
* Bug Fix: recipe stages were not being concatenated by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/150
* Fix package name bug for nightly by dhuangnm in https://github.com/vllm-project/llm-compressor/pull/155
* Add descriptions for pytest marks by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/156
* Fix Sparsity Unit Test by Satrat in https://github.com/vllm-project/llm-compressor/pull/153
* Fix: Error during model saving with shared tensors by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/158
* Update 2:4 Examples by dsikka in https://github.com/vllm-project/llm-compressor/pull/161
* DeepSeek: Fix Hessian Estimation by Satrat in https://github.com/vllm-project/llm-compressor/pull/157
* Bump main to 0.2.0 by dhuangnm in https://github.com/vllm-project/llm-compressor/pull/163
* Fix help dialogue by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/151
* Add MoE and Compressed Inference Examples by Satrat in https://github.com/vllm-project/llm-compressor/pull/160
* Separate `trust_remote_code` args by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/152
* Enable a skipped finetune test by dsikka in https://github.com/vllm-project/llm-compressor/pull/169
* Fix filename in example command by dbarbuzzi in https://github.com/vllm-project/llm-compressor/pull/173
* Add DeepSeek V2.5 Example by dsikka in https://github.com/vllm-project/llm-compressor/pull/171
* Fix quality by dsikka in https://github.com/vllm-project/llm-compressor/pull/176
* Patch log function name in gptq by kylesayrs in https://github.com/vllm-project/llm-compressor/pull/168
* README for Modifiers by Satrat in https://github.com/vllm-project/llm-compressor/pull/165
* Fix default for sequential updates by dsikka in https://github.com/vllm-project/llm-compressor/pull/186
* Fix default test case by dsikka in https://github.com/vllm-project/llm-compressor/pull/193
* Fix `Initalize` typo by Imss27 in https://github.com/vllm-project/llm-compressor/pull/190
* Update MoE examples by mgoin in https://github.com/vllm-project/llm-compressor/pull/192
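
A few of the changes above are easier to follow with a sketch. PR #75 moves the `FP8` example to dynamic per-token activation quantization; the snippet below is a minimal, illustrative version of that flow (the model ID and save directory are placeholders, not taken from this release):

```python
# Illustrative sketch of FP8 weight quantization with dynamic per-token
# activations (the scheme PR #75 moves the FP8 example to).
from transformers import AutoTokenizer

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model

model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_DYNAMIC: static per-channel FP8 weights plus dynamic per-token FP8
# activations, so no calibration dataset is needed.
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)

oneshot(model=model, recipe=recipe)

SAVE_DIR = MODEL_ID.split("/")[-1] + "-FP8-Dynamic"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```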
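
PR #88 adds `kv_cache_scheme` for quantizing the KV cache, and PR #113 adds an FP8 KV cache example. Below is a minimal recipe sketch, assuming the YAML recipe form used in the examples; the model, calibration dataset, and hyperparameters are placeholders:

```python
# Illustrative recipe quantizing weights, activations, and the KV cache to FP8
# via the kv_cache_scheme option from PR #88.
from llmcompressor.transformers import oneshot

recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: channel
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: token
                        dynamic: true
                        symmetric: true
                    targets: ["Linear"]
            kv_cache_scheme:
                num_bits: 8
                type: float
                strategy: tensor
                dynamic: false
                symmetric: true
"""

oneshot(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    dataset="open_platypus",                      # placeholder calibration set
    recipe=recipe,
    output_dir="Meta-Llama-3-8B-Instruct-FP8-KV",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

Because the KV cache scales are calibrated statically, this path needs a small calibration set, unlike the data-free dynamic per-token scheme sketched above.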
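
PRs #47, #94, and #121 rework the GPTQ path: layer-by-layer sequential updates and activation-ordering strategies. The sketch below shows roughly how those knobs surface on `GPTQModifier`; the `sequential_update` flag and the `actorder` weight field follow the PR titles, and the model, dataset, and hyperparameters are illustrative assumptions rather than release-verified values:

```python
# Illustrative GPTQ recipe exercising activation ordering (PRs #94/#121) and
# sequential, layer-by-layer updates (PR #47).
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

recipe = GPTQModifier(
    ignore=["lm_head"],
    sequential_update=True,  # quantize one layer at a time to bound memory use
    config_groups={
        "group_0": {
            "targets": ["Linear"],
            "weights": {
                "num_bits": 4,
                "type": "int",
                "symmetric": True,
                "strategy": "group",
                "group_size": 128,
                # Activation-ordering strategy from PR #121: reorder weight
                # columns by activation statistics before quantizing; "group"
                # keeps the reordering local to each quantization group.
                "actorder": "group",
            },
        }
    },
)

oneshot(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    dataset="open_platypus",                      # placeholder calibration set
    recipe=recipe,
    output_dir="Meta-Llama-3-8B-Instruct-W4A16-actorder",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```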
## New Contributors
* zzc0430 made their first contribution in https://github.com/vllm-project/llm-compressor/pull/66
* horheynm made their first contribution in https://github.com/vllm-project/llm-compressor/pull/71
* dsikka made their first contribution in https://github.com/vllm-project/llm-compressor/pull/101
* dhuangnm made their first contribution in https://github.com/vllm-project/llm-compressor/pull/145
* Imss27 made their first contribution in https://github.com/vllm-project/llm-compressor/pull/190
**Full Changelog**: https://github.com/vllm-project/llm-compressor/compare/0.1.0...0.2.0