# Petals

Latest version: v2.2.0.post1


## 1.1.3

### Highlights

🐞 **Bug fixes.** We have fixed a variety of minor issues related to timeout errors in the client, fine-tuning, and tensor parallelism.

⚙️ **New options in the client.** Added `allowed_servers` and `max_retries` options:

- `allowed_servers` lets you restrict the set of servers a client can use for its requests (e.g., to only the servers you trust to process your data).
- `max_retries` lets you limit the number of retries a client makes before raising an exception (previously, clients retried indefinitely).
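The `max_retries` behavior (together with the 15-minute cap on the delay between retries added in this release) can be illustrated with a toy retry loop. This is not Petals source code; the function and parameter names below are ours:

```python
import time

def call_with_retries(fn, max_retries=3, base_delay=1.0, max_delay=15 * 60):
    """Toy sketch of the new client semantics: give up after max_retries
    failed retries, doubling the delay between attempts while capping it
    at 15 minutes."""
    delay = base_delay
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the error instead of looping forever
            time.sleep(min(delay, max_delay))
            delay *= 2
```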

📚 **FAQ.** We have released the [FAQ page](https://github.com/bigscience-workshop/petals/wiki/FAQ:-Frequently-asked-questions) that covers common questions about running clients and servers, as well as troubleshooting common problems.

### What's Changed
* Fix typo in prompt-tuning-sst2.ipynb by borzunov in https://github.com/bigscience-workshop/petals/pull/245
* Minor changes to examples/prompt-tuning notebooks by justheuristic in https://github.com/bigscience-workshop/petals/pull/247
* Fix examples/sst, add cls_model embeddings by justheuristic in https://github.com/bigscience-workshop/petals/pull/248
* Fix TP crashing when hypo_ids are used by borzunov in https://github.com/bigscience-workshop/petals/pull/249
* Add `allowed_servers`, `max_retries` options to the client, improve logs by borzunov in https://github.com/bigscience-workshop/petals/pull/235
* Lower payload size threshold for stream handlers by borzunov in https://github.com/bigscience-workshop/petals/pull/251
* Improve reachability logs by borzunov in https://github.com/bigscience-workshop/petals/pull/253
* Link FAQ in readme by borzunov in https://github.com/bigscience-workshop/petals/pull/260
* Show visible maddrs for public swarm too by borzunov in https://github.com/bigscience-workshop/petals/pull/263
* Limit max delay between retries to 15 min by borzunov in https://github.com/bigscience-workshop/petals/pull/264
* Use get_logger(__name__) instead of get_logger(__file__) by borzunov in https://github.com/bigscience-workshop/petals/pull/265
* Improve "connect your GPU" message by borzunov in https://github.com/bigscience-workshop/petals/pull/266
* Fix use_chunked_forward="auto" on non-x86_64 machines by borzunov in https://github.com/bigscience-workshop/petals/pull/267
* Use inference mode in _MergedInferenceStep by justheuristic in https://github.com/bigscience-workshop/petals/pull/275
* Increase default request_timeout by borzunov in https://github.com/bigscience-workshop/petals/pull/276


**Full Changelog**: https://github.com/bigscience-workshop/petals/compare/v1.1.2...v1.1.3

## 1.1.2

### Highlights

🏃‍♀️ **Faster inference.** We've shipped server-side changes that improve inference speed by up to 30%. This is the result of profiling the server's inference performance (see details in PRs 224 and 225). The public swarm will become faster once everyone upgrades to the latest Petals version and restarts their servers.

🐞 **Prompt-tuning bug fixes.** We've shipped bug fixes for the prompt-tuning notebooks (see details in PR 231).

🧑‍🏫 **New pretrained model.** We've added a new model, [BLOOMZ-176B](https://huggingface.co/bigscience/bloomz) by BigScience, to the public swarm. You can run it (or host its blocks) by specifying `bigscience/bloomz-petals` as the model name.

- BLOOMZ is a version of BLOOM fine-tuned to **follow human instructions** in the zero-shot regime. See details in its [model card](https://huggingface.co/bigscience/bloomz) and [paper](https://arxiv.org/abs/2211.01786).
- The [chatbot app](http://chat.petals.ml/) now uses BLOOMZ by default. You can ask it to generate text or code, or to perform various tasks. It responds better than the regular BLOOM, which often went off-topic instead of doing the task you asked for.

### What's Changed
* Choose --num_blocks automatically for all models by borzunov in https://github.com/bigscience-workshop/petals/pull/217
* Add one more link to the "Getting started" tutorial by borzunov in https://github.com/bigscience-workshop/petals/pull/218
* Mention BLOOMZ in readme by borzunov in https://github.com/bigscience-workshop/petals/pull/221
* Fix a typo in error message. by zsc in https://github.com/bigscience-workshop/petals/pull/227
* Merge inference pools into one to increase inference speed by justheuristic in https://github.com/bigscience-workshop/petals/pull/225
* Add citation to readme by Muhtasham in https://github.com/bigscience-workshop/petals/pull/219
* Fix dtype error in fine-tuning notebooks by artek0chumak in https://github.com/bigscience-workshop/petals/pull/231
* Prompt-tuning notebooks: suggest to use a smaller model for faster prototyping by borzunov in https://github.com/bigscience-workshop/petals/pull/234
* Bump version to 1.1.2 by borzunov in https://github.com/bigscience-workshop/petals/pull/244

### New Contributors
* zsc made their first contribution in https://github.com/bigscience-workshop/petals/pull/227
* Muhtasham made their first contribution in https://github.com/bigscience-workshop/petals/pull/219

**Full Changelog**: https://github.com/bigscience-workshop/petals/compare/v1.1.1...v1.1.2

## 1.1.1

### Highlights

⛰️ **Stability.** This release improves the **stability and performance** of the Petals [DHT](https://en.wikipedia.org/wiki/Distributed_hash_table) in the presence of many servers joined via NAT traversal & relays. The DHT now prefers to store keys on directly reachable peers, so that all peers can access them faster and with fewer failures. This release also contains a minor fix to the block reassignment algorithm that reduces the excess reassignments that previously led to swarm downtime.

🌎 **Basic routing.** We have improved the routing algorithm for inference, so that clients weakly prefer servers holding more blocks, minimizing latency and **increasing inference speed**. This is only a basic algorithm; we are working on smarter routing (taking latency, throughput, etc. into account) for both inference and fine-tuning in future releases. This release also makes servers share more technical information about themselves (their version, free cache, etc.), so it can be used by smarter routing algorithms in the future and shown at http://health.petals.ml for debugging purposes.
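The "weakly prefer servers holding more blocks" idea amounts to length-weighted sampling, as in PR 204. The sketch below is our own illustration of that idea, not the actual Petals routing code:

```python
import random

def choose_server(block_counts, rng=None):
    """Illustrative sketch: pick a server with probability proportional to
    the number of blocks it holds, so servers covering longer spans are
    weakly preferred and the client needs fewer hops overall."""
    rng = rng or random.Random()
    servers = sorted(block_counts)           # deterministic ordering of candidates
    weights = [block_counts[s] for s in servers]
    return rng.choices(servers, weights=weights, k=1)[0]
```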

### What's Changed
* Fix fine-tuning notebooks intros by borzunov in https://github.com/bigscience-workshop/petals/pull/194
* Ignore network RPS if we failed to measure it by borzunov in https://github.com/bigscience-workshop/petals/pull/198
* Make client ignore blacklist if all servers holding a block are blacklisted by borzunov in https://github.com/bigscience-workshop/petals/pull/197
* Increase tolerances in test_tp_block by justheuristic in https://github.com/bigscience-workshop/petals/pull/196
* Fix --no_auto_relay help by borzunov in https://github.com/bigscience-workshop/petals/pull/199
* Use length-weighted sampling in routing for inference by justheuristic in https://github.com/bigscience-workshop/petals/pull/204
* Return available cache size in rpc_info() by justheuristic in https://github.com/bigscience-workshop/petals/pull/191
* Add service checking direct reachability from peers by justheuristic in https://github.com/bigscience-workshop/petals/pull/195
* Report server version and dht.client_mode in rpc_info(), check for updates on startup by borzunov in https://github.com/bigscience-workshop/petals/pull/209
* Don't switch blocks if it makes swarm disjoint by borzunov in https://github.com/bigscience-workshop/petals/pull/210
* Fix output shape when resuming generation by borzunov in https://github.com/bigscience-workshop/petals/pull/211
* Improve errors in case of missing blocks, suggest to join your own server by borzunov in https://github.com/bigscience-workshop/petals/pull/212
* CI: Convert model only when convert_model.py or setup.cfg change by borzunov in https://github.com/bigscience-workshop/petals/pull/213
* CI: Update deprecated actions, don't measure network RPS by borzunov in https://github.com/bigscience-workshop/petals/pull/215
* Bump version to 1.1.1 by borzunov in https://github.com/bigscience-workshop/petals/pull/214


**Full Changelog**: https://github.com/bigscience-workshop/petals/compare/v1.1.0...v1.1.1

## 1.1.0

### Highlights

🏠 **NAT traversal & relays.** Now, servers can join the swarm automatically even if your machine is located behind a NAT or a firewall, or has a dynamic IP address. You don't have to manually set up port forwarding or provide any arguments to make it work.

- Please upgrade the Petals package and restart all your servers & clients to use this feature or access servers joined via relays:

`pip install --upgrade petals`

- __How does it work?__ If a server learns that it can't accept incoming connections due to a NAT or firewall, it opens a long-term outgoing connection to one of the **relay nodes**, and that relay node forwards all requests to the server through this connection. In turn, any server with a public IP may serve as a relay node if necessary. We use libp2p circuit relays under the hood: https://docs.libp2p.io/concepts/nat/circuit-relay/

💬 **Chatbot app.** We've released a chatbot app working over Petals: http://chat.petals.ml ([source code](https://github.com/borzunov/chat.petals.ml)).

- __Disclaimer:__ This chatbot uses the regular BLOOM, which is not fine-tuned for question answering. Please do not expect it to behave like ChatGPT.

- __How does it work?__ Under the hood, this web app uses our HTTP endpoint for running inference using the public Petals swarm. You can use this endpoint for your own projects, or set up another endpoint yourself (no GPU needed). See API docs here: https://github.com/borzunov/chat.petals.ml#http-api-methods

🏃‍♀️ **Faster CPU-only clients.** If your CPU supports the AVX512 instruction set, a CPU-only client now runs almost as fast as a GPU-enabled one. This way, you can rent cheap CPU instances to run the client or an HTTP endpoint, like the one we use for the chatbot app.

- __How to use it?__ AVX512 is mostly present on recent Intel Xeon CPUs. You can rent one by choosing a "dedicated CPU" instance with 16+ GB RAM on [DigitalOcean](https://m.do.co/c/4fc38037f84c).
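One way to check whether your CPU advertises AVX512 is to inspect the `flags` line of `/proc/cpuinfo` on Linux. The helper below is our own sketch (not part of Petals) and checks only the AVX512 foundation flag:

```python
def has_avx512(cpuinfo_text):
    """Return True if the first 'flags' line of the given /proc/cpuinfo
    text lists the AVX512 foundation instruction set (avx512f)."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return "avx512f" in line.split()
    return False

# Usage on a Linux machine:
# with open("/proc/cpuinfo") as f:
#     print(has_avx512(f.read()))
```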

🏥 **Swarm health monitor.** We've updated the swarm health monitor: http://health.petals.ml ([source code](https://github.com/borzunov/health.petals.ml)). It provides an overview of the servers that have joined the public swarm and reports any connection issues.

### What's Changed
* Add PyPI badge, update instructions and links in readme by borzunov in https://github.com/bigscience-workshop/petals/pull/172
* Add link to PyPI by borzunov in https://github.com/bigscience-workshop/petals/pull/173
* Add local tensor-parallel fwd/bwd by justheuristic in https://github.com/bigscience-workshop/petals/pull/143
* Make Docker command more visible by borzunov in https://github.com/bigscience-workshop/petals/pull/175
* Allow to disable chunked forward by borzunov in https://github.com/bigscience-workshop/petals/pull/176
* Disable chunked_forward() on AVX512 CPUs by borzunov in https://github.com/bigscience-workshop/petals/pull/179
* Use slightly less memory in .generate() by borzunov in https://github.com/bigscience-workshop/petals/pull/177
* Import bitsandbytes only if it's going to be used by borzunov in https://github.com/bigscience-workshop/petals/pull/180
* hotfix: add initial peer that did not crash :) by justheuristic in https://github.com/bigscience-workshop/petals/pull/181
* Remove protobuf from requirements by borzunov in https://github.com/bigscience-workshop/petals/pull/182
* Add more links to BLOOM to readme by borzunov in https://github.com/bigscience-workshop/petals/pull/183
* Add link to health.petals.ml to readme by borzunov in https://github.com/bigscience-workshop/petals/pull/184
* Add readme subsections by borzunov in https://github.com/bigscience-workshop/petals/pull/185
* Fix GiBs in the "insufficient disk space" message by borzunov in https://github.com/bigscience-workshop/petals/pull/187
* Support libp2p relays for NAT traversal by Vahe1994 in https://github.com/bigscience-workshop/petals/pull/186
* Fix psutil-related AccessDenied crash, disable --load_in_8bit by default in case of TP by borzunov in https://github.com/bigscience-workshop/petals/pull/188
* Bump version to 1.1.0 by borzunov in https://github.com/bigscience-workshop/petals/pull/190

### New Contributors
* Vahe1994 made their first contribution in https://github.com/bigscience-workshop/petals/pull/186

**Full Changelog**: https://github.com/bigscience-workshop/petals/compare/v1.0.0...v1.1.0

## 1.0.0

### General

This release contains the core functionality of the Petals platform described in [our paper](https://arxiv.org/pdf/2209.01188.pdf).

### What's Changed
* Rudimentary decentralization by justheuristic in https://github.com/bigscience-workshop/petals/pull/9
* Update model by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/17
* Chained rpc_forward & rpc_backward by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/18
* Implement block selection on servers by borzunov in https://github.com/bigscience-workshop/petals/pull/20
* LM head module by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/19
* Measure and cache network & compute throughput by borzunov in https://github.com/bigscience-workshop/petals/pull/21
* Shallow prompt tuning with run example on SST-2 by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/22
* minimalistic automated tests by justheuristic in https://github.com/bigscience-workshop/petals/pull/23
* Clean up readme by justheuristic in https://github.com/bigscience-workshop/petals/pull/24
* [Test CI] add instructions to test the full model by justheuristic in https://github.com/bigscience-workshop/petals/pull/25
* Fix default branch in CI by justheuristic in https://github.com/bigscience-workshop/petals/pull/26
* Fix CI runs in master by justheuristic in https://github.com/bigscience-workshop/petals/pull/27
* CI: use GIT_REF_NAME instead of GIT_HEAD_REF by justheuristic in https://github.com/bigscience-workshop/petals/pull/28
* Add GenerationMixin class by artek0chumak in https://github.com/bigscience-workshop/petals/pull/29
* Decouple make_sequence and move to RemoteSequenceManager by justheuristic in https://github.com/bigscience-workshop/petals/pull/30
* fix is_subsequence by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/32
* Miscellaneous fixes to automatic tests by justheuristic in https://github.com/bigscience-workshop/petals/pull/35
* Efficient forward & backward by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/36
* Pack of Inference Changes by artek0chumak in https://github.com/bigscience-workshop/petals/pull/37
* Support various backend dtypes & async serialization by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/38
* Use "PETALS" as the readme title by borzunov in https://github.com/bigscience-workshop/petals/pull/40
* integrate mixed-8bit model by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/39
* Rename 350m -> 560m by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/43
* make pytest outputs more verbose by justheuristic in https://github.com/bigscience-workshop/petals/pull/44
* Distributed prompt tuning by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/42
* Reduce vocabulary size in test model, fix bug in routing when overlapped by justheuristic in https://github.com/bigscience-workshop/petals/pull/45
* Convert actual model weights by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/46
* [quickfix 1/n] remove expensive assertions in inference code by justheuristic in https://github.com/bigscience-workshop/petals/pull/48
* [Fix] make distributed seq cls to not create the full bloom model by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/49
* Fix recovering for sequential_backward by dbaranchuk in https://github.com/bigscience-workshop/petals/pull/50
* Inference: require max sequence length instead of assuming 2048 by justheuristic in https://github.com/bigscience-workshop/petals/pull/52
* Add shallow prefix-tuned inference by artek0chumak in https://github.com/bigscience-workshop/petals/pull/55
* remove transformer block, implement as sequence size 1 by GreenFatGuy in https://github.com/bigscience-workshop/petals/pull/54
* Update readme for the 1st public release by borzunov in https://github.com/bigscience-workshop/petals/pull/57
* Use latest version of Petals scheme, shrink Petals logo by borzunov in https://github.com/bigscience-workshop/petals/pull/59
* Update bullet points with feedback from Tim and other people by borzunov in https://github.com/bigscience-workshop/petals/pull/61
* Update readme with arxiv link and more discussions by borzunov in https://github.com/bigscience-workshop/petals/pull/62
* Warn that current instructions involve 6B model but we will replace them soon by borzunov in https://github.com/bigscience-workshop/petals/pull/63
* Add deep prompt inference by artek0chumak in https://github.com/bigscience-workshop/petals/pull/66
* Fix calling rpc_info multiple times by justheuristic in https://github.com/bigscience-workshop/petals/pull/60
* Make attention cache wait until memory is freed by justheuristic in https://github.com/bigscience-workshop/petals/pull/53
* Build cpuonly from bitsandbytes main by justheuristic in https://github.com/bigscience-workshop/petals/pull/70
* Priority tasks by GreenFatGuy in https://github.com/bigscience-workshop/petals/pull/47
* Update dependency versions by justheuristic in https://github.com/bigscience-workshop/petals/pull/71
* fix protobuf version by justheuristic in https://github.com/bigscience-workshop/petals/pull/74
* Add prompt tuning example on Personachat dataset by artek0chumak in https://github.com/bigscience-workshop/petals/pull/69
* Quality of life changes: update readme, simplify run_server interface by justheuristic in https://github.com/bigscience-workshop/petals/pull/75
* Use bitsandbytes==0.34.0, update readme by justheuristic in https://github.com/bigscience-workshop/petals/pull/76
* Make small readability & style changes to the instructions by borzunov in https://github.com/bigscience-workshop/petals/pull/77
* Rebalance swarm when necessary by borzunov in https://github.com/bigscience-workshop/petals/pull/34
* Update hivemind to 1.1.2, mark `model` argument as required by borzunov in https://github.com/bigscience-workshop/petals/pull/81
* Fix "Too many open files" during rebalancing by borzunov in https://github.com/bigscience-workshop/petals/pull/83
* Add colab-related changes by artek0chumak in https://github.com/bigscience-workshop/petals/pull/80
* Enable rebalancing by default by borzunov in https://github.com/bigscience-workshop/petals/pull/84
* Implement exponential backoff for forward & backward by borzunov in https://github.com/bigscience-workshop/petals/pull/85
* Add sst-2 ipynb example by artek0chumak in https://github.com/bigscience-workshop/petals/pull/86
* Fix floating point issues in block_selection.py by borzunov in https://github.com/bigscience-workshop/petals/pull/89
* Implement timeouts in forward/backward by borzunov in https://github.com/bigscience-workshop/petals/pull/90
* Force reinstall of hivemind in example notebooks by artek0chumak in https://github.com/bigscience-workshop/petals/pull/88
* Make inference, forward, and backward fully fault-tolerant by borzunov in https://github.com/bigscience-workshop/petals/pull/91
* Use public swarm by default by borzunov in https://github.com/bigscience-workshop/petals/pull/92
* Make ServerState announcements work better by borzunov in https://github.com/bigscience-workshop/petals/pull/93
* Require hivemind with fixed compression and protobuf working on Colab by borzunov in https://github.com/bigscience-workshop/petals/pull/94
* Try to fix protobuf versions once again by borzunov in https://github.com/bigscience-workshop/petals/pull/95
* Add Beam Search decoding algorithm by artek0chumak in https://github.com/bigscience-workshop/petals/pull/87
* Improve server's logging by borzunov in https://github.com/bigscience-workshop/petals/pull/96
* Add various server timeouts, lower --max_batch_size and --inference_max_length defaults by borzunov in https://github.com/bigscience-workshop/petals/pull/97
* Fix dtype- and device-related client issues by borzunov in https://github.com/bigscience-workshop/petals/pull/98
* Make Petals a pip-installable package (attempt 2) by borzunov in https://github.com/bigscience-workshop/petals/pull/102
* Fix dtypes in backend schemas by borzunov in https://github.com/bigscience-workshop/petals/pull/99
* Fix ptune with `low_cpu_mem_usage=True` (as in Colab) by borzunov in https://github.com/bigscience-workshop/petals/pull/103
* Add Dockerfile by mryab in https://github.com/bigscience-workshop/petals/pull/82
* Remove unused imports, add missing arguments to docstrings by mryab in https://github.com/bigscience-workshop/petals/pull/108
* Expose request_timeout to DistributedBloomConfig by artek0chumak in https://github.com/bigscience-workshop/petals/pull/105
* Optimize RemoteSequenceManager by justheuristic in https://github.com/bigscience-workshop/petals/pull/106
* Hotfix span selection by justheuristic in https://github.com/bigscience-workshop/petals/pull/110
* Patch Linear8bit to enable CxB backward by justheuristic in https://github.com/bigscience-workshop/petals/pull/111
* Fix Linear8bitlt state config, update tests by justheuristic in https://github.com/bigscience-workshop/petals/pull/112
* Measure throughput for different configs, devices, and dtypes separately by borzunov in https://github.com/bigscience-workshop/petals/pull/114
* Support --load_in_8bit on pre-Turing GPUs by justheuristic in https://github.com/bigscience-workshop/petals/pull/113
* Fix tile size on ampere by justheuristic in https://github.com/bigscience-workshop/petals/pull/116
* Make server use smart defaults by borzunov in https://github.com/bigscience-workshop/petals/pull/115
* Suppress quantization warning and fix dtype defaults in compute benchmark by borzunov in https://github.com/bigscience-workshop/petals/pull/117
* Choose --num_blocks for bigscience/bloom-petals automatically by borzunov in https://github.com/bigscience-workshop/petals/pull/119
* Require hivemind==1.1.4 with p2pd v0.3.13 by borzunov in https://github.com/bigscience-workshop/petals/pull/121
* Rework readme, move code example to the top, link draft of Colab by borzunov in https://github.com/bigscience-workshop/petals/pull/118
* Remove "-r" when installing Petals in examples by mryab in https://github.com/bigscience-workshop/petals/pull/122
* Update notebooks to use full BLOOM-176B by artek0chumak in https://github.com/bigscience-workshop/petals/pull/104
* Call block.load_state_dict only once by mryab in https://github.com/bigscience-workshop/petals/pull/124
* Add checks for forward() inputs on the client side by justheuristic in https://github.com/bigscience-workshop/petals/pull/123
* Fix typos with codespell by mryab in https://github.com/bigscience-workshop/petals/pull/126
* Set dht.num_workers = n_layer, update_period = 150, expiration = 300 by borzunov in https://github.com/bigscience-workshop/petals/pull/125
* Avoid synchronous updates, ban peers based on request outcome by justheuristic in https://github.com/bigscience-workshop/petals/pull/127
* Revert to hivemind==1.1.3 for stability by borzunov in https://github.com/bigscience-workshop/petals/pull/129
* Clear trigger before engaging in update by justheuristic in https://github.com/bigscience-workshop/petals/pull/130
* Fix inference and rpc_info() fault tolerance by borzunov in https://github.com/bigscience-workshop/petals/pull/131
* Set default --step_timeout to 5 min by borzunov in https://github.com/bigscience-workshop/petals/pull/133
* Don't ban servers in case of client-caused handler errors by borzunov in https://github.com/bigscience-workshop/petals/pull/134
* Allow .generate() to reuse existing inference session by borzunov in https://github.com/bigscience-workshop/petals/pull/132
* Fix waiting until free memory is available by borzunov in https://github.com/bigscience-workshop/petals/pull/136
* Fix "could not unlink the shared memory file" during rebalancing by borzunov in https://github.com/bigscience-workshop/petals/pull/135
* Add Docker commands, use permanent Discord links by borzunov in https://github.com/bigscience-workshop/petals/pull/137
* Update texts in "Terms of use" and "Privacy and security" sections by borzunov in https://github.com/bigscience-workshop/petals/pull/138
* Show route on client by borzunov in https://github.com/bigscience-workshop/petals/pull/139
* Update Anaconda instructions by borzunov in https://github.com/bigscience-workshop/petals/pull/140
* Use common folder for all caches, make it a volume in Dockerfile by borzunov in https://github.com/bigscience-workshop/petals/pull/141
* Suppress asyncio error logs by default by borzunov in https://github.com/bigscience-workshop/petals/pull/142
* Add link to privacy & security Wiki by borzunov in https://github.com/bigscience-workshop/petals/pull/144
* Improve block size calculations by borzunov in https://github.com/bigscience-workshop/petals/pull/149
* Fix OOMs during server rebalancing by borzunov in https://github.com/bigscience-workshop/petals/pull/150
* Bump transformers to 4.25.1 by justheuristic in https://github.com/bigscience-workshop/petals/pull/151
* Clean up disk space by borzunov in https://github.com/bigscience-workshop/petals/pull/152
* Fix arguments in remove_old_models.py by mryab in https://github.com/bigscience-workshop/petals/pull/153
* Add missing methods for SamplingAlgorithm, fix docstrings by mryab in https://github.com/bigscience-workshop/petals/pull/107
* Reset MemoryCache during rebalancings by borzunov in https://github.com/bigscience-workshop/petals/pull/154
* Check reachability automatically and give advice how to fix it by borzunov in https://github.com/bigscience-workshop/petals/pull/155
* Fix logging: do not duplicate lines, enable colors in Colab by borzunov in https://github.com/bigscience-workshop/petals/pull/156
* Update advanced notebooks by artek0chumak in https://github.com/bigscience-workshop/petals/pull/148
* Downgrade CUDA in Docker image to 11.0.3 by mryab in https://github.com/bigscience-workshop/petals/pull/145
* Switch to speedtest-cli by justheuristic in https://github.com/bigscience-workshop/petals/pull/157
* Fix issues related to `petals` as a module by borzunov in https://github.com/bigscience-workshop/petals/pull/159
* Alloc inference cache as one contiguous buffer by borzunov in https://github.com/bigscience-workshop/petals/pull/160
* Fix misstypos in the example notebooks. by artek0chumak in https://github.com/bigscience-workshop/petals/pull/161
* Hot fix: Increase hivemind.P2P's startup_timeout for Colab, remove absent initial peer by borzunov in https://github.com/bigscience-workshop/petals/pull/162
* Shield alloc & free from cancellation by borzunov in https://github.com/bigscience-workshop/petals/pull/163
* Update wording in readme by borzunov in https://github.com/bigscience-workshop/petals/pull/165
* Correct grammar in readme by vadi2 in https://github.com/bigscience-workshop/petals/pull/166
* Add link to chat.petals.ml by borzunov in https://github.com/bigscience-workshop/petals/pull/168
* Fix code example in readme by borzunov in https://github.com/bigscience-workshop/petals/pull/169
* Fix instruction for developers by justheuristic in https://github.com/bigscience-workshop/petals/pull/170

### New Contributors
* dbaranchuk made their first contribution in https://github.com/bigscience-workshop/petals/pull/17
* borzunov made their first contribution in https://github.com/bigscience-workshop/petals/pull/20
* artek0chumak made their first contribution in https://github.com/bigscience-workshop/petals/pull/29
* GreenFatGuy made their first contribution in https://github.com/bigscience-workshop/petals/pull/54
* mryab made their first contribution in https://github.com/bigscience-workshop/petals/pull/82
* vadi2 made their first contribution in https://github.com/bigscience-workshop/petals/pull/166

**Full Changelog**: https://github.com/bigscience-workshop/petals/commits/v1.0.0
