ciscosparkapi

Latest version: v0.10.1

0.8.3

Merged in pull request 49 from dlspano with a fix to the package's rate-limit handling, where we had accidentally removed the `SparkApiError.retry_after` attribute that is critical to handling rate-limit messages. 🤦‍♂️ _-Thank you for catching this, Dave!_

This release also includes a few minor commits that were added to the package in support of the ciscosparksdk work that is underway.

0.8.2

A couple of small feature updates in this release:
* We are now exposing the Spark data object's JSON data in three formats (48): 💯
  * `<spark data object>.json_data` returns a copy of the object's JSON data as an `OrderedDict`.
  * `<spark data object>.to_dict()` returns a copy of the object's JSON data as a `dict` object.
  * `<spark data object>.to_json()` returns a copy of the object's JSON data as a JSON string. **Note:** You can pass your favorite Python JSON encoding keyword arguments to this method (like `indent=2`). See the sketch after this list.
* We have refactored the `ciscosparkapi` main package to more clearly articulate which classes and data are exposed for your use. 😎
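For illustration, here's a minimal sketch of the three accessors; it assumes a `SPARK_ACCESS_TOKEN` environment variable is set and that your account has at least one room:

```python
from ciscosparkapi import CiscoSparkAPI

api = CiscoSparkAPI()  # reads SPARK_ACCESS_TOKEN from the environment

# Grab a room to inspect (assumes your rooms list is not empty)
room = next(iter(api.rooms.list()))

ordered = room.json_data       # copy of the JSON data as an OrderedDict
plain = room.to_dict()         # copy of the JSON data as a plain dict
text = room.to_json(indent=2)  # JSON string; encoding kwargs pass through
print(text)
```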

0.8

All of the API wrappers and data models have been reviewed and updated to match the latest Cisco Spark API capabilities. **-and-** We have completed a significant internal restructuring to improve all of the API wrappers:

* All API methods that accept parameters and post data have been updated to consistently accept optional (`**request_parameters`) keyword arguments. So, **if Cisco releases an API update tomorrow with some awesome new parameter... you can go ahead and use it.** We'll update the code as soon as we can so that it also shows up in your IDE. (See the first sketch after this list.)

* **New WebhookEvent** - Webhook posts to your bot or automation can now be modeled via the new `WebhookEvent` class. Just pass the JSON body that Spark posts to your web service to the `WebhookEvent()` initializer, and you can use native dot-syntax to access all of its attributes. (See the second sketch after this list.)

* **Exceptions** - Some changes to which exceptions are raised:
  * **TypeErrors** - If you pass an incorrectly typed parameter to one of the API methods or object initializers, the package now raises a more appropriate and informative `TypeError` rather than an `AssertionError`.
  * **ValueErrors** - If you pass an incorrect value... you guessed it: `ValueError`.
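To illustrate the pass-through parameters, here's a minimal sketch; `someNewFilter` is a hypothetical parameter standing in for whatever Cisco ships next:

```python
from ciscosparkapi import CiscoSparkAPI

api = CiscoSparkAPI()  # reads SPARK_ACCESS_TOKEN from the environment

# someNewFilter is hypothetical; unrecognized keyword arguments are
# passed straight through to the Spark API as request parameters.
for room in api.rooms.list(type='group', someNewFilter='example'):
    print(room.title)
```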
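And a minimal sketch of `WebhookEvent`, using a trimmed-down stand-in for the JSON body Spark posts to your web service:

```python
import json

from ciscosparkapi import WebhookEvent

# Trimmed-down stand-in for the JSON body Spark POSTs to your service
json_body = json.dumps({
    "id": "Y2lzY29zcGFyazovL3VzL1dFQkhPT0svLi4u",
    "name": "My Webhook",
    "resource": "messages",
    "event": "created",
    "data": {"id": "Y2lzY29zcGFyazovL3VzL01FU1NBR0UvLi4u"},
})

webhook = WebhookEvent(json_body)
print(webhook.resource)  # "messages"
print(webhook.event)     # "created"
print(webhook.data.id)   # dot-syntax access to the nested event data
```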

Exception handling should be very straightforward now. **The only exceptions that you should have to catch and handle at runtime are `SparkApiError`s**, which are raised when Cisco Spark responds with an error code. By the way, these were recently updated to show you the full request and response details when an error occurs. Any other errors should surface, and be addressed, while you are writing and debugging your code.
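A minimal sketch of that runtime handling (the room ID is a placeholder):

```python
from ciscosparkapi import CiscoSparkAPI, SparkApiError

api = CiscoSparkAPI()  # reads SPARK_ACCESS_TOKEN from the environment

try:
    api.messages.create(roomId="<your room id>", text="Hello!")
except SparkApiError as err:
    # The error now includes the full request and response details
    print(err)
```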

Please open an [issue](https://github.com/CiscoDevNet/ciscosparkapi/issues) if you experience any issues with the package. We have tested it extensively, so hopefully you won't! ...but the issue log is there just in case. 🤞 😎

_-Thank You!_

0.7.1

Micro release with some goodness!

We corrected issue 46, where `SparkApiError`s were not printing / displaying correctly, and we enhanced them while we were at it. `SparkApiError`s now include the full details of the request and response that triggered the error. No more having to go to your debugger to see what the offending request and response looked like. 😎

0.7

If you have sent a few too many messages (actually, usually a lot of messages) in a short period of time, you may receive a 429 response from Cisco Spark, which the ciscosparkapi package raises as a Python exception. While you can catch and handle these exceptions yourself... why don't we just do that for you? 😎

Experience with the _New_ Rate-Limit Handling
Now, when a rate-limit message is received in response to one of your requests, the ciscosparkapi package will automatically:
1. Catch the exception
2. Wait / sleep for the time interval provided by Cisco Spark
3. Automatically retry your request

All of this should be transparent to the execution of your code, and you shouldn't need to make any modifications to your code to take advantage of the new functionality (unless you were already handling rate-limit responses, in which case you should be able to pull out that code and simplify your app 🙂 ).

**Experience:** Your code should run as expected, with the package handling any rate-limit responses it receives. Note that if your requests do trigger a rate-limit response, **the wait times prescribed by Cisco Spark are usually measured in minutes** (averaging about 5 minutes in my experience). It may appear that your code is running very slowly due to these wait times, but the good news is that your code is running and your requests are being handled as quickly as possible.

Can I disable the automated rate-limit handling?
Absolutely. Should you want to disable the automatic rate-limit handling, set the `wait_on_rate_limit` parameter to `False` when creating your `CiscoSparkAPI` connection object, like so:

```python
spark = CiscoSparkAPI(wait_on_rate_limit=False)
```

New Package Exception / Error
Rate-limit messages (if you have disabled the automated handling) are now raised as a more specific `SparkRateLimitError` instead of the more general `SparkApiError`. Since `SparkRateLimitError` is a subclass of `SparkApiError`, your code will continue to work if you are catching rate-limit messages by catching `SparkApiError`s; it's just a little easier now to catch rate-limit messages separately from broader API errors.
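For example, here's a minimal sketch of handling rate limits yourself with the automated handling disabled, using the `retry_after` attribute on the error:

```python
import time

from ciscosparkapi import CiscoSparkAPI, SparkRateLimitError

api = CiscoSparkAPI(wait_on_rate_limit=False)  # reads SPARK_ACCESS_TOKEN

try:
    for person in api.people.list(displayName="Some Name"):
        print(person.emails)
except SparkRateLimitError as err:
    # retry_after is the wait time (in seconds) prescribed by Cisco Spark
    time.sleep(err.retry_after)
    # ...then retry the request yourself
```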

As always, please raise an issue if you are experiencing any challenges or have ideas for enhancement.

Thank You!

0.6.2

Deepar3292 corrected an errant HTTP method that was causing team updates to fail.
