If you have sent a few too many messages (in practice, usually a lot of messages) in a short period of time, you may receive a 429 Too Many Requests response from Cisco Spark (which the ciscosparkapi package raises as a Python exception). While you could catch and handle these exceptions yourself... Why don't we just do that for you? 😎
## Experience with the _New_ Rate-Limit Handling
Now, when a rate-limit message is received in response to one of your requests, the ciscosparkapi package will automatically:
1. Catch the exception
2. Wait / sleep for the time interval provided by Cisco Spark
3. Automatically retry your request
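Conceptually, the flow looks something like the following sketch. This is illustrative only, not the package's actual implementation; the helper name and the 60-second fallback are assumptions made for this example (Cisco Spark includes a `Retry-After` header with its rate-limit responses):

```python
import time

import requests


def request_with_retry(method, url, **kwargs):
    """Illustrative sketch of automatic rate-limit handling."""
    while True:
        response = requests.request(method, url, **kwargs)
        if response.status_code != 429:
            return response
        # Sleep for the interval prescribed by Cisco Spark, then retry.
        # The 60-second fallback is an arbitrary default for this sketch.
        retry_after = int(response.headers.get("Retry-After", 60))
        time.sleep(retry_after)
```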
All of this should be transparent to the execution of your code, and you shouldn't need to make any modifications to your code to take advantage of the new functionality (unless you were already handling rate-limit responses, in which case you should be able to pull out that code and simplify your app 🙂 ).
**Experience:** Your code should run as expected, with the package handling any rate-limit responses that are received. Note that if your requests do trigger a rate-limit response, **the wait times prescribed by Cisco Spark are usually measured in minutes** (about 5 minutes, in my experience). It may appear that your code is running very slowly due to these wait times, but the good news is that your code is running and your requests are being handled as quickly as possible.
## Can I disable the automated rate-limit handling?
Absolutely. Should you desire to disable the automatic rate-limit handling, you can do so by setting the `wait_on_rate_limit` parameter to `False` when creating your CiscoSparkAPI connection object, like so:
```python
from ciscosparkapi import CiscoSparkAPI

# Disable the automatic rate-limit handling for this connection object
spark = CiscoSparkAPI(wait_on_rate_limit=False)
```
## New Package Exception / Error
Rate-limit messages (if you have disabled the automated handling) are now raised as a more specific `SparkRateLimitError` instead of the more general `SparkApiError`. Since `SparkRateLimitError` is a subclass of `SparkApiError`, your code should still work as needed if you are catching rate-limit messages by catching `SparkApiError`s. This just makes it a little easier to catch the rate-limit messages separately from the broader API errors.
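For example, with the automatic handling disabled, you can catch rate-limit responses separately from other API errors along these lines (a sketch; it assumes `SparkRateLimitError` exposes the prescribed wait interval as a `retry_after` attribute):

```python
from ciscosparkapi import CiscoSparkAPI, SparkApiError, SparkRateLimitError

spark = CiscoSparkAPI(wait_on_rate_limit=False)

try:
    rooms = list(spark.rooms.list())
except SparkRateLimitError as error:
    # Rate-limit responses land here, separately from other API errors
    print("Rate limited; retry after {} seconds".format(error.retry_after))
except SparkApiError as error:
    # All other Spark API errors are still caught here
    print("Spark API error: {}".format(error))
```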
As always, please raise an issue if you are experiencing any challenges or have ideas for enhancement.
Thank You!