- Added `__all__` to all core imports
- Installation now finishes normally even if `requests` isn't installed. Previously, setup failed when the `requests` package wasn't already present at install time.
- **Note:** pyFlarum still depends on the `requests` library.
- `forum_url` is now parsed with the `urllib` module. pyFlarum checks whether the URL uses the `https://` or `http://` protocol - if not, a `TypeError` is raised when initializing `FlarumUser`.
- This also removes the requirement that `forum_url` must not end with a slash - it now can, because the URL is always reduced to its root (e.g. `forum_url = "https://forum.com/abc/def"` is now valid)
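A minimal sketch of what this validation and normalization could look like with `urllib` (the function name `normalize_forum_url` is illustrative, not pyFlarum's internal API):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_forum_url(forum_url: str) -> str:
    """Validate the protocol and reduce the URL to its root."""
    parsed = urlsplit(forum_url)

    # Only http:// and https:// are accepted - anything else raises a TypeError:
    if parsed.scheme not in ("http", "https"):
        raise TypeError(f"`forum_url` must use the http:// or https:// protocol, got: {forum_url!r}")

    # Drop the path, query and fragment, so any sub-path resolves to the root URL:
    return urlunsplit((parsed.scheme, parsed.netloc, "", "", ""))


print(normalize_forum_url("https://forum.com/abc/def"))  # https://forum.com
```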
- Added the ability to scrape all posts from a long discussion whose API response doesn't contain complete data for every post
- This feature hasn't been properly tested yet; improvements may be made in the future
- This uses the new `ids` parameter to fetch multiple posts by their IDs in chunks, which speeds things up (see the sketch below)
- [See an example](https://github.com/CWKevo/pyflarum/blob/ffd54ff1b598cb9abc6302085ea5fab8da4d32fd/tests/absolutely_all_posts_from_discussion.py)
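A rough sketch of the chunked fetching idea; the top-level `Filter` import and the `user.all_posts` method are assumptions here - the linked example shows the actual calls:

```python
from pyflarum import Filter  # import path is assumed

def fetch_posts_by_ids(user, post_ids: list, chunk_size: int = 50) -> list:
    """Fetch specific posts in chunks of at most 50 IDs (Flarum's default API maximum)."""
    posts = []

    for start in range(0, len(post_ids), chunk_size):
        chunk = post_ids[start:start + chunk_size]
        # `ids` restricts the API query to exactly these posts (assumed `all_posts` method):
        posts.extend(user.all_posts(Filter(ids=chunk)))

    return posts
```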
- `Filter`:
- Improved/added docstring
- Added an `ids` parameter to `Filter` to fetch multiple entries with specific IDs from the API at once
- Added a warning when a `Filter` with a `limit` above 50 is created - by default, Flarum returns at most 50 entries from the API, so there is no reason to go above that.
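The check presumably behaves along these lines (a sketch of the warning, not the library's actual code):

```python
import warnings

MAX_API_LIMIT = 50  # Flarum returns at most this many entries per API request by default

def _check_limit(limit: int) -> None:
    """Warn when a `limit` above Flarum's default maximum is requested."""
    if limit > MAX_API_LIMIT:
        warnings.warn(f"`limit` is {limit}, but Flarum can only return up to {MAX_API_LIMIT} entries per request.")
```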
v1.1.x will be a stable release. All that is left is making posts' and users' data more complete. No ETA.