* Fixed an issue raised when a robots.txt file is not UTF-8 encoded (thank you, tumma72, for spotting the problem and suggesting a fix: https://github.com/andreburgaud/robotspy/issues/200). See the decoding sketch after this list.
* Added a user agent header when fetching robots.txt, as some websites, such as pages hosted on Cloudflare, may otherwise return a 403 error (see the fetch sketch after this list).
* Updated the documentation to link to RFC 9309, Robots Exclusion Protocol (REP).
* Added a GitHub Actions job to run the tests against Python versions 3.8 to 3.12 (see the workflow sketch after this list).
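
For illustration, a minimal sketch of tolerant robots.txt decoding; the function name and the latin-1 fallback strategy are assumptions, not the library's actual fix (see the linked issue for that):

```python
def decode_robots_txt(raw: bytes) -> str:
    """Decode robots.txt bytes without assuming UTF-8 (hypothetical helper)."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # latin-1 maps every byte to a code point, so this never raises;
        # directives like User-agent and Disallow are ASCII and survive intact.
        return raw.decode("latin-1")
```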
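A sketch of fetching robots.txt with an explicit user agent using only the standard library; the URL and the header value are placeholders, not necessarily what robotspy sends:

```python
import urllib.request

# Placeholder URL; substitute the site whose robots.txt you want.
ROBOTS_URL = "https://example.com/robots.txt"

# Some servers, including sites fronted by Cloudflare, answer 403 to the
# default Python-urllib user agent, so send an explicit one instead.
request = urllib.request.Request(
    ROBOTS_URL,
    headers={"User-Agent": "robotspy"},  # header value is an assumption
)
with urllib.request.urlopen(request) as response:
    content = response.read()
```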
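A sketch of what such a GitHub Actions workflow could look like; the workflow name, action versions, and test command are assumptions, not the repository's actual configuration:

```yaml
name: tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Test command is an assumption; the project may invoke make or another runner.
      - run: python -m pytest
```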