-------------
* Added new keyword argument ``large_decimals_as_64_bit_types`` to
  ``make_options()``. If set to ``True``, decimals with more than ``18``
  digits are retrieved as 64-bit integers or floats as appropriate.
  The default retains the previous behavior of returning strings
  (a combined sketch of the new options follows the list below).
* Added support for the ``datetime64[ns]`` data type in ``executemanycolumns()``.
  This is particularly helpful when dealing with `pandas <https://pandas.pydata.org>`_
  ``DataFrame`` objects, since this is the type pandas uses for timestamps
  (see the second sketch after this list).
* Added the keyword argument ``limit_varchar_results_to_max`` to ``make_options()``. This
  makes it possible to truncate ``VARCHAR(n)`` fields to ``varchar_max_character_limit``
  characters; see the next item.
* Added the possibility to enforce NumPy and Apache Arrow requirements at installation
  time using extras: ``pip install turbodbc[arrow,numpy]``
* Updated Apache Arrow support to work with version 0.6.x
* Fixed an issue with retrieving result sets with ``VARCHAR(max)`` fields and
similar types. The size of the buffer allocated for such fields can be controlled
with the ``varchar_max_character_limit`` option to ``make_options()``.
* Fixed an `issue with some versions of Boost <https://svn.boost.org/trac10/ticket/3471>`_
  that led to problems with ``datetime64[us]`` columns in ``executemanycolumns()``.
  An overflow could occur when converting microseconds since 1970 to a
  database-readable timestamp, badly garbling the timestamps in the process.
  The issue surfaced with Debian 7's Boost version (1.49), although the Boost
  issue was allegedly fixed with version 1.43.
* Fixed an issue that led to undefined behavior when character sequences
could not be decoded into Unicode code points. The new (and defined) behavior
is to ignore the offending character sequences completely.
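
The following is a minimal sketch of how the new ``make_options()`` keyword
arguments described above can be combined. The data source name ``my_dsn`` is
hypothetical, and the option values are examples rather than recommended
defaults::

    from turbodbc import connect, make_options

    options = make_options(
        large_decimals_as_64_bit_types=True,  # decimals with more than 18 digits
                                              # become 64-bit integers or floats
        limit_varchar_results_to_max=True,    # truncate VARCHAR(n) fields to the limit below
        varchar_max_character_limit=10000     # buffer size for VARCHAR(max)-like fields
    )
    connection = connect(dsn="my_dsn", turbodbc_options=options)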
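
Similarly, a sketch of inserting ``datetime64[ns]`` data taken from a pandas
``DataFrame`` via ``executemanycolumns()``; again, the DSN ``my_dsn`` and the
table ``my_table`` are hypothetical::

    import pandas as pd
    from turbodbc import connect

    # pandas stores these timestamps with dtype datetime64[ns]
    df = pd.DataFrame({"ts": pd.to_datetime(["2017-01-01 12:00:00",
                                             "2017-01-02 08:30:00"])})

    cursor = connect(dsn="my_dsn").cursor()
    # df["ts"].values is a NumPy array of dtype datetime64[ns], which
    # executemanycolumns() now accepts directly
    cursor.executemanycolumns("INSERT INTO my_table VALUES (?)",
                              [df["ts"].values])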