=======================================
New Features
------------
- Implemented Sliced Object Download feature.
This breaks up a single large object into multiple pieces and
downloads them in parallel, improving performance. The gsutil cp, mv,
and rsync commands now use sliced downloads by default when compiled
crcmod is available for performing fast end-to-end integrity checks.
If compiled crcmod is not available, a normal object download is used.
For maximum performance, sliced downloads can be combined with the
global -m flag to download multiple objects in parallel while also
slicing each object.
See the "SLICED OBJECT DOWNLOAD" section of "gsutil help cp" for
details.
Note: sliced download may degrade performance for disks with very slow
seek times. You can disable this feature by setting
sliced_object_download_threshold = 0 in your .boto configuration file
(see the examples after this list).
- Added rthru_file and wthru_file test modes to perfdiag, allowing
measurement of reads from and writes to disk. This also makes it
possible to measure transfers of objects too large to fit in memory;
the previous 20GiB size restriction has been lifted.
- perfdiag now supports a -p flag to choose a parallelism strategy
(slice, fan, or both) when using multiple threads and/or processes
(see the examples after this list).
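For example, disabling sliced downloads, or combining them with the
global -m flag, looks like this (an illustrative sketch; the bucket and
directory names are hypothetical, and gsutil-specific settings live in
the [GSUtil] section of the .boto file):

  # In ~/.boto: setting the threshold to 0 disables sliced downloads.
  [GSUtil]
  sliced_object_download_threshold = 0

  # With sliced downloads left enabled, -m parallelizes across objects
  # while each large object is additionally downloaded in slices
  # (names are hypothetical):
  gsutil -m cp -r gs://your-bucket/big-objects ./local-dir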
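A sample perfdiag invocation using the new test modes and strategy flag
(a sketch; the bucket name is hypothetical, and it assumes the existing
-t option for selecting which tests to run):

  # Measure disk-based read and write throughput, exercising both the
  # slice and fan parallelism strategies:
  gsutil perfdiag -t rthru_file,wthru_file -p both gs://your-test-bucket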
Bug Fixes
---------
- Fixed an IOError that could occur in apitools when acquiring credentials
using multiple threads and/or processes on Google Compute Engine.
- Fixed a bug where rm -r would attempt to delete a nonexistent bucket.
- Fixed a bug where a default object ACL could not be set or changed to
empty (see the sketch after this list).
- Fixed a bug where cached credentials corresponding to an old account could
be used (for example, credentials associated with a prior .boto
configuration file).
- Fixed a bug in apitools for retrieving byte ranges of size 1 (for example,
"cat -r 1-1 ...").
- Fixed a bug that caused the main gsutil process to perform all work,
leaving all gsutil child processes idle.
- Fixed a bug that caused multiple threads not to be used when
multiprocessing was unavailable.
- Fixed a bug that caused rsync to skip files that start with "." when the
-r option was not used.
- Fixed a bug that caused rsync -C to bail out when it failed to read
a source file.
- Fixed a bug where gsutil stat printed unwanted output to stderr.
- Fixed a bug where a parallel composite upload could return a nonzero exit
code even though the upload completed successfully. This occurred if
temporary component deletion triggered a retry but the original request
succeeded.
- Fixed a bug where gsutil would exit with code 0 when running in debug
mode and encountering an unhandled exception.
- Fixed a bug where gsutil would suggest using parallel composite uploads
multiple times.
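A sketch of clearing a default object ACL, related to the fix above
(the bucket name is hypothetical, and this assumes defacl set accepts a
JSON file containing an empty list):

  # empty_acl.json contains just: []
  gsutil defacl set empty_acl.json gs://your-bucket
  # Confirm the default object ACL is now empty:
  gsutil defacl get gs://your-bucket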
Other Changes
-------------
- Bucket removal is now supported even if billing is disabled for that bucket.
- Refactored Windows installs to no longer use any multiprocessing module
functions, as gsutil has never supported multiple processes on Windows.
Multithreading is unaffected and still available on Windows.
- All downloads are now written to a temporary file with a "_.gstmp" suffix
while the download is still in progress.
- Re-hashing of existing bytes when resuming downloads now displays progress.
- Reduced the total number of multiprocessing.Manager processes to two.
- The rm command now correctly counts the number of objects that could
not be removed.
- Increased the default number of retries to match the Google Cloud
Storage SLA. By default, gsutil now retries 23 times with exponential
backoff, with delays capped at 32 seconds, for a total timespan of
roughly 10 minutes (see the note after this list).
- Bucket subdirectory checks now require only a single HTTP call.
Detection of _$folder$ placeholder objects is now eventually
consistent.
- Eliminated two unnecessary HTTP calls when performing uploads via
the cp, mv, or rsync commands.
- Updated documentation for several topics including acl, cache-control,
crcmod, cp, mb, rsync, and subdirs.
- Added a warning about using parallel composite upload with NEARLINE
storage-class buckets.
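To illustrate the retry change above: with exponential backoff, the
delays roughly double from about one second up to the 32-second cap,
and each remaining retry waits about 32 seconds, so 23 retries add up
to on the order of 600 seconds, i.e. roughly 10 minutes (a sketch of
the arithmetic; the actual delays also include a small random
component). The retry count can be tuned in the .boto configuration
file, for example:

  # In ~/.boto (23 is now the default number of retries):
  [Boto]
  num_retries = 23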