## New Features

### Queue consumer support with Kombu
Baseplate now has first-class support for consuming messages from queue brokers like RabbitMQ using [Kombu](http://kombu.readthedocs.io/en/latest/). The full tracing and diagnostics framework works here as well.
```python
from kombu import Connection, Exchange

from baseplate import queue_consumer


def process_links(context, msg_body, msg):
    print('processing %s' % msg_body)


queue_consumer.consume(
    baseplate=make_baseplate(cfg, app_config),
    exchange=Exchange('reddit_exchange', 'direct'),
    connection=Connection(
        hostname='amqp://guest:guest@reddit.local:5672',
        virtual_host='/',
    ),
    queue_name='process_links_q',
    routing_keys=[
        'link_created',
        'link_deleted',
        'link_updated',
    ],
    handler=process_links,
)
```
[See the documentation for more details](http://baseplate.readthedocs.io/en/latest/baseplate/queue_consumer.html).
## Changes
* The memcached instrumentation now adds details about each call to span tags. This includes key names, key counts, and other settings.
* When preparing CQL statements with the Cassandra integration, Baseplate will now cache the prepared statement for you. This means you can call `prepare()` every time safely.
* The secret fetcher daemon can now be run in a single-shot mode where it exits immediately after fetching secrets. This can be used for situations like cron jobs in Kubernetes.
* When installing as a wheel, the baseplate CLI scripts no longer have a Python version suffix. `baseplate-serve2` -> `baseplate-serve`.
* The Zipkin tracing observer can now ship spans to a sidecar span publisher daemon rather than sending from within the application itself.
* There are now new methods to check experiment names are valid and to get lists of all active experiments.
* Experiments now send exposure events.
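The prepared-statement caching noted above means repeated `prepare()` calls are cheap: Baseplate memoizes the result per query string. The idea can be sketched as a small caching wrapper. This is an illustrative sketch only, not Baseplate's actual implementation; `CachingSession` and `FakeSession` are hypothetical names invented for the example.

```python
class CachingSession:
    """Illustrative sketch of prepared-statement caching (not Baseplate's code)."""

    def __init__(self, session):
        self.session = session
        self._statement_cache = {}

    def prepare(self, query):
        # Only hit the server the first time we see a given query string;
        # afterwards, return the cached prepared statement.
        if query not in self._statement_cache:
            self._statement_cache[query] = self.session.prepare(query)
        return self._statement_cache[query]


class FakeSession:
    """Stand-in for a driver session, purely for demonstration."""

    def __init__(self):
        self.prepare_calls = 0

    def prepare(self, query):
        self.prepare_calls += 1
        return 'prepared({})'.format(query)


session = FakeSession()
caching = CachingSession(session)
first = caching.prepare('SELECT * FROM links WHERE id = ?')
second = caching.prepare('SELECT * FROM links WHERE id = ?')
# The second call returns the cached statement; the server saw only one prepare.
```

Because the cache is keyed on the query string, application code can call `prepare()` inside the request handler every time without re-preparing the statement on the server.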
## Bug Fixes
* Fix a case where connection failures in the thrift connection pool implementation would cause the pool to lose connection slots and eventually be depleted.
* Fix uneven bucketing for r2 experiments that have low bucketing percentages and three total treatments.