Features:
- Policy networks are now defined as functions that map sequences of observations to sequences of actions. As a result, feed-forward policies are faster, and memory-based agents are easier to implement. Previously, networks had to be defined as `RNNCell`s. See the first sketch after this list.
- All functions of the agent interface now receive a tensor of agent indices. This adds the flexibility to process observations in smaller batches. Previously, `perform()` and `experience()` were defined on data from all environments. See the second sketch after this list.
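A minimal sketch of the first change, using NumPy and hypothetical shapes and parameter names (none of these identifiers come from the library): the policy is just a function over a `(batch, time, observation_size)` tensor, so a feed-forward policy can process every time step in one pass instead of being stepped one step at a time as an `RNNCell`.

```python
import numpy as np

# Hypothetical sizes and parameters; in the real agent these would be learned variables.
observation_size, hidden_size, action_size = 8, 32, 2
weights_1 = np.random.randn(observation_size, hidden_size) * 0.1
bias_1 = np.zeros(hidden_size)
weights_2 = np.random.randn(hidden_size, action_size) * 0.1
bias_2 = np.zeros(action_size)

def feed_forward_policy(observations):
  # observations: float array of shape (batch, time, observation_size).
  # All time steps are processed at once, which is what makes a
  # feed-forward policy faster than stepping a cell per time step.
  batch, time, _ = observations.shape
  flat = observations.reshape(batch * time, observation_size)
  hidden = np.tanh(flat @ weights_1 + bias_1)
  actions = hidden @ weights_2 + bias_2
  return actions.reshape(batch, time, action_size)

actions = feed_forward_policy(np.zeros((4, 10, observation_size)))
print(actions.shape)  # (4, 10, 2)
```

A memory-based policy fits the same signature by carrying its state across the time dimension inside the function, rather than conforming to the `RNNCell` step interface.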
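A minimal sketch of the second change, again with hypothetical class and method bodies (only the method names `perform()` and `experience()` are taken from the note above): each call receives the indices of the environments it should act on, so observations can be processed in smaller batches rather than always covering every environment.

```python
import numpy as np

class Agent:
  """Hypothetical agent whose interface methods receive agent indices."""

  def perform(self, agent_indices, observations):
    # observations: shape (len(agent_indices), observation_size).
    # Only the selected environments produce actions in this call.
    return np.zeros((len(agent_indices), 2))  # placeholder actions

  def experience(self, agent_indices, observations, actions, rewards):
    # Store transitions only for the selected environments.
    pass

agent = Agent()
sub_batch = np.array([0, 1, 2])          # act on a subset of the environments
observations = np.zeros((len(sub_batch), 8))
actions = agent.perform(sub_batch, observations)
print(actions.shape)  # (3, 2)
```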