Major rewrite to enable clean extensibility + a new scoping mechanism.
**Scoping**
You can now write scoped code like this:
```python
h = (
    tb.build(x)
    .relu_layer(100)
    .then_with(tf.device, "/gpu:0")(lambda layer:
        layer
        .tanh_layer(50)
        .dropout()
        .softmax_layer(10)
    )
    .tensors()
)
```
Internally it uses the `with` statement to do what you expect. The DSL was also expanded to support these operations, and it looks even more natural:
```python
h = dl.pipe(
    x,
    dl.relu_layer(100),
    { tf.device("/gpu:0"):
        dl.tanh_layer(50)
        .dropout()
        .softmax_layer(10)
    },
    dl.tensors()
)
```
As you can see, scoping in the DSL is done with a dictionary object whose key is the context manager.
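To make the mechanics concrete, here is a minimal sketch of how a `then_with`-style method could be implemented; the names and structure are illustrative, not this library's actual source.

```python
# Hypothetical sketch: wrap the inner builder expression in a `with` block
# so that all ops created inside inherit the device/scope.
class Builder:
    def __init__(self, tensor):
        self.tensor = tensor

    def then_with(self, context_fn, *args, **kwargs):
        def scoped(fn):
            # Open the context manager, then let `fn` keep building inside it.
            with context_fn(*args, **kwargs):
                return fn(self)
        return scoped
```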
**New methods**
* `then_with`: supports scoping (see the usage example after this list)
* `with_device`: shortcut for `then_with(tf.device, ...)`
* `with_variable_scope`: shortcut for `then_with(tf.variable_scope, ...)`
* `linear_layer`: sets `activation_fn` to `None`
* `flatten`: taken from `tflearn`
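As a quick illustration, here is how the new shortcuts might combine, reusing `tb` and `x` from the examples above; the layer sizes are arbitrary.

```python
import tensorflow as tf

h = (
    tb.build(x)
    .flatten()                        # collapse e.g. [batch, h, w, c] -> [batch, h*w*c]
    .with_device("/gpu:0")(lambda layer:
        layer
        .relu_layer(100)
        .linear_layer(10)             # fully_connected with activation_fn=None
    )
    .tensors()
)
```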
**Bugs Fixed**
* `*_layer` methods were getting an unwanted in-between `relu` operation inserted because `fully_connected` has `activation_fn=tf.nn.relu` as its default parameter (illustrated below).
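For reference, the default in question comes from `tf.contrib.layers.fully_connected`, and the fix is to override it explicitly. A minimal (hypothetical) reproduction:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 20])

# Before the fix: fully_connected defaults to activation_fn=tf.nn.relu, so a
# softmax layer built on top silently became linear -> relu -> softmax.
buggy = tf.nn.softmax(tf.contrib.layers.fully_connected(x, 10))

# After the fix: the default activation is overridden, leaving the layer linear.
fixed = tf.nn.softmax(
    tf.contrib.layers.fully_connected(x, 10, activation_fn=None)
)
```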
**Other changes**
* `Builder`, `BuilderTree`, and `Applicative` now inherit from `BuilderBase`, `BuilderTreeBase`, and `ApplicativeBase` respectively. They are also generated dynamically by a function, so each patch gets its own set of these classes and patches do not pollute each other (see the sketch below).
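The pattern looks roughly like the following; the class bodies are placeholders and only the structure is meant to match.

```python
class BuilderBase(object): pass
class BuilderTreeBase(object): pass
class ApplicativeBase(object): pass

def _make_classes():
    """Create a fresh set of subclasses so each patch can extend its own
    copies without leaking methods into other patches."""
    class Builder(BuilderBase): pass
    class BuilderTree(BuilderTreeBase): pass
    class Applicative(ApplicativeBase): pass
    return Builder, BuilderTree, Applicative

# Each patch gets isolated classes:
BuilderA, _, _ = _make_classes()
BuilderB, _, _ = _make_classes()
BuilderA.custom_layer = lambda self: self
assert not hasattr(BuilderB, "custom_layer")   # no cross-pollution
```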