```python
dV = lambda V, t, w, Iext: V - V * V * V / 3 - w + Iext
dw = lambda w, t, V: (V + a - b * w) / tau
fhn = bp.odeint(bp.JointEq([dV, dw]), method='rk4', dt=0.1)

# differential integrator runner
runner = bp.IntegratorRunner(fhn, monitors=['V', 'w'], inits=[1., 1.])

# run 1
Iext, duration = bp.inputs.section_input([0., 1., 0.5], [200, 200, 200], return_length=True)
runner.run(duration, dyn_args=dict(Iext=Iext))
bp.visualize.line_plot(runner.mon.ts, runner.mon['V'], legend='V')

# run 2
Iext, duration = bp.inputs.section_input([0.5], [200], return_length=True)
runner.run(duration, dyn_args=dict(Iext=Iext))
bp.visualize.line_plot(runner.mon.ts, runner.mon['V'], legend='V-run2', show=True)
```
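For intuition about what ``method='rk4'`` asks the integrator to do, here is a minimal pure-Python sketch of a classical fourth-order Runge-Kutta step applied to the same FitzHugh-Nagumo equations (the parameter values ``a``, ``b``, ``tau`` are illustrative assumptions; this is not the BrainPy implementation):

```python
# FitzHugh-Nagumo right-hand sides, with assumed parameter values
a, b, tau = 0.7, 0.8, 12.5
dV = lambda V, w, Iext: V - V ** 3 / 3 - w + Iext
dw = lambda V, w: (V + a - b * w) / tau

def f(V, w, Iext):
    """Joint right-hand side of the (V, w) system."""
    return dV(V, w, Iext), dw(V, w)

def rk4_step(V, w, Iext, dt=0.1):
    """One classical RK4 step: four slope evaluations, weighted average."""
    k1V, k1w = f(V, w, Iext)
    k2V, k2w = f(V + 0.5 * dt * k1V, w + 0.5 * dt * k1w, Iext)
    k3V, k3w = f(V + 0.5 * dt * k2V, w + 0.5 * dt * k2w, Iext)
    k4V, k4w = f(V + dt * k3V, w + dt * k3w, Iext)
    return (V + dt / 6 * (k1V + 2 * k2V + 2 * k3V + k4V),
            w + dt / 6 * (k1w + 2 * k2w + 2 * k3w + k4w))

V, w = 1.0, 1.0
for _ in range(2000):   # 200 time units at dt = 0.1
    V, w = rk4_step(V, w, Iext=0.5)
```

With a constant drive the trajectory settles onto the familiar FitzHugh-Nagumo limit cycle, which is what the monitored ``V`` trace above shows.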
Enable calling a customized function during the fitting of ``brainpy.BPTrainer``.
This customized function (provided through ``fun_after_report``) is useful for saving a checkpoint during training. For instance,
```python
class CheckPoint:
  def __init__(self, path='path/to/directory/'):
    self.max_acc = 0.
    self.path = path

  def __call__(self, idx, metrics, phase):
    if phase == 'test' and metrics['acc'] > self.max_acc:
      self.max_acc = metrics['acc']
      bp.checkpoints.save(self.path, net.state_dict(), idx)

trainer = bp.BPTT()
trainer.fit(..., fun_after_report=CheckPoint())
```
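The callback contract (called with ``idx``, ``metrics``, ``phase``) can be exercised without a real trainer; the following stand-in sketch (all names other than the call signature are hypothetical) shows how a best-metric tracker behaves:

```python
class BestTracker:
    """Mimics the CheckPoint class above, but only records the best accuracy."""
    def __init__(self):
        self.max_acc = 0.
        self.saved_at = None

    def __call__(self, idx, metrics, phase):
        # only react to test-phase reports that improve on the best accuracy
        if phase == 'test' and metrics['acc'] > self.max_acc:
            self.max_acc = metrics['acc']
            self.saved_at = idx

cb = BestTracker()
# a stand-in for the reporting loop a trainer would run
for idx, acc in enumerate([0.5, 0.8, 0.7]):
    cb(idx, {'acc': acc}, phase='test')
```

After the loop, the tracker holds the best accuracy (0.8) and the report index at which it occurred (1), which is exactly the moment the ``CheckPoint`` class above would write to disk.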
Enable data with ``data_first_axis`` format when predicting or fitting in a ``brainpy.DSRunner`` and ``brainpy.DSTrainer``.
Previous versions of BrainPy only support data with the batch dimension at the first axis. Now, ``brainpy.DSRunner`` and ``brainpy.DSTrainer`` can also handle data with the time dimension at the first axis. This can be set through ``data_first_axis='T'`` when initializing a runner or trainer.
```python
runner = bp.DSRunner(..., data_first_axis='T')
trainer = bp.DSTrainer(..., data_first_axis='T')
```
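As a sketch of what the two layouts mean (plain NumPy, no BrainPy involved): batch-first data of shape ``(B, T, N)`` and time-first data of shape ``(T, B, N)`` differ only by a transpose of the first two axes:

```python
import numpy as np

batch, time, feat = 4, 100, 8
x_batch_first = np.zeros((batch, time, feat))   # default 'B' layout: (B, T, N)

# the same data in time-first layout, as expected with data_first_axis='T'
x_time_first = np.transpose(x_batch_first, (1, 0, 2))   # -> (T, B, N)
```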
4. Utility in BrainPy
``brainpy.encoding`` module for encoding rate values into spike trains
Currently, we support
- ``brainpy.encoding.LatencyEncoder``
- ``brainpy.encoding.PoissonEncoder``
- ``brainpy.encoding.WeightedPhaseEncoder``
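As an illustration of rate-to-spike encoding, here is a minimal Poisson-style encoder sketch in plain NumPy (a hypothetical helper, not the ``brainpy.encoding.PoissonEncoder`` implementation): each time step draws a Bernoulli spike with probability proportional to the rate.

```python
import numpy as np

def poisson_encode(rates, n_steps, dt=1.0, seed=0):
    """Encode an array of firing rates (spikes per time unit) into a
    binary spike train of shape (n_steps, *rates.shape)."""
    rng = np.random.default_rng(seed)
    p = np.clip(np.asarray(rates) * dt, 0.0, 1.0)   # spike probability per step
    return (rng.random((n_steps,) + p.shape) < p).astype(np.int8)

# three neurons with rates 0, 0.5, and 1 spikes per step
spikes = poisson_encode(np.array([0.0, 0.5, 1.0]), n_steps=1000)
```

A rate of 0 yields no spikes, a rate of 1 yields a spike every step, and intermediate rates yield spike counts close to ``rate * n_steps``.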
``brainpy.checkpoints`` module for model state serialization.
This version of BrainPy supports saving a checkpoint of the model to disk. Inspired by the Flax API, we provide the following checkpoint APIs:
- ``brainpy.checkpoints.save()`` for saving a checkpoint of the model.
- ``brainpy.checkpoints.multiprocess_save()`` for saving a checkpoint of the model in a multi-process environment.
- ``brainpy.checkpoints.load()`` for loading the last or best checkpoint from the given checkpoint path.
- ``brainpy.checkpoints.load_latest()`` for retrieving the path of the latest checkpoint in a directory.
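The intent of the save/load pair can be sketched in plain Python (hypothetical helper names and file layout; the real ``brainpy.checkpoints`` functions offer richer behavior):

```python
import os
import pickle
import tempfile

def save_checkpoint(directory, state, step):
    """Write one pickled checkpoint file, named by its training step."""
    path = os.path.join(directory, f'checkpoint_{step:08d}.pkl')
    with open(path, 'wb') as f:
        pickle.dump(state, f)
    return path

def load_latest(directory):
    """Return the state stored in the checkpoint with the highest step.

    Zero-padded step numbers make lexicographic sort equal to numeric sort.
    """
    files = sorted(f for f in os.listdir(directory) if f.startswith('checkpoint_'))
    with open(os.path.join(directory, files[-1]), 'rb') as f:
        return pickle.load(f)

with tempfile.TemporaryDirectory() as d:
    save_checkpoint(d, {'step': 1, 'w': [0.1]}, step=1)
    save_checkpoint(d, {'step': 2, 'w': [0.2]}, step=2)
    latest = load_latest(d)
```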
Deprecations
1. Deprecations in the running supports of BrainPy
``func_monitors`` is no longer supported in all ``brainpy.Runner`` subclasses.
Its support will be removed in version 2.4.0. Instead, monitoring with a dict of callable functions can be set in ``monitors``. For example,
```python
# old version
runner = bp.DSRunner(model,
                     monitors={'sps': model.spike, 'vs': model.V},
                     func_monitors={'sp10': model.spike[10]})
```
```python
# new version
runner = bp.DSRunner(model,
                     monitors={'sps': model.spike,
                               'vs': model.V,
                               'sp10': model.spike[10]})
```
``func_inputs`` is no longer supported in all ``brainpy.Runner`` subclasses.
Instead, inputs given as a callable function should be provided through ``inputs``.
```python
# old version
net = EINet()

def f_input(tdi):
  net.E.input += 10.

runner = bp.DSRunner(net, func_inputs=f_input, inputs=('I.input', 10.))
```
```python
# new version
def f_input(tdi):
  net.E.input += 10.
  net.I.input += 10.

runner = bp.DSRunner(net, inputs=f_input)
```
``inputs_are_batching`` is deprecated.
``inputs_are_batching`` is deprecated in ``predict()``/``run()`` of all ``brainpy.Runner`` subclasses.
``args`` and ``dyn_args`` are now deprecated in ``IntegratorRunner``.
Instead, users should specify ``args`` and ``dyn_args`` when calling the ``IntegratorRunner.run()`` function.
```python
dV = lambda V, t, w, I: V - V * V * V / 3 - w + I
dw = lambda w, t, V, a, b: (V + a - b * w) / 12.5
integral = bp.odeint(bp.JointEq([dV, dw]), method='exp_auto')

# old version
runner = bp.IntegratorRunner(
  integral,
  monitors=['V', 'w'],
  inits={'V': bm.random.rand(10), 'w': bm.random.normal(size=10)},
  args={'a': 1., 'b': 1.},  # CHANGE
  dyn_args={'I': bp.inputs.ramp_input(0, 4, 100)},  # CHANGE
)
runner.run(100.)
```
```python
# new version
runner = bp.IntegratorRunner(
  integral,
  monitors=['V', 'w'],
  inits={'V': bm.random.rand(10), 'w': bm.random.normal(size=10)},
)
runner.run(100.,
           args={'a': 1., 'b': 1.},
           dyn_args={'I': bp.inputs.ramp_input(0, 4, 100)})
```
2. Deprecations in ``brainpy.math`` module
``ditype()`` and ``dftype()`` are deprecated.
``brainpy.math.ditype()`` and ``brainpy.math.dftype()`` are deprecated. Use ``brainpy.math.int_`` and ``brainpy.math.float_`` instead.
``brainpy.modes`` module is now moved into ``brainpy.math``
The correspondences are listed as follows:
- ``brainpy.modes.Mode`` => ``brainpy.math.Mode``
- ``brainpy.modes.NormalMode`` => ``brainpy.math.NonBatchingMode``
- ``brainpy.modes.BatchingMode`` => ``brainpy.math.BatchingMode``
- ``brainpy.modes.TrainingMode`` => ``brainpy.math.TrainingMode``
- ``brainpy.modes.normal`` => ``brainpy.math.nonbatching_mode``
- ``brainpy.modes.batching`` => ``brainpy.math.batching_mode``
- ``brainpy.modes.training`` => ``brainpy.math.training_mode``
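Since the table is a mechanical rename, migration can be scripted; the following sketch (a hypothetical helper, not part of BrainPy) rewrites the deprecated names in a source string:

```python
# mapping of deprecated names to their new locations (from the table above)
MODE_RENAMES = {
    'brainpy.modes.Mode': 'brainpy.math.Mode',
    'brainpy.modes.NormalMode': 'brainpy.math.NonBatchingMode',
    'brainpy.modes.BatchingMode': 'brainpy.math.BatchingMode',
    'brainpy.modes.TrainingMode': 'brainpy.math.TrainingMode',
    'brainpy.modes.normal': 'brainpy.math.nonbatching_mode',
    'brainpy.modes.batching': 'brainpy.math.batching_mode',
    'brainpy.modes.training': 'brainpy.math.training_mode',
}

def migrate(source: str) -> str:
    """Rewrite deprecated brainpy.modes references in a code string."""
    for old, new in MODE_RENAMES.items():
        source = source.replace(old, new)
    return source

code = migrate('mode = brainpy.modes.training')
```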