Dear Josh,
It looks like both Norse and Rockpool support
stochastic-gradient-descent-based learning, using the automatic
differentiation provided by their backend libraries (e.g., PyTorch,
JAX), presumably augmented with various tricks to make this work with
the temporal and spiking nature of LIF-like models.
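To make that concrete, here is a minimal sketch of the surrogate-gradient
idea in plain PyTorch. It is not Norse's or Rockpool's actual API; the spike
function, network size and constants are all just illustrative.

import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # SuperSpike-style fast-sigmoid surrogate derivative
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

spike = SpikeFn.apply

def run_lif(x, w, beta=0.9, threshold=1.0):
    # x has shape (time, batch, n_in); unroll a single LIF layer over time
    v = torch.zeros(x.shape[1], w.shape[1])
    spikes = []
    for t in range(x.shape[0]):
        v = beta * v + x[t] @ w        # leaky integration of the input current
        s = spike(v - threshold)       # hard threshold forward, surrogate backward
        v = v - s * threshold          # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

# Toy objective: push the mean firing rate of 3 output neurons toward 0.2
w = (0.5 * torch.randn(5, 3)).requires_grad_()
opt = torch.optim.SGD([w], lr=0.1)
x = (torch.rand(20, 1, 5) < 0.3).float()   # (time, batch, n_in) random spike trains
loss = ((run_lif(x, w).mean(0) - 0.2) ** 2).mean()
loss.backward()                            # gradients flow through the surrogate
opt.step()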
BindsNET, in contrast, seems to focus on implementing more
traditional learning paradigms that do not necessarily rely on
(back)propagating errors, such as reward-modulated STDP.
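For comparison, here is a rough sketch of pair-based STDP with exponential
traces, with a scalar reward factor as the simplest form of reward
modulation. This is plain NumPy and not BindsNET's actual API; the parameter
values are arbitrary.

import numpy as np

def stdp_step(w, pre_spikes, post_spikes, x_pre, x_post,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0, reward=1.0):
    # w: (n_pre, n_post) weights; pre_spikes/post_spikes: 0/1 vectors
    # x_pre/x_post: exponentially decaying pre-/postsynaptic traces
    x_pre = x_pre * np.exp(-dt / tau) + pre_spikes
    x_post = x_post * np.exp(-dt / tau) + post_spikes
    # potentiate when a post spike follows recent pre activity,
    # depress when a pre spike follows recent post activity
    dw = a_plus * np.outer(x_pre, post_spikes) - a_minus * np.outer(pre_spikes, x_post)
    # reward = 1.0 gives plain STDP; a scalar reward signal modulates the update
    w = np.clip(w + reward * dw, 0.0, 1.0)
    return w, x_pre, x_post

# one step for a toy group of 4 presynaptic and 2 postsynaptic neurons
w = np.full((4, 2), 0.5)
x_pre, x_post = np.zeros(4), np.zeros(2)
pre = np.array([1.0, 0.0, 0.0, 1.0])
post = np.array([0.0, 1.0])
w, x_pre, x_post = stdp_step(w, pre, post, x_pre, x_post)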
Best,
Jakob
On 6/15/20 5:36 PM, Joshua Stern wrote:
> I recently read a potentially relevant review on this subject from
> earlier this year: "A review of learning in biologically plausible
> spiking neural networks"
> <https://www.sciencedirect.com/science/article/abs/pii/S0893608019303181>.
>
>
> Does anyone have thoughts on how to think about the differences (esp. in
> terms of learning) between BindsNet, Norse, and Rockpool?
>
> Best wishes,
> Josh
>
>
>
>
>
> On Mon, Jun 15, 2020 at 11:18 AM Benedikt S. Vogler
> <benedikt.s.vogler@tum.de> wrote:
>
> Dear Sonja,
>
> NEST is a spiking neural network simulator, not a machine
> learning library.
> As far as I know, there is no learning algorithm for SNNs that is as
> established as the best practices for ANNs are.
> Here is a brief overview of some methods:
> There is STDP for correlation learning; reward-based STDP is a
> reinforcement learning approach that is still being researched. Another
> option is to train an ANN and then convert it to an SNN.
> The spiking activation function of an SNN is not differentiable, so
> backprop does not transfer easily to SNNs. There are methods like
> BPTT and e-prop to make backprop work. There might be more methods
> in the area of backprop adaptations; I am not an expert on this.
> SNNs can also be used for reservoir computing, which is yet another
> approach (https://gitlab.com/aiCTX/rockpool).
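(As an illustration of the reservoir idea mentioned above: the recurrent
spiking network stays fixed and only a linear readout is fitted. Below is a
bare-bones NumPy sketch, not Rockpool's actual API, with arbitrary sizes and
constants.)

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 3, 100, 500
w_in = rng.normal(0.0, 1.0, (n_in, n_res))                     # fixed input weights
w_rec = rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))  # fixed recurrent weights

u = rng.random((T, n_in))                    # some input signal
target = np.sin(np.arange(T) / 20.0)         # something to regress onto

v = np.zeros(n_res)
trace = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    v = 0.9 * v + u[t] @ w_in + trace @ w_rec   # leaky membrane + recurrent drive
    s = (v > 1.0).astype(float)                 # spikes
    v = v * (1.0 - s)                           # reset the units that spiked
    trace = 0.95 * trace + s                    # low-pass filtered spike trains
    states[t] = trace

# Ridge-regression readout: the only part of the system that is trained
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ target)
prediction = states @ w_out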
> I am not sure which learning algorithms Norse uses; they mention
> policy gradients.
>
> Kind regards,
> Benedikt S. Vogler
>
>
> > On 15.06.2020 at 14:24, s.kraemer96@gmx.net wrote:
> >
> > Dear all,
> > I'm writing a master's thesis on spiking neural networks and how
> > transparent they are. For that I need to implement an SNN and
> > train it. I started with Brian, but that is much too complex and I
> > don't need anything special. So I decided to use PyNEST. I did all
> > the tutorials, but I'm missing a tutorial on how to train the network. I
> > don't know how to feed a dataset into the model to train it. I haven't
> > found anything on this topic. So my questions are:
> > 1. Can PyNEST set up an SNN and train it on data, and
> > if not, is there another simulator that can do this?
> > 2. How do I do it? Is there anything I missed reading, or can
> > someone send me an example? This would be very helpful.
> > Thanks for your help.
> >
> > Best,
> > Sonja