I'm a bit confused by NEST's STDP models, and I'm not sure whether the issues I have found are design choices or unintended bugs.
To debug the STDP models, I added print statements in stdp_synapse.h:
double minus_dt;
while ( start != finish )
{
  minus_dt = t_lastspike_ - ( start->t_ + dendritic_delay );
  std::cout << "minus_dt: " << minus_dt << std::endl;
  std::cout << "postspike time: " << start->t_ << std::endl;
  ++start;
  // get_history() should make sure that
  // start->t_ > t_lastspike - dendritic_delay, i.e. minus_dt < 0
  assert( minus_dt < -1.0 * kernel().connection_manager.get_stdp_eps() );
  weight_ = facilitate_( weight_, Kplus_ * std::exp( minus_dt / tau_plus_ ) );
}
I created two parrot neurons and fed spike times to the first one, with the synaptic delay (syn_delay) set to 0.3 ms; a minimal reproduction sketch follows the list below. I found two issues:
1. If the spike train consists of only one spike, no STDP update is performed. I would expect one positive update, since the postsynaptic spike occurs 0.3 ms after the presynaptic spike.
2. minus_dt = -0.6 instead of -0.3. The pre/post spike separation is 0.3 ms, yet the STDP calculation uses 0.6 ms, which confuses me. Indeed, simulating with the presynaptic spike train [10., 20.] leads to the weight change 1.0 -> 35.5443 (lambda=1, mu_plus=mu_minus=0, all other parameters at their defaults). Analytically, the same value is obtained from
(1/100 + np.exp(-0.6/20) - np.exp(-9.4/20)) * 100 = 35.54432652658074
This confirms that minus_dt = -0.6 ms is used and that the LTP interaction belonging to the last pre/post pair is neglected, consistent with issue 1.
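For completeness, here is a minimal sketch of my setup (PyNEST 3.x; I assume the default 1 ms delay between the spike generator and the first parrot neuron, and default stdp_synapse parameters except those listed):

import nest

nest.ResetKernel()

# Spike source drives the first parrot neuron; the STDP synapse
# under test connects the two parrot neurons with delay = 0.3 ms.
sg = nest.Create("spike_generator", params={"spike_times": [10.0, 20.0]})
pre = nest.Create("parrot_neuron")
post = nest.Create("parrot_neuron")

nest.Connect(sg, pre)  # default (static) synapse, default 1 ms delay
nest.Connect(
    pre,
    post,
    syn_spec={
        "synapse_model": "stdp_synapse",
        "weight": 1.0,
        "delay": 0.3,     # the syn_delay mentioned above
        "lambda": 1.0,
        "mu_plus": 0.0,
        "mu_minus": 0.0,
        # tau_plus, alpha, Wmax stay at their defaults (20.0 ms, 1.0, 100.0)
    },
)

nest.Simulate(100.0)

print(nest.GetConnections(pre, post).get("weight"))  # 35.5443 in my runs (NEST 3.3 and 3.8)

With a single spike, spike_times=[10.0], the weight stays at 1.0, which is issue 1 above.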
Are these known issues or new findings? I tested with NEST 3.3 and 3.8, and both showed the same behavior. I also do not understand the point of adding dendritic_delay in minus_dt = t_lastspike_ - ( start->t_ + dendritic_delay ): in this setup the delay is already contained in the postsynaptic spike time start->t_, so it seems to be counted twice (see the sketch below).
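To make the double counting concrete: assuming the default 1 ms device delay, the presynaptic parrot fires at 11.0 ms and the postsynaptic parrot at 11.3 ms, and my reading of the formula is

t_lastspike = 11.0     # presynaptic parrot spike time (ms)
post_t = 11.3          # postsynaptic spike time; the 0.3 ms synaptic delay is already contained here
dendritic_delay = 0.3  # the same 0.3 ms, added once more by the formula

minus_dt = t_lastspike - (post_t + dendritic_delay)
print(minus_dt)  # -0.6, although the causal pre->post separation is only 0.3 ms

so the 0.3 ms delay appears to enter minus_dt twice.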