# Questions and answers about precise neurons

- Q:
**Is it meaningful to compare the precise sequences of spikes generated by the simulations of a recurrent network using different solvers?**

A: No, due to the chaotic nature of the dynamics, minor differences in the computer representation of the spike times lead to completely different spike sequences after a short time.
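
The divergence described above can be illustrated with a toy chaotic system. The logistic map below is only a stand-in for the network dynamics, not a neuron model; the point is that a perturbation of the size of double-precision round-off is amplified to order one within a few dozen iterations:

```python
def max_separation(x0, eps, steps=100, r=4.0):
    """Maximum separation of two logistic-map trajectories whose
    initial conditions differ by eps."""
    a, b, worst = x0, x0 + eps, 0.0
    for _ in range(steps):
        a = r * a * (1.0 - a)   # chaotic map: errors roughly double
        b = r * b * (1.0 - b)   # per step until they saturate
        worst = max(worst, abs(a - b))
    return worst

# A perturbation of one part in 10^15 -- the size of double-precision
# round-off -- grows until the two trajectories are unrelated.
worst = max_separation(0.3, 1e-15)
print(worst)
```

The same mechanism turns the tiniest discrepancy in a represented spike time into a completely different spike sequence.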

- Q:
**Does an event-driven algorithm which determines the precise spike times of a neuron by numerically evaluating a closed-form expression or an iterative procedure like Newton-Raphson lead to machine-independent spike sequences?**

A: No. For example, if machine A uses "double" for the representation of floating point numbers and machine B uses "quad" precision, the spike sequences of the two simulations deviate after a short time. Even with the same representation of floating point values, results rapidly diverge if some library function like exp() is implemented in a slightly different way or the terms of mathematical expressions are reordered.
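
The effect of reordering terms can be shown in two lines; the specific numbers are arbitrary and chosen only to make the round-off visible:

```python
# The same four numbers summed in two mathematically equivalent orders:
a = (1e16 + 1.0 + 1.0) - 1e16   # small terms absorbed by round-off -> 0.0
b = (1e16 - 1e16) + 1.0 + 1.0   # regrouped -> 2.0
print(a, b)
```

In isolation such a discrepancy is harmless; in a chaotic network it is amplified until the spike sequences have nothing in common.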

- Q:
**Given the non-reproducibility of spike sequences in network simulations, is there any meaningful way to talk about the accuracy of a solver?**

A: Yes. Even though network dynamics may be chaotic, for many neuron models relevant to computational neuroscience the single-neuron dynamics is not. Examples are integrate-and-fire models with linear subthreshold dynamics and the AdEx model considered in Hanuschkin (2010). In these cases it is possible to study the accuracy of a solution of the single-neuron dynamics.
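
For linear subthreshold dynamics the single-neuron accuracy is directly measurable. A minimal sketch with illustrative parameters, comparing forward Euler against the exact exponential propagator for dV/dt = -V/tau (the standard exact-integration approach for linear dynamics):

```python
import math

tau, h, n = 10.0, 0.1, 1000        # ms; total simulated time n*h = 100 ms
v_euler = v_prop = 1.0
prop = math.exp(-h / tau)          # exact one-step propagator
for _ in range(n):
    v_euler += h * (-v_euler / tau)   # forward Euler: O(h) global error
    v_prop *= prop                    # exact up to round-off
v_exact = math.exp(-n * h / tau)
err_euler = abs(v_euler - v_exact)    # systematic discretization error
err_prop = abs(v_prop - v_exact)      # only accumulated round-off
print(err_euler, err_prop)
```

Because the reference solution is known in closed form, the error of any candidate solver can be quantified exactly, which is impossible at the network level.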

- Q:
**Why are we investigating the performance of network simulations anyway?**

A: In terms of memory consumption, a single-neuron simulation is no challenge for modern processors: the data fit into the fast cache memory, and memory bandwidth is not an issue. In a network simulation, however, the run time of a simulation algorithm is to a large extent determined by the organization of the data flow between main memory and processor. Solvers may differ considerably in their demands on memory bandwidth. It is therefore essential to compare integration algorithms with respect to the run time of network simulations.

- Q:
**How can the efficiency of a solver be defined if accuracy is only accessible in single neuron simulations and run time is only of interest for network simulations?**

A: Efficiency needs to be defined as the run time of the network simulation required to achieve a given accuracy in a single-neuron simulation whose input statistics correspond to those of the network simulation. This measure was developed and described in Morrison et al. (2007).

- Q:
**Given that network dynamics is chaotic anyway, why is it important that single neuron dynamics is accurately integrated?**

A: Although the network dynamics is chaotic, in some cases mesoscopic measures of network activity can be affected by the quality of the single-neuron solver. For example, Hansel et al. (1998) showed that a measure of network synchrony exhibits a considerable error if the single-neuron dynamics is integrated using a grid-constrained algorithm. Without confidence in the precision of the single-neuron solver we cannot interpret features observed on the network level or control for artifacts.
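
A toy illustration of one such artifact; coincidence counting stands in here for a synchrony measure and is not the measure used by Hansel et al. (1998). Constraining spike times to a grid makes distinct times collide:

```python
# 200 asynchronous spike times; constraining them to a grid of width h
# makes distinct times coincide, inflating a naive coincidence count.
h = 1.0                                         # grid spacing (ms)
spikes = [(0.37 * k) % 50.0 for k in range(200)]
on_grid = [round(t / h) * h for t in spikes]
free_coinc = len(spikes) - len(set(spikes))     # 0: all times distinct
grid_coinc = len(on_grid) - len(set(on_grid))   # large: many collisions
print(free_coinc, grid_coinc)
```

A synchrony measure evaluated on the gridded times would report coincident firing that is purely an artifact of the solver.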

- Q:
**The biological system contains noise and any model is only an accurate description of nature to some degree. Why is it then important to be able to integrate a model with a precision of n digits?**

A: This question is based on a mix-up between a scientific model and a simulation of the model. A simulation should always attempt to solve the equations of a model accurately, so that the scientist can be sure of the predictions of the model. Any noise terms or variability of parameters should be explicit constituents of the model, not of a particular simulation.

- Q:
**Does this mean that we should always simulate using the maximum precision implementations of neuron models?**

A: No, for many scientific problems a limited precision is good enough. The fastest method delivering at least the required precision is the one of choice. In the case of chaotic dynamics there is generally no good reason to consider results produced by a neuron model implementation with high precision as being 'more correct' than those produced by a faster implementation with lower precision, as long as mesoscopic measures of interest remain unchanged. With a more accurate method at hand, the researcher can always carry out control simulations at higher precision to verify that the scientific results are robust with respect to the integration method.

- Q:
**Is there a fundamental difference between event-driven and time-driven algorithms in the reproducibility of the spike sequences of network simulations if the solvers do not miss any spikes?**

A: No. In both cases the sequence of spike times is generally not reproducible by a different implementation or on a different machine because it depends on the details of the numerical implementation and the representation of floating point numbers.

- Q:
**Is there a fundamental difference in the accuracy of an event-driven algorithm and the time-driven algorithm presented in Hanuschkin (2010)?**

A: Yes. In a class of integrate-and-fire neuron models with linear subthreshold dynamics the event-driven methods never miss a spike. The time-driven method presented in the study misses spikes with a low probability.
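
How a time-driven method can miss a spike at all is illustrated by the following toy example (not one of the models of the study): a brief suprathreshold excursion of the membrane potential can fall entirely between two grid points, so a test at the step endpoints never sees it:

```python
import math

def V(t):
    return t * math.exp(1.0 - t)   # smooth excursion, maximum V(1) = 1

theta, h = 0.99, 0.7               # threshold just below the peak
grid_hit = any(V(k * h) >= theta for k in range(4))        # endpoints only
fine_hit = any(V(k * 0.007) >= theta for k in range(400))  # dense sampling
print(grid_hit, fine_hit)
```

An event-driven method for linear subthreshold dynamics can rule such misses out analytically, whereas a time-driven method can only make them improbable.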

- Q:
**Is there a fundamental difference in the accuracy of an event-driven algorithm and the time-driven algorithm presented in Hanuschkin (2010) if the event-driven algorithm is used for a neuron model like the AdEx model, for which a spike prediction expression remains to be discovered?**

A: No. In this case both types of algorithms rely on solvers moving forward with an adaptive step size, which can theoretically miss spikes but in practice do not, due to the explosive dynamics at threshold. As there is no difference in accuracy, the faster algorithm should be chosen.
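
A sketch of why missed spikes are not a practical concern here: even a deliberately crude fixed-step Euler integration of an AdEx-like equation registers the spike, because near threshold the exponential term makes the derivative explode. The adaptation variable w is omitted and the parameters are illustrative, not taken from any particular study:

```python
import math

EL, VT, DT, Vpeak = -70.0, -50.0, 2.0, 0.0   # mV; illustrative values
tau, I, h = 10.0, 25.0, 0.01                 # ms, suprathreshold drive, ms

V, t = EL, 0.0
while V < Vpeak and t < 200.0:
    # AdEx-like membrane equation: leak plus exponential spike current
    dV = (-(V - EL) + DT * math.exp((V - VT) / DT) + I) / tau
    V += h * dV                              # plain forward Euler
    t += h
print(round(t, 2), V >= Vpeak)               # the upswing is never missed
```

Once V approaches threshold, dV grows by orders of magnitude within a step, so any reasonable step-size control is driven straight through the detection condition rather than past it.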

- Q:
**Why is the time-driven method for the AdEx model presented in Hanuschkin (2010) the preferred method if neither an event-driven nor a time-driven algorithm is known which theoretically excludes the loss of spikes?**

A: The time-driven method is more efficient: it delivers the same accuracy in a shorter time because of a lower administrative overhead.

- Q:
**What is the rate at which spikes are missed in a typical large-scale neuronal network simulation of integrate-and-fire model neurons with linear subthreshold dynamics in the balanced state and a spike rate of around 10 Hz?**

A: At a typical parameter setting for a simulation with around 10,000 neurons and 15 million synapses, the total rate at which spikes are missed is up to 5 spikes per second. As 10,000 neurons firing at around 10 Hz emit on the order of 100,000 spikes per second in total, this corresponds to a miss probability on the order of 5 × 10^-5 per spike.

- Q:
**Is the time-driven method presented in Hanuschkin (2010) more general than the event-driven methods discussed?**

A: Yes. The event-driven methods that do not miss any spikes are specific to a particular class of neuron models (current-based models with exponential synapses). In contrast, the time-driven method presented in the study is applicable to any neuron model with a threshold condition, independent of the nature of the subthreshold dynamics.
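
The generality can be sketched as follows. This is a toy version of the idea only, not the algorithm of the study: a leaky integrator driven above threshold, advanced by forward Euler, with the precise crossing time located by linear interpolation between the step endpoints. Any subthreshold update rule could be substituted for `step`:

```python
def step(V, h, tau=10.0, I=15.0):        # any subthreshold solver would do
    return V + h * ((-V + I) / tau)      # leaky integrator, forward Euler

theta, Vreset, h = 10.0, 0.0, 0.1
V, t, spikes = 0.0, 0.0, []
while t < 100.0:
    V_new = step(V, h)
    if V_new >= theta:                   # crossing inside [t, t + h]
        frac = (theta - V) / (V_new - V) # linear interpolation of crossing
        spikes.append(t + frac * h)      # precise, off-grid spike time
        V_new = Vreset
    V, t = V_new, t + h
print(spikes[:3])
```

Only the threshold test and the interpolation refer to the spike mechanism; nothing in the loop depends on the subthreshold dynamics being linear.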

- Q:
**What is the scalability of the proposed solution for large-scale network simulations in comparison to an event-driven scheme?**

A: The scalability of the time-driven method presented in Hanuschkin (2010) is excellent; it is identical to that of the classical time-driven solver constraining spikes to a fixed computation time grid. In contrast, the classical event-driven scheme does not scale well because it requires a central event queue. This can be improved by employing a decoupling technique based on the existence of a minimal delay (Morrison et al. 2005); see Lytton & Hines (2005).