Dear NEST community,
I am working with the triplet STDP connection in NEST simulations, and I
am interested in how the weight dynamics change when the spike trains are
altered in specific ways.
To do that, I run a network simulation in NEST using the triplet STDP
rule and save the full spike trains of two synaptically connected neurons
(one pre and one post). I then recreate the weight changes in code outside
of NEST, so that I can manipulate the spike trains and observe what happens
to the synaptic weights.
While trying to validate this approach, however, I find that my code
(outside NEST) generates different weight values than the NEST simulation,
even when using the same spike trains generated during the simulation. I
suspect there is an error in my implementation of the triplet rule. I
thought it could be something with the handling of delays, or the moment at
which the weights are measured in the simulation, but I have had no success
fixing it so far.
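For anyone hitting the same mismatch: a minimal offline replay of the triplet rule (Pfister & Gerstner 2006, all-to-all interactions) can be sketched as below. The parameter names and values here are illustrative placeholders, not NEST's defaults. Note also that NEST applies the synaptic (dendritic) delay to the pre-synaptic spike times before the rule sees them, so replaying the raw recorded spike times without that shift is a common source of disagreement.

```python
import math

# Illustrative parameters (NOT NEST's stdp_triplet defaults).
params = dict(tau_plus=16.8, tau_x=101.0, tau_minus=33.7, tau_y=125.0,
              A2_plus=5e-3, A3_plus=6.2e-3, A2_minus=7e-3, A3_minus=2.3e-4)

def triplet_weight(pre_times, post_times, w0, p=params):
    """Replay pre/post spike trains through the triplet STDP rule."""
    r1 = r2 = 0.0   # fast/slow pre-synaptic traces
    o1 = o2 = 0.0   # fast/slow post-synaptic traces
    w, t_last = w0, 0.0
    # Merge spikes into one time-ordered event list: (time, is_pre).
    events = sorted([(t, True) for t in pre_times] +
                    [(t, False) for t in post_times])
    for t, is_pre in events:
        dt = t - t_last
        # Decay all traces to the current event time.
        r1 *= math.exp(-dt / p['tau_plus'])
        r2 *= math.exp(-dt / p['tau_x'])
        o1 *= math.exp(-dt / p['tau_minus'])
        o2 *= math.exp(-dt / p['tau_y'])
        if is_pre:
            # Depression uses o1 and the slow pre trace r2 *before* its update.
            w -= o1 * (p['A2_minus'] + p['A3_minus'] * r2)
            r1 += 1.0
            r2 += 1.0
        else:
            # Potentiation uses r1 and the slow post trace o2 *before* its update.
            w += r1 * (p['A2_plus'] + p['A3_plus'] * o2)
            o1 += 1.0
            o2 += 1.0
        t_last = t
    return w
```

A pre-before-post pair (e.g. `triplet_weight([0.0], [10.0], 0.5)`) should potentiate, and post-before-pre should depress; checking such simple pairs against NEST before moving to full spike trains usually isolates delay and read-out timing issues quickly.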
I know this is not exactly a NEST issue, but I thought I would give it a
try and ask here on the list. If someone has already worked with the
triplet rule and could point out what is or could be wrong in my
implementation, I would very much appreciate it :)
Thanks!
best,
Júlia
Dear Nest Community,
Has anyone encountered a "bad_alloc" error like the one below, and if so,
do you have any recommendations? It appears to be a VM memory issue, yet
only 21% of the disk space is in use (see below).
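(For what it's worth, std::bad_alloc means the process failed to allocate RAM, which df does not show, since it reports disk usage only. A quick way to check physical memory from inside the VM, assuming a Linux guest where these POSIX sysconf names are exposed:)

```python
import os

# Query physical memory via POSIX sysconf (available on Linux guests).
page = os.sysconf('SC_PAGE_SIZE')                        # bytes per page
total_gib = os.sysconf('SC_PHYS_PAGES') * page / 2**30   # installed RAM
avail_gib = os.sysconf('SC_AVPHYS_PAGES') * page / 2**30 # currently free RAM
print(f"RAM: {avail_gib:.1f} GiB available of {total_gib:.1f} GiB total")
```

If available RAM shrinks toward zero as the simulated time grows, a likely culprit is recording devices accumulating events in memory; writing recordings to file or increasing the VM's base memory would be the usual remedies.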
My simulation completes successfully for 200,000 ms, but errors out at 98%
complete for 230,000 ms, at 75% for 300,000 ms, and at 56% for 400,000 ms.
I'm running NEST 2.18.0 in VirtualBox under Lubuntu 18.04 (see the image of
the settings below).
Thank you for any suggestions.
Best Regards,
--Allen
**********************************************
**** Error Message *****
>> # SIMULATION
>> nest.Simulate(300000)
Nov 21 17:10:28 NodeManager::prepare_nodes [Info]:
Preparing 684 nodes for simulation.
Nov 21 17:10:28 MUSICManager::enter_runtime [Info]:
Entering MUSIC runtime with tick = 1 ms
Nov 21 17:10:28 SimulationManager::start_updating_ [Info]:
Number of local nodes: 684
Simulation time (ms): 300000
Number of OpenMP threads: 2
Number of MPI processes: 1
75 %: network time: 223698.0 ms, realtime factor: 0.6277
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
  File "/home/nest/work/nest-install/lib/python3.6/site-packages/nest/ll_api.py", line 246, in stack_checker_func
    return f(*args, **kwargs)
  File "/home/nest/work/nest-install/lib/python3.6/site-packages/nest/lib/hl_api_simulation.py", line 66, in Simulate
    sr('ms Simulate')
  File "/home/nest/work/nest-install/lib/python3.6/site-packages/nest/ll_api.py", line 132, in catching_sli_run
    raise exceptionCls(commandname, message)
nest.ll_api.std::bad_alloc: ('std::bad_alloc in Simulate_d: C++ exception: std::bad_alloc', 'std::bad_alloc', <SLILiteral: Simulate_d>, ': C++ exception: std::bad_alloc')
********************************************
**** Folder Space on VirtualBox after Error ****
nest@nestvm:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.2G 0 5.2G 0% /dev
tmpfs 1.1G 1.1M 1.1G 1% /run
/dev/sda1 99G 20G 76G 21% /
tmpfs 5.2G 0 5.2G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 5.2G 0 5.2G 0% /sys/fs/cgroup
SharedNest2 917G 447G 470G 49% /media/sf_SharedNest2
tmpfs 1.1G 16K 1.1G 1% /run/user/1000
/dev/sr0 74M 74M 0 100% /media/nest/VBox_GAs_6.0.10
***********************************
**** VirtualBox Settings *******
[image: image.png]
Hello,
my name is Arturo, and I think I found a bug when calling PlotTargets with
the mask plot option and an azimuth angle > 0.
Below is an easy-to-follow example showing that although the targets are
plotted correctly, the mask is not.
Please correct me if I am doing anything wrong. I also attach an image of
the results.
Thank you, Arturo.
Code:
import nest.topology as topo

l = topo.CreateLayer({'rows': 21, 'columns': 21,
                      'elements': 'iaf_psc_alpha'})
conndict = {'connection_type': 'divergent',
            'mask': {'rectangular': {'lower_left': [-0.3, -0.12],
                                     'upper_right': [-0.05, 0.12],
                                     'azimuth_angle': 45.}},
            'kernel': 1.0}
topo.ConnectLayers(l, l, conndict)
fig = topo.PlotLayer(l, nodesize=40)
ctr = topo.FindCenterElement(l)
topo.PlotTargets(ctr, l, fig=fig,
                 mask=conndict['mask'],
                 src_size=250, tgt_color='red', tgt_size=20)
[image: Captura.PNG]
Dear NEST Developers!
Thanks to Lekshmi's great work, we can now use GitHub checks for continuous integration testing. Please merge master into your open PRs/branches to make sure your contributions run the GitHub checks. We will run GitHub checks and Travis checks in parallel for now.
Best
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
Hi!
Just a question; I may have overlooked something. If I use NEST's help to get information about a neuron model, I get the message that there is no help:
In [28]: nest.help('iaf_psc_alpha')
Sorry, there is no help for 'iaf_psc_alpha'.
But we have all the nice model documentation. Have I overlooked something or do we need to upgrade help()?
Best,
Hans Ekkehard
Dear Johan,
You do not tell us what the “result” number is that you print out. But I notice that the value you get when running with 10 MPI processes on the PDC is about 10 times smaller than the value you see with one process on your PC.
I’ll venture a guess and assume you have a network of N neurons which you connect to one spike detector. You simulate, read out the number of spikes and convert it into a firing rate, dividing number of spikes by N * simtime.
In an MPI-parallel simulation, the spike detector on each MPI process only gets the spikes of neurons simulated on that MPI process. If you still divide by the full number of neurons to compute the rate, one would expect exactly the behavior you observe.
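To illustrate the arithmetic (a toy sketch with made-up numbers, not Johan's actual script): with 10 ranks, each rank's spike detector records roughly one tenth of the spikes, so dividing a local count by the full N gives a rate about 10 times too small.

```python
def firing_rate(n_spikes, n_neurons, simtime_ms):
    """Mean rate in spikes/s per neuron."""
    return n_spikes / (n_neurons * simtime_ms / 1000.0)

# Toy numbers: 1000 neurons, 4000 spikes in 1 s, split over 10 MPI ranks.
n_total, n_ranks, total_spikes, simtime_ms = 1000, 10, 4000, 1000.0
local_spikes = total_spikes // n_ranks        # what ONE rank's detector sees

wrong = firing_rate(local_spikes, n_total, simtime_ms)             # 0.4 Hz
right = firing_rate(local_spikes, n_total // n_ranks, simtime_ms)  # 4.0 Hz
```

The robust fixes are either to divide by the number of locally simulated neurons, or to sum the spike counts across ranks (e.g. with an MPI allreduce) before dividing by the full N.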
In the future, please provide more details about what you are doing, so we can provide more pointed advice.
Best,
Hans Ekkehard
On 19/02/2021, 22:20, "JOHAN LILJEFORS" <JOHAN@liljefors.eu> wrote:
Dear Nest users,
I am fairly new to NEST, having used it only for a few months, and it has been working without any problems on my PC running Linux. Recently I was given access to a PDC cluster where I have been submitting jobs, but I am encountering a problem with my model. I'm running Python 3.7.3 and NEST 2.18.
On both my PC and on the PDC, I run the following python script:
------------------------------
import nest
for counter in range(0, 10):
    nest.ResetKernel()
    ___run___script
    print(result)
------------------------------
I execute this with "srun -n 1 python3 run.py".
This outputs 10 numbers, ranging from 0.35 to 0.4.
On the PDC, I run the same script but without the for loop:
------------------------------
import nest
nest.ResetKernel()
___run___script
print(result)
------------------------------
I execute this with "srun -n 10 python3 run.py".
This outputs 10 numbers, but this time much smaller, around 0.04.
I am not doing any file operations, nor any MPI communication between the tasks, and I am genuinely confused as to how submitting multiple tasks can yield a different result than a single task.
Has anyone encountered anything similar?
Regards
Johan Liljefors
Dear Colleagues,
The NEST Initiative is excited to invite everyone interested in Neural
Simulation Technology and the NEST Simulator to the NEST Conference 2021!
The NEST Conference provides an opportunity for the NEST community to
meet, exchange success stories, swap advice, and learn about current
developments in and around NEST spiking network simulation and
its applications.
This year's conference will again take place as a *virtual conference* on
Monday/Tuesday, *28/29 June 2021*, followed by a virtual NEST User
Hackathon until Friday, 2 July, which offers the opportunity to deep-dive
into your own code with expert developers at your fingertips.
For more information, please visit the conference website:
https://nest-simulator.org/conference
We are looking forward to seeing you all in June!
Hans Ekkehard Plesser, Dennis Terhorst & Anne Elfgen
Dear NEST Users,
I'm trying to get NEST up and running on Piz Daint.
I managed to compile and install it fully, with Python bindings, MPI, GSL,
and Boost.
However, simulations start but get killed, returning a std::bad_alloc.
make installcheck also fails.
Has anyone managed to install it on this HPC system?
Best regards,
Sergio MG Solinas
Dip. di Scienze Biomediche
Università di Sassari
Viale San Pietro 23
07100 - Sassari
The NEURON School <https://www.neuronschool.org/>