Hi Itaru, Hi Juan,
you are both right, this should be made clearer. I created issue #1581 to track the documentation side.
Best,
Dennis
Then, this should be documented clearly for the Python interface users; otherwise, people tend to fall into this trap.

Itaru.

On Wed, Apr 29, 2020 at 3:00 PM Hans Ekkehard Plesser <hans.ekkehard.plesser@nmbu.no> wrote:

Itaru,

Until we have done a very careful assessment of the interactions between NEST and multiprocessing, I would say NEVER. We have a few test cases and maybe also examples that use mpi4py, but only to collect results from MPI-parallel simulations for analysis.

When running NEST/PyNEST with MPI, it is also essential that all MPI ranks make *identical* calls to nest for NEST to work correctly.

Best,
Hans Ekkehard

On 29 Apr 2020, at 07:46, Itaru Kitayama <itaru.kitayama@gmail.com> wrote:

Hans,

In what scenarios is Python's multiprocessing API okay to use with NEST? I am still using SLI as an interface to NEST, so I am just curious.

Itaru.

On Wed, Apr 29, 2020 at 2:32 PM Hans Ekkehard Plesser <hans.ekkehard.plesser@nmbu.no> wrote:

Hi Juan,

Creating connections in complex networks can take time. Sometimes it is possible to improve connection times by tweaking the way the network is constructed.

Given that you have quite a large network, I assume you have a considerable number of layers and thus also quite a large number of calls to ConnectLayers(). In that case, the forthcoming NEST 3 will most likely reduce construction times noticeably, because layers are passed to Connect in a much more efficient way.

We also have not yet fully thread-parallelised connection construction for "divergent" connections, in contrast to "convergent" ones. We could look into that if switching between "convergent" and "divergent" gives you noticeable improvements in speed.

Please DO NOT USE MULTIPROCESSING with NEST. NEST internally parallelizes network construction and maintains internal data structures in this process. Running several ConnectLayers() calls simultaneously will lead to unpredictable results.
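The "identical calls on all ranks" rule above can be illustrated with a small plain-Python sketch (no NEST or MPI required; build_network and the simulated ranks are purely illustrative stand-ins for a PyNEST script):

```python
# Sketch of the MPI rule stated above: every rank must issue *identical*
# NEST calls during network construction. The "ranks" here are simulated
# with a loop; build_network() stands in for Create/ConnectLayers calls.

def build_network(rank, call_log):
    # Correct pattern: construction calls do not depend on the rank.
    call_log.append("Create(layer1)")
    call_log.append("Create(layer2)")
    call_log.append("ConnectLayers(layer1, layer2)")
    # Rank-dependent work belongs outside construction, e.g. analysis:
    if rank == 0:
        pass  # only rank 0 might write summary files after the simulation

logs = []
for rank in range(4):  # pretend we have 4 MPI ranks
    log = []
    build_network(rank, log)
    logs.append(log)

# Every simulated rank issued exactly the same construction sequence.
print(all(log == logs[0] for log in logs))  # True
```

A rank-dependent `if` around a Create or Connect call would break this invariant and, in a real MPI run, leave the ranks with inconsistent network state.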
Best,
Hans Ekkehard

On 28 Apr 2020, at 20:27, Juan Manuel Vicente <juanma.v82@gmail.com> wrote:

Hi all,

I'm trying to understand some inner workings of NEST. Right now I'm running simulations with close to half a million elements, using mpirun on a cluster with 25 nodes. The problem I am having is that the "setup" (layer creation and connections) phase takes close to 8 min, while the simulation itself only takes 1 min. So I tried to use Python's multiprocessing package to speed it up, with the following code:

    nest.ResetKernel()
    nest.SetKernelStatus({"local_num_threads": 1})
    # ...
    connections = [
        (layer1, layer1, conn_ee_dict, 1),
        (layer1, layer2, conn_ee_dict, 2),
        (layer2, layer2, conn_ee_dict, 3),
        (layer2, layer1, conn_ee_dict, 4),
    ]

    # Process the connections.
    def parallel_topology_connect(parameters):
        [pre, post, projection, number] = parameters
        print(f"Connection number: {number}")
        topology.ConnectLayers(pre, post, projection)

    pool = multiprocessing.Pool(processes=4)
    pool.map(parallel_topology_connect, connections)

The above example takes around 0.9 s, but if the last two lines are changed to a sequential call, it takes 2.1 s:

    for [pre, post, projection, number] in connections:
        print(f"Connection number: {number}")
        topology.ConnectLayers(pre, post, projection)

So far the multiprocessing seems to work great; the problem comes when the "local_num_threads" parameter is changed from 1 to 2 or more (on the cluster it could be 32). The code gets stuck in topology.ConnectLayers without any error; after a while I just stopped it. I also realised that topology.ConnectLayers spawns only one thread to connect layers even though local_num_threads is set to more than one.

Any idea what is going on? Thanks in advance,
Juan Manuel

PS: The full example code is attached (60 lines of code). The local_num_threads and multiprocessing_flag variables change the behavior of the code.
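A small plain-Python illustration (no NEST required; fake_connect merely stands in for topology.ConnectLayers) of one reason this multiprocessing approach cannot give the intended result: Pool workers run in child processes that operate on copies of the parent's memory, so any state they build up, such as NEST's internal connection tables, never reaches the parent process.

```python
# Pool workers mutate their own copy of the parent's memory; the parent
# never sees those changes. The same applies to connections created by
# NEST inside a worker process.
import multiprocessing

connections_registered = []  # stands in for NEST's internal connection tables

def fake_connect(pair):
    # Runs in a child process: appends only to the child's copy of the list.
    connections_registered.append(pair)
    return pair

# Use the fork start method explicitly so the example behaves the same
# across platforms that default to "spawn".
ctx = multiprocessing.get_context("fork")
with ctx.Pool(processes=4) as pool:
    results = pool.map(fake_connect, [(1, 1), (1, 2), (2, 2), (2, 1)])

print(len(results))                  # 4 -- the calls did run
print(len(connections_registered))   # 0 -- but the parent saw none of them
```

This would also explain why the pooled version appears fast: the connection work done in the workers is discarded when they exit, so the parent kernel that later runs Simulate() never received it.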
<smalltestcase.py>

_______________________________________________
NEST Users mailing list -- users@nest-simulator.org
To unsubscribe send an email to users-leave@nest-simulator.org

--
Prof. Dr. Hans Ekkehard Plesser
Head, Data Science Section
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
--
Dipl.-Phys. Dennis Terhorst
Coordinator Software Development
Institute of Neuroscience and Medicine (INM-6)
Computational and Systems Neuroscience & Theoretical Neuroscience,
Institute for Advanced Simulation (IAS-6)
Jülich Research Centre, Member of the Helmholtz Association and JARA
52425 Jülich, Germany
Building 15.22 Room 4004
Phone +49 2461 61-85062
Fax +49 2461 61-9460
d.terhorst@fz-juelich.de