6 posts in the 'Programming/openMPI' category

  1. 2019.04.02 Installing an openMPI service...?
  2. 2017.02.05 ubuntu mpich cluster
  3. 2017.02.04 openmpi packages
  4. 2017.02.03 openmpi with heterogeneous
  5. 2017.02.03 openmpi with openmp
  6. 2011.09.19 OpenMP and OpenMPI (2)
Programming/openMPI | 2019. 4. 2. 22:27

Looking into this after something about "MPI + threads" came up.

Since openMPI is a Message Passing Interface, I'd expect the receiving side to need some kind of daemon,

but I don't see how the messages are actually sent and received..
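
A minimal point-to-point sketch of my own (not taken from the linked slides), just to pin down what "sending and receiving" means at the API level; the transport underneath (TCP, shared memory, ssh-launched remote processes) is handled by the MPI runtime:

/* sketch: rank 0 sends one int to rank 1 (assumes at least 2 ranks) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
        int provided, rank, value = 0;

        /* request thread support, since the slides are about MPI + threads */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
                value = 42;
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank %d got %d (thread level %d)\n", rank, value, provided);
        }

        MPI_Finalize();
        return 0;
}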

 

[Link : http://wgropp.cs.illinois.edu/courses/cs598-s16/lectures/lecture36.pdf]

[Link : http://ympahk.blogspot.com/2014/05/openmpi.html]

[Link : http://ap2.khu.ac.kr/download/mpi_lec.pdf]

[Link : https://www.open-mpi.org/faq/?category=running]

 

+

2019.04.03

Looking at how the service is actually run, there is surprisingly little to it.. but how is it configured?

[Link : https://openfoamwiki.net/index.php/Installation/Linux/OpenFOAM-4.x/CentOS_SL_RHEL]

Posted by 구차니
Programming/openMPI | 2017. 2. 5. 12:39

Hmm.. I should figure out the overall structure first.

There's no mpich2 package on the odroid side.. maybe because the distro version is different?

Anyway, mpiexec and mpirun both end up linked to the same executable:

lrwxrwxrwx 1 root root 24  2월  5 12:43 /usr/bin/mpirun -> /etc/alternatives/mpirun*

lrwxrwxrwx 1 root root 25  2월  5 12:43 /usr/bin/mpiexec -> /etc/alternatives/mpiexec*

lrwxrwxrwx 1 root root 21  2월  5 12:43 /etc/alternatives/mpirun -> /usr/bin/mpirun.mpich*

lrwxrwxrwx 1 root root 22  2월  5 12:43 /etc/alternatives/mpiexec -> /usr/bin/mpiexec.mpich* 

lrwxrwxrwx 1 root root 13  2월  7  2016 /usr/bin/mpirun.mpich -> mpiexec.hydra*

lrwxrwxrwx 1 root root 13  2월  7  2016 /usr/bin/mpiexec.mpich -> mpiexec.hydra*


$ mpiexec --help


Usage: ./mpiexec [global opts] [local opts for exec1] [exec1] [exec1 args] : [local opts for exec2] [exec2] [exec2 args] : ...


Global options (passed to all executables):


  Global environment options:

    -genv {name} {value}             environment variable name and value

    -genvlist {env1,env2,...}        environment variable list to pass

    -genvnone                        do not pass any environment variables

    -genvall                         pass all environment variables not managed

                                          by the launcher (default)


  Other global options:

    -f {name}                        file containing the host names 


The host file (the one passed with -f above) is written like this; I wonder whether winbind name resolution could be used for these host names too?

ub3:4  # this will spawn 4 processes on ub3

ub2:2  # this will spawn 2 processes on ub2

ub1    # this will spawn 1 process on ub1

ub0    # this will spawn 1 process on ub0 


[Link : https://help.ubuntu.com/community/MpichCluster]

[Link : http://www.brianjp93.com/blog/building-a-beowulf-cluster-ubuntu-1404-server-lts/]

[Link : https://www.digitalocean.com/.../how-to-create-a-beowulf-cluster-using-ubuntu-12-04-vps-instances]

[Link : http://techtinkering.com/2009/12/02/setting-up-a-beowulf-cluster-using-open-mpi-on-linux/]

Roughly skimming the setup instructions...

the machine that will run the build becomes the host node; it creates a directory shared over NFS, and the work nodes mount that directory.

For the build, it seems ssh needs key-based authentication set up so it works without passwords..

Since this stuff has been around forever.. would rlogin or telnet work instead? (Judging from the -info output below, rsh is listed as an available launcher alongside ssh, but telnet isn't.)


$ mpiexec -info

HYDRA build details:

    Version:                                 3.2

    Release Date:                            Wed Nov 11 22:06:48 CST 2015

    CC:                              gcc   -Wl,-Bsymbolic-functions -Wl,-z,relro 

    CXX:                             g++   -Wl,-Bsymbolic-functions -Wl,-z,relro 

    F77:                             gfortran  -Wl,-Bsymbolic-functions -Wl,-z,relro 

    F90:                             gfortran  -Wl,-Bsymbolic-functions -Wl,-z,relro 

    Configure options:                       '--disable-option-checking' '--prefix=/usr' '--build=arm-linux-gnueabihf' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=${prefix}/lib/arm-linux-gnueabihf' '--libexecdir=${prefix}/lib/arm-linux-gnueabihf' '--disable-maintainer-mode' '--disable-dependency-tracking' '--enable-shared' '--enable-fortran=all' '--disable-rpath' '--disable-wrapper-rpath' '--sysconfdir=/etc/mpich' '--libdir=/usr/lib/arm-linux-gnueabihf' '--includedir=/usr/include/mpich' '--docdir=/usr/share/doc/mpich' '--with-hwloc-prefix=system' 'CPPFLAGS= -Wdate-time -D_FORTIFY_SOURCE=2 -I/build/mpich-52KREu/mpich-3.2/src/mpl/include -I/build/mpich-52KREu/mpich-3.2/src/mpl/include -I/build/mpich-52KREu/mpich-3.2/src/openpa/src -I/build/mpich-52KREu/mpich-3.2/src/openpa/src -D_REENTRANT -I/build/mpich-52KREu/mpich-3.2/src/mpi/romio/include' 'CFLAGS= -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -O2' 'CXXFLAGS= -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -O2' 'FFLAGS= -g -O2 -fstack-protector-strong -O2' 'FCFLAGS= -g -O2 -fstack-protector-strong -O2' 'build_alias=arm-linux-gnueabihf' 'MPICHLIB_CFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security' 'MPICHLIB_CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'MPICHLIB_CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security' 'MPICHLIB_FFLAGS=-g -O2 -fstack-protector-strong' 'MPICHLIB_FCFLAGS=-g -O2 -fstack-protector-strong' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro' 'FC=gfortran' 'F77=gfortran' 'MPILIBNAME=mpich' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'LIBS=-lpthread '

    Process Manager:                         pmi

    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist

    Topology libraries available:            hwloc

    Resource management kernels available:   user slurm ll lsf sge pbs cobalt

    Checkpointing libraries available:       blcr

    Demux engines available:                 poll select 


Posted by 구차니
Programming/openMPI | 2017. 2. 4. 22:58

What do I need to install to turn this into a server?


$ sudo apt-cache search openmpi

gromacs-openmpi - Molecular dynamics sim, binaries for OpenMPI parallelization

libblacs-openmpi1 - Basic Linear Algebra Comm. Subprograms - Shared libs. for OpenMPI

libhdf5-openmpi-8 - Hierarchical Data Format 5 (HDF5) - runtime files - OpenMPI version

libhdf5-openmpi-8-dbg - Hierarchical Data Format 5 (HDF5) - OpenMPI Debug package

libhdf5-openmpi-dev - Hierarchical Data Format 5 (HDF5) - development files - OpenMPI version

libmeep-lam4-7 - library for using parallel (OpenMPI) version of meep

libmeep-lam4-dev - development library for using parallel (OpenMPI) version of meep

libmeep-mpi-default-dev - development library for using parallel (OpenMPI) version of meep

libmeep-mpi-default7 - library for using parallel (OpenMPI) version of meep

libmeep-mpich2-7 - library for using parallel (OpenMPI) version of meep

libmeep-mpich2-dev - development library for using parallel (OpenMPI) version of meep

libmeep-openmpi-dev - development library for using parallel (OpenMPI) version of meep

libmeep-openmpi7 - library for using parallel (OpenMPI) version of meep

libopenmpi-dev - high performance message passing library -- header files

libopenmpi1.6 - high performance message passing library -- shared library

libopenmpi1.6-dbg - high performance message passing library -- debug library

libscalapack-openmpi1 - Scalable Linear Algebra Package - Shared libs. for OpenMPI

meep-lam4 - software package for FDTD simulation, parallel (OpenMPI) version

meep-mpi-default - software package for FDTD simulation, parallel (OpenMPI) version

meep-mpich2 - software package for FDTD simulation, parallel (OpenMPI) version

meep-openmpi - software package for FDTD simulation, parallel (OpenMPI) version

mpqc-openmpi - Massively Parallel Quantum Chemistry Program (OpenMPI transitional package)

netpipe-openmpi - Network performance tool using OpenMPI

octave-openmpi-ext - Transitional package for parallel computing in Octave using MPI

openmpi-bin - high performance message passing library -- binaries

openmpi-checkpoint - high performance message passing library -- checkpoint support

openmpi-common - high performance message passing library -- common files

openmpi-doc - high performance message passing library -- man pages

openmpi1.6-common - high performance message passing library -- common files

openmpi1.6-doc - high performance message passing library -- man pages

openmpipython - MPI-enhanced Python interpreter (OpenMPI based version)

yorick-full - full installation of the Yorick interpreter and add-ons

yorick-mpy-openmpi - Message Passing Yorick (OpenMPI build) 


$ sudo apt-cache search mpich

gromacs-mpich - Molecular dynamics sim, binaries for MPICH parallelization

libhdf5-mpich-8 - Hierarchical Data Format 5 (HDF5) - runtime files - MPICH2 version

libhdf5-mpich-8-dbg - Hierarchical Data Format 5 (HDF5) - Mpich Debug package

libhdf5-mpich-dev - Hierarchical Data Format 5 (HDF5) - development files - MPICH version

libhdf5-mpich2-dev - Hierarchical Data Format 5 (HDF5) - development files - MPICH version

libmeep-mpi-default-dev - development library for using parallel (OpenMPI) version of meep

libmeep-mpi-default7 - library for using parallel (OpenMPI) version of meep

libmeep-mpich2-7 - library for using parallel (OpenMPI) version of meep

libmeep-mpich2-dev - development library for using parallel (OpenMPI) version of meep

libmpich-dev - Development files for MPICH

libmpich12 - Shared libraries for MPICH

libmpich2-3 - Shared libraries for MPICH2

libmpich2-dev - Transitional dummy package for MPICH development files

libmpl-dev - Development files for mpl part of MPICH

libmpl1 - Shared libraries for mpl part of MPICH

libopa-dev - Development files for opa part of MPICH

libopa1 - Shared libraries for opa part of MPICH

libscalapack-mpi-dev - Scalable Linear Algebra Package - Dev. files for MPICH

meep-mpi-default - software package for FDTD simulation, parallel (OpenMPI) version

meep-mpich2 - software package for FDTD simulation, parallel (OpenMPI) version

mpb-mpi - MIT Photonic-Bands, parallel (mpich) version

mpi-default-bin - Standard MPI runtime programs (metapackage)

mpi-default-dev - Standard MPI development files (metapackage)

mpich - Implementation of the MPI Message Passing Interface standard

mpich-doc - Documentation for MPICH

mpich2 - Transitional dummy package

mpich2-doc - Transitional dummy package for MPICH documentation

mpich2python - MPI-enhanced Python interpreter (MPICH2 based version)

netpipe-mpich2 - Network performance tool using MPICH2 MPI

scalapack-mpi-test - Scalable Linear Algebra Package - Test files for MPICH

scalapack-test-common - Test data for ScaLAPACK testers

yorick-full - full installation of the Yorick interpreter and add-ons

yorick-mpy-mpich2 - Message Passing Yorick (MPICH2 build)


$ sudo apt-cache search mpirun

lam-runtime - LAM runtime environment for executing parallel programs

mpi-default-bin - Standard MPI runtime programs (metapackage) 

[Link : https://likymice.wordpress.com/2015/03/13/install-open-mpi-in-ubuntu-14-04-13-10/]


$ sudo apt-get install libcr-dev mpich2 mpich2-doc 

[Link : https://jetcracker.wordpress.com/2012/03/01/how-to-install-mpi-in-ubuntu/]


There's hardly any difference between mpich and mpich2? Still, installing 2 seems to pull in 1 as well.

$ sudo apt-get install mpich

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following extra packages will be installed:

  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins

  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1

  ocl-icd-libopencl1

Suggested packages:

  gfortran-doc gfortran-4.9-doc libgfortran3-dbg blcr-dkms

  libhwloc-contrib-plugins blcr-util mpich-doc opencl-icd

The following NEW packages will be installed:

  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins

  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1

  mpich ocl-icd-libopencl1

0 upgraded, 15 newly installed, 0 to remove and 3 not upgraded.

Need to get 6,879 kB of archives.

After this operation, 25.5 MB of additional disk space will be used.

Do you want to continue? [Y/n] 

$ sudo apt-get install mpich2

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following extra packages will be installed:

  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins

  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1

  mpich ocl-icd-libopencl1

Suggested packages:

  gfortran-doc gfortran-4.9-doc libgfortran3-dbg blcr-dkms

  libhwloc-contrib-plugins blcr-util mpich-doc opencl-icd

The following NEW packages will be installed:

  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins

  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1

  mpich mpich2 ocl-icd-libopencl1

0 upgraded, 16 newly installed, 0 to remove and 3 not upgraded.

Need to get 6,905 kB of archives.

After this operation, 25.6 MB of additional disk space will be used.

Do you want to continue? [Y/n]


Running it with nothing to execute just gives an error:

$ mpirun

[mpiexec@raspberrypi] set_default_values (ui/mpich/utils.c:1528): no executable provided

[mpiexec@raspberrypi] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1739): setting default values failed

[mpiexec@raspberrypi] main (ui/mpich/mpiexec.c:153): error parsing parameters 


I grabbed some sample source and ran it, and it does work.. setting up another server and running across machines is something to try later..

$ mpicc mpi.c -o hello

$ mpirun -np 2 ./hello

Hello world from process 0 of 2

Hello world from process 1 of 2 
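
The mpi.c source isn't shown above, but the output matches the usual MPI hello world, so it was presumably something like this (reconstructed, not the exact file I downloaded):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count (set by -np) */

        printf("Hello world from process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
}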


-np means "number of processes":

$ mpirun --help


Usage: ./mpiexec [global opts] [local opts for exec1] [exec1] [exec1 args] : [local opts for exec2] [exec2] [exec2 args] : ...


Global options (passed to all executables):


  Global environment options:

    -genv {name} {value}             environment variable name and value

    -genvlist {env1,env2,...}        environment variable list to pass

    -genvnone                        do not pass any environment variables

    -genvall                         pass all environment variables not managed

                                          by the launcher (default)


  Other global options:

    -f {name}                        file containing the host names

    -hosts {host list}               comma separated host list

    -wdir {dirname}                  working directory to use

    -configfile {name}               config file containing MPMD launch options



Local options (passed to individual executables):


  Local environment options:

    -env {name} {value}              environment variable name and value

    -envlist {env1,env2,...}         environment variable list to pass

    -envnone                         do not pass any environment variables

    -envall                          pass all environment variables (default)


  Other local options:

    -n/-np {value}                   number of processes

    {exec_name} {args}               executable name and arguments



Hydra specific options (treated as global):


  Launch options:

    -launcher                        launcher to use (ssh rsh fork slurm ll lsf sge manual persist)

    -launcher-exec                   executable to use to launch processes

    -enable-x/-disable-x             enable or disable X forwarding


  Resource management kernel options:

    -rmk                             resource management kernel to use (user slurm ll lsf sge pbs cobalt)


  Processor topology options:

    -topolib                         processor topology library (hwloc)

    -bind-to                         process binding

    -map-by                          process mapping

    -membind                         memory binding policy


  Checkpoint/Restart options:

    -ckpoint-interval                checkpoint interval

    -ckpoint-prefix                  checkpoint file prefix

    -ckpoint-num                     checkpoint number to restart

    -ckpointlib                      checkpointing library (blcr)


  Demux engine options:

    -demux                           demux engine (poll select)


  Other Hydra options:

    -verbose                         verbose mode

    -info                            build information

    -print-all-exitcodes             print exit codes of all processes

    -iface                           network interface to use

    -ppn                             processes per node

    -profile                         turn on internal profiling

    -prepend-rank                    prepend rank to output

    -prepend-pattern                 prepend pattern to output

    -outfile-pattern                 direct stdout to file

    -errfile-pattern                 direct stderr to file

    -nameserver                      name server information (host:port format)

    -disable-auto-cleanup            don't cleanup processes on error

    -disable-hostname-propagation    let MPICH auto-detect the hostname

    -order-nodes                     order nodes as ascending/descending cores

    -localhost                       local hostname for the launching node

    -usize                           universe size (SYSTEM, INFINITE, <value>)


Please see the intructions provided at

http://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager

for further details 


Posted by 구차니
Programming/openMPI | 2017. 2. 3. 21:13

Apparently it even works across heterogeneous nodes (arm + x86).. how does that work?! (Presumably because the data is described with MPI datatypes, so the library can convert representation differences between nodes, but I'd have to check.)


[Link : https://rafaelaroca.wordpress.com/2011/08/31/mpi-on-arm/]

[Link : https://github.com/open-mpi/ompi/wiki/Heterogeneous]

Posted by 구차니
Programming/openMPI | 2017. 2. 3. 21:01

An openMP + openMPI example.

In any case, I need to look into how to set up the server side first.
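
Still, as a rough idea of what the hybrid combination looks like (my own sketch, not the example from the links below): each MPI rank is one process, and OpenMP spawns threads inside that process.

/* hybrid sketch: MPI across processes/nodes, OpenMP threads inside each rank
   build (assumed): mpicc -fopenmp hybrid.c -o hybrid
   run   (assumed): mpirun -np 2 ./hybrid                                    */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
        int provided, rank;

        /* MPI_THREAD_FUNNELED: only the main thread makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
                printf("rank %d, thread %d of %d\n",
                       rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
}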


[Link : http://www.slac.stanford.edu/comp/unix/farm/mpi_and_openmp.html]

[Link : https://www.open-mpi.org/projects/mtt/]

Posted by 구차니
Programming/openMPI | 2011. 9. 19. 07:39
It says to install libgomp and openmpi-dev
[http://ubuntuforums.org/showthread.php?t=710281]

So I searched on the server, and
hmmm.. two things come up: GOMP, which stands for GCC OpenMP, and openMPI
$ apt-cache search openmp
libgomp1 - GCC OpenMP (GOMP) support library
libgomp1-dbg - GCC OpenMP (GOMP) support library (debug symbols)
lib64gomp1 - GCC OpenMP (GOMP) support library (64-bit)
lib64gomp1-dbg - GCC OpenMP (GOMP) support library (64-bit debug symbols)
conky-all - highly configurable system monitor (all features enabled)
gromacs-lam - Transition package to gromacs-openmpi
gromacs-openmpi - Molecular dynamics sim, binaries for OpenMPI parallelization
libcomplearn-gomp-dev - CompLearn library, OpenMP version development files
libcomplearn-gomp1 - machine-learning core library runtime files with OpenMP (libgomp)
libopenmpi-dbg - high performance message passing library -- debug library
libopenmpi-dev - high performance message passing library -- header files
libopenmpi1.3 - high performance message passing library -- shared library
libqsearch-gomp-dev - CompLearn library, OpenMP version development files
libqsearch-gomp1 - machine-learning core library runtime files with OpenMP (libgomp)
mpi-default-bin - Standard MPI runtime programs
mpi-default-dev - Standard MPI development files
openmpi-bin - high performance message passing library -- binaries
openmpi-checkpoint - high performance message passing library -- checkpoint support
openmpi-common - high performance message passing library -- common files
openmpi-doc - high performance message passing library -- man pages
libhdf5-openmpi-1.8.4 - Hierarchical Data Format 5 (HDF5) - runtime files - OpenMPI version
libhdf5-openmpi-dev - Hierarchical Data Format 5 (HDF5) - development files - OpenMPI version 
 
They say OpenMP and OpenMPI are different things, hmm.. to me they look like pretty much the same thing though..
[http://www.linuxquestions.org/questions/programming-9/lost-on-parallel-programming-openmpi-openmp-657210/]

Anyway, since MPI is said to be the more complex implementation, maybe I should start with OpenMP?
[Link : http://openmp.org/wp/]
[Link : http://www.open-mpi.org/]
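
For contrast, a minimal OpenMP example of my own: unlike MPI there are no separate processes or messages, just compiler pragmas that spread a loop over threads in one process (compile with gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void)
{
        int i, sum = 0;

        /* the pragma splits the loop across threads; reduction merges the partial sums */
        #pragma omp parallel for reduction(+:sum)
        for (i = 1; i <= 100; i++)
                sum += i;

        printf("sum = %d (with up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
}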
 

Posted by 구차니