
I've been going through the install guide, and sure enough, it's not the kind of thing an individual can set up casually(?).

 

The guide calls for a minimum of five machines for the install,

and even then the storage is configured with NFS rather than a file system like Lustre;

adding a distributed file system such as Lustre or BeeGFS requires at least two more machines.

 

The setup consists of a master node and compute nodes, five machines in total,

with the master node serving as the SMS (System Management Server) and handling provisioning (Warewulf).
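To make that layout concrete, here is a minimal sketch of the node inventory in Python. Every hostname, IP, MAC, and BMC address below is a made-up placeholder (the guide collects the real, site-specific values in its input file), not something taken from the document:

```python
# Hypothetical inventory for the layout the guide assumes:
# one master (SMS) plus four stateless compute nodes.
# All names and addresses are placeholders; substitute site-specific values.
sms_name = "sms"           # master / System Management Server
sms_ip = "192.168.1.250"   # master address on the internal provisioning network

computes = [
    # (hostname, internal IP,  MAC of provisioning NIC,  BMC address for IPMI)
    ("c1", "192.168.1.1", "aa:bb:cc:dd:ee:01", "192.168.2.1"),
    ("c2", "192.168.1.2", "aa:bb:cc:dd:ee:02", "192.168.2.2"),
    ("c3", "192.168.1.3", "aa:bb:cc:dd:ee:03", "192.168.2.3"),
    ("c4", "192.168.1.4", "aa:bb:cc:dd:ee:04", "192.168.2.4"),
]

for name, ip, mac, bmc in computes:
    print(f"{name}: ip={ip} mac={mac} bmc={bmc}")
```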

1.2 Requirements/Assumptions
This installation recipe assumes the availability of a single head node master, and four compute nodes. The master node serves as the overall system management server (SMS) and is provisioned with CentOS 8.2 and is subsequently configured to provision the remaining compute nodes with Warewulf in a stateless configuration. The terms master and SMS are used interchangeably in this guide. For power management, we assume that the compute node baseboard management controllers (BMCs) are available via IPMI from the chosen master host. For file systems, we assume that the chosen master server will host an NFS file system that is made available to the compute nodes. Installation information is also discussed to optionally mount a parallel file system and in this case, the parallel file system is assumed to exist previously.
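Since the recipe assumes each compute node's BMC is reachable over IPMI from the master, it's worth confirming that before provisioning anything. A minimal sketch that polls each BMC with ipmitool (a standard IPMI CLI) from Python; the BMC addresses and credentials are placeholders, not values from the guide:

```python
import subprocess

# Placeholder BMC credentials and addresses; the real ones are site-specific.
BMC_USER, BMC_PASS = "admin", "password"
BMCS = ["192.168.2.1", "192.168.2.2", "192.168.2.3", "192.168.2.4"]

for bmc in BMCS:
    # "power status" over the lanplus interface is a standard ipmitool query;
    # a reachable, working BMC answers "Chassis Power is on" (or "off").
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc,
         "-U", BMC_USER, "-P", BMC_PASS, "power", "status"],
        capture_output=True, text=True,
    )
    status = (result.stdout.strip() if result.returncode == 0
              else f"ERROR: {result.stderr.strip()}")
    print(f"{bmc}: {status}")
```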

 

HPC systems rely on synchronized clocks throughout the system and the NTP protocol can be used to facilitate this synchronization.
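On CentOS 8 the stock time service is chrony (an NTP implementation), so one quick way to verify that a node's clock is actually synchronized is to read the offset reported by chronyc tracking. A minimal sketch, assuming chrony is the daemon in use:

```python
import subprocess

def clock_offset_seconds() -> float:
    """Parse the system clock offset from `chronyc tracking` output."""
    out = subprocess.run(["chronyc", "tracking"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # Example line: "System time     : 0.000019947 seconds slow of NTP time"
        if line.startswith("System time"):
            return float(line.split(":", 1)[1].split()[0])
    raise RuntimeError("no 'System time' line in chronyc output")

if __name__ == "__main__":
    offset = clock_offset_seconds()
    # A few milliseconds or less is typical for a healthy NTP-synced node.
    print(f"offset from NTP time: {offset:.9f} s")
```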

 

[Link: https://github.com/openhpc/ohpc/releases/download/v2.0.GA/Install_guide-CentOS8-Warewulf-SLURM-2.0-x86_64.pdf]

 

 

[Link: https://www.admin-magazine.com/HPC/Articles/warewulf_cluster_manager_completing_the_environment]
