Reconfigure an old GPU server

Background

This post mainly records the process of installing and configuring the necessary tools.

Host configuration: CentOS 7.2.1511 (kernel 3.10.0-327.el7.x86_64) with two NVIDIA GeForce GTX 1080 Ti GPUs.

Configuration process

System upgrade

 cat /etc/redhat-release
 CentOS Linux release 7.2.1511 (Core)
 yum clean all
 yum update
 reboot
 cat /etc/redhat-release
 CentOS Linux release 7.3.1611 (Core)
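
A note that is not in the original steps: yum update may also pull in a newer kernel, and the NVIDIA driver built later needs kernel-headers/kernel-devel matching the kernel that is actually running. A quick sanity check:

 # Confirm that the running kernel and the installed kernel-devel/kernel-headers packages match
 uname -r
 rpm -qa | grep -E 'kernel-(devel|headers)'
 # If they differ, reboot into the newest kernel or install the matching
 # kernel-devel/kernel-headers versions before building the NVIDIA driver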

Clean the yum sources

 yum clean all
 yum makecache
 yum update

Install dependencies

 yum group install "Development Tools"
 yum install gcc gcc-c++ -y
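
The stock CentOS 7 toolchain is gcc 4.8.5 (the driver log later in this post confirms it), which matters because building the CUDA samples further down requires a newer compiler. Checking the version up front costs nothing:

 # The base system compiler; devtoolset-7 is installed later for the CUDA samples
 gcc --version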

NTP time synchronization

 # Install ntpdate if the command is missing
 yum -y install ntpdate
 # Sync and check the time
 ntpdate ntp1.aliyun.com
 date
 # Add a cron job
 crontab -e
 */10 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null &
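
To confirm the cron entry was registered and the NTP server is reachable, a standard check (not part of the original notes):

 # List the current user's cron jobs and query the NTP server without setting the clock
 crontab -l | grep ntpdate
 ntpdate -q ntp1.aliyun.com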

Check the graphics card

 yum install -y lshw
 lshw -numeric -C display
 # Add the elrepo repository
 rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
 rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
 yum install -y nvidia-detect
 nvidia-detect
 lspci -nnk | grep -i nvi
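
One extra check that is not in the original notes: the driver installer below is run with -no-nouveau-check, so it is worth knowing whether the open-source nouveau driver is currently loaded before installing the proprietary one:

 # If this prints anything, nouveau is loaded and may need to be blacklisted first
 lsmod | grep nouveau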

Install the graphics card driver

 # Check the kernel release
 uname -r
 3.10.0-327.el7.x86_64
 # The matching kernel packages can be found at:
 # https://buildlogs.centos.org/c7.1511.00/kernel/20151119220809/3.10.0-327.el7.x86_64/
 # Install the kernel-headers and kernel-devel packages for this exact release
 yum install kernel-headers-3.10.0-327.el7.x86_64 -y
 yum install kernel-devel-3.10.0-327.el7.x86_64 -y
 # Find the matching driver on the official website
 # https://www.nvidia.com/Download/index.aspx?lang=en-us
 # Download the graphics card driver
 wget https://us.download.nvidia.com/XFree86/Linux-x86_64/470.74/NVIDIA-Linux-x86_64-470.74.run
 # Make it executable
 chmod a+x NVIDIA-Linux-x86_64-470.74.run
 # Install the driver
 ./NVIDIA-Linux-x86_64-470.74.run -no-x-check -no-nouveau-check -no-opengl-files
 # During installation:
 # - Select No when asked to install NVIDIA's 32-bit compatibility libraries
 # - "Would you like to run the nvidia-xconfig utility to automatically update your X
 #   configuration file so that the NVIDIA X driver will be used when you restart X?
 #   Any pre-existing X configuration file will be backed up."
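
After the installer finishes (and ideally after a reboot), the driver can be sanity-checked with standard tools; this step is not in the original notes:

 # Verify that the kernel module is loaded and both GPUs are visible
 nvidia-smi
 lsmod | grep nvidia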

Install CUDA

 # Find the download link on the official website
 # https://developer.nvidia.com/cuda-toolkit-archive
 wget https://developer.download.nvidia.com/compute/cuda/11.4.2/local_installers/cuda_11.4.2_470.57.02_linux.run
 sh cuda_11.4.2_470.57.02_linux.run
 # The driver has already been installed, so do not select the driver option in the installer
 ===========
 = Summary =
 ===========
 Driver:   Not Selected
 Toolkit:  Installed in /usr/local/cuda-11.4/
 Samples:  Installed in /home/liuzexu/, but missing recommended libraries
 Please make sure that
  -   PATH includes /usr/local/cuda-11.4/bin
  -   LD_LIBRARY_PATH includes /usr/local/cuda-11.4/lib64, or, add /usr/local/cuda-11.4/lib64 to /etc/ld.so.conf and run ldconfig as root
 To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-11.4/bin
 ***WARNING: Incomplete installation! This installation did not install the CUDA Driver.
 A driver of version at least 470.00 is required for CUDA 11.4 functionality to work.
 To install the driver using this installer, run the following command,
 replacing <CudaInstaller> with the name of this run file:
     sudo <CudaInstaller>.run --silent --driver
 Logfile is /var/log/cuda-installer.log
 vim ~/.bashrc
 # Add the following
 export PATH=/usr/local/cuda-11.4/bin:$PATH
 export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH
 # Then reload it
 source ~/.bashrc
 # Add the same lines system-wide as well
 vim /etc/profile
 export PATH=/usr/local/cuda-11.4/bin:$PATH
 export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH
 # Check the driver and compiler versions
 cat /proc/driver/nvidia/version
 NVRM version: NVIDIA UNIX x86_64 Kernel Module  470.74  Mon Sep 13 23:09:15 UTC 2021
 GCC version:  gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
 nvcc -V
 nvcc: NVIDIA (R) Cuda compiler driver
 Copyright (c) 2005-2021 NVIDIA Corporation
 Built on Sun_Aug_15_21:14:11_PDT_2021
 Cuda compilation tools, release 11.4, V11.4.120
 Build cuda_11.4.r11.4/compiler.30300941_0
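
The installer summary above also suggests the ld.so.conf route as an alternative to exporting LD_LIBRARY_PATH. A minimal sketch of that option (the file name cuda-11-4.conf is arbitrary):

 # Register the CUDA library path with the dynamic linker system-wide
 echo "/usr/local/cuda-11.4/lib64" > /etc/ld.so.conf.d/cuda-11-4.conf
 ldconfig
 # Confirm the runtime libraries are now visible
 ldconfig -p | grep libcudart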

CUDA installation is now complete

gcc must be upgraded before building and running the CUDA samples

 yum install centos-release-scl -y
 yum install devtoolset-7 -y
 scl enable devtoolset-7 bash
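
Note that scl enable only affects the shell it spawns. To confirm which gcc is active, and optionally make the newer toolchain the default for future shells (a convenience, not part of the original notes):

 # devtoolset-7 provides gcc 7.x; check that it is now the one on PATH
 gcc --version
 which gcc
 # Optional: enable devtoolset-7 automatically in new shells
 echo 'source /opt/rh/devtoolset-7/enable' >> ~/.bashrc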
 # Use the deviceQuery sample shipped with CUDA to verify that everything works
 cd /home/liuzexu/NVIDIA_CUDA-11.4_Samples/1_Utilities/deviceQuery
 make
 ./deviceQuery
 # The output after execution is as follows
 ./deviceQuery Starting...
 CUDA Device Query (Runtime API) version (CUDART static linking)
 Detected 2 CUDA Capable device(s)
 Device 0: "NVIDIA GeForce GTX 1080 Ti"
   CUDA Driver Version / Runtime Version          11.4 / 11.4
   CUDA Capability Major/Minor version number:    6.1
   Total amount of global memory:                 11178 MBytes (11721506816 bytes)
   (028) Multiprocessors, (128) CUDA Cores/MP:    3584 CUDA Cores
   GPU Max Clock rate:                            1620 MHz (1.62 GHz)
   Memory Clock rate:                             5505 Mhz
   Memory Bus Width:                              352-bit
   L2 Cache Size:                                 2883584 bytes
   Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
   Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
   Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
   Total amount of constant memory:               65536 bytes
   Total amount of shared memory per block:       49152 bytes
   Total shared memory per multiprocessor:        98304 bytes
   Total number of registers available per block: 65536
   Warp size:                                     32
   Maximum number of threads per multiprocessor:  2048
   Maximum number of threads per block:           1024
   Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
   Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
   Maximum memory pitch:                          2147483647 bytes
   Texture alignment:                             512 bytes
   Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
   Run time limit on kernels:                     No
   Integrated GPU sharing Host Memory:            No
   Support host page-locked memory mapping:       Yes
   Alignment requirement for Surfaces:            Yes
   Device has ECC support:                        Disabled
   Device supports Unified Addressing (UVA):      Yes
   Device supports Managed Memory:                Yes
   Device supports Compute Preemption:            Yes
   Supports Cooperative Kernel Launch:            Yes
   Supports MultiDevice Co-op Kernel Launch:      Yes
   Device PCI Domain ID / Bus ID / location ID:   0 / 2 / 0
   Compute Mode:
      < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
 Device 1: "NVIDIA GeForce GTX 1080 Ti"
   CUDA Driver Version / Runtime Version          11.4 / 11.4
   CUDA Capability Major/Minor version number:    6.1
   Total amount of global memory:                 11178 MBytes (11721506816 bytes)
   (028) Multiprocessors, (128) CUDA Cores/MP:    3584 CUDA Cores
   GPU Max Clock rate:                            1620 MHz (1.62 GHz)
   Memory Clock rate:                             5505 Mhz
   Memory Bus Width:                              352-bit
   L2 Cache Size:                                 2883584 bytes
   Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
   Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
   Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
   Total amount of constant memory:               65536 bytes
   Total amount of shared memory per block:       49152 bytes
   Total shared memory per multiprocessor:        98304 bytes
   Total number of registers available per block: 65536
   Warp size:                                     32
   Maximum number of threads per multiprocessor:  2048
   Maximum number of threads per block:           1024
   Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
   Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
   Maximum memory pitch:                          2147483647 bytes
   Texture alignment:                             512 bytes
   Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
   Run time limit on kernels:                     No
   Integrated GPU sharing Host Memory:            No
   Support host page-locked memory mapping:       Yes
   Alignment requirement for Surfaces:            Yes
   Device has ECC support:                        Disabled
   Device supports Unified Addressing (UVA):      Yes
   Device supports Managed Memory:                Yes
   Device supports Compute Preemption:            Yes
   Supports Cooperative Kernel Launch:            Yes
   Supports MultiDevice Co-op Kernel Launch:      Yes
   Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
   Compute Mode:
      < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
 > Peer access from NVIDIA GeForce GTX 1080 Ti (GPU0) -> NVIDIA GeForce GTX 1080 Ti (GPU1) : Yes
 > Peer access from NVIDIA GeForce GTX 1080 Ti (GPU1) -> NVIDIA GeForce GTX 1080 Ti (GPU0) : Yes
 deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 2
 Result = PASS

If you see the line "Detected 2 CUDA Capable device(s)", both graphics cards have been detected, which is what we expect
Seeing "Result = PASS" at the end means everything is working correctly
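
Besides deviceQuery, the bandwidthTest sample in the same 1_Utilities directory is a common second check (assuming the samples were installed to the same location as above); it should also finish with Result = PASS:

 # Build and run a second sample to exercise host-to-device memory transfers
 cd /home/liuzexu/NVIDIA_CUDA-11.4_Samples/1_Utilities/bandwidthTest
 make
 ./bandwidthTest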

Install cuDNN

 # Downloading from the official website requires registering and logging in
 # https://developer.nvidia.com/rdp/cudnn-download
 # Downloading from the command line is awkward here: the download URL must carry a valid user
 # token, otherwise the server returns 403 Forbidden.
 # Workaround: open the download page in a desktop browser, click Download, then copy the
 # resulting link (which contains the token) from the browser's download list.
 wget https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.4/11.4_20210831/cudnn-11.4-linux-x64-v8.2.4.15.tgz
 tar zxvf cudnn-11.4-linux-x64-v8.2.4.15.tgz -C ./
 cp cuda/include/cudnn.h /usr/local/cuda-11.4/include/
 cp cuda/lib64/libcudnn* /usr/local/cuda-11.4/lib64/
 chmod a+r /usr/local/cuda-11.4/include/cudnn.h
 chmod a+r /usr/local/cuda-11.4/lib64/libcudnn*
 # The RPM packages below are on the same download page. I downloaded them but did not install them:
 # libcudnn8-8.2.2.26-1.cuda11.4.x86_64.rpm
 # libcudnn8-devel-8.2.2.26-1.cuda11.4.x86_64.rpm
 # libcudnn8-samples-8.2.2.26-1.cuda11.4.x86_64.rpm
 # If the downloaded file name is too long, use wget -O to specify the output file name
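
To double-check which cuDNN version was unpacked and that the libraries landed in the toolkit directory, the following should work (assuming the cuDNN 8.x tarball ships its version macros in cuda/include/cudnn_version.h, next to cudnn.h):

 # Read the version macros from the extracted tarball
 grep -A 2 '#define CUDNN_MAJOR' cuda/include/cudnn_version.h
 # Confirm the libraries were copied into the CUDA directory
 ls -l /usr/local/cuda-11.4/lib64/libcudnn*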

Install Anaconda

 # Download from the official website; the chosen version is Anaconda3-2021.05-Linux-x86_64.sh
 # https://repo.anaconda.com/archive/
 wget https://repo.anaconda.com/archive/Anaconda3-2021.05-Linux-x86_64.sh
 bash Anaconda3-2021.05-Linux-x86_64.sh
 # The license terms are long; keep pressing Enter to the end, then type yes
 # Specify /usr/local/anaconda3 as the absolute installation path instead of the default /root/anaconda3
 # The last prompt asks whether conda init should set up the environment variables; answer yes
 vim .bashrc
 export PATH=/usr/local/anaconda3/bin:$PATH
 source .bashrc
 vim /etc/profile
 export PATH=/usr/local/anaconda3/bin:$PATH

After the installation is complete, close the terminal and open it again, then enter the following command to test

 conda --version
 conda 4.10.1
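
Optionally, a throwaway environment makes a reasonable smoke test (the environment name test-env is arbitrary):

 # Create and activate a scratch environment, then remove it
 conda create -n test-env python=3.8 -y
 conda activate test-env
 python -V
 conda deactivate
 conda env remove -n test-env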

That's a wrap~ 👊
