openEuler AI Container Image User Guide
Overview
The openEuler AI container images bundle the SDKs for different hardware compute platforms with software such as AI frameworks and foundation model applications. By starting a container from one of these images, you can use or develop AI applications right away, which greatly reduces the time required for application deployment and environment configuration.
Obtaining Images
openEuler has released container images for the Ascend and NVIDIA platforms. They can be downloaded from the following repositories:
- openeuler/cann: SDK images with the CANN software installed on the openEuler base image, for the Ascend environment.
- openeuler/cuda: SDK images with the CUDA software installed on the openEuler base image, for the NVIDIA environment.
- openeuler/pytorch: AI framework images with PyTorch installed on top of an SDK image.
- openeuler/tensorflow: AI framework images with TensorFlow installed on top of an SDK image.
- openeuler/llm: model application images with foundation model applications and toolchains installed on top of an AI framework image.
For details about AI container image classification and image tag specifications, see oEEP-0014.
AI container images are large, so you are advised to run the following command to pull the image to the local host before starting a container:
docker pull image:tag
In the command, image indicates the repository name (for example, openeuler/cann), and tag indicates the tag of the target image. After the image is pulled, you can start the container. Note that Docker must be installed before you run the docker pull command.
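For example, to pull a CANN SDK image (the <tag> placeholder below is illustrative; replace it with an actual tag listed in the openeuler/cann repository):
docker pull openeuler/cann:<tag>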
Starting a Container
Install Docker. For details, see Install Docker Engine, or install it directly with one of the following commands:
yum install -y docker
or
apt-get install -y docker
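To confirm that Docker is installed and its daemon is running (a minimal check, assuming a systemd-based system), you can run:
systemctl start docker
docker info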
Installing nvidia-container in the NVIDIA Environment
(1) Configure the Yum or APT repository.
- For Yum:
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
- For APT:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
(2) Install nvidia-container-toolkit and nvidia-container-runtime.
# For Yum
yum install -y nvidia-container-toolkit nvidia-container-runtime
# For APT
apt-get install -y nvidia-container-toolkit nvidia-container-runtime
(3) Configure Docker.
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
Skip this step in the non-NVIDIA environment.
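To check that the NVIDIA runtime has been registered with Docker (a quick sanity check; the output format varies across Docker versions), you can run:
docker info | grep -i runtime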
Ensure that the correct driver and firmware are installed. You can obtain the correct versions from the NVIDIA or Ascend official site. To verify the installation, run the npu-smi command on the Ascend platform or the nvidia-smi command on the NVIDIA platform. If the hardware information is displayed correctly, the installed versions are correct.
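For example, run the following on the host (npu-smi info is the usual Ascend query subcommand):
# On the Ascend platform
npu-smi info
# On the NVIDIA platform
nvidia-smi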
After the preceding operations are complete, run the docker run command to start the container.
# In the Ascend environment
docker run --rm --network host \
--device /dev/davinci0:/dev/davinci0 \
--device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-ti image:tag
# In the NVIDIA environment
docker run --gpus all -d -ti image:tag
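To quickly verify that a container can access the GPU, you can also run nvidia-smi as the container command (a minimal sketch; with the NVIDIA Container Toolkit configured, nvidia-smi is typically injected into the container from the host):
docker run --rm --gpus all -ti image:tag nvidia-smi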