brand changed
@@ -26,9 +26,9 @@ You need a 3Engines Cloud hosting account with Horizon interface <https://horizo
 
 No. 2 **Knowledge of RC files and CLI commands for Magnum**
 
-You should be familiar with utilizing OpenStack CLI and Magnum CLI. Your RC file should be sourced and pointing to your project in OpenStack. See article
+You should be familiar with utilizing 3Engines CLI and Magnum CLI. Your RC file should be sourced and pointing to your project in 3Engines. See article
 
-[How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.md).
+[How To Install 3Engines and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon](How-To-Install-3Engines-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.md).
 
 Note
 
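As a side note to the RC-file prerequisite in the hunk above, here is a minimal, self-contained sketch of what "sourced and pointing to your project" means. The filename and the exported values below are placeholders, not real credentials; a real RC file is downloaded from the Horizon dashboard.

```shell
# Create a stand-in RC file (placeholder values; a real one comes from Horizon)
cat > project-openrc.sh <<'EOF'
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_PROJECT_NAME=my_project
EOF

# Source it so the OpenStack/Magnum CLIs pick up the credentials
. ./project-openrc.sh
echo "Project: $OS_PROJECT_NAME"
```

With a real RC file sourced, `openstack token issue` is a quick way to confirm the credentials actually work against the cloud.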
@@ -38,11 +38,11 @@ If you are using CLI when creating vGPU nodegroups and are being authenticated w
 
 No. 3 **Cluster and kubectl should be operational**
 
-To connect to the cluster via **kubectl** tool, see this article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.md).
+To connect to the cluster via **kubectl** tool, see this article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud 3Engines Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-3Engines-Magnum.html.md).
 
 No. 4 **Familiarity with the notion of nodegroups**
 
-[Creating Additional Nodegroups in Kubernetes Cluster on 3Engines Cloud OpenStack Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.md).
+[Creating Additional Nodegroups in Kubernetes Cluster on 3Engines Cloud 3Engines Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-3Engines-Magnum.html.md).
 
 vGPU flavors per cloud[🔗](#vgpu-flavors-per-cloud "Permalink to this headline")
 -------------------------------------------------------------------------------
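To check the "cluster and kubectl operational" prerequisite from the hunk above, a minimal sketch follows. It assumes the kubeconfig was written by `openstack coe cluster config` into the current directory (the path is an assumption about where you ran that command); the kubectl calls themselves are left commented since they need a reachable cluster.

```shell
# Point kubectl at the cluster config written by 'openstack coe cluster config'
# (the path below is an assumption about where you ran that command)
export KUBECONFIG="$PWD/config"
echo "Using kubeconfig: $KUBECONFIG"

# With a reachable cluster, these would verify connectivity:
# kubectl cluster-info
# kubectl get nodes -o wide
```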
@@ -50,7 +50,7 @@ vGPU flavors per cloud[🔗](#vgpu-flavors-per-cloud "Permalink to this headline
 Below is the list of GPU flavors in each cloud, applicable for using with the Magnum Kubernetes service.
 
 WAW3-1
-: WAW3-1 supports both four GPU flavors and the Kubernetes, through OpenStack Magnum.
+: WAW3-1 supports four GPU flavors and Kubernetes, through 3Engines Magnum.
 
 > | | | | |
 > | --- | --- | --- | --- |
@@ -61,7 +61,7 @@ WAW3-1
 > | **vm.a6000.4** | 114688 | 320 | 16 |
 
 WAW3-2
-: These are the vGPU flavors for WAW3-2 and Kubernetes, through OpenStack Magnum:
+: These are the vGPU flavors for WAW3-2 and Kubernetes, through 3Engines Magnum:
 
 > | | | | | |
 > | --- | --- | --- | --- | --- |
@@ -72,7 +72,7 @@ WAW3-2
 > | **gpu.l40sx8** | 254 | 953.75 GB | 1000 GB | Yes |
 
 FRA1-2
-: FRA1-2 Supports L40S and the Kubernetes, through OpenStack Magnum.
+: FRA1-2 supports L40S and Kubernetes, through 3Engines Magnum.
 
 > | | | | | |
 > | --- | --- | --- | --- | --- |
@@ -109,7 +109,7 @@ Scenario 1 - Add vGPU nodes as a nodegroup on a non-GPU Kubernetes clusters crea
 In order to create a new nodegroup, called **gpu**, with one node of vGPU flavor, say, **vm.a6000.2**, we can use the following Magnum CLI command:
 
 ```
 openstack coe nodegroup create $CLUSTER_ID gpu \
 --labels "worker_type=gpu" \
 --merge-labels \
 --role worker \
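The hunk above shows only the first lines of the command before the diff truncates it. A hedged completion, for context: the `--node-count` and `--flavor` values below are taken from the surrounding prose ("one node", **vm.a6000.2**), not from the hunk itself, so the article's actual trailing flags may differ.

```shell
# Sketch of the full nodegroup-create call; the last two flags are
# inferred from the prose above, not copied from the (truncated) hunk
openstack coe nodegroup create $CLUSTER_ID gpu \
  --labels "worker_type=gpu" \
  --merge-labels \
  --role worker \
  --node-count 1 \
  --flavor vm.a6000.2
```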
@@ -129,7 +129,7 @@ Your request will be accepted:
 Now list the available nodegroups:
 
 ```
 openstack coe nodegroup list $CLUSTER_ID_RECENT \
 --max-width 120
 
 ```
@@ -156,7 +156,7 @@ where **$MASTER\_0\_SERVER\_ID** is the ID of the **master0** VM from your clust
 > * or using a CLI command to isolate the *uuid* for the master node:
 
 ```
 openstack coe nodegroup list $CLUSTER_ID_OLDER \
 -c uuid \
 -c name \
 -c status \
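The uuid-isolation step above can be sketched offline. The listing below is invented sample data in the same three-column layout (uuid, name, status), not real CLI output; only the master uuid reuses the example value that appears later in the article.

```shell
# Illustrative sample of 'openstack coe nodegroup list -c uuid -c name -c status'
# output in value format (NOT real cluster data)
SAMPLE_OUTPUT='413c7486-caa9-4e12-be3b-3d9410f2d32f default-master CREATE_COMPLETE
9a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9 default-worker CREATE_COMPLETE'

# Keep only the uuid column of the master row
MASTER_0_SERVER_ID=$(printf '%s\n' "$SAMPLE_OUTPUT" | awk '/master/ {print $1}')
echo "$MASTER_0_SERVER_ID"
```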
@@ -176,7 +176,7 @@ export MASTER_0_SERVER_ID="413c7486-caa9-4e12-be3b-3d9410f2d32f"
 and execute the following command to create an additional nodegroup in this scenario:
 
 ```
 openstack coe nodegroup create $CLUSTER_ID_OLDER gpu \
 --labels "worker_type=gpu,existing_helm_handler_master_id=$MASTER_0_SERVER_ID" \
 --merge-labels \
 --role worker \
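The `--labels` argument in the hunk above is a single comma-separated string; as the article notes, there may be no spaces between the labels. A tiny sketch of building it (the uuid is the example value from the article):

```shell
# Build the --labels value; items are comma-separated, with no spaces
MASTER_0_SERVER_ID="413c7486-caa9-4e12-be3b-3d9410f2d32f"
LABELS="worker_type=gpu,existing_helm_handler_master_id=$MASTER_0_SERVER_ID"
echo "$LABELS"
```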
@@ -190,7 +190,7 @@ There may not be any space between the labels.
 The request will be accepted and, after a while, a new nodegroup based on a GPU flavor will be available. List the nodegroups with the command:
 
 ```
 openstack coe nodegroup list $CLUSTER_ID_OLDER --max-width 120
 
 ```
 
@@ -218,7 +218,7 @@ kubectl get namespaces
 The final command to create the required cluster is:
 
 ```
 openstack coe cluster create k8s-gpu-with_template \
 --cluster-template "k8s-1.23.16-vgpu-v1.0.0" \
 --keypair=$KEYPAIR \
 --master-count 1 \
@@ -353,7 +353,7 @@ In such clusters, to add an additional, non-GPU nodegroup, you will need to:
 In order to retrieve the image ID, you need to know which template you want to use to create the new nodegroup. Out of the existing non-GPU templates, we select **k8s-1.23.16-v1.0.2** for this example. Run the following command to extract the image ID, as that will be needed for nodegroup creation:
 
 ```
 openstack coe cluster \
 template show k8s-1.23.16-v1.0.2 | grep image_id
 
 ```
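The `grep image_id` step above can be illustrated offline. The row below mimics the table layout printed by `openstack coe cluster template show`; it is a reconstructed mock, not real output, and only the uuid reuses the IMAGE_ID value exported later in the article.

```shell
# A mock 'template show' table row (illustrative; real output has many rows)
SAMPLE_ROW='| image_id | 42696e90-57af-4124-8e20-d017a44d6e24 |'

# grep isolates the row; awk then pulls the value out of the table columns
IMAGE_ID=$(printf '%s\n' "$SAMPLE_ROW" | grep image_id | awk '{print $4}')
echo "$IMAGE_ID"
```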
@@ -367,7 +367,7 @@ We can then add the non-GPU nodegroup with the following command, in which you c
 ```
 export CLUSTER_ID="k8s-gpu-with_template"
 export IMAGE_ID="42696e90-57af-4124-8e20-d017a44d6e24"
 openstack coe nodegroup create $CLUSTER_ID default \
 --labels "worker_type=default" \
 --merge-labels \
 --role worker \
@@ -380,7 +380,7 @@ openstack coe nodegroup create $CLUSTER_ID default \
 Then list the nodegroup contents to see whether the creation succeeded:
 
 ```
 openstack coe nodegroup list $CLUSTER_ID \
 --max-width 120
 
 ```