Automatic Kubernetes cluster upgrade on CloudFerro Cloud OpenStack Magnum[](#automatic-kubernetes-cluster-upgrade-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================

Warning

At the time of this writing, upgradeable cluster templates are available only in the CloudFerro Cloud WAW4-1 region.

OpenStack Magnum clusters created in CloudFerro Cloud can be **automatically** upgraded to the next minor Kubernetes version. This feature is available for clusters starting with Kubernetes version 1.29.

In this article we demonstrate an upgrade of a Magnum Kubernetes cluster from version 1.29 to version 1.30.

What are we going to cover
Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum[](#autoscaling-kubernetes-cluster-resources-on-brand-name-openstack-magnum "Permalink to this headline")
=======================================================================================================================================================================================

When **autoscaling of Kubernetes clusters** is turned on, the system can

> * add resources when the demand is high, or
> * remove unneeded resources when the demand is low, thus keeping costs down.

The whole process can be automatic, helping the administrator concentrate on more important tasks at hand.

This article explains various commands to resize or scale a cluster, and builds up to a command that creates an autoscalable Kubernetes cluster for OpenStack Magnum.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Definitions of horizontal, vertical and node scaling
> * Define autoscaling when creating the cluster in the Horizon interface
> * Define autoscaling when creating the cluster using the CLI
> * Get cluster template labels from the Horizon interface
> * Get cluster template labels from the CLI

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Creating clusters with CLI**

The article [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html) will introduce you to creating clusters using a command line interface.

No. 3 **Connect openstack client to the cloud**

Prepare the **openstack** and **magnum** clients by executing *Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud* from the article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html).

No. 4 **Resizing Nodegroups**

Step 7 of the article [Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-CloudFerro-Cloud-OpenStack-Magnum.html) shows an example of resizing nodegroups for autoscaling.

No. 5 **Creating Clusters**

Step 2 of the article [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html) shows how to define master and worker nodes for autoscaling.

There are three different autoscaling features that a Kubernetes cloud can offer:
Horizontal Pod Autoscaler[](#horizontal-pod-autoscaler "Permalink to this headline")
-------------------------------------------------------------------------------------

Scaling a Kubernetes cluster horizontally means increasing or decreasing the number of running pods, depending on the actual demand at run time. Parameters to take into account are CPU and memory usage, as well as the desired minimum and maximum numbers of pod replicas.

Horizontal scaling is also known as “scaling out” and is abbreviated as HPA.
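In plain Kubernetes terms (this is generic Kubernetes configuration, not a Magnum setting, and the Deployment name *web* is hypothetical), a minimal HPA manifest could look like this sketch:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2               # never fewer than 2 pods
  maxReplicas: 10              # never more than 10 pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU use exceeds 70%
```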
Vertical Pod Autoscaler[](#vertical-pod-autoscaler "Permalink to this headline")
---------------------------------------------------------------------------------

Vertical scaling (or “scaling up”, VPA) means adding resources to, or subtracting them from, an existing machine. If more CPUs are needed, add them; when they are no longer needed, shut some of them down.
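For illustration only (the VPA controller is a separate add-on that must be installed in the cluster, and the Deployment name *web* is hypothetical), a minimal VPA manifest could look like:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:                 # which workload to resize
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"       # let VPA apply new CPU/memory requests automatically
```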
Cluster Autoscaler[](#cluster-autoscaler "Permalink to this headline")
-----------------------------------------------------------------------

HPA and VPA reorganize the usage of resources and the number of pods; however, there may come a time when the size of the system itself prevents it from satisfying the demand. The solution is to autoscale the cluster itself, that is, to increase or decrease the number of nodes on which the pods will run.

Once the number of nodes is adjusted, the pods and other resources rebalance themselves across the cluster, also automatically. The number of nodes acts as a physical ceiling on the autoscaling of pods.

All three models of autoscaling can be combined.
Define Autoscaling When Creating a Cluster[](#define-autoscaling-when-creating-a-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------

You can define autoscaling parameters while defining a new cluster, using the **Size** window in the cluster creation wizard:



Specify a minimum and maximum number of worker nodes. If these values are 2 and 4 respectively, the cluster will have no fewer than 2 and no more than 4 nodes at any time. If there is no traffic to the cluster, it will automatically be scaled down to 2 nodes. In this example, the cluster can have 2, 3 or 4 nodes depending on the traffic.

For the entire process of creating a Kubernetes cluster in Horizon, see Prerequisites No. 5.
Warning

If you decide to use the NGINX Ingress option while defining a cluster, NGINX Ingress will run as 3 replicas on 3 separate nodes. This will override the minimum number of nodes in the Magnum autoscaler.
Autoscaling Node Groups at Run Time[](#autoscaling-node-groups-at-run-time "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

The autoscaler in Magnum uses node groups. Node groups can be used to create workers with different flavors. The *default-worker* node group is created automatically when the cluster is provisioned. Node groups have lower and upper limits on the node count. This is the command to print them out for a given cluster:

```
openstack coe nodegroup show NoLoadBalancer default-worker -f json -c max_node_count -c node_count -c min_node_count
```

The result would be:
```
{
  "node_count": 1,
  "max_node_count": 2,
  "min_node_count": 1
}
```

This works fine until you try to resize the cluster beyond the limit set in the node group. If you try to resize the above cluster to 12 nodes, like this:
```
openstack coe cluster resize NoLoadBalancer --nodegroup default-worker 12
```

you will get the following error:

```
Resizing default-worker outside the allowed range: min_node_count = 1, max_node_count = 2 (HTTP 400) (Request-ID: req-bbb09fc3-7df4-45c3-8b9b-fbf78d202ffd)
```

To resolve this error, change the node group's *max\_node\_count* manually:
```
openstack coe nodegroup update NoLoadBalancer default-worker replace max_node_count=15
```

and then resize the cluster to the desired value, which was less than 15 in this example:

```
openstack coe cluster resize NoLoadBalancer --nodegroup default-worker 12
```

If you repeat the first statement:

```
openstack coe nodegroup show NoLoadBalancer default-worker -f json -c max_node_count -c node_count -c min_node_count
```

the result now shows the corrected value:
```
{
  "node_count": 12,
  "max_node_count": 15,
  "min_node_count": 1
}
```
How Autoscaling Detects the Upper Limit[](#how-autoscaling-detects-upper-limit "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

The first version of the Autoscaler would take the current upper limit of autoscaling from the *node\_count* variable and add 1 to it. If the command to create a cluster were

```
openstack coe cluster create mycluster --cluster-template mytemplate --node-count 8 --master-count 3
```

that version of the Autoscaler would use the value of **9** (counting as **8 + 1**). However, that procedure was limited to the *default-worker* node group only.

The current Autoscaler can support multiple node groups by detecting the role of the node group:
```
openstack coe nodegroup show NoLoadBalancer default-worker -f json -c role
```

and the result is

```
{
  "role": "worker"
}
```

As long as the role is *worker* and *max\_node\_count* is greater than 0, the Autoscaler will try to scale the *default-worker* node group, adding **1** to *max\_node\_count*.

Attention

Any additional node group must include a concrete *max\_node\_count* attribute.

See Prerequisites No. 4 for detailed examples of using the **openstack coe nodegroup** family of commands.
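The selection rule described above can be sketched locally with **jq**. The JSON below is illustrative sample data (not output from a live cloud, though it mimics the shape of `openstack coe nodegroup list CLUSTER -f json`); the filter keeps only nodegroups whose role contains *worker* and whose *max_node_count* is above 0:

```shell
# Sample nodegroup data; field values are illustrative, not from a live cluster
cat > nodegroups.json <<'EOF'
[
  {"name": "default-worker", "role": "worker", "max_node_count": 2},
  {"name": "nodegroup-with-role", "role": "custom", "max_node_count": 5},
  {"name": "nodegroup-with-role-2", "role": "custom,worker", "max_node_count": 5}
]
EOF

# Keep only groups whose role includes "worker" and whose max_node_count exceeds 0
jq -r '.[] | select((.role | test("worker")) and .max_node_count > 0) | .name' nodegroups.json
```

Running this prints *default-worker* and *nodegroup-with-role-2*; the *custom*-only group is skipped, matching the Autoscaler's behaviour described in this section.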
Autoscaling Labels for Clusters[](#autoscaling-labels-for-clusters "Permalink to this headline")
-------------------------------------------------------------------------------------------------

There are three cluster labels that influence autoscaling:

> * **auto\_scaling\_enabled** – if true, autoscaling is enabled
> * **min\_node\_count** – the minimal number of nodes
> * **max\_node\_count** – the maximal number of nodes, at any time

When defining a cluster through the Horizon interface, you are actually setting up these cluster labels.

|
||||
|
||||
List clusters with **Container Infra** => **Cluster** and click on the name of the cluster. Under *Labels*, you will find the current value for **auto\_scaling\_enabled**.
|
||||
|
||||

|
||||
|
||||
If true, it is enabled, the cluster will autoscale.
|
||||
|
||||
Create New Cluster Using CLI With Autoscaling On[](#create-new-cluster-using-cli-with-autoscaling-on "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------

The command to create a cluster with the CLI must encompass all of the usual parameters as well as **all of the labels** needed for the cluster to function. The peculiarity of the syntax is that the label parameters must form one single string, without any blanks in between.

This is what one such command could look like:
```
openstack coe cluster create mycluster
--cluster-template k8s-stable-1.23.5
--keypair sshkey
--master-count 1
--node-count 3
--labels auto_scaling_enabled=true,autoscaler_tag=v1.22.0,calico_ipv4pool_ipip=Always,cinder_csi_plugin_tag=v1.21.0,cloud_provider_enabled=true,cloud_provider_tag=v1.21.0,container_infra_prefix=registry-public.cloudferro.com/magnum/,eodata_access_enabled=false,etcd_volume_size=8,etcd_volume_type=ssd,hyperkube_prefix=registry-public.cloudferro.com/magnum/,k8s_keystone_auth_tag=v1.21.0,kube_tag=v1.21.5-rancher1,master_lb_floating_ip_enabled=true
```

If you just tried to copy and paste it into the terminal, you would get syntax errors. Line breaks are not allowed; the entire command must be one long string. To make your life easier, here is a version of the command that you *can* copy with success.
Warning

The line containing the labels will be only partially visible on the screen, but once you paste it into the command line, the terminal software will execute it without problems.

The command is:

> **openstack coe cluster create mycluster --cluster-template k8s-stable-1.23.5 --keypair sshkey --master-count 1 --node-count 3 --labels auto\_scaling\_enabled=true,autoscaler\_tag=v1.22.0,calico\_ipv4pool\_ipip=Always,cinder\_csi\_plugin\_tag=v1.21.0,cloud\_provider\_enabled=true,cloud\_provider\_tag=v1.21.0,container\_infra\_prefix=registry-public.cloudferro.com/magnum/,eodata\_access\_enabled=false,etcd\_volume\_size=8,etcd\_volume\_type=ssd,hyperkube\_prefix=registry-public.cloudferro.com/magnum/,k8s\_keystone\_auth\_tag=v1.21.0,kube\_tag=v1.21.5-rancher1,master\_lb\_floating\_ip\_enabled=true,min\_node\_count=2,max\_node\_count=4**

The cluster will be named *mycluster*, with one master node and three worker nodes at the start.
Note

It is mandatory to set the maximal number of nodes for autoscaling. If not specified, **max\_node\_count** will default to 0 and there will be no autoscaling at all for that particular nodegroup.

This is the result after the creation:



Three worker node addresses are active: **10.0.0.102**, **10.0.0.27**, and **10.0.0.194**.

There is no traffic to the cluster, so autoscaling kicked in immediately. A minute or two after the creation finished, the number of worker nodes fell by one, to the addresses **10.0.0.27** and **10.0.0.194** – that is autoscaling at work.
Nodegroups With Worker Role Will Be Automatically Autoscaled[](#nodegroups-with-worker-role-will-be-automatically-autoscalled "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

The Autoscaler automatically detects all new nodegroups with the “worker” role assigned. The “worker” role is assigned by default if not specified. The maximum number of nodes must be specified as well.

First see which nodegroups are present for cluster *k8s-cluster*. The command is
```
openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role
```

The **-c** switch denotes which columns to show, disregarding all columns that are not listed in the command. You will see a table with the columns *name*, *node\_count*, *status* and *role*, which means that columns such as *uuid*, *flavor\_id* and *image\_id* will not take up valuable space onscreen. The result is a table with only the four columns that are relevant to adding nodegroups with roles:



Now add and print a nodegroup without a role:
```
openstack coe nodegroup create k8s-cluster nodegroup-without-role --node-count 1 --min-nodes 1 --max-nodes 5

openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role
```

Since the role was not specified, the default value of “worker” was assigned to node group *nodegroup-without-role*. Since the system is set up to automatically autoscale nodegroups with the *worker* role, a nodegroup added without a role will autoscale.



Now add a node group called *nodegroup-with-role*; the name of its role will be *custom*:
```
openstack coe nodegroup create k8s-cluster nodegroup-with-role --node-count 1 --min-nodes 1 --max-nodes 5 --role custom

openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role
```



That will add a nodegroup but will not autoscale it on its own, as the *worker* role is not specified for the nodegroup.

Finally, add a nodegroup called *nodegroup-with-role-2* which will have two roles defined in one statement, that is, both *custom* and *worker*. Since at least one of the roles is *worker*, it will autoscale automatically.
```
openstack coe nodegroup create k8s-cluster nodegroup-with-role-2 --node-count 1 --min-nodes 1 --max-nodes 5 --role custom,worker

openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role
```



Cluster **k8s-cluster** now has **8** nodes:



You can delete these three nodegroups with the following set of commands:
```
openstack coe nodegroup delete k8s-cluster nodegroup-with-role

openstack coe nodegroup delete k8s-cluster nodegroup-with-role-2

openstack coe nodegroup delete k8s-cluster nodegroup-without-role
```

Once again, see the result:

```
openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role
```


How to Obtain All Labels From the Horizon Interface[](#how-to-obtain-all-labels-from-horizon-interface "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

Use **Container Infra** => **Clusters** and click on the cluster name. You will get plain text in the browser; just copy the rows under **Labels** and paste them into the text editor of your choice.



In the text editor, manually remove the line endings to make one string without breaks or carriage returns, then paste it back into the command.
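The joining step can also be done in the shell instead of a text editor. A small sketch, assuming the copied rows were saved one `key=value` pair per line into a hypothetical file `labels.txt` (the values below are illustrative):

```shell
# labels.txt stands in for the rows copied from Horizon
printf 'auto_scaling_enabled=true\nmin_node_count=2\nmax_node_count=4\n' > labels.txt

# Join the lines into the single comma-separated string that --labels expects
paste -sd, labels.txt
```

The output is one line, `auto_scaling_enabled=true,min_node_count=2,max_node_count=4`, ready to append to `--labels`.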
How To Obtain All Labels From the CLI[](#how-to-obtain-all-labels-from-the-cli "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

There is a special command which will produce the labels from a cluster template:

```
openstack coe cluster template show k8s-stable-1.23.5 -c labels -f yaml
```

This is the result:



That is *yaml* format, as specified by the **-f** parameter. The rows represent label values; your next step is to create one long string without line breaks, as in the previous example, and then form the CLI command.
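That conversion can also be scripted. A hedged sketch, assuming the YAML output has the shape of a top-level `labels:` key followed by indented, quoted `key: value` rows (the sample below is illustrative):

```shell
# Stand-in for the 'openstack coe cluster template show ... -c labels -f yaml' output
printf 'labels:\n  auto_scaling_enabled: "true"\n  max_node_count: "4"\n' > labels.yaml

# Skip the top-level 'labels:' line, strip quotes and spaces,
# turn the first ':' into '=', and join everything with commas
awk 'NR>1 { gsub(/[" ]/, ""); sub(/:/, "="); printf "%s%s", sep, $0; sep="," } END { print "" }' labels.yaml
```

For the sample above this prints `auto_scaling_enabled=true,max_node_count=4`, the format expected by `--labels`.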
Use Labels String When Creating Cluster in Horizon[](#use-labels-string-when-creating-cluster-in-horizon "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

The long labels string can also be used when creating the cluster manually, i.e. from the Horizon interface. The place to insert those labels is described in *Step 4 Define Labels* in Prerequisites No. 2.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Autoscaling is similar to autohealing of Kubernetes clusters, and both bring automation to the table. They also guarantee that the system will autocorrect as long as it stays within its basic parameters. Use autoscaling of cluster resources as much as you can!
Backup of Kubernetes Cluster using Velero[](#backup-of-kubernetes-cluster-using-velero "Permalink to this headline")
=====================================================================================================================

What is Velero[](#what-is-velero "Permalink to this headline")
---------------------------------------------------------------

[Velero](https://velero.io) is an official open source project from VMware. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed-up objects can be restored on the same cluster, or on a new one. Using a package like Velero is essential for any serious development on a Kubernetes cluster.

In essence, you create an object store under OpenStack, either using Horizon or the Swift module of the **openstack** command, and then save the cluster state into it. Restoring is the same in reverse – read from that object store and save to a Kubernetes cluster.

Velero has its own CLI command system, so it is possible to automate the creation of backups using cron jobs.
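Such a cron job could look like the fragment below. This is a sketch of ours, not part of the Velero installation described later: the binary path and backup naming are assumptions, and the `schedules:` section of values.yaml (shown later in this article) is usually the simpler route.

```
# crontab entry: create a Velero backup every day at 03:00, kept for 10 days (240 h).
# The date suffix keeps backup names unique; % must be escaped in crontab lines.
0 3 * * * /usr/local/bin/velero backup create nightly-$(date +\%F) --ttl 240h
```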
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Getting EC2 client credentials
> * Adjusting “values.yaml”, the configuration file
> * Creating a namespace called *velero* for precise access to the Kubernetes cluster
> * Installing Velero with a Helm chart
> * Creating and deleting backups using Velero
> * Example 1 Basics of Restoring an Application
> * Example 2 Snapshot of Restoring an Application
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

The resources that you require and use will be reflected in the state of your account wallet. Check your account statistics at <https://portal.cloudferro.com/>.

No. 2 **How to Access Kubernetes cluster post-deployment**

We shall also assume that you have one or more Kubernetes clusters ready and accessible via the **kubectl** command:

[How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

Working through that article results in setting up the system variable **KUBECONFIG**, which points to the configuration file for access to the Kubernetes cloud. A typical command will be:
```
export KUBECONFIG=/home/username/Desktop/kubernetes/k8sdir/config
```

If this is the first time you are using that particular config file, make it more secure by executing the following command as well:

```
chmod 600 /home/username/Desktop/kubernetes/k8sdir/config
```

No. 3 **Handling Helm**

To install Velero, we shall use Helm:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html).
No. 4 **An object storage S3 bucket available**

To create one, you can access object storage with the Horizon interface or the CLI.

Horizon commands
: [How to use Object Storage on CloudFerro Cloud](../s3/How-to-use-Object-Storage-on-CloudFerro-Cloud.html).

CLI
: You can also use a command such as

```
openstack container
```

to work with object storage. For more information, see [How to access object storage using OpenStack CLI on CloudFerro Cloud](../openstackcli/How-to-access-object-storage-using-OpenStack-CLI-on-CloudFerro-Cloud.html).

Either way, we shall assume that there is a container called “bucketnew”:



Supply your own unique name while working through this article.
Before Installing Velero[](#before-installing-velero "Permalink to this headline")
-----------------------------------------------------------------------------------

We shall install Velero on Ubuntu 22.04; using other Linux distributions would be similar.

Update and upgrade your Ubuntu environment:

```
sudo apt update && sudo apt upgrade
```

You will need access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
### Installation step 1 Getting EC2 client credentials[](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")

First fetch EC2 credentials from OpenStack. They are necessary to access the private bucket (container). Generate them on your own by executing the following commands:

```
openstack ec2 credentials create
openstack ec2 credentials list
```

Save the *Access Key* and the *Secret Key* somewhere. They will be needed in the next step, in which you set up the Velero configuration file.
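If you prefer to capture the keys in shell variables, you can parse the JSON output of the list command. The sketch below works on a saved sample of that output (the field names `Access` and `Secret` and the key values are our assumptions, shown here only to illustrate the jq extraction):

```shell
# Sample shape of 'openstack ec2 credentials list -f json' output;
# keys and values here are illustrative, not real credentials
cat > creds.json <<'EOF'
[{"Access": "c4b4ee62a18f4e0b", "Secret": "dee1581dac214d3d"}]
EOF

# Pull the first credential pair into environment-style variables
AWS_ACCESS_KEY_ID=$(jq -r '.[0].Access' creds.json)
AWS_SECRET_ACCESS_KEY=$(jq -r '.[0].Secret' creds.json)
echo "$AWS_ACCESS_KEY_ID"
```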
### Installation step 2 Adjust the configuration file - “values.yaml”[](#installation-step-2-adjust-the-configuration-file-values-yaml "Permalink to this headline")

Now create or adjust a configuration file for Velero. Use a text editor of your choice to create that file. On macOS or Linux, for example, you can use **nano**, like this:

```
sudo nano values.yaml
```

Use the configuration file provided below. Fill in the required fields, which are marked with **##**:

**values.yaml**

Choose the variant for your cloud region; the four code blocks below correspond to the WAW4-1, WAW3-1, WAW3-2 and FRA1-2 regions, respectively:
> ```
> initContainers:
>   - name: velero-plugin-for-aws
>     image: velero/velero-plugin-for-aws:v1.4.0
>     imagePullPolicy: IfNotPresent
>     volumeMounts:
>       - mountPath: /target
>         name: plugins
>
> configuration:
>   provider: aws
>   backupStorageLocation:
>     provider: aws
>     name: ## enter name of backup storage location (could be anything)
>     bucket: ## enter name of bucket created in openstack
>     default: true
>     config:
>       region: default
>       s3ForcePathStyle: true
>       s3Url: ## enter URL of object storage (for example "https://s3.waw4-1.cloudferro.com")
> credentials:
>   secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a kubernetes secret.
>     cloud: |
>       [default]
>       aws_access_key_id=
>       aws_secret_access_key=
>   ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.
> snapshotsEnabled: false
> deployRestic: true
> restic:
>   podVolumePath: /var/lib/kubelet/pods
>   privileged: true
> schedules:
>   mybackup:
>     disabled: false
>     schedule: "0 6,18 * * *" ## choose the time when scheduled backups will be made.
>     template:
>       ttl: "240h" ## choose the ttl after which the backups will be removed.
>       snapshotVolumes: false
>
> ```
```
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: ## enter name of backup storage location (could be anything)
    bucket: ## enter name of bucket created in openstack
    default: true
    config:
      region: waw3-1
      s3ForcePathStyle: true
      s3Url: ## enter URL of object storage (for example "https://s3.waw3-1.cloudferro.com")
credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=
      aws_secret_access_key=
  ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 6,18 * * *" ## choose the time when scheduled backups will be made.
    template:
      ttl: "240h" ## choose the ttl after which the backups will be removed.
      snapshotVolumes: false
```
```
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: ## enter name of backup storage location (could be anything)
    bucket: ## enter name of bucket created in openstack
    default: true
    config:
      region: default
      s3ForcePathStyle: true
      s3Url: ## enter URL of object storage (for example "https://s3.waw3-2.cloudferro.com")
credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=
      aws_secret_access_key=
  ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 6,18 * * *" ## choose the time when scheduled backups will be made.
    template:
      ttl: "240h" ## choose the ttl after which the backups will be removed.
      snapshotVolumes: false
```
```
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: ## enter name of backup storage location (could be anything)
    bucket: ## enter name of bucket created in openstack
    default: true
    config:
      region: default
      s3ForcePathStyle: true
      s3Url: ## enter URL of object storage (for example "https://s3.fra1-2.cloudferro.com")
credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id=
      aws_secret_access_key=
  ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.
snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 6,18 * * *" ## choose the time when scheduled backups will be made.
    template:
      ttl: "240h" ## choose the ttl after which the backups will be removed.
      snapshotVolumes: false
```
Paste the content to the configuration file **values.yaml** and save.
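For reference, the `schedule` field uses standard five-field cron syntax (minute, hour, day of month, month, day of week): `0 6,18 * * *` fires at 06:00 and 18:00 every day, and a `ttl` of `240h` keeps each backup for 10 days. A quick local sanity check of both values (an illustration only, not part of the Velero setup):

```python
from datetime import timedelta

# Five cron fields: minute, hour, day-of-month, month, day-of-week
minute, hour, dom, month, dow = "0 6,18 * * *".split()
fire_hours = [int(h) for h in hour.split(",")]
print(fire_hours)  # [6, 18] -> backups run at 06:00 and 18:00

# ttl "240h" -> retention period expressed in days
ttl = timedelta(hours=int("240h".rstrip("h")))
print(ttl.days)  # 10
```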
|
||||
|
||||
Example of an already configured file:
|
||||
|
||||
The examples below are given for four regions: WAW4-1, WAW3-1, WAW3-2 and FRA1-2.
|
||||
|
||||
```
|
||||
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: velerobackupnew
    bucket: bucketnew
    default: true
    config:
      region: default
      s3ForcePathStyle: true
      s3Url: https://s3.waw4-1.cloudferro.com

credentials:
  secretContents: ## enter access and secret key to EC2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key= dee1581dac214d3dsa34037e826f9148
  # existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.

snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false
```
|
||||
|
||||
```
|
||||
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: velerobackupnew
    bucket: bucketnew
    default: true
    config:
      region: waw3-1
      s3ForcePathStyle: true
      s3Url: https://s3.waw3-1.cloudferro.com

credentials:
  secretContents: ## enter access and secret key to EC2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key= dee1581dac214d3dsa34037e826f9148
  # existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.

snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false
```
|
||||
|
||||
```
|
||||
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: velerobackupnew
    bucket: bucketnew
    default: true
    config:
      region: default
      s3ForcePathStyle: true
      s3Url: https://s3.waw3-2.cloudferro.com

credentials:
  secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key= dee1581dac214d3dsa34037e826f9148
  # existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.

snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false
```
|
||||
|
||||
```
|
||||
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.4.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  provider: aws
  backupStorageLocation:
    provider: aws
    name: velerobackupnew
    bucket: bucketnew
    default: true
    config:
      region: default
      s3ForcePathStyle: true
      s3Url: https://s3.fra1-2.cloudferro.com

credentials:
  secretContents: ## enter access and secret key to EC2 bucket. This configuration will create a Kubernetes secret.
    cloud: |
      [default]
      aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x
      aws_secret_access_key= dee1581dac214d3dsa34037e826f9148
  # existingSecret: ## If you want to use an existing secret, created from a sealed secret, use this variable and omit credentials.secretContents.

snapshotsEnabled: false
deployRestic: true
restic:
  podVolumePath: /var/lib/kubelet/pods
  privileged: true
schedules:
  mybackup:
    disabled: false
    schedule: "0 * * * *"
    template:
      ttl: "168h"
      snapshotVolumes: false
```
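The `s3Url` values in the examples above follow one pattern per region. Assuming the same naming convention holds for other CloudFerro Cloud regions (verify against your region's documentation before relying on it), the URL can be derived from the region name:

```python
def s3_url(region: str) -> str:
    # Assumed convention, generalized from the four regions shown above
    return f"https://s3.{region.lower()}.cloudferro.com"

print(s3_url("WAW4-1"))  # https://s3.waw4-1.cloudferro.com
print(s3_url("FRA1-2"))  # https://s3.fra1-2.cloudferro.com
```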
|
||||
|
||||
### Installation step 3 Creating namespace[](#installation-step-3-creating-namespace "Permalink to this headline")
|
||||
|
||||
Velero must be installed in an eponymous namespace, *velero*. This is the command to create it:
|
||||
|
||||
```
|
||||
kubectl create namespace velero
|
||||
namespace/velero created
|
||||
|
||||
```
|
||||
|
||||
### Installation step 4 Installing Velero with a Helm chart[](#installation-step-4-installing-velero-with-a-helm-chart "Permalink to this headline")
|
||||
|
||||
Here are the commands to install Velero by means of a Helm chart:
|
||||
|
||||
```
|
||||
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
|
||||
|
||||
```
|
||||
|
||||
The output is:
|
||||
|
||||
```
|
||||
"vmware-tanzu" has been added to your repositories
|
||||
|
||||
```
|
||||
|
||||
The following command will install Velero on the cluster:
|
||||
|
||||
```
|
||||
helm install vmware-tanzu/velero --namespace velero --version 2.28 -f values.yaml --generate-name
|
||||
|
||||
```
|
||||
|
||||
The output will look like this:
|
||||
|
||||

|
||||
|
||||
To see the version of Velero that is actually installed, use:
|
||||
|
||||
```
|
||||
helm list --namespace velero
|
||||
|
||||
```
|
||||
|
||||
Note the release name, **velero-1721031498**; we are going to use it in the rest of this article. In your case, note your own release name and substitute it for **1721031498**.
|
||||
|
||||
Here is how to check that Velero is up and running:
|
||||
|
||||
```
|
||||
kubectl get deployment/velero-1721031498 -n velero
|
||||
|
||||
```
|
||||
|
||||
The output will be similar to this:
|
||||
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
velero-1721031498 1/1 1 1 5m30s
|
||||
|
||||
```
|
||||
|
||||
Check that the secret has been created:
|
||||
|
||||
```
|
||||
kubectl get secret/velero-1721031498 -n velero
|
||||
|
||||
```
|
||||
|
||||
The result is:
|
||||
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
velero-1721031498 Opaque 1 3d1h
|
||||
|
||||
```
|
||||
|
||||
### Installation step 5 Installing Velero CLI[](#installation-step-5-installing-velero-cli "Permalink to this headline")
|
||||
|
||||
The final step is to install Velero CLI – Command Line Interface suitable for working from the terminal window on your operating system.
|
||||
|
||||
Download the client for your operating system from <https://github.com/vmware-tanzu/velero/releases>, using **wget**. Here we are downloading version
|
||||
|
||||
> **velero-v1.9.1-linux-amd64.tar.gz**
|
||||
|
||||
but it is recommended to download the latest version. In that case, change the name of the **tar.gz** file accordingly.
|
||||
|
||||
```
|
||||
wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.1/velero-v1.9.1-linux-amd64.tar.gz
|
||||
|
||||
```
|
||||
|
||||
Extract the tarball:
|
||||
|
||||
```
|
||||
tar -xvf velero-v1.9.1-linux-amd64.tar.gz
|
||||
|
||||
```
|
||||
|
||||
This is the expected result:
|
||||
|
||||
```
|
||||
velero-v1.9.1-linux-amd64/LICENSE
|
||||
velero-v1.9.1-linux-amd64/examples/README.md
|
||||
velero-v1.9.1-linux-amd64/examples/minio
|
||||
velero-v1.9.1-linux-amd64/examples/minio/00-minio-deployment.yaml
|
||||
velero-v1.9.1-linux-amd64/examples/nginx-app
|
||||
velero-v1.9.1-linux-amd64/examples/nginx-app/README.md
|
||||
velero-v1.9.1-linux-amd64/examples/nginx-app/base.yaml
|
||||
velero-v1.9.1-linux-amd64/examples/nginx-app/with-pv.yaml
|
||||
velero-v1.9.1-linux-amd64/velero
|
||||
|
||||
```
|
||||
|
||||
Move the extracted **velero** binary to somewhere in your $PATH (/usr/local/bin for most users):
|
||||
|
||||
```
|
||||
cd velero-v1.9.1-linux-amd64
|
||||
# System might force using sudo
|
||||
sudo mv velero /usr/local/bin
|
||||
# check if velero is working
|
||||
velero version
|
||||
|
||||
```
|
||||
|
||||

|
||||
|
||||
After these operations, you should be able to use **velero** commands. For help on how to use them, execute:
|
||||
|
||||
```
|
||||
velero help
|
||||
|
||||
```
|
||||
|
||||
Working with Velero[](#working-with-velero "Permalink to this headline")
|
||||
-------------------------------------------------------------------------
|
||||
|
||||
So far, we have
|
||||
|
||||
> * created an object store named “bucketnew” and
|
||||
> * told Velero to use it through the **bucket:** parameter in the values.yaml file.
|
||||
|
||||
Velero will create a folder called *backups* under “bucketnew” and will then keep adding entries for individual backups. For example, the following command will add a backup called **mybackup2**:
|
||||
|
||||
> ```
|
||||
> velero backup create mybackup2
|
||||
> Backup request "mybackup2" submitted successfully.
|
||||
>
|
||||
> ```
|
||||
|
||||
Here is what it will look like in Horizon:
|
||||
|
||||

|
||||
|
||||
Let us add two other backups. The first backs up all API objects in the *velero* namespace:
|
||||
|
||||
```
|
||||
velero backup create mybackup3 --include-namespaces velero
|
||||
|
||||
```
|
||||
|
||||
The second backs up all API objects in the *default* namespace:
|
||||
|
||||
```
|
||||
velero backup create mybackup5 --include-namespaces default
Backup request "mybackup5" submitted successfully.
|
||||
|
||||
```
|
||||
|
||||
This is the object store structure after these three backups:
|
||||
|
||||

|
||||
|
||||
You can also use the Velero CLI to list the existing backups:
|
||||
|
||||
```
|
||||
velero backup get
|
||||
|
||||
```
|
||||
|
||||
This is the result in terminal window:
|
||||
|
||||

|
||||
|
||||
Example 1 Basics of Restoring an Application[](#example-1-basics-of-restoring-an-application "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Let us now demonstrate how to restore a Kubernetes application. First, clone an example app from GitHub:
|
||||
|
||||
```
|
||||
git clone https://github.com/vmware-tanzu/velero.git
|
||||
Cloning into 'velero'...
|
||||
Resolving deltas: 100% (27049/27049), done.
|
||||
cd velero
|
||||
|
||||
```
|
||||
|
||||
Start the sample nginx app:
|
||||
|
||||
```
|
||||
kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example unchanged
deployment.apps/nginx-deployment unchanged
service/my-nginx unchanged
||||
```
|
||||
|
||||
Create a backup:
|
||||
|
||||
```
|
||||
velero backup create nginx-backup --include-namespaces nginx-example
|
||||
Backup request "nginx-backup" submitted successfully.
|
||||
|
||||
```
|
||||
|
||||
This is what the backup of **nginx-backup** looks like in Horizon:
|
||||
|
||||

|
||||
|
||||
Simulate a disaster:
|
||||
|
||||
```
|
||||
kubectl delete namespaces nginx-example
|
||||
# Wait for the namespace to be deleted
|
||||
namespace "nginx-example" deleted
|
||||
|
||||
```
|
||||
|
||||
Restore your lost resources:
|
||||
|
||||
```
|
||||
velero restore create --from-backup nginx-backup
|
||||
Restore request "nginx-backup-20220728013338" submitted successfully.
|
||||
Run `velero restore describe nginx-backup-20220728013338` or `velero restore logs nginx-backup-20220728013338` for more details.
|
||||
|
||||
velero backup get
|
||||
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
|
||||
backup New 0 0 <nil> n/a <none>
|
||||
nginx-backup New 0 0 <nil> n/a <none>
|
||||
|
||||
```
|
||||
|
||||
Example 2 Snapshot of restoring an application[](#example-2-snapshot-of-restoring-an-application "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Start the sample nginx app:
|
||||
|
||||
```
|
||||
kubectl apply -f examples/nginx-app/with-pv.yaml
|
||||
namespace/nginx-example created
|
||||
persistentvolumeclaim/nginx-logs created
|
||||
deployment.apps/nginx-deployment created
|
||||
service/my-nginx created
|
||||
|
||||
```
|
||||
|
||||
Create a backup with PV snapshotting:
|
||||
|
||||
```
|
||||
velero backup create nginx-backup-vp --include-namespaces nginx-example
Backup request "nginx-backup-vp" submitted successfully.
Run `velero backup describe nginx-backup-vp` or `velero backup logs nginx-backup-vp` for more details.
|
||||
|
||||
```
|
||||
|
||||
Simulate a disaster:
|
||||
|
||||
```
|
||||
kubectl delete namespaces nginx-example
|
||||
namespace "nginx-example" deleted
|
||||
|
||||
```
|
||||
|
||||
Important
|
||||
|
||||
Because the default [reclaim policy](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) for dynamically-provisioned PVs is “Delete”, these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time.
|
||||
|
||||
Restore your lost resources:
|
||||
|
||||
```
|
||||
velero restore create --from-backup nginx-backup-vp
|
||||
Restore request "nginx-backup-20220728015234" submitted successfully.
|
||||
Run `velero restore describe nginx-backup-20220728015234` or `velero restore logs nginx-backup-20220728015234` for more details.
|
||||
|
||||
```
|
||||
|
||||
Delete a Velero backup[](#delete-a-velero-backup "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------
|
||||
|
||||
There are two ways to delete a backup made by Velero.
|
||||
|
||||
Delete backup custom resource only
|
||||
: ```
|
||||
kubectl delete backup <backupName> -n <veleroNamespace>
|
||||
|
||||
```
|
||||
|
||||
will delete the backup custom resource only and will not delete any associated data from object/block storage.
|
||||
|
||||
Delete all data in object/block storage
|
||||
: ```
|
||||
velero backup delete <backupName>
|
||||
|
||||
```
|
||||
|
||||
will delete the backup resource, including all data in object/block storage.
|
||||
|
||||
Removing Velero from the cluster[](#removing-velero-from-the-cluster "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------------
|
||||
|
||||
### Uninstall Velero[](#uninstall-velero "Permalink to this headline")
|
||||
|
||||
To uninstall Velero release:
|
||||
|
||||
```
|
||||
helm uninstall velero-1721031498 --namespace velero
|
||||
|
||||
```
|
||||
|
||||
### To delete Velero namespace[](#to-delete-velero-namespace "Permalink to this headline")
|
||||
|
||||
```
|
||||
kubectl delete namespace velero
|
||||
|
||||
```
|
||||
|
||||
What To Do Next[](#what-to-do-next "Permalink to this headline")
|
||||
-----------------------------------------------------------------
|
||||
|
||||
Now that Velero is up and running, you can integrate it into your routine. It will be useful in all classic backup scenarios – disaster recovery, cluster and namespace migration, testing and development, application rollbacks, compliance and auditing, and so on. Apart from these broad use cases, Velero will help with specific Kubernetes backup tasks, such as:
|
||||
|
||||
> * backing up and restoring deployments, service, config maps and secrets,
|
||||
> * selective backups, say, only for specific namespaces or label selectors,
|
||||
> * volume snapshots using cloud provider APIs (AWS, Azure, GCP etc.)
|
||||
> * snapshots of persistent volumes for point-in-time recovery
|
||||
> * saving backup data to AWS S3, Google Cloud Storage, Azure Blob Storage etc.
|
||||
> * integration with **kubectl** command so that Custom Resource Definitions (CRDs) are used to define backup and restore configuration.
|
||||
@ -0,0 +1,242 @@
|
||||
CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image[](#ci-cd-pipelines-with-gitlab-on-brand-name-kubernetes-building-a-docker-image "Permalink to this headline")
|
||||
===================================================================================================================================================================================================
|
||||
|
||||
GitLab provides an isolated, private code registry and space for collaboration on code by teams. It also offers a broad range of code deployment automation capabilities. In this article, we will explain how to automate building a Docker image of your app.
|
||||
|
||||
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------
|
||||
|
||||
> * Add your public key to GitLab and access GitLab from your command line
|
||||
> * Create project in GitLab and add sample application code
|
||||
> * Define environment variables with your DockerHub coordinates in GitLab
|
||||
> * Create pipeline to build your app’s Docker image using Kaniko
|
||||
> * Trigger pipeline build
|
||||
|
||||
Prerequisites[](#prerequisites "Permalink to this headline")
|
||||
-------------------------------------------------------------
|
||||
|
||||
No. 1 **Account**
|
||||
|
||||
You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.
|
||||
|
||||
No. 2 **Kubernetes cluster**
|
||||
|
||||
[How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)
|
||||
|
||||
No. 3 **Local version of GitLab available**
|
||||
|
||||
Your local instance of GitLab is available and properly accessible by your GitLab user.
|
||||
|
||||
In this article we assume the setup according to this article [Install GitLab on CloudFerro Cloud Kubernetes](Install-GitLab-on-CloudFerro-Cloud-Kubernetes.html). If you use a different instance of GitLab, there can be some differences e.g. where certain functionalities are located in the GUI.
|
||||
|
||||
In this article, we shall be using **gitlab.mysampledomain.info** as the GitLab instance. Be sure to replace it with your own domain.
|
||||
|
||||
No. 4 **git CLI operational**
|
||||
|
||||
**git** command installed locally. You may use it with [GitHub](https://github.com/git-guides/install-git), [GitLab](https://docs.gitlab.com/ee/topics/git/how_to_install_git/) and other source control platforms based on **git**.
|
||||
|
||||
No. 5 **Account at DockerHub**
|
||||
|
||||
> [Access to your DockerHub](https://hub.docker.com/) (or another container image registry).
|
||||
|
||||
No. 6 **Using Kaniko**
|
||||
|
||||
[kaniko](https://docs.gitlab.com/ee/ci/docker/using_kaniko.html)
|
||||
is a tool to build container images from a provided Dockerfile. For a more elaborate overview of kaniko, refer to its documentation.
|
||||
|
||||
No. 7 **Private and public keys available**
|
||||
|
||||
To connect to our GitLab instance we need a combination of a private and a public key. You can use any key pair, one option is to use OpenStack Horizon to create one. For reference see:
|
||||
|
||||
See [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html)
|
||||
|
||||
Here, we use the key pair to connect to GitLab instance that we previously installed in Prerequisite No. 3.
|
||||
|
||||
Step 1 Add your public key to GitLab and access GitLab from your command line[](#step-1-add-your-public-key-to-gitlab-and-access-gitlab-from-your-command-line "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
GitLab uses SSH-based authentication for access from the command line. To ensure your console uses these keys for authentication by default, store your keys in the **~/.ssh** folder under the names **id\_rsa** (private key) and **id\_rsa.pub** (public key).
|
||||
|
||||
The public key should then be added to the authorized keys in GitLab GUI. To add the public key, click on your avatar icon:
|
||||
|
||||

|
||||
|
||||
Then scroll to “Preferences”, choose “SSH Keys” from the left menu and paste the contents of your public key into the “Key” field.
|
||||
|
||||

|
||||
|
||||
If the GitLab instance you are using is hosted, say, on domain **mysampledomain.info**, you can use a command like this
|
||||
|
||||
```
|
||||
ssh -T [email protected]
|
||||
|
||||
```
|
||||
|
||||
to verify that you have access to GitLab from CLI interface.
|
||||
|
||||
You should see an output similar to the following:
|
||||
|
||||

|
||||
|
||||
Step 2 Create project in GitLab and add sample application code[](#step-2-create-project-in-gitlab-and-add-sample-application-code "Permalink to this headline")
|
||||
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
We will first add a sample application in GitLab. This is a minimal Python-Flask application, its code can be downloaded from this CloudFerro Cloud [GitHub repository accompanying this Knowledge Base](https://github.com/CloudFerro/K8s-samples/tree/main/HelloWorld-Docker-image-Flask).
|
||||
|
||||
As a first step in this section, we will initiate the GitLab remote origin. Login to GitLab GUI and enter the default screen, click on button “New Project”, then “Create blank project”. It will transfer you to the view below.
|
||||
|
||||

|
||||
|
||||
In that view, project URL will be pre-filled and corresponding to the URL of your GitLab instance.
|
||||
In the place denoted with a red rectangle, you should enter your user name; usually, it will be **root** but can be anything else.
|
||||
If there already are some users defined in GitLab, their names will appear in a drop-down menu.
|
||||
|
||||

|
||||
|
||||
Enter your preferred project name and slug, in our case “GitLabCI Sample” and “GitLabCI-sample”, respectively. Choose the visibility level to your preference. Uncheck the box “Initialize repository with a README”, because we will initiate the repository from the existing code. (We are not initializing the repo, we are only establishing the project in the origin.)
|
||||
|
||||
After submitting the “Create project” form, you will receive a list of commands to work with your repo. Review them and switch to the CLI. Clone the entire CloudFerro K8s samples repo, then extract the sub-folder called *HelloWorld-Docker-image-Flask*. For clarity, we move its contents to a new folder, **GitLabCI-sample**. Use
|
||||
|
||||
```
|
||||
mkdir ~/GitLabCI-sample
|
||||
|
||||
```
|
||||
|
||||
if this is your first time working through this article, so that the folder is ready for the following set of commands:
|
||||
|
||||
```
|
||||
git clone https://github.com/CloudFerro/K8s-samples
|
||||
mv ~/K8s-samples/HelloWorld-Docker-image-Flask/* ~/GitLabCI-sample
|
||||
rm -rf K8s-samples/
|
||||
|
||||
```
|
||||
|
||||
After the above sequence of steps, we have folder **GitLabCI-sample** with 3 files:
|
||||
|
||||
> * **app.py** which is our Python Flask application code,
|
||||
> * a **Dockerfile** and
|
||||
> * the dependencies file **requirements.txt**.
|
||||
|
||||
We can then **cd** into this folder, initialize git repo, commit locally and push to the remote with the following commands (replace domain and username):
|
||||
|
||||
```
|
||||
cd GitLabCI-sample
|
||||
git init
|
||||
git remote add origin [email protected]:myusername/GitLabCI-sample.git
|
||||
git add .
|
||||
git commit -m "First commit"
|
||||
git push origin master
|
||||
|
||||
```
|
||||
|
||||
Most likely, the user name **myusername** here will be just **root**.
|
||||
|
||||
When we enter GitLab GUI, we can see that our changes are committed:
|
||||
|
||||

|
||||
|
||||
Step 3 Define environment variables with your DockerHub coordinates in GitLab[](#step-3-define-environment-variables-with-your-dockerhub-coordinates-in-gitlab "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
We want to create a CI/CD pipeline that will, upon a new commit, build a Docker image of our app and push it to Docker Hub container registry. Let us use environment variables in GitLab to enable connection to the Docker registry. Use the following keys and values:
|
||||
|
||||
```
|
||||
CI_COMMIT_REF_SLUG=latest
|
||||
CI_REGISTRY=https://index.docker.io/v1/
|
||||
CI_REGISTRY_IMAGE=index.docker.io/yourdockerhubuser/gitlabci-sample
|
||||
CI_REGISTRY_USER=yourdockerhubuser
|
||||
CI_REGISTRY_PASSWORD=yourdockerhubrepo
|
||||
|
||||
```
|
||||
|
||||
The first two, **CI\_COMMIT\_REF\_SLUG** and **CI\_REGISTRY** are hardcoded for DockerHub. The other three are:
|
||||
|
||||
CI\_REGISTRY\_IMAGE
|
||||
: The name of Docker image to be created. Enter your user name for Docker Hub site (*yourdockerhubuser*). If, for instance, the user name is *paultur*, the image in Docker registry will be **/paultur/gitlabci-sample**, as seen at the end of this article.
|
||||
|
||||
CI\_REGISTRY\_USER
|
||||
: Enter *yourdockerhubuser* which, again, is your user name in Docker Hub.
|
||||
|
||||
CI\_REGISTRY\_PASSWORD
|
||||
: Enter *yourdockerhubrepo*, which can be your account password or a specially created *access token*. To create such a token, see option **Account Settings** –> **Security** on the Docker site:
|
||||
|
||||

|
||||
|
||||
Back to GitLab UI, from menu **Settings** in project view, go to **CI/CD** submenu:
|
||||
|
||||

|
||||
|
||||
Scroll down to the section “Variables” and fill in the respective forms. In the GUI, this will look similar to this:
|
||||
|
||||

|
||||
|
||||
Now that the values of variables are set up, we will use them in our CI/CD pipeline.
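For orientation, the pipeline defined in the next step combines **CI\_REGISTRY\_IMAGE** and **CI\_COMMIT\_REF\_SLUG** into the full reference of the image it pushes. With the values above, that resolves as follows (a plain illustration, not part of the setup):

```python
# Values entered as GitLab CI/CD variables above
CI_REGISTRY_IMAGE = "index.docker.io/yourdockerhubuser/gitlabci-sample"
CI_COMMIT_REF_SLUG = "latest"

# Kaniko's --destination is "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"
destination = f"{CI_REGISTRY_IMAGE}:{CI_COMMIT_REF_SLUG}"
print(destination)  # index.docker.io/yourdockerhubuser/gitlabci-sample:latest
```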
|
||||
|
||||
Step 4 Create a pipeline to build your app’s Docker image using Kaniko[](#step-4-create-a-pipeline-to-build-your-app-s-docker-image-using-kaniko "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
The CI/CD pipeline that we are creating in GitLab will have only one job that
|
||||
|
||||
> * builds the image and
|
||||
> * pushes it to the Docker image registry.
|
||||
|
||||
In real-life scenarios, pipelines would also include additional jobs, e.g. related to unit or integration tests.
|
||||
|
||||
GitLab recognizes that a repository/project is configured to implement a CI/CD pipeline by the presence of the **.gitlab-ci.yml** file at the root of the project. One could apply the CI/CD to the project also from GitLab GUI (CI/CD menu entry → Pipelines), using one of the provided default templates. However the result will be, similarly, adding a specifically configured **.gitlab-ci.yml** file to the root of the project.
|
||||
|
||||
Now create a **.gitlab-ci.yml** file with the contents below and place it in the folder **GitLabCI-sample**. The file contains the configuration of our pipeline and defines a single job called **docker\_image\_build**.
|
||||
|
||||
**.gitlab-ci.yml**
|
||||
|
||||
```
|
||||
docker_image_build:
  image:
    name: gcr.io/kaniko-project/executor:v1.14.0-debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\" }}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --cache=false
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"
```
|
||||
|
||||
When changes to our project are committed to GitLab, the CI/CD pipeline is triggered to run automatically.
|
||||
|
||||
The jobs are executed by a GitLab runner. If you are using the GitLab instance set up in Prerequisite No. 3 **Local version of GitLab available**, the default runner will have already been deployed in the cluster. In this case, the runner deploys a short-lived pod dedicated to running this specific pipeline. One of the containers running in the pod is based on the Kaniko image and is used to build the Docker image of our app.
|
||||
|
||||
There are two key commands in the *script* key; they run when the Kaniko container starts. Both take their values from the environment variables we previously entered into GitLab.
|
||||
|
||||
Fill in and save the contents of a standardized configuration file
|
||||
: The first command fills in and saves the contents of **config.json**, which is a standardized configuration file used for authenticating to DockerHub.
|
||||
|
||||
Build and publish the container image to DockerHub
|
||||
: The second command builds and publishes the container image to DockerHub.
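The `auth` value that the first command writes into **/kaniko/.docker/config.json** is simply the base64 encoding of `user:password`. The sketch below reproduces the same string in Python with placeholder credentials, so you can check locally what the `echo` line generates:

```python
import base64
import json

# Placeholder credentials -- in the pipeline these come from the
# CI_REGISTRY_USER and CI_REGISTRY_PASSWORD variables set earlier
user, password = "yourdockerhubuser", "yourdockerhubtoken"
auth = base64.b64encode(f"{user}:{password}".encode()).decode()

# Same structure as the config.json written by the pipeline
config = {"auths": {"https://index.docker.io/v1/": {"auth": auth}}}
print(json.dumps(config))
```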
|
||||
|
||||
Step 5 Trigger pipeline build[](#step-5-trigger-pipeline-build "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------
|
||||
|
||||
A commit triggers the pipeline to run. After adding the file, publish changes to the repository with the following set of commands:
|
||||
|
||||
```
|
||||
git add .
|
||||
git commit -m "Add .gitlab-ci.yml"
|
||||
git push origin master
|
||||
|
||||
```
|
||||
|
||||
After this commit, if we switch to the CI/CD screen of our project, we should see that the pipeline is first in running status and completes afterwards:
|
||||
|
||||

|
||||
|
||||
When browsing our Docker registry, we can also see that the image has been published:
|
||||
|
||||

|
||||
|
||||
What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Add your unit and integration tests to this pipeline. They can be added as additional steps in the **.gitlab-ci.yml** file. A complete reference can be found here: <https://docs.gitlab.com/ee/ci/yaml/>
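For example, a minimal test job could be inserted before the build job (the job name, image and commands here are assumptions; adapt them to your project):

```
test-app:
  stage: test
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - pytest
```

Remember to list the *test* stage in the top-level `stages:` key so it runs before the build job.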
@ -0,0 +1,235 @@
Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud[](#configuring-ip-whitelisting-for-openstack-load-balancer-using-horizon-and-cli-on-brand-name "Permalink to this headline")
===============================================================================================================================================================================================================================

This guide explains how to configure IP whitelisting (**allowed\_cidrs**) on an existing OpenStack Load Balancer using Horizon and CLI commands. The configuration will limit access to your cluster through the load balancer.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Prepare Your Environment
> * Whitelist the load balancer via the CLI
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **List of IP addresses/ranges to whitelist**

This is the list of IP addresses and ranges that will be allowed to access the load balancer.

In this article, we will whitelist the following two ranges:

> * 10.0.0.0/8
> * 10.95.255.0/24

No. 3 **Python Octavia Client**

To operate Load Balancers with CLI, the Python Octavia Client (python-octaviaclient) is required. It is a command-line client for the OpenStack Load Balancing service. Install the load-balancer (Octavia) plugin with the following command from the Terminal window, on Ubuntu 22.04:
```
pip install python-octaviaclient
```

Or, if you have virtualenvwrapper installed:

```
mkvirtualenv python-octaviaclient
pip install python-octaviaclient
```
### Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")

First of all, you have to find the **id** of your load balancer and of its listener.

#### Horizon:[](#horizon "Permalink to this headline")

To find a load balancer **id**, go to **Project** >> **Network** >> **Load Balancers** and find the one associated with your cluster (its name will start with your cluster name as prefix).



Click on the load balancer name (in this case `lb-testing-ih347dstxyl2-api_lb_fixed-w2im3obvdv2p-loadbalancer_with_flavor-ykcmf6vvphld`), then go to the Listeners pane. There you will see the listener associated with that load balancer.


#### CLI[](#cli "Permalink to this headline")

To use the CLI to find the listener, you have to know the following two cluster parameters:

> * **Stack ID**
> * **Cluster ID**

You can find them with Horizon commands **Container Infra** –> **Clusters** and then clicking on the name of the cluster:



At the bottom of the window, find the Stack ID:



Alternatively, execute the following command, which prints the Stack ID (for example `12345678-1234-1234-1234-123456789011`):

```
openstack coe cluster show <your_cluster_id> \
  -f value -c stack_id
```
To find the load balancer id (**LB\_ID**), list the stack resources and filter for the load balancer:

```
openstack stack resource list <your_stack_id> \
  -n 5 -c resource_name -c physical_resource_id \
  | grep loadbalancer_with_flavor
```

The *physical\_resource\_id* shown next to *loadbalancer_with_flavor* is the **LB\_ID** (for example `12345678-1234-1234-1234-123456789011`).
With that information, we can now find our **listener\_id**; it is to this component that we will attach the whitelist. Replace the example id below with your own **LB\_ID**:

```
openstack loadbalancer show 2d6b335f-fb05-4496-8593-887f7e2c49cf \
  -c listeners \
  -f value
```

The command prints the **listener\_id** (for example `12345678-1234-1234-1234-123456789011`).
### Whitelist the load balancer via the CLI[](#whitelist-the-load-balancer-via-the-cli "Permalink to this headline")

We now have the listener and the IP addresses which will be whitelisted. This is the command that will set up the whitelisting:

```
openstack loadbalancer listener set \
  --allowed-cidr 10.0.0.0/8 \
  --allowed-cidr 10.95.255.0/24 \
  <listener_id>
```


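To confirm that the whitelist was applied, you can read the setting back (using the same listener id as above):

```
openstack loadbalancer listener show <listener_id> -c allowed_cidrs -f value
```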
State of Security: Before and After[](#state-of-security-before-and-after "Permalink to this headline")
--------------------------------------------------------------------------------------------------------

Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:

> * Only specified IPs can access the load balancer.
> * Unauthorized access attempts are denied.
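The effect of **allowed\_cidrs** is plain prefix matching: a client is admitted only if its source IP falls inside one of the listed CIDR blocks. A minimal pure-bash sketch of that check (illustrative only, IPv4; not how Octavia implements it internally):

```shell
# Return success when $1 (an IPv4 address) lies inside CIDR block $2
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
in_cidr() {
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}
in_cidr 10.95.255.7 10.95.255.0/24 && echo allowed || echo denied   # inside the whitelisted /24
in_cidr 192.168.1.1 10.0.0.0/8     && echo allowed || echo denied   # outside both whitelisted ranges
```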
Verification Tools[](#verification-tools "Permalink to this headline")
-----------------------------------------------------------------------

Various tools can be used to verify that the protection is installed and active:

livez
: Kubernetes API server health-check endpoint.

nmap
: (free) For port scanning and access verification.

curl
: (free) To confirm access control from specific IPs.

Wireshark
: (free) For packet-level analysis.
### Testing using curl and livez[](#testing-using-curl-and-livez "Permalink to this headline")

Here is how we could test it:

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose
```

That command assumes that you have

curl
: installed and operational

<KUBE\_API\_IP>
: which you can see through Horizon commands **API Access** –> **View Credentials**

livez
: a health-check endpoint of the Kubernetes API server, which reports whether the control plane behind the load balancer is alive.
This would be a typical response before the changes:

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
livez check passed
```
And this would be a typical response after the changes:

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose -m 5
curl: (28) Connection timed out after 5000 milliseconds
```

Whitelisting blocks traffic from all IP addresses apart from those allowed by **--allowed-cidr**.
### Testing with nmap[](#testing-with-nmap "Permalink to this headline")

To test with **nmap**:

```
nmap -p <PORT> <LOAD_BALANCER_IP>
```

### Testing with curl directly[](#testing-with-curl-directly "Permalink to this headline")

To test with **curl**:

```
curl http://<LOAD_BALANCER_IP>
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You can wrap up this procedure with Terraform and apply it to a larger number of load balancers. See [Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-CloudFerro-Cloud.html)

Also, compare with [Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud](Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-CloudFerro-Cloud.html)

@ -0,0 +1,274 @@
Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud[](#configuring-ip-whitelisting-for-openstack-load-balancer-using-terraform-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================================

This guide explains how to configure IP whitelisting (**allowed\_cidrs**) on an existing OpenStack Load Balancer using Terraform. The configuration will limit access to your cluster through the load balancer.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Get the necessary load balancer and cluster data from the Prerequisites
> * Create the Terraform configuration
> * Import the existing load balancer listener
> * Run Terraform
> * Test and verify that protection of the load balancer via whitelisting works
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Basic parameters already defined for whitelisting**

See article [Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-CloudFerro-Cloud.html) for definitions of the basic notions and parameters.

No. 3 **Terraform installed**

You will need version 1.5.0 or higher.

For a complete introduction to, and installation of, Terraform on OpenStack, see article [Generating and authorizing Terraform using Keycloak user on CloudFerro Cloud](../openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-CloudFerro-Cloud.html)

No. 4 **Unrestricted application credentials**

You need OpenStack application credentials created with the *unrestricted* checkbox turned on. Check article [How to generate or use Application Credentials via CLI on CloudFerro Cloud](../cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-CloudFerro-Cloud.html)

The first part of that article describes how to install the OpenStack client and connect it to the cloud. With that in place, the quickest way to create an unrestricted application credential is to run a command like this:

```
openstack application credential create cred_unrestricted --unrestricted
```

That would create an unrestricted credential called **cred\_unrestricted**.

You can also use Horizon commands **Identity** –> **Application Credentials** –> **Create Application Credential** and check the appropriate box:



Log in to your account using this unrestricted credential.
Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")
-----------------------------------------------------------------------------------

Work through the article in Prerequisite No. 2, from which we will derive all the input parameters, using Horizon and CLI commands.

Also, authenticate using the application credential you got from Prerequisite No. 4.

Configure Terraform for whitelisting[](#configure-terraform-for-whitelisting "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Instead of performing the whitelisting procedure manually, we can use Terraform and store the procedure in a remote repo.

Create file **openstack\_auth.sh**

```
export OS_AUTH_URL="https://your-openstack-url:5000/v3"
export OS_PROJECT_NAME="your-project"
export OS_USERNAME="your-username"
export OS_PASSWORD="your-password"
export OS_REGION_NAME="your-region"
```

Load these variables into your shell with `source openstack_auth.sh` before running Terraform.

Create a new directory for your Terraform configuration and create the following files:

Note

This example is created for a brand new Magnum cluster. You might have to adjust it a bit to suit your needs.

Create Terraform file:
**main.tf**

```
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "1.47.0"
    }
  }
}

provider "openstack" {
  use_octavia = true # Required for Load Balancer v2 API
}
```
**variables.tf**

```
variable "ID_OF_LOADBALANCER" {
  type        = string
  description = "ID of the existing OpenStack Load Balancer"
}

variable "allowed_cidrs" {
  type        = list(string)
  description = "List of IP ranges in CIDR format to whitelist"
}
```
**terraform.tfvars**

```
ID_OF_LOADBALANCER = "your-lb-id"
allowed_cidrs = [
  "10.0.0.1/32",    # Single IP address
  "192.168.1.0/24", # IP range
  "172.16.0.0/16"   # Larger subnet
]
```
**lb.tf**

```
resource "openstack_lb_listener_v2" "k8s_api_listener" {
  loadbalancer_id = var.ID_OF_LOADBALANCER
  allowed_cidrs   = var.allowed_cidrs
  protocol_port   = 6443
  protocol        = "TCP"
}
```
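Optionally, an output can make the applied whitelist visible after each run (the output name here is an assumption):

```
output "whitelisted_cidrs" {
  value = openstack_lb_listener_v2.k8s_api_listener.allowed_cidrs
}
```

After `terraform apply`, the current CIDR list is then printed in the outputs section.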
Import Existing Load Balancer Listener[](#import-existing-load-balancer-listener "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

Since version 1.5, Terraform can import your resource in a declarative way.

**import.tf**

```
import {
  to = openstack_lb_listener_v2.k8s_api_listener
  id = "your-listener-id"
}
```

Or you can do it in an imperative way:

```
terraform import openstack_lb_listener_v2.k8s_api_listener "<your-listener-id>"
```
Run Terraform[](#run-terraform "Permalink to this headline")
-------------------------------------------------------------

Execute Terraform with the usual sequence of commands:

```
terraform init
terraform plan -out=tfplan
terraform apply tfplan
```

**Example output:**

```
terraform apply tfplan
openstack_lb_listener_v2.k8s_api_listener: Preparing import... [id=bbf39f1c-6936-4344-9957-7517d4a979b6]
openstack_lb_listener_v2.k8s_api_listener: Refreshing state... [id=bbf39f1c-6936-4344-9957-7517d4a979b6]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # openstack_lb_listener_v2.k8s_api_listener will be updated in-place
  # (imported from "bbf39f1c-6936-4344-9957-7517d4a979b6")
  ~ resource "openstack_lb_listener_v2" "k8s_api_listener" {
        admin_state_up            = true
      ~ allowed_cidrs             = [
          + "10.0.0.1/32",
        ]
        connection_limit          = -1
        default_pool_id           = "5991eacc-5869-4205-a646-d27646ccb216"
        default_tls_container_ref = null
        description               = null
        id                        = "bbf39f1c-6936-4344-9957-7517d4a979b6"
        insert_headers            = {}
        loadbalancer_id           = "2d6b335f-fb05-4496-8593-887f7e2c49cf"
        name                      = "lb-testing-ih347dstxyl2-api_lb_fixed-w2im3obvdv2p-listener-t36tocd4onxk"
        protocol                  = "TCP"
        protocol_port             = 6443
        region                    = "<concealed by 1Password>"
        sni_container_refs        = []
        tenant_id                 = "<concealed by 1Password>"
        timeout_client_data       = 50000
        timeout_member_connect    = 5000
        timeout_member_data       = 50000
        timeout_tcp_inspect       = 0

      - timeouts {}
    }

Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
```
Tests[](#tests "Permalink to this headline")
---------------------------------------------

By default, the Magnum load balancer does not have any access restrictions.

Before the changes:

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
livez check passed
```
**After:**

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose -m 5
curl: (28) Connection timed out after 5000 milliseconds
```

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Compare with [Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud](Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-CloudFerro-Cloud.html)
@ -0,0 +1,165 @@
Create and access NFS server from Kubernetes on CloudFerro Cloud[](#create-and-access-nfs-server-from-kubernetes-on-brand-name "Permalink to this headline")
=============================================================================================================================================================

In order to enable simultaneous read-write storage for multiple pods running on a Kubernetes cluster, we can use an NFS server.

In this guide, we will create an NFS server on a virtual machine, create a file share on this server, and demonstrate accessing it from a Kubernetes pod.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Set up an NFS server on a VM
> * Set up a share folder on the NFS server
> * Make the share available
> * Deploy a test pod on the cluster
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

The resources that you require and use will reflect on the state of your account wallet. Check your account statistics at <https://portal.cloudferro.com/>.

No. 2 **Familiarity with Linux and cloud management**

We assume you know the basics of Linux and CloudFerro Cloud management:

* Creating, accessing and using virtual machines
  [How to create new Linux VM in OpenStack Dashboard Horizon on CloudFerro Cloud](../cloud/How-to-create-new-Linux-VM-in-OpenStack-Dashboard-Horizon-on-CloudFerro-Cloud.html)

* Creating security groups: [How to use Security Groups in Horizon on CloudFerro Cloud](../cloud/How-to-use-Security-Groups-in-Horizon-on-CloudFerro-Cloud.html)

* Attaching floating IPs: [How to Add or Remove Floating IP’s to your VM on CloudFerro Cloud](../networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-CloudFerro-Cloud.html)

No. 3 **A running Kubernetes cluster**

You will also need a Kubernetes cluster to try out the commands. To create one from scratch, see [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 4 **kubectl access to the Kubernetes cluster**

As usual when working with Kubernetes clusters, you will need the **kubectl** command: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)
1. Set up NFS server on a VM[](#set-up-nfs-server-on-a-vm "Permalink to this headline")
----------------------------------------------------------------------------------------

As a prerequisite for creating an NFS server on a VM, first create, from the Network tab in Horizon, a security group allowing ingress traffic on port **2049**.

Then create an Ubuntu VM from Horizon. During the *Network* selection dialog, connect the VM to the network of your Kubernetes cluster. This ensures that cluster nodes have access to the NFS server over the private network. Then add the security group with port **2049** open.



When the VM is created, you can see that it has a private address assigned. For this occasion, let the private address be **10.0.0.118**. Take note of this address to use it later in the NFS configuration.

Set up a floating IP on the VM, just to enable SSH access to it.
2. Set up a share folder on the NFS server[](#set-up-a-share-folder-on-the-nfs-server "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------

SSH to the VM, then run:

```
sudo apt-get update
sudo apt-get install nfs-kernel-server
```

In the NFS server VM, create a share folder:

```
sudo mkdir /mnt/myshare
```

Change the owner of the share so that *nobody* is the owner. Thus, any user on the client can access the share folder. More restrictive settings can be applied.

```
sudo chown nobody:nogroup /mnt/myshare
```

Also change the permissions of the folder, so that anyone can modify the files:

```
sudo chmod 777 /mnt/myshare
```

Edit the */etc/exports* file and add the following line:

```
/mnt/myshare 10.0.0.0/24(rw,sync,no_subtree_check)
```

This indicates that all nodes on the cluster network can access this share, with subfolders, in read-write mode.
3. Make the share available[](#make-the-share-available "Permalink to this headline")
--------------------------------------------------------------------------------------

Run the command below to make the share available:

```
sudo exportfs -a
```

Then restart the NFS server with:

```
sudo systemctl restart nfs-kernel-server
```

Exit from the NFS server VM.
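Optionally, before touching Kubernetes, you can check from another VM on the cluster network that the export is visible. This assumes the *nfs-common* package is installed on that VM and uses the NFS server's private address from Step 1:

```
showmount -e 10.0.0.118
```

The export list in the output should contain the `/mnt/myshare 10.0.0.0/24` entry configured above.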
4. Deploy a test pod on the cluster[](#deploy-a-test-pod-on-the-cluster "Permalink to this headline")
------------------------------------------------------------------------------------------------------

Ensure you can access your cluster with **kubectl**. Have a file *test-pod.yaml* with the following contents:

**test-pod.yaml**

```
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: default
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /my-nfs-data
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: 10.0.0.118
      path: /mnt/myshare
```
The *nfs* server block refers to the private IP address of the NFS server machine, which is on our cluster network. Apply the *yaml* manifest with:

```
kubectl apply -f test-pod.yaml
```

We can then enter the shell of the *test-pod* with the command below:

```
kubectl exec -it test-pod -- sh
```

and see that the *my-nfs-data* folder got mounted properly:



To verify, create a file *testfile* in this folder, then exit the container. You can then SSH back to the NFS server and verify that *testfile* is available in the */mnt/myshare* folder.


@ -0,0 +1,254 @@
Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[](#creating-additional-nodegroups-in-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================

The Benefits of Using Nodegroups[](#the-benefits-of-using-nodegroups "Permalink to this headline")
---------------------------------------------------------------------------------------------------

A *nodegroup* is a group of nodes from a Kubernetes cluster that have the same configuration and run the user’s containers. One and the same cluster can have various nodegroups within it, so instead of creating several independent clusters, you may create only one and then separate the nodes into nodegroups.

A nodegroup separates the roles within the cluster and can

> * limit the scope of damage if a given group is compromised,
> * regulate the number of API requests originating from a certain group, and
> * create scopes of privileges for specific node types and related workloads.

Other uses of nodegroups include:

> * testing purposes,
> * if your Kubernetes environment is low on resources, you can create a minimal Kubernetes cluster and later add nodegroups, thus increasing the number of control and worker nodes.
> * Nodes in a group can be created, upgraded and deleted individually, without affecting the rest of the cluster.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * The structure of the **openstack coe nodegroup** command
> * How to produce manageable output from the **nodegroup** set of commands
> * How to **list** what nodegroups are available in a cluster
> * How to **show** the contents of one particular *nodegroup* in a cluster
> * How to **create** a new *nodegroup*
> * How to **delete** an existing *nodegroup*
> * How to **update** *nodegroups*
> * How to resize a nodegroup
> * The benefits of using nodegroups in Kubernetes clusters
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Creating clusters with CLI**

The article [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html) will introduce you to the creation of clusters using a command line interface.

No. 3 **Connect openstack client to the cloud**

Prepare the **openstack** and **magnum** clients by executing *Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud* from article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html)

No. 4 **Check available quotas**

Before creating additional node groups, check the state of the resources with Horizon commands **Compute** => **Overview**. See [Dashboard Overview – Project Quotas And Flavors Limits on CloudFerro Cloud](../cloud/Dashboard-Overview-Project-Quotas-And-Flavors-Limits-on-CloudFerro-Cloud.html).
Nodegroup Subcommands[](#nodegroup-subcommands "Permalink to this headline")
-----------------------------------------------------------------------------

Once you create a Kubernetes cluster on OpenStack Magnum, there are five *nodegroup* commands at your disposal:

```
openstack coe nodegroup create
openstack coe nodegroup delete
openstack coe nodegroup list
openstack coe nodegroup show
openstack coe nodegroup update
```

With these, you can repurpose the cluster to include various images, change volume access, set up maximum and minimum values for the number of nodes, and so on.
Step 1 Access the Current State of Clusters and Their Nodegroups[](#step-1-access-the-current-state-of-clusters-and-their-nodegroups "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------

Here is how to list the clusters available in the system:

```
openstack coe cluster list --max-width 120
```



The default process of creating Kubernetes clusters on OpenStack Magnum produces two nodegroups, **default-master** and **default-worker**. Use commands

```
openstack coe nodegroup list kubelbtrue
openstack coe nodegroup list k8s-cluster
```

to list the default nodegroups for those two clusters, *kubelbtrue* and *k8s-cluster*.



The **default-worker** node group cannot be removed or reconfigured, so plan ahead when creating the base cluster.
Step 2 How to Create a New Nodegroup[](#step-2-how-to-create-a-new-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

In this step you learn about the parameters available for the **nodegroup create** command. This is the general structure:

```
openstack coe nodegroup create [-h]
    [--docker-volume-size <docker-volume-size>]
    [--labels <KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3...>]
    [--node-count <node-count>]
    [--min-nodes <min-nodes>]
    [--max-nodes <max-nodes>]
    [--role <role>]
    [--image <image>]
    [--flavor <flavor>]
    [--merge-labels]
    <cluster> <name>
```

You will now create a nodegroup of two members called *testing*, with the role *test*, and add it to the cluster *k8s-cluster*:

```
openstack coe nodegroup create \
  --node-count 2 \
  --role test \
  k8s-cluster testing
```

Then use the command

```
openstack coe nodegroup list k8s-cluster
```

to list the nodegroups twice. The first time, the new nodegroup will still be in the creation status; the second time, after a few seconds, it will already have been created.



In Horizon, use commands **Orchestration** => **Stacks** to list the mechanisms that create new instances. In this case, the stack looks like this:



Still in Horizon, click on commands **Container Infra** => **Clusters** => **k8s-cluster** and see that there are now five nodes in total:


Step 3 Using **role** to Filter Nodegroups in the Cluster[](#step-3-using-role-to-filter-nodegroups-in-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------

It is possible to filter nodegroups according to their role. Here is the command to show only the *test* nodegroup:

```
openstack coe nodegroup list k8s-cluster --role test
```



Several nodegroups can share the same role name.

The roles can be used to schedule pods onto specific nodes when using the **kubectl** command directly on the cluster.
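As a sketch of that idea, a pod can be pinned to nodes of the *test* role with a `nodeSelector`. This assumes Magnum labels cluster nodes with `magnum.openstack.org/role=<role>`; verify the exact label on your own cluster with `kubectl get nodes --show-labels`:

```shell
# Generate a pod manifest that is scheduled only onto nodes whose
# role label is "test". Label name and pod name are illustrative.
cat > pod-on-test-nodes.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: role-pinned-pod
spec:
  nodeSelector:
    magnum.openstack.org/role: test
  containers:
    - name: app
      image: nginx
EOF
echo "manifest written"
```

Apply it with `kubectl apply -f pod-on-test-nodes.yaml`; the pod stays Pending if no node carries that label.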
Step 4 Show Details of the Nodegroup Created[](#step-4-show-details-of-the-nodegroup-created "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

Command **show** presents the details of a nodegroup in various formats – *json*, *table*, *shell*, *value* or *yaml*. The default is *table*; use parameter **--max-width** to limit its width:

```
openstack coe nodegroup show --max-width 80 k8s-cluster testing
```


Step 5 Delete the Existing Nodegroup[](#step-5-delete-the-existing-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

In this step you will try to create a nodegroup with a small footprint:

```
openstack coe nodegroup create \
  --node-count 2 \
  --role test \
  --image cirros-0.4.0-x86_64-2 \
  --flavor eo1.xsmall \
  k8s-cluster cirros
```

After one hour, the command was cancelled; the creation had failed. The resources will, however, stay frozen in the system, so here is how to delete them.

One way is to use the CLI **delete** subcommand, like this:

```
openstack coe nodegroup delete k8s-cluster cirros
```

The status will change to DELETE\_IN\_PROGRESS.

Another way is to find the instances of the created nodes and delete them through the Horizon interface. Find the existing instances with commands **Compute** => **Instances** and filter by *Instance Name*, with text *k8s-cluster-cirros-*. It may look like this:



Then delete them by clicking on the red button **Delete Instances**.

You will get a confirmation message in the upper right corner.

Either way, the instances will not be deleted immediately, but rather *scheduled* to be deleted in the near future.

The default master and worker node groups cannot be deleted, but all the others can.
Step 6 Update the Existing Nodegroup[](#step-6-update-the-existing-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

In this step you will directly update the existing nodegroup, rather than deleting and recreating it. The example command is:

```
openstack coe nodegroup update k8s-cluster testing replace min_node_count=1
```

Instead of **replace**, it is also possible to use the operations **add** and **remove**.

In the above example, you are setting the minimum number of nodes to 1. (Previously it was **0**, as parameter **min\_node\_count** was not specified and its default value is **0**.)
Step 7 Resize the Nodegroup[](#step-7-resize-the-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------

Resizing a *nodegroup* is similar to resizing the cluster, with the addition of parameter **--nodegroup**. Currently, the number of nodes in group *testing* is 2. Make it **1**:

```
openstack coe cluster resize k8s-cluster --nodegroup testing 1
```

To see the result, apply the command

```
openstack coe nodegroup list --max-width 120 k8s-cluster
```

and get:



A nodegroup cannot be scaled outside of the min-nodes/max-nodes limits set when it was created.

Here is what the state of the networks looks like after all these changes (commands **Network** => **Network Topology** => **Small** in the Horizon interface):


@ -0,0 +1,185 @@
Default Kubernetes cluster templates in CloudFerro Cloud[](#default-kubernetes-cluster-templates-in-brand-name-cloud "Permalink to this headline")
=========================================================================================================================================================

In this article we shall list the Kubernetes cluster templates available on CloudFerro Cloud and explain the differences among them.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Listing the available templates on your cloud
> * The difference between the *calico* and *cilium* network drivers
> * How to choose a proper template
> * Overview and benefits of *localstorage* templates
> * Example of creating a cluster from a *localstorage* template using HMD and HMAD flavors
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Private and public keys**

To create a cluster, you will need an available SSH key pair. If you do not have one already, follow this article to create it in the OpenStack dashboard: [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html).

No. 3 **Documentation for standard templates**

Documentation for all **1.23.16** drivers is [here](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v12316).

Documentation for *localstorage* templates:

> | | |
> | --- | --- |
> | k8s-stable-localstorage-1.21.5 | [Kubernetes release 1.21](https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/) |
> | k8s-stable-localstorage-1.22.5 | [Kubernetes release 1.22](https://kubernetes.io/blog/2021/08/04/kubernetes-1-22-release-announcement/) |
> | k8s-stable-localstorage-1.23.5 | [Kubernetes release 1.23](https://kubernetes.io/blog/2021/12/07/kubernetes-1-23-release-announcement/) |

No. 4 **How to create Kubernetes clusters**

The general procedure is explained in [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 5 **Using vGPU in Kubernetes clusters**

If a template name contains “vgpu”, it can be used to create so-called “vGPU-first” clusters.

To learn how to set up vGPU in Kubernetes clusters on CloudFerro Cloud, see [Deploying vGPU workloads on CloudFerro Cloud Kubernetes](Deploying-vGPU-workloads-on-CloudFerro-Cloud-Kubernetes.html).
Templates available on your cloud[](#templates-available-on-your-cloud "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

The exact number of available default Kubernetes cluster templates depends on the cloud you choose to work with.

WAW4-1
: These are the default Kubernetes cluster templates on the WAW4-1 cloud:



WAW3-1
: These are the default Kubernetes cluster templates on the WAW3-1 cloud:



WAW3-2
: Default templates for the WAW3-2 cloud:



FRA1-2
: Default templates for the FRA1-2 cloud:



The converse is also true: you may want to select the cloud according to the type of cluster you need. For instance, you would have to select the WAW3-1 cloud if you wanted to use vGPU on your cluster.
How to choose a proper template[](#how-to-choose-a-proper-template "Permalink to this headline")
-------------------------------------------------------------------------------------------------

**Standard templates**

Standard templates are general in nature and you can use them for any type of Kubernetes cluster. Each will produce a working Kubernetes cluster on CloudFerro Cloud OpenStack Magnum hosting. The default network driver is *calico*. The template that does not specify calico in its name, k8s-1.23.16-v1.0.3, is identical to the template that does specify *calico* in its name. Both are placed in the left column in the following table:

| calico | cilium |
| --- | --- |
| k8s-1.23.16-v1.0.3 | k8s-1.23.16-cilium-v1.0.3 |
| k8s-1.23.16-calico-v1.0.3 | |

Standard templates can also use vGPU hardware if available in the cloud. Using vGPU with Kubernetes clusters is explained in Prerequisite No. 5.

**Templates with vGPU**

| calico vGPU | cilium vGPU |
| --- | --- |
| k8s-1.23.16-vgpu-v1.0.0 | k8s-1.23.16-cilium-vgpu-v1.0.0 |
| k8s-1.23.16-calico-vgpu-v1.0.0 | |

Again, the templates in the left column are identical.

If the application does not require a great many operations, then a standard template should be sufficient.

You can also dig deeper and choose the template according to the network plugin used.
### Network plugins for Kubernetes clusters[](#network-plugins-for-kubernetes-clusters "Permalink to this headline")

Kubernetes cluster templates on CloudFerro Cloud use the *calico* or *cilium* plugins for controlling network traffic. Both are [CNI](https://www.cncf.io/projects/kubernetes/) compliant. *Calico* is the default plugin, meaning that if the template name does not specify the plugin, the *calico* driver is used. If the template name specifies *cilium* then, of course, the *cilium* driver is used.
### Calico (the default)[](#calico-the-default "Permalink to this headline")

[Calico](https://projectcalico.docs.tigera.io/about/about-calico) uses the BGP protocol to route network packets towards the IP addresses of the pods. *Calico* can be faster than its competitors, but its most remarkable feature is support for *network policies*. With those, you can define which pods can send and receive traffic, and also manage the security of the network.

*Calico* can apply policies to multiple types of endpoints such as pods, virtual machines and host interfaces. It also supports cryptographic identity. *Calico* policies can be used on their own or together with the Kubernetes network policies.
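To make the idea of a network policy concrete, here is a minimal sketch of a standard Kubernetes NetworkPolicy that a calico- (or cilium-) backed cluster can enforce. The pod labels and policy name are illustrative:

```shell
# Allow ingress to pods labeled "app=db" only from pods labeled
# "app=backend"; all other ingress to those pods is dropped.
cat > allow-backend-to-db.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
EOF
echo "policy manifest created"
```

Apply it on the cluster with `kubectl apply -f allow-backend-to-db.yaml`.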
### Cilium[](#cilium "Permalink to this headline")

[Cilium](https://cilium.io/) draws its power from a technology called *eBPF*, which exposes programmable hooks in the network stack of the Linux kernel. *eBPF* uses those hooks to reprogram Linux runtime behaviour without any loss of speed or safety. There is also no need to recompile the Linux kernel in order to become aware of events in Kubernetes clusters. In essence, *eBPF* enables Linux to watch over Kubernetes and react appropriately.

With *Cilium*, the relationships amongst various cluster parts are as follows:

> * pods in the cluster (as well as the *Cilium* driver itself) use *eBPF* instead of using the Linux kernel directly,
> * kubelet uses the *Cilium* driver through CNI compliance, and
> * the *Cilium* driver implements network policy, services and load balancing, flow and policy logging, as well as computing various metrics.

Using *Cilium* especially makes sense if you require fine-grained security controls or need to reduce latency in large Kubernetes clusters.
Overview and benefits of *localstorage* templates[](#overview-and-benefits-of-localstorage-templates "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------

Compared to standard templates, the *localstorage* templates may be a better fit for *resource-intensive* apps.

NVMe stands for *Nonvolatile Memory Express* and is a newer storage access and transport protocol for flash and solid-state drives (SSDs). *localstorage* templates provision the cluster with virtual machine flavors which have NVMe storage available.

Each cluster contains an **etcd** volume, which serves as its state database. Using NVMe storage will speed up access to **etcd** and, in turn, speed up cluster operations.

Applications such as day trading, personal finance, AI and similar may have so many transactions that using *localstorage* templates becomes a viable option.

In the WAW3-1 cloud, virtual machine flavors with NVMe have the prefix HMD and they are resource-intensive:
```
openstack flavor list
+--------------+--------+------+-----------+-------+
| Name         | RAM    | Disk | Ephemeral | VCPUs |
+--------------+--------+------+-----------+-------+
| hmd.xlarge   | 65536  | 200  | 0         | 8     |
| hmd.medium   | 16384  | 50   | 0         | 2     |
| hmd.large    | 32768  | 100  | 0         | 4     |
+--------------+--------+------+-----------+-------+
```
You would use an HMD flavor mainly for the master node(s) in the cluster.

In the WAW3-2 cloud, you would use flavors starting with HMAD instead of HMD.

Example parameters to create a new cluster with localstorage and NVMe[](#example-parameters-to-create-a-new-cluster-with-localstorage-and-nvme "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

For a general discussion of parameters, see Prerequisite No. 4. What follows is a simplified example, geared towards creating a cluster using *localstorage*.

We shall use WAW3-1 with HMD flavors in the example, but you can, of course, supply HMAD flavors for WAW3-2 and so on.

The only deviation from the usual procedure is that it is mandatory to add label **etcd\_volume\_size=0** in the **Advanced** window. Without it, a *localstorage* template won’t work.

Start creating a cluster with the usual chain of commands **Container Infra** -> **Clusters** -> **+ Create New Cluster**.

In the screenshot below, we selected *k8s-stable-localstorage-1.23.5* as our local storage template of choice, in the mandatory field **Cluster Template**.

For field **Keypair**, use an SSH key that you already have; if you do not have one yet, use Prerequisite No. 2 to obtain it.



Let master nodes use one of the HMD flavors:



Proceed to enter the usual parameters into the Network and Management windows.

The last window, **Advanced**, is the place to add label **etcd\_volume\_size=0**.



The result will be a cluster formed with NVMe storage:


@ -0,0 +1,300 @@
Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud[](#deploy-keycloak-on-kubernetes-with-a-sample-app-on-brand-name "Permalink to this headline")
===================================================================================================================================================================

[Keycloak](https://www.keycloak.org/) is a comprehensive open-source identity management suite capable of handling a wide range of identity-related use cases.

Using Keycloak, it is straightforward to deploy a robust authentication/authorization solution for your applications. After the initial deployment, you can easily configure it to meet new identity-related requirements, e.g. multi-factor authentication, federation to social providers, custom password policies, and many others.

What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------

> * Deploy Keycloak on a Kubernetes cluster
> * Configure Keycloak: create a realm, a client and a user
> * Deploy a sample Python web application using Keycloak for authentication
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **A running Kubernetes cluster and kubectl activated**

To create a Kubernetes cluster, refer to: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html). To activate **kubectl**, see [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 3 **Basic knowledge of Python and pip package management**

Basic knowledge of Python and pip package management is expected. Python 3 and pip should already be installed and available on your local machine.

No. 4 **Familiarity with OpenID Connect (OIDC) terminology**

A certain familiarity with OpenID Connect (OIDC) terminology is required. Some key terms will be briefly explained in this article.
Step 1 Deploy Keycloak on Kubernetes[](#step-1-deploy-keycloak-on-kubernetes "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Let’s first create a dedicated Kubernetes namespace for Keycloak. This is optional, but good practice:

```
kubectl create namespace keycloak
```

Then deploy Keycloak into this namespace:

```
kubectl create -f https://raw.githubusercontent.com/keycloak/keycloak-quickstarts/latest/kubernetes-examples/keycloak.yaml -n keycloak
```

By default, Keycloak gets exposed as a Kubernetes service of type LoadBalancer, on port 8080. You need to find out the service’s public IP with the following command (note that it might take a couple of minutes to populate):

```
kubectl get services -n keycloak
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
keycloak   LoadBalancer   10.254.8.94   64.225.128.216   8080:31228/TCP   23h
```

Note

In our case, the external IP address is **64.225.128.216**, so that is what we are going to use in this article. Be sure to replace it with your own IP address.
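If you want to script against the deployment, the external IP can be pulled out of the `kubectl get services` output. A small sketch using awk; the sample output from above is embedded here so the snippet is self-contained, whereas on a live cluster you would pipe `kubectl get services -n keycloak` in instead:

```shell
# Extract the EXTERNAL-IP (4th column) of the "keycloak" service row.
sample_output='NAME       TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
keycloak   LoadBalancer   10.254.8.94   64.225.128.216   8080:31228/TCP   23h'

external_ip=$(printf '%s\n' "$sample_output" | awk '$1 == "keycloak" {print $4}')
echo "Keycloak is reachable at http://${external_ip}:8080/"
# → Keycloak is reachable at http://64.225.128.216:8080/
```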
So, enter **http://64.225.128.216:8080/** in your browser to access Keycloak:



Next, click on **Administration Console** and you will be redirected to the login screen, where you can sign in as an admin (login/password *admin/admin*):



This is the full screen view of the Keycloak window:


Step 2 Create Keycloak realm[](#step-2-create-keycloak-realm "Permalink to this headline")
-------------------------------------------------------------------------------------------

In Keycloak terminology, a *realm* is a dedicated space for managing an isolated subset of users, roles and other related entities. Keycloak initially has a *master* realm, used for administration of Keycloak itself.

Our next step is to create our own realm and start operating within its context. To create it, first click on the **master** field in the upper left corner, then click on **Create Realm**.



We will just enter the realm name *myrealm*, leaving the rest unchanged:



When the realm is created (and selected), we operate within this realm:



In the upper left corner, instead of **master**, there is now the name of the selected realm, **myrealm**.
Step 3 Create and configure Keycloak client[](#step-3-create-and-configure-keycloak-client "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

Clients are entities in Keycloak that can request Keycloak to authenticate users. In practical terms, they can be thought of as representations of individual applications that want to utilize Keycloak-managed authentication/authorization.

Within the *myrealm* realm, we will now create a client *myapp* that will represent the web application which we will create in one of the further steps. To create one such client, click on the **Clients** panel in the left menu, and then on the **Create Client** button.

You will enter a wizard consisting of 3 steps. In the first step we just enter the ID of the client (which, in our case, is *myapp*), leaving other settings unchanged:



The next screen involves selecting some crucial settings relating to the authentication/authorization requirements of your specific application.



The options you choose will depend on your particular scenario:

Scenario 1 Traditional server applications
: For the purpose of this article and our demo app, we use a traditional client-server application. We then need to turn on the “Client Authentication” toggle.

Scenario 2 SPA
: For single page applications, you can stay with the default, where the “Client Authentication” toggle is off.

For our demo app, we will require authentication via a secret, so be sure to activate option **Client Authentication**. Once it is turned on, we will be able to obtain the value of the *secret* later on, in Step 5.

The last step of the wizard involves setting some key coordinates of our client application. The ones we modify are:



Root URL
: In our case, we want to deploy the app locally, so we set the root to <http://localhost>. You will need to change this if your app will be exposed as a public service.

Valid redirect URIs
: This setting represents a route in our app to which a user will be redirected after a successful login from Keycloak. In our case, we leave this setting very permissive with a “\*”, allowing redirect to any path in our application. For production, you should make this more explicit, using a dedicated route, say */callback*, for this purpose.

Web origins
: This setting specifies hosts that can send requests to Keycloak. Requests from other hosts will not pass the cross-origin check and will be rejected. Here, too, we are very permissive by setting a “\*”. As above, strongly consider changing this setting for production and limit it to trusted sources only.

After hitting **Save**, your client is created. You can then modify the previously selected settings of the created client and add new, more specific ones. There are vast possibilities for further customization depending on your app specifics; this is, however, beyond the scope of this article.
Step 4 Create a User in Keycloak[](#step-4-create-a-user-in-keycloak "Permalink to this headline")
---------------------------------------------------------------------------------------------------

After creating the client, we will proceed to creating our first user in Keycloak. In order to do so, click on the **Users** tab on the left and then **Create New User**.

We will again be very selective and only set *test* as the username, leaving other options intact:



Next, we will set up password credentials for the newly created user. Select the **Credentials** tab and then **Set password**, type in the password with confirmation in the form and hit **Save**:


Step 5 Retrieve client secret from Keycloak[](#step-5-retrieve-client-secret-from-keycloak "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

Once Keycloak is set up, we need to extract the client *secret* so that our application can establish trust with Keycloak.

The *client\_secret* can be extracted by going into the *myrealm* realm, selecting *myapp* as the client and then taking the client secret with the following chain of commands:

> **Clients** -> **Client detail** -> **Credentials**

Once in tab **Credentials**, the secret becomes accessible through field **Client secret**:



For privacy reasons, in the screenshot above, it is masked in yellow. In your case, take note of its value, as in the next step you will need to paste it into the application code.
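While we are collecting coordinates, it is useful to know that recent Keycloak versions expose their OIDC endpoints under a fixed per-realm URL pattern. A small sketch assembling them; the host and port are the example values from Step 1, so substitute your own external IP:

```shell
# Build the per-realm OpenID Connect endpoint URLs for our realm.
base="http://64.225.128.216:8080"
realm="myrealm"

auth_uri="${base}/realms/${realm}/protocol/openid-connect/auth"
token_uri="${base}/realms/${realm}/protocol/openid-connect/token"

echo "$auth_uri"
# → http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/auth
```

The *auth\_uri* value matches the one used in the *keycloak.json* configuration file of Step 6.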
Step 6 Create a Flask web app utilizing Keycloak authentication[](#step-6-create-a-flask-web-app-utilizing-keycloak-authentication "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

To build the app, we will use Flask, which is a lightweight Python-based web framework. (Keycloak supports a wide range of other technologies as well.) We will use the Flask-OIDC library, which extends Flask with the capability to run OpenID Connect authentication/authorization scenarios.

As a prerequisite, you need to install the following pip packages to cover the dependency chain. It is best to run the commands from a Python virtual environment:

```
pip install Werkzeug==2.3.8
pip install Flask==2.0.1
pip install wheel==0.40.0
pip install flask-oidc==1.4.0
pip install itsdangerous==2.0.1
```

Then you will need to create two files, *app.py* and *keycloak.json*, with the following changes in them:

Replace the IP address
: In *keycloak.json*, replace *64.225.128.216* with your own external IP from **Step 1**.

Replace client\_secret
: Again in *keycloak.json*, replace the value of variable client\_secret with the secret from **Step 5**.

Replace SECRET\_KEY
: In file *app.py*, replace the value of *SECRET\_KEY* with the same secret from **Step 5**.

Create a new file called *app.py* and paste in the following contents:
```
from flask import Flask, g
from flask_oidc import OpenIDConnect
import json

app = Flask(__name__)

app.config.update(
    SECRET_KEY='XXXXXX',
    OIDC_CLIENT_SECRETS='keycloak.json',
    OIDC_INTROSPECTION_AUTH_METHOD='client_secret_post',
    OIDC_TOKEN_TYPE_HINT='access_token',
    OIDC_SCOPES=['openid', 'email', 'profile'],
    OIDC_OPENID_REALM='myrealm'
)

oidc = OpenIDConnect(app)

@app.route('/')
def index():
    if oidc.user_loggedin:
        info = oidc.user_getinfo(["preferred_username", "email", "sub"])
        return 'Welcome %s' % info.get("preferred_username")
    else:
        return '<h1>Not logged in</h1>'

@app.route('/login')
@oidc.require_login
def login():
    token = oidc.get_access_token()
    info = oidc.user_getinfo(["preferred_username", "email", "sub"])
    username = info.get("preferred_username")
    return "Token: " + token + "<br/><br/> Username: " + username

@app.route('/logout')
def logout():
    oidc.logout()
    return '<h2>Hi, you have been logged out! <a href="/">Return</a></h2>'
```
The application code bootstraps the Flask application and provides the configuration necessary for *flask\_oidc*. We need to configure the

> * name of our realm, the
> * client *secret\_key* and the
> * additional settings that reflect our specific sample flow.

Also, this configuration points to another configuration file, *keycloak.json*, which reflects further settings of our Keycloak realm. Specifically, in it you will find the client ID and the secret, as well as the endpoints where Keycloak makes available further information about the realm settings.

Create the required file *keycloak.json* in the same working folder as the *app.py* file:

```
{
  "web": {
    "client_id": "myapp",
    "client_secret": "XXXXXX",
    "auth_uri": "http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/auth",
    "token_uri": "http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/token",
    "issuer": "http://64.225.128.216:8080/realms/myrealm",
    "userinfo_uri": "http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/userinfo",
    "token_introspection_uri": "http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/token/introspect",
    "redirect_uris": [
      "http://localhost:5000/*"
    ]
  }
}
```

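
A typo in *keycloak.json* typically surfaces only later as a cryptic authentication error, so it can be worth validating the file up front. The sketch below is this article's own suggestion (not part of Flask-OIDC); it checks that the required keys are present and that every realm endpoint lives under the issuer URL:

```python
# Consistency checks for a keycloak.json-style client configuration.
REQUIRED_KEYS = {
    "client_id", "client_secret", "auth_uri", "token_uri",
    "issuer", "userinfo_uri", "redirect_uris",
}

def validate_keycloak_config(config: dict) -> list:
    """Return a list of problems found in the parsed keycloak.json contents."""
    problems = []
    web = config.get("web", {})
    for key in sorted(REQUIRED_KEYS - set(web)):
        problems.append("missing key: " + key)
    issuer = web.get("issuer", "")
    # Every realm endpoint should live under the issuer URL.
    for key in ("auth_uri", "token_uri", "userinfo_uri"):
        uri = web.get(key, "")
        if uri and not uri.startswith(issuer):
            problems.append(key + " does not match issuer: " + uri)
    return problems
```

Run it on the parsed file, e.g. `validate_keycloak_config(json.load(open("keycloak.json")))`; an empty list means the file is internally consistent.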
Note that *app.py* creates 3 routes:

`/`
: In this route, a page is served that provides the name of a logged-in user. Alternatively, if the user is not logged in yet, it prompts to do so.

`/login`
: This route redirects the user to the Keycloak login page and, upon successful authentication, provides the user name and token.

`/logout`
: Entering this route logs the user out.

Step 7 Test the application[](#step-7-test-the-application "Permalink to this headline")
-----------------------------------------------------------------------------------------

To test the application, execute the following command from the working directory in which file *app.py* is placed:

```
flask run
```

This is the result, in a CLI window:



The Flask server is now running on *localhost*, port 5000. Enter *localhost:5000* into the browser address bar and it will display the site served on the base route: */* . We have not logged in our user yet, hence the respective message:



The next step is to enter the */login* route. Enter *localhost:5000/login* into the browser address bar. Doing so redirects to Keycloak, prompting to log in to *myapp*:



To authenticate, enter the username of the user we created in **Step 3** (username: *test*) and the password you used to create this user. With default settings, you might be asked to change the password after first login; just proceed accordingly. After logging in, our username and token get displayed (for security reasons, parts of the token are painted in yellow):




The last route to test is */logout* . When entering *localhost:5000/logout* into the browser, we can see the screen below. Entering this route calls the *flask-oidc* method that logs the user out, also clearing the session cookie under the hood.



Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud Cloud[](#deploying-https-services-on-magnum-kubernetes-in-brand-name-cloud-name-cloud "Permalink to this headline")
======================================================================================================================================================================================

Kubernetes makes it very quick to deploy and publicly expose an application, for example using the LoadBalancer service type. Sample deployments, which demonstrate such capability, are usually served with HTTP. Deploying a production-ready service, secured with HTTPS, can also be done smoothly by using additional tools.

In this article, we show how to deploy a sample HTTPS-protected service on CloudFerro Cloud.

What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Cert Manager’s Custom Resource Definitions
> * Install Cert Manager Helm chart
> * Create a Deployment and a Service
> * Create and Deploy an Issuer
> * Associate the domain with NGINX Ingress
> * Create and Deploy an Ingress Resource

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Kubernetes cluster deployed on CloudFerro Cloud, with NGINX Ingress enabled**

See this article: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 3 **Familiarity with kubectl**

For further instructions refer to [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 4 **Familiarity with Kubernetes Ingress feature**

It is explained in article [Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum](Using-Kubernetes-Ingress-on-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 5 **Familiarity with deploying Helm charts**

See this article:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 6 **A domain purchased from a registrar**

You must own a domain purchased from any registrar (domain reseller). Obtaining a domain from registrars is not covered in this article.

No. 7 **Use the DNS service in Horizon to connect to the domain name**

This is optional. Here is the article with detailed information:

[DNS as a Service on CloudFerro Cloud Hosting](../cloud/DNS-as-a-Service-on-CloudFerro-Cloud-Hosting.html)

Step 1 Install Cert Manager’s Custom Resource Definitions (CRDs)[](#step-1-install-cert-manager-s-custom-resource-definitions-crds "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

We assume you have your

> * Magnum cluster up and running and
> * **kubectl** pointing to your cluster *config* file.

As a pre-check, you can list the nodes on your cluster:

```
# export KUBECONFIG=<your-kubeconfig-file-location>
kubectl get nodes
```

The CertManager Helm chart utilizes a few Custom Resource Definitions (CRDs), which we will need to deploy on our cluster. Aside from the multiple default Kubernetes resources (e.g., Pods, Deployments or Services), CRDs enable the deployment of custom resources defined by third-party developers to satisfy further customized use cases. Let’s add the CRDs to our cluster with the following command:

```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.2/cert-manager.crds.yaml
```

The list of the resources will be displayed after running the command. If we later want to refer to them, we can also use the following **kubectl** command:

```
kubectl get crd -l app.kubernetes.io/name=cert-manager
...
NAME                                  CREATED AT
certificaterequests.cert-manager.io   2022-12-18T11:15:08Z
certificates.cert-manager.io          2022-12-18T11:15:08Z
challenges.acme.cert-manager.io       2022-12-18T11:15:08Z
clusterissuers.cert-manager.io        2022-12-18T11:15:08Z
issuers.cert-manager.io               2022-12-18T11:15:08Z
orders.acme.cert-manager.io           2022-12-18T11:15:08Z
```

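
Each name in this listing encodes the plural resource name followed by its API group (for example, *challenges* belongs to *acme.cert-manager.io*). When scripting against such listings, a small helper can split them — a sketch of this article's own, not part of cert-manager's tooling:

```python
from collections import defaultdict

def group_crds(names):
    """Map each API group to its plural resource names,
    e.g. 'certificates.cert-manager.io' -> group 'cert-manager.io', plural 'certificates'."""
    groups = defaultdict(list)
    for name in names:
        plural, _, group = name.partition(".")
        groups[group].append(plural)
    return dict(groups)
```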
Warning

Magnum introduces a few pod security policies (PSP) which provide some extra safety precautions for the cluster, but will cause a conflict with the CertManager Helm chart. PodSecurityPolicy is deprecated since Kubernetes 1.21 and removed in 1.25, but is still supported in the Kubernetes versions 1.21 to 1.23 available on CloudFerro Cloud. The commands below may produce warnings about deprecation, but the installation should continue nevertheless.

Step 2 Install CertManager Helm chart[](#step-2-install-certmanager-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

We assume you have installed Helm according to the article mentioned in Prerequisite No. 5. The result of that article is the file *my-values.yaml*; to ensure a correct deployment of the CertManager Helm chart, we will need to

> * override it and
> * insert the appropriate content into it:

**my-values.yaml**

```
global:
  podSecurityPolicy:
    enabled: true
    useAppArmor: false
```

The following commands will install the CertManager Helm chart into a namespace *cert-manager*, using *my-values.yaml* at the same time:

```
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.2 --values my-values.yaml
```

This is the result:

```
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.2 --values my-values.yaml
W0208 10:16:08.364635     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:08.461599     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:08.502602     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:11.489377     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:11.489925     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:11.524300     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:13.949045     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:16:15.038803     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0208 10:17:36.084859     212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: cert-manager
LAST DEPLOYED: Wed Feb  8 10:16:07 2023
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.9.2 has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
```

We see that *cert-manager* is deployed successfully, but we also get a hint that a *ClusterIssuer* or an *Issuer* resource has to be installed as well. Our next step is to install a sample service into the cluster and then continue with creation and deployment of an *Issuer*.

Step 3 Create a Deployment and a Service[](#step-3-create-a-deployment-and-a-service "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

Let’s deploy an NGINX service as a standard example of a Kubernetes app. First we create a standard Kubernetes deployment and then a service of type *NodePort*. Write the following contents to file *my-nginx.yaml* :

**my-nginx.yaml**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```

Deploy with the following command:

```
kubectl apply -f my-nginx.yaml
```

Step 4 Create and Deploy an Issuer[](#step-4-create-and-deploy-an-issuer "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

Now install an *Issuer*. It is a custom Kubernetes resource and represents a Certificate Authority (CA), which ensures that our HTTPS certificates are signed and therefore trusted by browsers. CertManager supports different issuers; in our example we will use Let’s Encrypt, which uses the ACME protocol.

Create a new file called *my-nginx-issuer.yaml* and paste the following content into it. Change the placeholder email address to your own real email address.

**my-nginx-issuer.yaml**

```
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-nginx-issuer
spec:
  acme:
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory # production
    privateKeySecretRef:
      name: letsencrypt-secret # different secret name than for ingress
    solvers:
    # HTTP-01 challenge provider, creates additional ingress, refer to CertManager documentation for detailed explanation
    - http01:
        ingress:
          class: nginx
```

Then deploy on the cluster:

```
kubectl apply -f my-nginx-issuer.yaml
```

As a result, the *Issuer* gets deployed, and a *Secret* called *letsencrypt-secret* with a private key is deployed as well.

Step 5 Associate the Domain with NGINX Ingress[](#step-5-associate-the-domain-with-nginx-ingress "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

To see the site in a browser, your HTTPS certificate will need to be associated with a specific domain. To follow along, you should have a real domain already registered at a domain registrar.

When you deployed your cluster with NGINX ingress, behind the scenes a LoadBalancer was deployed, with a public IP address exposed. You can obtain this address by looking it up in the Horizon web interface. If your list of floating IPs is long, the right one can be easily recognized by name:



Now, at your domain registrar, you need to associate the A record of the domain with the floating IP address of the ingress, where your application will be exposed. The way to achieve this will vary by the specific registrar, so we will not provide detailed instructions here.

You can also use the DNS service in Horizon to connect the domain name you have with the cluster. See Prerequisite No. 7 for additional details.

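
Before moving on, it is worth confirming that the A record already resolves to the ingress floating IP — the HTTP-01 challenge in the next step will fail otherwise. A minimal sketch (the domain and the injectable `resolve` parameter are this article's own illustration; by default it queries live DNS via the standard library):

```python
import socket

def domain_points_to(domain: str, expected_ip: str, resolve=None) -> bool:
    """Check whether the domain's A records include the expected floating IP."""
    if resolve is None:
        # Default resolver: gethostbyname_ex returns (name, aliases, ip_list).
        resolve = lambda d: socket.gethostbyname_ex(d)[2]
    return expected_ip in resolve(domain)
```

For example, `domain_points_to("mysampledomain.eu", "<your-floating-ip>")` should return `True` once the DNS change has propagated.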
Step 6 Create and Deploy an Ingress Resource[](#step-6-create-and-deploy-an-ingress-resource "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

The final step is to deploy the *Ingress* resource. This will perform the necessary steps to initiate the certificate signing request with the CA and ultimately provide the HTTPS certificate for your service. In order to proceed, place the contents below into file *my-nginx-ingress.yaml*. Replace **mysampledomain.eu** with your domain.

**my-nginx-ingress.yaml**

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    # below annotation is for using cert manager's "ingress shim", refer to CertManager documentation
    cert-manager.io/issuer: my-nginx-issuer # use the name of the issuer here
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mysampledomain.eu #change to own domain
    secretName: my-nginx-secret
  rules:
  - host: mysampledomain.eu #change to own domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx-service
            port:
              number: 80
```

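
A common mistake with this manifest is listing a host under `tls` that is not covered by any rule under `rules` (or vice versa), which leaves the certificate unused. A quick consistency check — a sketch of this article's own, operating on the manifest parsed into a Python dict (e.g. with a YAML loader); it is not part of kubectl or cert-manager:

```python
def tls_hosts_match_rules(ingress: dict) -> bool:
    """True when every host listed under spec.tls is also covered by a routing rule."""
    spec = ingress.get("spec", {})
    tls_hosts = {h for entry in spec.get("tls", []) for h in entry.get("hosts", [])}
    rule_hosts = {rule.get("host") for rule in spec.get("rules", [])}
    return tls_hosts <= rule_hosts
```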
Then deploy with:

```
kubectl apply -f my-nginx-ingress.yaml
```

If all works well, the effort is complete and after a couple of minutes we should see the lock sign in front of our domain name. The service is now HTTPS-secured, and you can verify the details of the certificate by clicking on the lock icon.



What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

The article [Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum](Using-Kubernetes-Ingress-on-CloudFerro-Cloud-OpenStack-Magnum.html) shows how to create an HTTP-based service or a site.

If you need additional information on Helm charts: [Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html).

Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud[](#deploying-helm-charts-on-magnum-kubernetes-clusters-on-brand-name-cloud-name-cloud "Permalink to this headline")
==================================================================================================================================================================================================

Kubernetes is a robust and battle-tested environment for running apps and services, yet it can be time-consuming to manually provision all resources required to run a production-ready deployment. This article introduces [Helm](https://helm.sh/) as a package manager for Kubernetes. With it, you will be able to quickly deploy complex Kubernetes applications, consisting of code, databases, user interfaces and more.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Background - How Helm works
> * Install Helm
> * Add a Helm repository
> * Helm chart repositories
> * Deploy Helm chart on a cluster
> * Customize chart deployment

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Basic understanding of Kubernetes**

We assume you have a basic understanding of Kubernetes, its notions and ways of working. Explaining them is out of scope of this article.

No. 3 **A cluster created on CloudFerro Cloud**

For trying out Helm installation and deployment in an actual environment, create a cluster on CloudFerro Cloud using OpenStack Magnum: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 4 **Active connection to the cloud**

For Kubernetes, that means a **kubectl** command line tool installed and **kubeconfig** pointing to a cluster. Instructions are provided in this article: [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 5 **Access to Ubuntu to run code on**

Code samples in this article assume you are running Ubuntu 20.04 LTS or a similar Linux system. You can run them on

> * Windows with the Windows Subsystem for Linux,
> * a native desktop Ubuntu operating system, or you can also
> * create a virtual machine in the CloudFerro Cloud and run the examples from there. These articles will provide technical know-how if you need it:

[How to create a Linux VM and access it from Windows desktop on CloudFerro Cloud](../cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-CloudFerro-Cloud.html)

[How to create a Linux VM and access it from Linux command line on CloudFerro Cloud](../cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-CloudFerro-Cloud.html)

Background - How Helm works[](#background-how-helm-works "Permalink to this headline")
---------------------------------------------------------------------------------------

A usual sequence of deploying an application on Kubernetes entails:

> * having one or more containerized application images available in an image registry
> * deploying one or more Kubernetes resources, in the form of manifest YAML files, onto a Kubernetes cluster

The Kubernetes resources, directly or indirectly, point to the container images. They can also contain additional information required by these images to run. In a very minimal setup, we would have e.g. an NGINX container image deployed with a **deployment** Kubernetes resource, and exposed on a network via a **service** resource. A production-grade Kubernetes deployment of a larger application usually requires a set of several, or more, Kubernetes resources to be deployed on the cluster.

For each standard deployment of an application on Kubernetes (e.g. a database, a CMS system, a monitoring application), the boilerplate YAML manifests would mostly be the same and only vary based on the specific values assigned (e.g. ports, endpoints, image registry, version, etc.).

**Helm**, therefore, automates the process of provisioning a Kubernetes deployment. The person in charge of the deployment does not have to write each resource from scratch or consider the links between the resources. Instead, they download a **Helm chart**, which provides predefined resource templates. The values for the templates are read from a central configuration file called *values.yaml*.

Helm charts are designed to cover a broad set of use cases required for deploying an application. The application can then be launched on a cluster with a few commands within seconds. Specific customizations for an individual deployment can then be easily made by overriding the default *values.yaml* file.

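
Conceptually, a chart template plus a values file works like ordinary string templating. The toy sketch below uses plain Python (not Helm's actual Go templating engine) just to illustrate the idea of one template rendered with different values:

```python
from string import Template

# A toy "chart template": Helm does the same with Go templates fed from values.yaml.
service_template = Template(
    "apiVersion: v1\n"
    "kind: Service\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  ports:\n"
    "  - port: $port\n"
)

def render(values: dict) -> str:
    """Render the template with a flat dict of values."""
    return service_template.substitute(values)
```

Rendering `render({"name": "my-apache", "port": 8080})` produces a Service manifest with those values filled in; the same template serves every deployment that differs only in its values.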
Install Helm[](#install-helm "Permalink to this headline")
-----------------------------------------------------------

You can install Helm on your own development machine. To install, download the installer script from the Helm release page, change its file permissions, and run the installation:

```
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```

You can verify the installation by running:

```
$ helm version
```

For other operating systems, use the [link to download Helm installation files](https://phoenixnap.com/kb/install-helm) and proceed analogously.

Add a Helm repository[](#add-a-helm-repository "Permalink to this headline")
-----------------------------------------------------------------------------

Helm charts are distributed using repositories. For example, a single repository can host several Helm charts from a certain provider. For the purpose of this article, we will add the Bitnami repository that contains their versions of multiple useful Helm charts, e.g. Redis, Grafana, Elasticsearch and others. You can add it using the following command:

```
helm repo add bitnami https://charts.bitnami.com/bitnami
```

Then verify the available charts in this repository by running:

```
helm search repo
```

The following image shows just a start of all the available apps from the *bitnami* repository to install with Helm:



Helm chart repositories[](#helm-chart-repositories "Permalink to this headline")
---------------------------------------------------------------------------------

In the above example, we knew where to find a repository with Helm charts. There are other repositories and they are usually hosted on GitHub or ArtifactHub. Let us have a look at the [apache page in ArtifactHub](https://artifacthub.io/packages/helm/bitnami/apache):



Click on the DEFAULT VALUES option (yellow highlight) to see the contents of the default *values.yaml* file.



In this file (or in additional tabular information on the chart page), you can check which parameters are enabled for customization, and what their default values are.

Check whether kubectl has access to the cluster[](#check-whether-kubectl-has-access-to-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

To proceed further, verify that you have your KUBECONFIG environment variable exported and pointing to a running cluster’s *kubeconfig* file (see Prerequisite No. 4). If needed, export this environment variable:

```
export KUBECONFIG=<location-of-your-kubeconfig-file>
```

If your kubectl is properly installed, you should then be able to list the nodes on your cluster:

```
kubectl get nodes
```

That will serve as confirmation that you have access to the cluster.

Deploy a Helm chart on a cluster[](#deploy-a-helm-chart-on-a-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Now that we know where to find repositories with hundreds of charts to choose from, let’s deploy one of them to our cluster.

We will install an Apache web server Helm chart. In order to install it with a default configuration, we need to run a single command:

```
helm install my-apache bitnami/apache
```

Note that *my-apache* refers to the concrete release, that is, the concrete deployment running on our cluster. We can adjust this name to our liking. Upon running the above command, the chart gets deployed and some insight about our release is provided:

```
NAME: my-apache
LAST DEPLOYED: Tue Jan 31 10:48:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: apache
CHART VERSION: 9.2.11
APP VERSION: 2.4.55
....
```

As a result, several Kubernetes resources get deployed on the cluster. One of them is the Kubernetes service, which by default gets deployed as a LoadBalancer type. This way, your Apache deployment gets immediately publicly exposed, with a floating IP available in the <EXTERNAL-IP> cell on the default port 80:

```
$ kubectl get services
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
...
my-apache   LoadBalancer   10.254.147.21   64.225.131.111   80:32654/TCP,443:32725/TCP   5m
```

Note that the floating IP can take a couple of minutes to appear. After this time, once you enter the floating IP into the browser, you will see the service available from the Internet:



Customizing the chart deployment[](#customizing-the-chart-deployment "Permalink to this headline")
---------------------------------------------------------------------------------------------------

We just saw how quick it was to deploy a Helm chart with the default settings. Usually, before running the chart in production, you will need to adjust a few settings to meet your requirements.

To customize the deployment, a quick and dirty approach would be to provide flags on the Helm command line to adjust specific parameters. The problem is that a chart can expose dozens of configurable parameters, so this approach may not be the best in the long run.

A more universal approach is to customize the *values.yaml* file. There are two main ways of doing it:

**Copy the entire values.yaml file**
: Here you only adjust the values of the specific parameters you want to change.

**Create a new values.yaml file from scratch**
: It would contain only the adjusted parameters, with their overridden values.

In both scenarios, all defaults, apart from the overridden ones, will be preserved.

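
The override mechanics can be pictured as a recursive merge of your file over the chart defaults — a sketch, not Helm's actual implementation (Helm merges in Go and has extra rules, e.g. for lists and null values):

```python
def merge_values(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay `overrides` on `defaults`; untouched defaults survive."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Descend into nested blocks so sibling defaults are preserved.
            merged[key] = merge_values(merged[key], value)
        else:
            merged[key] = value
    return merged
```

For instance, overriding only `service.ports.http` leaves a sibling default such as `service.ports.https` untouched.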
|
||||
|
||||
**As an example of customizing the chart**, let us expose the Apache web server on port **8080** instead of the default **80**. We will use the second approach and provide a minimal *my-values.yaml* file for the overrides. The contents of this file will be the following:

**my-values.yaml**

```
service:
  ports:
    http: 8080
```
When writing these customizations, make sure to follow the YAML indentation and structure, including the respective parent blocks in the tree.

A separate adjustment that we will make is to create a dedicated namespace *custom-apache* for our Helm release and instruct Helm to use this namespace. Such an adjustment is quite usual, in order to separate the artifacts related to a specific release/application.
Apply the mentioned customizations to the *my-custom-apache* release, using the following command:

```
helm install my-custom-apache bitnami/apache --values my-values.yaml --namespace custom-apache --create-namespace
```
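To double-check which overrides were actually applied to the release, Helm can print the user-supplied values. A sketch, assuming the release created above exists:

```shell
# Show only the values supplied via --values/--set for this release;
# add --all to include the chart defaults as well.
helm get values my-custom-apache -n custom-apache
```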
Similarly, as in the earlier example, the service gets exposed. This time, to access the service’s floating IP, refer to the newly created *custom-apache* namespace:

```
kubectl get services -n custom-apache
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                        AGE
my-custom-apache   LoadBalancer   10.254.230.171   64.225.135.161   8080:31150/TCP,443:30139/TCP   3m51s
```
We can see that the application is now exposed on the new port 8080, which can be verified in the browser as well:


What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Deploy other useful services using Helm charts: [Argo Workflows](https://artifacthub.io/packages/helm/bitnami/argo-workflows), [JupyterHub](https://artifacthub.io/packages/helm/jupyterhub/jupyterhub), [Vault](https://artifacthub.io/packages/helm/hashicorp/vault), amongst many others that are available.

Remember that a chart deployed with Helm is, in the end, just a set of Kubernetes resources. Usually, there is a hefty amount of configurable settings in the available open-source charts. Just as well, you can edit other parameters on an already deployed release and you can even modify the templates for your specific use case.
The following article will show how to use the JetStack repo to install cert-manager, with which you can deploy HTTPS services on a Kubernetes cloud:

[Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud Cloud](Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-CloudFerro-Cloud-Cloud.html)
Deploying vGPU workloads on CloudFerro Cloud Kubernetes[](#deploying-vgpu-workloads-on-brand-name-kubernetes "Permalink to this headline")
===========================================================================================================================================

Utilizing GPUs (Graphics Processing Units) presents a highly efficient alternative for fast, highly parallel processing of demanding computational tasks such as image processing, machine learning and many others.

In a cloud environment, virtual GPU units (vGPU) are available with certain Virtual Machine flavors. This guide provides instructions on how to attach such GPU-equipped VMs as Kubernetes cluster nodes and utilize the vGPU from Kubernetes pods.

We will present three alternative ways of adding vGPU capability to your Kubernetes cluster, based on your required scenario. For each, you should be able to verify the vGPU installation and test it by running a vGPU workload.
What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * **Scenario No. 1** - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created **after** June 21st 2023
> * **Scenario No. 2** - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created **before** June 21st 2023
> * **Scenario No. 3** - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup
> * Verify the vGPU installation
> * Test vGPU workload
> * Add non-GPU nodegroup to a GPU-first cluster
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Knowledge of RC files and CLI commands for Magnum**

You should be familiar with using the OpenStack CLI and Magnum CLI. Your RC file should be sourced and pointing to your project in OpenStack. See article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html).

Note

If you are using the CLI to create vGPU nodegroups and are authenticated with application credentials, please ensure the credential is created with the setting **unrestricted: true**.

No. 3 **Cluster and kubectl should be operational**

To connect to the cluster via the **kubectl** tool, see article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 4 **Familiarity with the notion of nodegroups**

[Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-CloudFerro-Cloud-OpenStack-Magnum.html).
vGPU flavors per cloud[](#vgpu-flavors-per-cloud "Permalink to this headline")
-------------------------------------------------------------------------------

Below is the list of GPU flavors in each cloud, applicable for use with the Magnum Kubernetes service.

WAW3-1
: WAW3-1 supports four GPU flavors and Kubernetes, through OpenStack Magnum.

> | Name | RAM (MB) | Disk (GB) | VCPUs |
> | --- | --- | --- | --- |
> | **vm.a6000.1** | 14336 | 40 | 2 |
> | **vm.a6000.2** | 28672 | 80 | 4 |
> | **vm.a6000.3** | 57344 | 160 | 8 |
> | **vm.a6000.4** | 114688 | 320 | 16 |

WAW3-2
: These are the vGPU flavors for WAW3-2 and Kubernetes, through OpenStack Magnum:

> | Name | VCPUS | RAM | Total Disk | Public |
> | --- | --- | --- | --- | --- |
> | **vm.l40s.1** | 4 | 14.9 GB | 40 GB | Yes |
> | **vm.l40s.8** | 32 | 119.22 GB | 320 GB | Yes |
> | **gpu.l40sx2** | 64 | 238.44 GB | 512 GB | Yes |
> | **gpu.l40sx8** | 254 | 953.75 GB | 1000 GB | Yes |

FRA1-2
: FRA1-2 supports L40S flavors and Kubernetes, through OpenStack Magnum.

> | Name | VCPUS | RAM | Total Disk | Public |
> | --- | --- | --- | --- | --- |
> | **vm.l40s.2** | 8 | 29.8 GB | 80 GB | Yes |
> | **vm.l40s.8** | 32 | 119.22 GB | 320 GB | Yes |
Hardware comparison between RTX A6000 and NVIDIA L40S[](#hardware-comparison-between-rtx-a6000-and-nvidia-l40s "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------

The NVIDIA L40S is designed for 24x7 enterprise data center operations and optimized to deploy at scale. Compared to the A6000, the NVIDIA L40S is better for

> * parallel processing tasks,
> * AI workloads,
> * real-time ray tracing applications, and is
> * faster in memory-intensive tasks.

Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[](#id1 "Permalink to this table")

| Specification | NVIDIA RTX A6000 | NVIDIA L40S |
| --- | --- | --- |
| **Architecture** | Ampere | Ada Lovelace |
| **Release Date** | 2020 | 2023 |
| **CUDA Cores** | 10,752 | 18,176 |
| **Memory** | 48 GB GDDR6 (768 GB/s bandwidth) | 48 GB GDDR6 (864 GB/s bandwidth) |
| **Boost Clock Speed** | Up to 1,800 MHz | Up to 2,520 MHz |
| **Tensor Cores** | 336 (3rd generation) | 568 (4th generation) |
| **Performance** | Strong performance for diverse workloads | Superior AI and machine learning performance |
| **Use Cases** | 3D rendering, video editing, AI development | Data center, large-scale AI, enterprise applications |
Scenario 1 - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created after June 21st 2023[](#scenario-1-add-vgpu-nodes-as-a-nodegroup-on-a-non-gpu-kubernetes-clusters-created-after-june-21st-2023 "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In order to create a new nodegroup, called **gpu**, with one node of a vGPU flavor, say **vm.a6000.2**, we can use the following Magnum CLI command:

```
openstack coe nodegroup create $CLUSTER_ID gpu \
  --labels "worker_type=gpu" \
  --merge-labels \
  --role worker \
  --flavor vm.a6000.2 \
  --node-count 1
```
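If you prefer the command line over Horizon for looking up the cluster ID, one option (assuming your RC file is sourced) is:

```shell
# List cluster UUIDs and names; copy the UUID of your cluster into CLUSTER_ID.
openstack coe cluster list -c uuid -c name
```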
Adjust the *node-count* and *flavor* to your preference, set $CLUSTER\_ID to the ID of your cluster (this can be taken from the Clusters view in the Horizon UI), and ensure the role is set as *worker*.

The key setting is adding the label **worker\_type=gpu**.

Your request will be accepted:

|
||||
|
||||
Now list the available nodegroups:
|
||||
|
||||
```
|
||||
openstack coe nodegroup list $CLUSTER_ID_RECENT \
|
||||
--max-width 120
|
||||
|
||||
```
|
||||
|
||||
We get:
|
||||
|
||||

|
||||
|
||||
The result is that a new nodegroup called **gpu** is created in the cluster and that it is using the GPU flavor.
|
||||
|
||||
Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023[](#scenario-2-add-vgpu-nodes-as-nodegroups-on-non-gpu-kubernetes-clusters-created-before-june-21st-2023 "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The instructions are the same as in the previous scenario, with the exception of adding an additional label:

```
existing_helm_handler_master_id=$MASTER_0_SERVER_ID
```
where **$MASTER\_0\_SERVER\_ID** is the ID of the **master0** VM from your cluster. The **uuid** value can be obtained

> * in Horizon, through the Instances view,
> * or using a CLI command to isolate the *uuid* for the master node:

```
openstack coe nodegroup list $CLUSTER_ID_OLDER \
  -c uuid \
  -c name \
  -c status \
  -c role
```


In this example, **uuid** is **413c7486-caa9-4e12-be3b-3d9410f2d32f**. Set up the value for the master handler label:

```
export MASTER_0_SERVER_ID="413c7486-caa9-4e12-be3b-3d9410f2d32f"
```
and execute the following command to create an additional nodegroup in this scenario:

```
openstack coe nodegroup create $CLUSTER_ID_OLDER gpu \
  --labels "worker_type=gpu,existing_helm_handler_master_id=$MASTER_0_SERVER_ID" \
  --merge-labels \
  --role worker \
  --flavor vm.a6000.2 \
  --node-count 1
```
There must be no spaces between the labels.

The request will be accepted and after a while, a new nodegroup based on the GPU flavor will be available. List the nodegroups with the command:

```
openstack coe nodegroup list $CLUSTER_ID_OLDER --max-width 120
```
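Since a stray space would make Magnum reject the request, it can help to assemble the label string in a shell variable and check it before passing it to `--labels`. A minimal sketch, using the example UUID from above:

```shell
# Build the comma-separated label string; quoting guards against stray spaces.
MASTER_0_SERVER_ID="413c7486-caa9-4e12-be3b-3d9410f2d32f"
LABELS="worker_type=gpu,existing_helm_handler_master_id=${MASTER_0_SERVER_ID}"

# Fail early if a space slipped in.
case "$LABELS" in
  *" "*) echo "ERROR: labels contain a space" >&2; exit 1 ;;
esac
echo "$LABELS"
```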
Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup[](#scenario-3-create-a-new-gpu-first-kubernetes-cluster-with-vgpu-enabled-default-nodegroup "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

To create a new vGPU-enabled cluster, you can use the usual Horizon commands, selecting one of the existing templates with **vgpu** in their names:


In the below example, we use the CLI to create a cluster called **k8s-gpu-with\_template** from the **k8s-1.23.16-vgpu-v1.0.0** template. The sample cluster has

> * one master node with the **eo1.medium** flavor and
> * one worker node with the **vm.a6000.2** flavor, with vGPU enabled.

To adjust these parameters to your requirements, you will need to set $KEYPAIR to your own keypair. Also, to verify later that the nvidia labels are correctly installed, first create a namespace called **nvidia-device-plugin**. You can then list the namespaces to be sure that it was created properly. So, the preparation commands look like this:

```
export KEYPAIR="sshkey"
kubectl create namespace nvidia-device-plugin
kubectl get namespaces
```
The final command to create the required cluster is:

```
openstack coe cluster create k8s-gpu-with_template \
  --cluster-template "k8s-1.23.16-vgpu-v1.0.0" \
  --keypair=$KEYPAIR \
  --master-count 1 \
  --node-count 1
```
### Verify the vGPU installation[](#verify-the-vgpu-installation "Permalink to this headline")

You can verify that vGPU-enabled nodes were properly added to your cluster by checking the **nvidia-device-plugin** daemonset, deployed in the **nvidia-device-plugin** namespace of the cluster. The command to list it is:

```
kubectl get daemonset nvidia-device-plugin \
  -n nvidia-device-plugin
```


See which nodes are now present:

```
kubectl get node
```


Each GPU node should have several **nvidia** labels added. To verify, you can run one of the below commands, the second of which will show the labels formatted:

```
kubectl get node k8s-gpu-cluster-XXXX --show-labels
kubectl get node k8s-gpu-cluster-XXXX \
  -o go-template='{{range $key, $value := .metadata.labels}}{{$key}}: {{$value}}{{"\n"}}{{end}}'
```
Concretely, in our case, the second command is:

```
kubectl get node k8s-gpu-with-template-lfs5335ymxcn-node-0 \
  -o go-template='{{range $key, $value := .metadata.labels}}{{$key}}: {{$value}}{{"\n"}}{{end}}'
```

and the result will look like this:


Also, GPU workers are tainted by default with the taint:

```
node.cloudferro.com/type=gpu:NoSchedule
```

This can be verified by running the following command, in which we are using the name of the existing node:

```
kubectl describe node k8s-gpu-with-template-lfs5335ymxcn-node-0 | grep 'Taints'
```
### Run test vGPU workload[](#run-test-vgpu-workload "Permalink to this headline")

We can run a sample workload on vGPU. To do so, create a YAML manifest file **vgpu-pod.yaml**, with the following contents:

**vgpu-pod.yaml**
```
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 vGPU
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
    - effect: NoSchedule
      key: node.cloudferro.com/type
      operator: Equal
      value: gpu
```
Apply with:

```
kubectl apply -f vgpu-pod.yaml
```


This pod will request one vGPU, so effectively it will utilize the vGPU allocated to a single node. For example, if you had a cluster with 2 vGPU-enabled nodes, you could run 2 pods requesting 1 vGPU each.

Also, for scheduling the pods on GPU nodes, you will need to apply the two *tolerations* as per the example above. That, effectively, means that the pod will only be scheduled on GPU nodes.
Looking at the logs, we see that the workload was indeed performed:

```
kubectl logs gpu-pod

[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
```
Add non-GPU nodegroup to a GPU-first cluster[](#add-non-gpu-nodegroup-to-a-gpu-first-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

We refer to GPU-first clusters as the ones created with the **worker\_type=gpu** label. For example, in a cluster created in Scenario No. 3, the default nodegroup consists of vGPU nodes.

In such clusters, to add an additional, non-GPU nodegroup, you will need to:

> * specify the ID of the operating system image for this nodegroup,
> * add the label **worker\_type=default**, and
> * ensure that the flavor for this nodegroup is non-GPU.
In order to retrieve the image ID, you need to know which template you want to use to create the new nodegroup. Out of the existing non-GPU templates, we select **k8s-1.23.16-v1.0.2** for this example. Run the following command to extract the image ID, as that will be needed for nodegroup creation:

```
openstack coe cluster \
  template show k8s-1.23.16-v1.0.2 | grep image_id
```

In our case, this yields the following result:


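Instead of copying the ID by hand, the same value can be captured directly into a variable; a sketch, where `-f value -c image_id` asks the OpenStack client to print just that one field:

```shell
# Capture the image ID of the chosen template for the nodegroup creation below.
export IMAGE_ID=$(openstack coe cluster template show k8s-1.23.16-v1.0.2 -f value -c image_id)
echo "$IMAGE_ID"
```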
We can then add the non-GPU nodegroup with the following command, in which you can adjust the parameters. In our example, we use the cluster name from Scenario 3 (the one freshly created with GPU) above and set the worker node flavor to **eo1.medium**:

```
export CLUSTER_ID="k8s-gpu-with_template"
export IMAGE_ID="42696e90-57af-4124-8e20-d017a44d6e24"
openstack coe nodegroup create $CLUSTER_ID default \
  --labels "worker_type=default" \
  --merge-labels \
  --role worker \
  --flavor "eo1.medium" \
  --image $IMAGE_ID \
  --node-count 1
```
Then list the nodegroup contents to see whether the creation succeeded:

```
openstack coe nodegroup list $CLUSTER_ID \
  --max-width 120
```


Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster[](#enable-kubeapps-app-launcher-on-brand-name-magnum-kubernetes-cluster "Permalink to this headline")
=================================================================================================================================================================================

[Kubeapps](https://kubeapps.dev/) app-launcher enables quick deployments of applications on your Kubernetes cluster, with a convenient graphical user interface. In this article we provide guidelines for creating a Kubernetes cluster with the Kubeapps feature enabled, and deploying sample applications.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Brief background - deploying applications on Kubernetes
> * Create a cluster with Kubeapps quick-launcher enabled
> * Access Kubeapps service locally from browser
> * Launch sample application from Kubeapps
> * Current limitations

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

The resources that you require and use will reflect on the state of your account wallet. Check your account statistics at <https://portal.cloudferro.com/>.

No. 2 **Create Kubernetes cluster from Horizon GUI**

Know how to create a Kubernetes cluster from the Horizon GUI, as described in article [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 3 **How to Access Kubernetes cluster post-deployment**

Access to the Linux command line and the ability to access the cluster, as described in article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 4 **Handling Helm**

Some familiarity with Helm, to customize app deployments with Kubeapps. See [Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html).

No. 5 **Access to CloudFerro clouds**

Kubeapps is available on the following clouds: WAW3-2, FRA1-2, WAW3-1.
Background[](#background "Permalink to this headline")
-------------------------------------------------------

Deploying complex applications on Kubernetes becomes notably more efficient and convenient with Helm. Adding to this convenience, **Kubeapps**, an app-launcher with a Graphical User Interface (GUI), provides a user-friendly starting point for application management. This GUI allows you to deploy and manage applications on your K8s cluster, limiting the need for deep command-line expertise.

The Kubeapps app-launcher can be enabled at cluster creation time. It will run as a local service, accessible from the browser.
Create Kubernetes cluster with Kubeapps quick-launcher enabled[](#create-kubernetes-cluster-with-kubeapps-quick-launcher-enabled "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Creating a Kubernetes cluster with Kubeapps enabled follows the generic guideline described in Prerequisite No. 2.

When creating the cluster in Horizon according to this guideline:

> * insert three labels with the values below in the “Advanced” tab and
> * choose to override the labels.

```
kubeapps_enabled=true,helm_client_tag=v3.11.3,helm_client_sha256=ca2d5d40d4cdfb9a3a6205dd803b5bc8def00bd2f13e5526c127e9b667974a89
```
Important

There must be no spaces between the label values.

Inserting these labels is shown in the image below:


Access Kubeapps service locally from your browser[](#access-kubeapps-service-locally-from-your-browser "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------

Once the cluster is created, access the Linux console. You should have the **kubectl** command line tool available, as specified in Prerequisite No. 3.

The Kubeapps service is enabled for the kubeapps-operator service account. We need to obtain the token that authenticates this service account with the cluster.

To print the token, run the following command:
```
kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[].name}') -o go-template='{{.data.token | base64decode}}' && echo
```

As a result, a long token will be printed, similar to the following:


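On Kubernetes 1.24 and newer, token secrets are no longer auto-created for service accounts, so the command above may print nothing. In that case, a short-lived token can usually be requested directly; a sketch, assuming the kubeapps-operator account lives in your current namespace:

```shell
# Request an ephemeral token for the kubeapps-operator service account.
kubectl create token kubeapps-operator
```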
Copy the token. Then run the following command to tunnel the traffic between your local machine and the Kubeapps service:

```
kubectl port-forward -n kube-system svc/magnum-apps-kubeapps 8080:80
```

Type **localhost:8080** in your browser to access Kubeapps, paste the token copied earlier and click **Submit**:


You can now operate Kubeapps:


Launch sample application from Kubeapps[](#launch-sample-application-from-kubeapps "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

Clicking on “Catalog” exposes a long list of applications available for download from the Kubeapps app-store.



As an example, we will install the Apache webserver; in order to do so, click on the “Apache” box. Note that the Kubeapps interface is a graphical shortcut which, behind the scenes, installs a Helm chart on the cluster.

Once you familiarize yourself with the prerequisites and additional information about this chart, click **Deploy** in the top right corner:

|
||||
|
||||
The next screen with the default “Visual Editor” tab enabled, allows to define a few major adjustments to how the service is deployed e.g. specifying the service type or replica count. Access to more detailed configurations (reflecting Helm chart’s *values.yaml* configuration file) is also available in the “YAML editor” GUI tab.
|
||||
|
||||
To follow with the article do not change the defaults, only enter the **Name of deployment** (in our case *apache-test*) and hit **Deploy** with the available version:
|
||||
|
||||

|
||||
|
||||
Since we deployed a service of type LoadBalancer, we need to wait a few minutes for it to be deployed on the cloud. After this completes, we can see the screen confirming the deployment is complete:



Also, in the console, we can double-check that the Apache service, along with the deployment and pod, were properly deployed. Execute the following commands:

```
kubectl get deployments
kubectl get pods
kubectl get services
```

The results will be similar to this:


Current limitations[](#current-limitations "Permalink to this headline")
-------------------------------------------------------------------------

Both Kubeapps and the Helm charts deployed by this launcher are open-source projects, which are continuously evolving. The versions installed on the CloudFerro Cloud cloud provide a snapshot of this development, as a convenience feature.

It is expected that not all applications can be installed with one click and additional configuration will be needed in each particular case.

One known limitation is that certain charts require RWX (ReadWriteMany) persistent volume claims to properly operate. Currently, ReadWriteMany persistent volumes are not natively available on the CloudFerro Cloud cloud. A workaround could be installing an NFS server and deploying a StorageClass with an RWX-capable provisioner, e.g. using the [nfs-subdir-external-provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) project from GitHub.

For NFS on a Kubernetes cluster, see [Create and access NFS server from Kubernetes on CloudFerro Cloud](Create-and-access-NFS-server-from-Kubernetes-on-CloudFerro-Cloud.html).
GitOps with Argo CD on CloudFerro Cloud Kubernetes[](#gitops-with-argo-cd-on-brand-name-kubernetes "Permalink to this headline")
=================================================================================================================================

Argo CD is a continuous deployment tool for Kubernetes, designed with GitOps and Infrastructure as Code (IaC) principles in mind. It automatically ensures that the state of applications deployed on a Kubernetes cluster is always in sync with a dedicated Git repository where we define such desired state.

In this article we will demonstrate installing Argo CD on a Kubernetes cluster and deploying an application using this tool.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Argo CD
> * Access Argo CD from your browser
> * Create Git repository and push your app deployment configurations
> * Create and deploy Argo CD application resource
> * View the deployed resources

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Kubernetes cluster**

[How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 3 **Access to cluster with kubectl**

[How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 4 **Familiarity with Helm**

Here is how to install and start using Helm charts:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 5 **Access to your own Git repository**

You can host the repository for this article on the GitLab instance created in article [Install GitLab on CloudFerro Cloud Kubernetes](Install-GitLab-on-CloudFerro-Cloud-Kubernetes.html). You may also use GitHub, GitLab or other source control platforms based on **git**.

No. 6 **git CLI operational**

The **git** command installed locally. You may use it with [GitHub](https://github.com/git-guides/install-git), [GitLab](https://docs.gitlab.com/ee/topics/git/how_to_install_git/) and other source control platforms based on **git**.

No. 7 **Access to exemplary Flask application**

You should have access to the [example Flask application](https://github.com/CloudFerro/K8s-samples/tree/main/Flask-K8s-deployment), to be downloaded from GitHub in the article. It will serve as an example of a minimal application and by changing it, we will demonstrate that Argo CD is capturing those changes in a continual manner.
Step 1 Install Argo CD[](#step-1-install-argo-cd "Permalink to this headline")
-------------------------------------------------------------------------------

Let’s install Argo CD first, under the following assumptions:

> * this article has been tested on Kubernetes version 1.25
> * we use the GUI only (no CLI is used in this guide)
> * we deploy Argo CD without TLS certificates.

[Here is an in-depth installation guide](https://argo-cd.readthedocs.io/en/stable/getting_started/).

For production scenarios, it is [recommended to apply TLS](https://argo-cd.readthedocs.io/en/stable/operator-manual/tls/).

Let’s first create a dedicated namespace within our existing Kubernetes cluster. The namespace must be explicitly named **argocd**:

```
kubectl create namespace argocd
```

Then install Argo CD:

```
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
Step 2 Access Argo CD from your browser[](#step-2-access-argo-cd-from-your-browser "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

By default, the Argo CD web application is not accessible from the browser. To enable access, change the applicable service from **ClusterIP** to **LoadBalancer** type with the command:

```
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```

After 1-2 minutes, retrieve the IP address of the service:

```
kubectl get service argocd-server -n argocd
```

In our case, this produces the result below and indicates we have Argo CD running on IP address **185.254.233.247**:



Type the IP address you extracted into your browser (it will be a different IP address in your case, so be sure to replace **185.254.233.247** cited here with your own address). You will likely get an invalid-certificate warning. To suppress the warning, click “Advanced” and then “Proceed to Unsafe”; you will be transferred to the login screen of Argo CD:



The login is **admin**. To get the password, extract it from the deployed Kubernetes secret with the following command:

```
kubectl get secret argocd-initial-admin-secret -n argocd -ojsonpath='{.data.password}' | base64 --decode ; echo
```

After typing in your credentials to the login form, you are transferred to the following screen:


Step 3 Create a Git repository[](#step-3-create-a-git-repository "Permalink to this headline")
-----------------------------------------------------------------------------------------------

You need to create a git repository first. The state of the application on your Kubernetes cluster will be synced to the state of this repo. It is recommended to keep it separate from your application code repository, to avoid triggering the CI pipelines whenever the configuration changes.

You will copy to this newly created repository the files already available in the (different) GitHub repo mentioned in **Prerequisite No. 7 Access to exemplary Flask application**.

Create the repository first; we call ours **argocd-sample**. While filling in the form, tick the option to initialize the repository with a README and choose Public visibility:



In that view, the project URL will be pre-filled, corresponding to the URL of your GitLab instance. In the place marked with a blue rectangle, enter your user name; usually it will be **root**, but it can be anything else. If some users are already defined in GitLab, their names will appear in a drop-down menu.
Step 4 Download Flask application[](#step-4-download-flask-application "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

The next goal is to download two YAML files to a folder called **ArgoCD-sample** and its subfolder **deployment**.

After submitting the “Create project” form, you will receive a list of commands to work with your repo. Review them and switch to the CLI from Prerequisite No. 6. Clone the entire CloudFerro K8s samples repo, then extract the sub-folder called *Flask-K8s-deployment*. For clarity, we move its contents to a new folder, **ArgoCD-sample**. Use

```
mkdir ~/ArgoCD-sample
```

if this is the first time you are working through this article. Then apply the following set of commands:

```
git clone https://github.com/CloudFerro/K8s-samples
mv K8s-samples/Flask-K8s-deployment ~/ArgoCD-sample/deployment
rm -rf K8s-samples
```

Files **deployment.yaml** and **service.yaml** deploy a sample Flask application on Kubernetes and expose it as a service. These are typical minimal examples of a deployment and a service and can be obtained from the CloudFerro Kubernetes samples repository.
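Before pushing the manifests to Git, you can optionally sanity-check them locally. This is a sketch, assuming **kubectl** is installed; with `--dry-run=client`, nothing is applied, although some kubectl versions may still contact the cluster for validation:

```shell
# Validate the manifests client-side; nothing is created on the cluster
kubectl apply --dry-run=client -f ~/ArgoCD-sample/deployment/deployment.yaml
kubectl apply --dry-run=client -f ~/ArgoCD-sample/deployment/service.yaml
```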
Step 5 Push your app deployment configurations[](#step-5-push-your-app-deployment-configurations "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

You now need to upload the **deployment.yaml** and **service.yaml** files to the remote repository. Since you are using git, you perform the upload by *syncing* your local repo with the remote. First initialize the repo locally, then push the files to your remote with the following commands (replace the remote address with that of your own git repository instance):

```
cd ~/ArgoCD-sample
git init
git remote add origin git@gitlab.mysampledomain.info:root/ArgoCD-sample.git
git add .
git commit -m "First commit"
git push origin master
```

As a result, at this point, we have the two files available in the remote repository, in the deployment folder:


Step 6 Create Argo CD application resource[](#step-6-create-argo-cd-application-resource "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------

Argo CD configuration for a specific application is defined using an application custom resource. Such a resource connects a Kubernetes cluster with a repository where deployment configurations are stored.

Directly in the **ArgoCD-sample** folder, create the file **application.yaml**, which will represent the application; be sure to replace **gitlab.mysampledomain.info** with your own domain.

**application.yaml**

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-application
  namespace: argocd
spec:
  project: default
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    automated:
      selfHeal: true
      prune: true
  source:
    repoURL: https://gitlab.mysampledomain.info/root/argocd-sample.git
    targetRevision: HEAD
    path: deployment
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
```
Some explanations of this file:

spec.project
: Specifies that our application is associated with the default project (represented as the *appproject* CRD in Kubernetes). Additional projects can be created and used for managing multiple applications.

spec.syncPolicy.syncOptions.CreateNamespace=true
: Ensures that the namespace (specified in spec.destination.namespace) will be automatically created on our cluster if it does not exist already.

spec.syncPolicy.automated.selfHeal: true
: Ensures that any manual changes in the cluster (e.g. applied using kubectl) will trigger a synchronization with the Git repo, overwriting the manual changes and thus keeping the cluster state consistent with the repo state.

spec.syncPolicy.automated.prune: true
: Ensures that deleting a resource definition in the repo will also delete that resource from the Kubernetes cluster.

spec.source.repoURL
: The URL of our git repository where deployment artifacts reside.

spec.source.targetRevision: HEAD
: Ensures that the Kubernetes cluster will be synced with the most recent commit in the git repository.

spec.source.path
: The name of the folder in the Git repository where the YAML manifests are stored.

spec.destination.server
: The address of the Kubernetes cluster where we deploy our app. Since this is the same cluster where Argo CD is running, it can be accessed using the cluster’s internal DNS addressing.

spec.destination.namespace
: The namespace in the cluster where the application will be deployed.
Step 7 Deploy Argo CD application[](#step-7-deploy-argo-cd-application "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

Having created the **application.yaml** file, the next step is to commit it and push it to the remote repo. We can do this with the following commands:

```
git add -A
git commit -m "Added application.yaml file"
git push origin master
```

The final step is to apply the **application.yaml** configuration to the cluster with the command below:

```
kubectl apply -f application.yaml
```
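Once the Application resource has been applied, its sync and health status can also be queried from the CLI. A possible check, using the resource name from **application.yaml** (the exact columns printed depend on your Argo CD version):

```shell
# List Argo CD Application resources with their sync/health status
kubectl get applications -n argocd
# Show the full status of our application, including the last sync result
kubectl describe application myapp-application -n argocd
```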
Step 8 View the deployed resources[](#step-8-view-the-deployed-resources "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

After performing the steps above, switch to the Argo CD UI. We can see that our application appears on the list of applications and that the state to be applied to the cluster was properly captured from the Git repo. It will take a few minutes to complete the deployment of resources on the cluster:



This is the view of our app after the deployment was properly applied:



After clicking on the application’s box, we can also see the details of all the resources which contribute to this deployment, both high-level and low-level ones.



With the default settings, Argo CD polls the Git repository every 3 minutes to capture the desired state of the cluster. If any changes in the repo are detected, the applications on the cluster will be automatically relaunched with the new configuration applied.
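The 3-minute polling interval can be tuned. The sketch below changes the `timeout.reconciliation` key in the `argocd-cm` ConfigMap, a setting documented by Argo CD; treat the exact key and the restart step as assumptions to verify against your Argo CD version:

```shell
# Set the repo polling interval to 60 seconds
kubectl patch configmap argocd-cm -n argocd \
  --type merge -p '{"data": {"timeout.reconciliation": "60s"}}'
# Restart the application controller so it picks up the new interval
kubectl rollout restart statefulset argocd-application-controller -n argocd
```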
What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

* test applying changes to the deployment in the repository (e.g. commit a deployment with a different image in the container spec) and verify that Argo CD captures the change and updates the cluster state
* customize the deployment of Argo CD to enable HTTPS
* integrate Argo CD with your identity management tool; for details, see [Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud](Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-CloudFerro-Cloud.html)

Also of interest would be the following article: [CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image](CICD-pipelines-with-GitLab-on-CloudFerro-Cloud-Kubernetes-building-a-Docker-image.html)
HTTP Request-based Autoscaling on K8S using Prometheus and Keda on CloudFerro Cloud[](#http-request-based-autoscaling-on-k8s-using-prometheus-and-keda-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================

The Kubernetes Horizontal Pod Autoscaler (HPA) natively utilizes CPU and RAM metrics as the default triggers for increasing or decreasing the number of pods. While this is often sufficient, there are use cases where scaling on custom metrics is preferred.

[KEDA](https://keda.sh/) is a tool for autoscaling based on events/metrics provided by popular sources and technologies such as Prometheus, Kafka, Postgres and many others.

In this article we will deploy a sample app on CloudFerro Cloud. We will collect HTTP request metrics from the NGINX Ingress on our Kubernetes cluster and, using Keda with the Prometheus scaler, apply custom HTTP request-based scaling.

Note

We will use the *NGINX web server* to demonstrate the app, and the *NGINX ingress* to deploy it and collect metrics. Note that the *NGINX web server* and the *NGINX ingress* are two separate pieces of software, with two different purposes.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install NGINX ingress on the Magnum cluster
> * Install Prometheus
> * Install Keda
> * Deploy a sample app
> * Deploy our app ingress
> * Access the Prometheus dashboard
> * Deploy a KEDA ScaledObject
> * Test with Locust

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
: You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Create a new Kubernetes cluster without Magnum NGINX preinstalled from Horizon UI**

The default NGINX ingress deployed by Magnum from the Horizon UI does not yet implement Prometheus metrics export. Instead of trying to configure the Magnum ingress for this use case, we will install a new NGINX ingress. To avoid conflicts, it is best to follow the instructions below on a Kubernetes cluster **without** Magnum NGINX preinstalled from the Horizon UI.

No. 3 **kubectl pointed to the Kubernetes cluster**

The following article gives options for creating a new cluster and activating the **kubectl** command:

[How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).

As mentioned, create the cluster **without** the NGINX ingress option.

No. 4 **Familiarity with deploying Helm charts**

This article will introduce you to Helm charts on Kubernetes:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)
Install NGINX ingress on Magnum cluster[](#install-nginx-ingress-on-magnum-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

Type in the following commands to download the *ingress-nginx* Helm repo and then install the chart. Note that we are using a custom namespace, *ingress-nginx*, as well as setting the options to enable Prometheus metrics:

```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

kubectl create namespace ingress-nginx

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.metrics.enabled=true \
  --set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
  --set-string controller.podAnnotations."prometheus\.io/port"="10254"
```

Now run the following command to get the external IP address of the ingress controller, which will be used by the ingress resources created in further steps of this article:

```
$ kubectl get services -n ingress-nginx
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.254.118.18   64.225.135.67   80:31573/TCP,443:30786/TCP   26h
```

We get **64.225.135.67**. Instead of that value, use the EXTERNAL-IP value you get in your terminal after running the above command.
Install Prometheus[](#install-prometheus "Permalink to this headline")
-----------------------------------------------------------------------

To install Prometheus, apply the following command on your cluster:

```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
```

Note that this is a Prometheus installation customized for NGINX Ingress and it already installs into the *ingress-nginx* namespace by default, so there is no need to provide the namespace flag or create one.

Install Keda[](#install-keda "Permalink to this headline")
-----------------------------------------------------------

With the steps below, create a separate namespace for Keda artifacts, download the repo and install the Keda core chart:

```
kubectl create namespace keda

helm repo add kedacore https://kedacore.github.io/charts
helm repo update

helm install keda kedacore/keda --version 2.3.0 --namespace keda
```
Deploy a sample app[](#deploy-a-sample-app "Permalink to this headline")
-------------------------------------------------------------------------

With the above steps completed, we can deploy a simple application. It will be an NGINX web server, serving a simple “Welcome to nginx!” page. Note that we create a deployment and then expose this deployment as a service of type ClusterIP. Create a file *app-deployment.yaml* in your favorite editor:

**app-deployment.yaml**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Then apply with the command below:

```
kubectl apply -f app-deployment.yaml -n ingress-nginx
```

We are deploying this application into the *ingress-nginx* namespace, which also hosts the ingress installation and Prometheus. For production scenarios you might want better isolation of the application from the infrastructure; this is, however, beyond the scope of this article.
Deploy our app ingress[](#deploy-our-app-ingress "Permalink to this headline")
-------------------------------------------------------------------------------

Our application is already running and exposed in our cluster, but we also want to expose it publicly. For this purpose we will use the NGINX ingress, which will also act as a proxy that registers the request metrics. Create a file *app-ingress.yaml* with the following contents:

**app-ingress.yaml**

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: "64.225.135.67.nip.io"
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /app
        pathType: Prefix
```
Then apply with:

```
kubectl apply -f app-ingress.yaml -n ingress-nginx
```

After a while, you can get the public address where the app is available:

```
$ kubectl get ingress -n ingress-nginx
NAME          CLASS   HOSTS                  ADDRESS         PORTS   AGE
app-ingress   nginx   64.225.135.67.nip.io   64.225.135.67   80      18h
```

After entering the host name with the **/app** suffix in the browser (replace with your own floating IP), we can see the app exposed. We are using the *nip.io* service, which works as a DNS resolver, so there is no need to set up DNS records for the purposes of this demo.
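nip.io simply maps any host of the form `<IP>.nip.io` back to that IP, so the demo URL can be derived from the EXTERNAL-IP alone. A small sketch:

```shell
# Build the demo URL from the ingress controller's external IP
EXTERNAL_IP="64.225.135.67"    # substitute your own EXTERNAL-IP
APP_URL="http://${EXTERNAL_IP}.nip.io/app"
echo "$APP_URL"                # http://64.225.135.67.nip.io/app
```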

|
||||
|
||||
Access Prometheus dashboard[](#access-prometheus-dashboard "Permalink to this headline")
|
||||
-----------------------------------------------------------------------------------------
|
||||
|
||||
To access Prometheus dashboard we can port-forward the running prometheus-server to our localhost. This could be useful for troubleshooting. We have the *prometheus-server* running as a *NodePort* service, which can be verified per below:
|
||||
|
||||
```
|
||||
$ kubectl get services -n ingress-nginx
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
ingress-nginx-controller LoadBalancer 10.254.3.172 64.225.135.67 80:30881/TCP,443:30942/TCP 26h
|
||||
ingress-nginx-controller-admission ClusterIP 10.254.51.201 <none> 443/TCP 26h
|
||||
ingress-nginx-controller-metrics ClusterIP 10.254.15.196 <none> 10254/TCP 26h
|
||||
nginx ClusterIP 10.254.160.207 <none> 80/TCP 25h
|
||||
prometheus-server NodePort 10.254.24.85 <none> 9090:32051/TCP 26h
|
||||
|
||||
```
|
||||
|
||||
We will port-forward to the localhost in the following command:
|
||||
|
||||
```
|
||||
kubectl port-forward deployment/prometheus-server 9090:9090 -n ingress-nginx
|
||||
|
||||
```
|
||||
|
||||
Then enter *localhost:9090* in your browser, you will see the Prometheus dashboard. In this view we will be able to see various metrics exposed by nginx-ingress. This can be verified by starting to type “nginx-ingress” to search bar, then various related metrics will start to show up.
|
||||
|
||||
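While the port-forward is active, the same metrics can also be queried over the standard Prometheus HTTP API. A sketch using the `/api/v1/query` endpoint (the query is the same one the scaler will use):

```shell
# Query the request rate through the Prometheus HTTP API
# (requires the kubectl port-forward from above to be running)
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(nginx_ingress_controller_requests[1m]))'
```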

|
||||
|
||||
Deploy KEDA ScaledObject[](#deploy-keda-scaledobject "Permalink to this headline")
|
||||
-----------------------------------------------------------------------------------
|
||||
|
||||
Keda ScaledObject is a custom resource which will enable scaling our application based on custom metrics. In the YAML manifest we define what will be scaled (the nginx deployment), what are the conditions for scaling, and the definition and configuration of the trigger, in this case Prometheus. Prepare a file *scaled-object.yaml* with the following contents:
|
||||
|
||||
**scaled-object.yaml**
|
||||
|
||||
```
|
||||
apiVersion: keda.sh/v1alpha1
|
||||
kind: ScaledObject
|
||||
metadata:
|
||||
name: prometheus-scaledobject
|
||||
namespace: ingress-nginx
|
||||
labels:
|
||||
deploymentName: nginx
|
||||
spec:
|
||||
scaleTargetRef:
|
||||
kind: Deployment
|
||||
name: nginx # name of the deployment, must be in the same namespace as ScaledObject
|
||||
minReplicaCount: 1
|
||||
pollingInterval: 15
|
||||
triggers:
|
||||
- type: prometheus
|
||||
metadata:
|
||||
serverAddress: http://prometheus-server.ingress-nginx.svc.cluster.local:9090
|
||||
metricName: nginx_ingress_controller_requests
|
||||
threshold: '100'
|
||||
query: sum(rate(nginx_ingress_controller_requests[1m]))
|
||||
|
||||
```
|
||||
|
||||
For a detailed definition of *ScaledObject*, refer to the Keda documentation. In this example we leave many settings at their defaults, most notably *coolDownPeriod*. Since it is not explicitly assigned a value, its default of 300 seconds is in effect; see below for how to change it.

We are using the *nginx_ingress_controller_requests* metric for scaling. This metric will only populate in the Prometheus dashboard once requests start hitting our app service. We are setting the threshold to **100** and the rate window to **1** minute, so if there are more than **100** requests per pod in a minute, a scale-up is triggered. Apply the manifest with:

```
kubectl apply -f scaled-object.yaml -n ingress-nginx
```
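Behind the scenes, the scaling arithmetic follows the standard HPA formula that KEDA feeds: desired replicas = ceil(metric value / threshold). A quick sketch of that calculation:

```shell
# HPA-style replica calculation: ceil(metric / threshold)
metric=350       # e.g. current value of the Prometheus query
threshold=100    # the threshold set in the ScaledObject
desired=$(( (metric + threshold - 1) / threshold ))   # integer ceiling
echo "$desired"  # 4
```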
Test with Locust[](#test-with-locust "Permalink to this headline")
-------------------------------------------------------------------

We can now test whether the scaling works as expected. We will use *Locust*, a load testing tool, for this. To quickly deploy *Locust* as a LoadBalancer service, enter the following commands:

```
kubectl create deployment locust --image paultur/locustproject:latest
kubectl expose deployment locust --type LoadBalancer --port 80 --target-port 8089
```

After a couple of minutes the LoadBalancer is created and Locust is exposed:

```
$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.254.0.1     <none>           443/TCP        28h
locust       LoadBalancer   10.254.88.89   64.225.132.243   80:31287/TCP   4m19s
```

Open the Locust UI in the browser using the EXTERNAL-IP: either **64.225.132.243** or **64.225.132.243.nip.io** will work. Then hit “Start Swarming” to initiate mock requests against our app’s public endpoint:



With the default settings, even a single user will make *Locust* swarm hundreds of requests immediately. Tuning Locust is not in the scope of this article, but we can quickly see the effect. Additional pod replicas are generated:
```
$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS              RESTARTS   AGE
ingress-nginx-controller-557bf68967-h9zf5   1/1     Running             0          27h
nginx-85b98978db-2kjx6                      1/1     Running             0          30s
nginx-85b98978db-2kxzz                      1/1     Running             0          61s
nginx-85b98978db-2t42c                      1/1     Running             0          31s
nginx-85b98978db-2xdzw                      0/1     ContainerCreating   0          16s
nginx-85b98978db-2zdjm                      1/1     Running             0          30s
nginx-85b98978db-4btfm                      1/1     Running             0          30s
nginx-85b98978db-4mmlz                      0/1     ContainerCreating   0          16s
nginx-85b98978db-4n5bk                      1/1     Running             0          46s
nginx-85b98978db-525mq                      1/1     Running             0          30s
nginx-85b98978db-5czdf                      1/1     Running             0          46s
nginx-85b98978db-5kkgq                      0/1     ContainerCreating   0          16s
nginx-85b98978db-5rt54                      1/1     Running             0          30s
nginx-85b98978db-5wmdk                      1/1     Running             0          46s
nginx-85b98978db-6tc6p                      1/1     Running             0          77s
nginx-85b98978db-6zcdw                      1/1     Running             0          61s
...
```

Cooling down[](#cooling-down "Permalink to this headline")
-----------------------------------------------------------

After hitting “Stop” in Locust, the pods scale back down to one replica after the *coolDownPeriod* defined in the Keda ScaledObject elapses. Its default value is 300 seconds. If you want to change it, use the command

```
kubectl edit scaledobject prometheus-scaledobject -n ingress-nginx
```
How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum[](#how-to-access-kubernetes-cluster-post-deployment-using-kubectl-on-brand-name-openstack-magnum "Permalink to this headline")
===================================================================================================================================================================================================================================

In this tutorial, you start with a freshly installed Kubernetes cluster on CloudFerro Cloud OpenStack and connect the main Kubernetes tool, **kubectl**, to the cloud.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to connect **kubectl** to the OpenStack Magnum server
> * How to access clusters with **kubectl**

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with the Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Installation of kubectl**

Standard types of **kubectl** installation are described on the [Install Tools page](https://kubernetes.io/docs/tasks/tools/) of the official Kubernetes site.

No. 3 **A cluster already installed on Magnum site**

You may already have a cluster installed if you have followed one of these articles:

> * With the Horizon interface: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).
> * With the command line interface: [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html).
* Or, you may want to create a new cluster called *k8s-cluster* just for this occasion, using the following CLI command:

```
openstack coe cluster create \
  --cluster-template k8s-stable-1.23.5 \
  --labels eodata_access_enabled=false,floating-ip-enabled=true,master-lb-enabled=true \
  --merge-labels \
  --keypair sshkey \
  --master-count 3 \
  --node-count 2 \
  --master-flavor eo1.large \
  --flavor eo1.large \
  k8s-cluster
```

Warning

It takes some 10-20 minutes for the new cluster to form.

In the rest of this text we shall use the cluster name *k8s-cluster* – be sure to use the name of your existing cluster instead.

No. 4 **Connect openstack client to the cloud**

Prepare the **openstack** and **magnum** clients by executing *Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud* from the article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html).
The Plan[](#the-plan "Permalink to this headline")
---------------------------------------------------

> * Follow the steps listed in Prerequisite No. 2 and install **kubectl** on the platform of your choice.
> * Use the existing Kubernetes cluster on CloudFerro Cloud or install a new one using the methods outlined in Prerequisite No. 3.
> * Use Step 2 in Prerequisite No. 4 to connect the **openstack** and **magnum** clients to the cloud.

You are then going to connect **kubectl** to the cloud.
Step 1 Create directory to download the certificates[](#step-1-create-directory-to-download-the-certificates "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------

Create a new directory called *k8sdir* into which the certificates will be downloaded:

```
mkdir k8sdir
```

Once the certificate file is downloaded, you will execute a command similar to this:

```
export KUBECONFIG=/home/dusko/k8sdir/config
```

This assumes

> * an Ubuntu environment (*/home*),
> * that the user is *dusko*,
> * that *k8sdir* is the directory you just created and, finally, that
> * *config* is the file which contains the data for authorizing to the Kubernetes cluster.

Note

In Linux, a file may or may not have an extension, while on Windows files typically carry one.

Step 2A Download Certificates From the Server using the CLI commands[](#step-2a-download-certificates-from-the-server-using-the-cli-commands "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

You will use the command

```
openstack coe cluster config
```

to download the files that **kubectl** needs for authentication with the server. See its input parameters using the **--help** parameter:

```
openstack coe cluster config --help
usage: openstack coe cluster config [-h]
                                    [--dir <dir>] [--force] [--output-certs]
                                    [--use-certificate] [--use-keystone]
                                    <cluster>

Get Configuration for a Cluster

positional arguments:
  <cluster>          The name or UUID of cluster to update

optional arguments:
  -h, --help         show this help message and exit
  --dir <dir>        Directory to save the certificate and config files.
  --force            Overwrite files if existing.
  --output-certs     Output certificates in separate files.
  --use-certificate  Use certificate in config files.
  --use-keystone     Use Keystone token in config files.
```

Download the certificates into the *k8sdir* folder:

```
openstack coe cluster config \
--dir k8sdir \
--force \
--output-certs \
k8s-cluster
```

Four files will be downloaded into the folder:

```
ls k8sdir
ca.pem  cert.pem  config  key.pem
```

Parameter *--output-certs* produces *.pem* files, which are X.509 certificates, originally created so that they can be sent via email. File *config* combines the *.pem* files and contains all the information **kubectl** needs to access the cloud. Using *--force* overwrites any existing files, so you are guaranteed to work with only the latest versions of the files from the server.

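Since the *.pem* files are ordinary X.509 certificates, you can also inspect them locally. A minimal sketch, assuming the **openssl** command-line tool is installed and the files are in *k8sdir*:

```shell
# Show who the client certificate was issued to and when it expires
# (assumes the openssl tool is available).
openssl x509 -in k8sdir/cert.pem -noout -subject -dates
```
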

The command also prints a line similar to the one below:

```
export KUBECONFIG=/home/dusko/k8sdir/config
```

Copy this command, paste it into the command line of the terminal, then press the *Enter* key on the keyboard to execute it. The system variable KUBECONFIG will thus be initialized and the **kubectl** command will have access to the *config* file at all times.

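If you would like the variable to survive new terminal sessions, you can also append that line to your shell startup file. A sketch for **bash**, using the example path from above – substitute the path to your own *config* file:

```shell
# Make KUBECONFIG permanent for future bash sessions.
echo 'export KUBECONFIG="$HOME/k8sdir/config"' >> "$HOME/.bashrc"

# Set it for the current session as well.
export KUBECONFIG="$HOME/k8sdir/config"
```
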

This is the entire procedure in the terminal window:



Step 2B Download Certificates From the Server using Horizon commands[](#step-2b-download-certificates-from-the-server-using-horizon-commands "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

You can download the config file from Horizon directly to your computer. First list the clusters with commands **Container Infra** -> **Clusters**, find the cluster and click on the rightmost drop-down menu in its row:



Click on option **Show Cluster Config** and the config file will be opened in the editor:



From the editor, save it to disk. The file name will combine the name of the cluster with the word *config*; if you have downloaded the same file several times, there may also be a dash followed by a number, like this:

```
k8s-cluster-config-1.yaml
```

For uniformity, save it to the same folder *k8sdir* as the *config* file and point the KUBECONFIG variable to that address:

```
export KUBECONFIG=/home/dusko/k8sdir/k8s-cluster-config-1.yaml
```

Depending on your environment, you may need to open a new terminal window to make the above command work.

Step 3 Verify That kubectl Has Access to the Cloud[](#step-3-verify-that-kubectl-has-access-to-the-cloud "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

See basic data about the cluster with the following command:

```
kubectl get nodes -o wide
```

The result is:



This verifies that **kubectl** has proper access to the cloud.

To see the available **kubectl** commands, use:

```
kubectl --help
```

The listing is too long to reproduce here, but here is how it starts:



**kubectl** also has a long list of options, which are parameters that can be applied to any command. See them with

```
kubectl options
```

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

With **kubectl** operational, you can

> * deploy apps on the cluster,
> * access multiple clusters,
> * create load balancers,
> * access applications in the cluster using port forwarding,
> * use a Service to access an application in a cluster,
> * list container images in the cluster,
> * use Services, Deployments and all other resources in a Kubernetes cluster.
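
For example, deploying an app can start from a small manifest file that you pass to **kubectl apply -f**. A minimal sketch; the name *hello-deployment* and the *nginx* image are illustrative choices, not part of this tutorial:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment    # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:stable    # illustrative image
        ports:
        - containerPort: 80
```

Save it as, say, *deployment.yaml* and run **kubectl apply -f deployment.yaml**.
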

Kubernetes dashboard is a visual alternative to **kubectl**. To install it, see [Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum](Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-CloudFerro-Cloud-OpenStack-Magnum.html).

How To Create API Server LoadBalancer for Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[](#how-to-create-api-server-loadbalancer-for-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================================

A load balancer can be understood both as

> * an external IP address through which the network / Internet traffic comes into the Kubernetes cluster, as well as
> * the piece of software that decides to which of the master nodes to send the incoming traffic.

There is an option to create a load balancer while creating the Kubernetes cluster, but you can also create the cluster without one. This article will show you how to access the cluster even if you did not specify a load balancer at creation time.

What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------

> * Create a cluster called NoLoadBalancer with one master node and no load balancer
> * Assign a floating IP address to its master node
> * Create a *config* file to access the cluster
> * In that *config* file, swap the local server address for the actual floating IP of the master node
> * Use parameter **--insecure-skip-tls-verify=true** to override server security
> * Verify that **kubectl** is working normally, which means that you have full access to the Kubernetes cluster

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Installation of the openstack command**

To activate the **kubectl** command, the **openstack** command from the OpenStack CLI must be operational. The first part of the article [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html) shows how to install it.

No. 3 **How to create Kubernetes cluster using Horizon commands**

The article [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html) shows the creation of clusters with the Horizon visual interface. (In this article, you shall use it to create an exemplar cluster called *NoLoadBalancer*.)

No. 4 **Connect to the Kubernetes Cluster in Order to Use kubectl**

Article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html) will show you how to connect your local machine to the existing Kubernetes cluster.

How To Enable or Disable Load Balancer for Master Nodes[](#how-to-enable-or-disable-load-balancer-for-master-nodes "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------

The default state for a Kubernetes cluster in CloudFerro Cloud OpenStack Magnum hosting is to have no load balancer set up in advance. You can decide to have a load balancer created together with the basic Kubernetes cluster by checking option **Enable Load Balancer for Master Nodes** in window **Network** when creating a cluster through the Horizon interface. (See **Prerequisite No. 3** for the complete procedure.)

The check box to enable load balancer for master nodes has two completely different meanings when checked and not checked.

**Checked state**



If **checked**, the load balancer for master nodes will be created. If you specified two or more master nodes in previous screens, then this field **must** be checked.

Regardless of the number of master nodes you have specified, checking this field yields higher chances of successfully creating the Kubernetes cluster.

**Non-checked state**



If you accept the default state of **unchecked**, no load balancer will be created. However, without a load balancer “in front” of the cluster, the cluster API is exposed only within the Kubernetes network. You save on the cost of the load balancer, but the direct connection from the local machine to the cluster is lost.

One Master Node, No Load Balancer and the Problem It All Creates[](#one-master-node-no-load-balancer-and-the-problem-it-all-creates "Permalink to this headline")
------------------------------------------------------------------------------------------------------------------------------------------------------------------

To show exactly what the problem is, use

> * Prerequisite No. 2 to install the openstack client on the local machine, so that you can use the **openstack** command.
> * Then use Prerequisite No. 4 to connect to the OpenStack cloud and start using the **openstack** command from the local terminal.

Then you can try a very common command such as

```
kubectl get nodes
```

but it will not work. If there were a load balancer “in front” of the cluster, it would work; here there is none, so it does not. The rest of this article will show you how to still make it work, using the fact that the master node of the cluster has its own load balancer for kube-api.

Step 1 Create a Cluster With One Master Node and No Load Balancer[](#step-1-create-a-cluster-with-one-master-node-and-no-load-balancer "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------

Create cluster *NoLoadBalancer* as explained in Prerequisite No. 3. Let there be

> * one master node and
> * no load balancers (do not check field **Enable Load Balancer for Master Nodes** in subwindow **Network**).
> * Use any key pair that you might have – it is of no concern for this article.
> * Activate NGINX as the Ingress controller.

The result will be the creation of cluster *NoLoadBalancer*, as seen in this image:



To illustrate the problem, a very basic command such as

```
kubectl get pods -o yaml
```

to list the pods in cluster *NoLoadBalancer*, will show an error message like this one:

```
Unable to connect to the server: dial tcp 10.0.0.54:6443: i/o timeout
```

Addresses starting with 10.0… are usually reserved for private networks, meaning that no access from the Internet is enabled at this time.



Step 2 Create Floating IP for Master Node[](#step-2-create-floating-ip-for-master-node "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------

Here are the instances that serve as nodes for that cluster:



The master node is called *noloadbalancer-3h2i5x5iz2u6-master-0*. Click on the drop-down menu on the right side of its row and choose option **Associate Floating IP**.

To add the IP, choose from the selection of available addresses (there may be only one but in certain cases, there can be several to choose from):



This is the result:



The IP number is **64.225.135.112** – you are going to use it later on to change the *config* file for access to the Kubernetes cluster.

Step 3 **Create config File for Kubernetes Cluster**[](#step-3-create-config-file-for-kubernetes-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

You are now going to connect to the *NoLoadBalancer* cluster in spite of it not having a load balancer from the very start. To that end, create a config file to connect to the cluster, with the following command:

```
openstack coe cluster config NoLoadBalancer --force
```

It will return a row such as this:

```
export KUBECONFIG=/Users/<YOUR PATH TO CONFIG FILE>/config
```

Execute this command from the terminal command line. A config file has also been created at that address. To show its contents, execute command

```
cat config
```

assuming you already are in the required folder.

The config file will look a lot like gibberish because it contains certificates, tokens and other rows with random content, some of them hundreds of characters long. Here is one part of it:



The important row here is this network address:

```
server: https://10.0.0.54:6443
```

Step 4 Swap Existing Floating IP Address for the Network Address[](#step-4-swap-existing-floating-ip-address-for-the-network-address "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------

Now go back to the Horizon interface and execute commands **Compute** -> **Instances** to see the addresses for the master node of the *NoLoadBalancer* cluster:



There are two addresses:

```
10.0.0.54, 64.225.135.112
```

Incidentally, the same **10.0.0.54** address is also present in the *config* file, followed by port **:6443**.

Now try to execute a **kubectl** command in the terminal and see the result, perhaps like this one:



The access is there but the nodes and pods are still out of reach. That is because address **10.0.0.54** is an internal network address for the cluster and was never supposed to work as an Internet address.

So, open the *config* file using *nano* (or another text editor of your choice). Swap **10.0.0.54** for **64.225.135.112** in the server line. The address **64.225.135.112** is the floating IP of the master node and will fit in perfectly.
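
The same substitution can also be scripted instead of editing by hand. A sketch using GNU **sed**, assuming the example addresses from this article and that the *config* file is in the current directory – substitute your own values:

```shell
# Keep a backup, then replace the internal address with the floating IP.
# 10.0.0.54 and 64.225.135.112 are the example addresses from this article.
cp config config.bak
sed -i 's|https://10.0.0.54:6443|https://64.225.135.112:6443|' config
grep 'server:' config
```
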

The line should look like this:



Save the edited file. In case of **nano**, use `Control-x`, then `Y`, then press `Enter` on the keyboard.

Step 5 Add Parameter --insecure-skip-tls-verify=true to Make kubectl Work[](#step-4-add-parameter-insecure-skip-tls-verify-true-to-make-kubectl-work "Permalink to this headline")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Try again to activate kubectl and again it will fail. To make it work, add parameter **--insecure-skip-tls-verify=true**:

```
kubectl get pods --insecure-skip-tls-verify=true
```

Or, try out a more meaningful command

```
kubectl get nodes --insecure-skip-tls-verify=true
```

This is the result of all these commands, in the terminal window:



To continue working successfully, use normal **kubectl** commands and always add **--insecure-skip-tls-verify=true** at the end.
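
To avoid typing the flag every time, you could define a shell alias. A sketch for **bash**; the alias name *k* is an arbitrary choice:

```shell
# Shorthand that always appends the flag.
alias k='kubectl --insecure-skip-tls-verify=true'

# From now on, for example:
# k get nodes
# k get pods
```
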

Attention

Using parameter **--insecure-skip-tls-verify** won’t check cluster certificates for validity. That will make your **https** connections insecure. It is not recommended for production environments. Use at your own risk, perhaps for some local testing or when you are just learning about Kubernetes and clusters.

For production, it is strongly recommended to check field **Enable Load Balancer for Master Nodes** when creating a new cluster, regardless of the number of master nodes you have specified.

How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon[](#how-to-install-openstack-and-magnum-clients-for-command-line-interface-to-brand-name-horizon "Permalink to this headline")
=================================================================================================================================================================================================================================

How To Issue Commands to the OpenStack and Magnum Servers[](#how-to-issue-commands-to-the-openstack-and-magnum-servers "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------

There are three ways of working with Kubernetes clusters within the OpenStack Magnum and Horizon modules:

**Horizon Commands**

You issue Horizon commands using mouse and keyboard, through predefined screen wizards. It is the easiest way to start but not the most productive in the long run.

**Command Line Interface (CLI)**

CLI commands are issued from a desktop computer or a server in the cloud. This approach allows you to save commands as text and repeat them afterwards. This is the preferred way for professionals.

**HTTPS Requests to the Magnum Server**

Both Horizon and the CLI use HTTPS requests internally and in an interactive manner. You can, however, write your own software to automate requests and/or change the state of the server in real time.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to install the CLI – OpenStack and Magnum clients
> * How to connect the CLI to the Horizon server
> * Basic examples of using OpenStack and Magnum clients

Notes On Python Versions and Environments for Installation[](#notes-on-python-versions-and-environments-for-installation "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------

OpenStack is written in Python, so you first need to install a Python working environment and then install the OpenStack clients. Some older OpenStack documentation still references Python 2.7, but you will most likely install a version 3.x of Python. During the installation, adjust the Python version numbers mentioned in the documentation accordingly.

You will be able to install Python on any of the popular platforms, such as Windows, macOS or Linux on a desktop computer. Or, supposing you are logged into the Horizon interface, you can use commands **Compute** => **Instances** to create an instance of a virtual machine and then install Python there. Ubuntu 18.04 or 20.04 would serve best in this regard.

Warning

Once you install a Kubernetes cluster, you will also have installed instances with, say, Fedora 33 or 35 for the master node of the control plane. You can install Python and the OpenStack clients there as well, but Ubuntu is much easier to use and is the preferred solution in this case.

You can install Python and the clients on several environments at once, say, on a desktop computer and on a virtual machine on the server at the same time. Following the instructions in this tutorial, they will all be connected to one and the same Kubernetes cluster anyway.

Note

If you decide to install Python and the OpenStack clients on a virtual machine, you will need SSH keys in order to be able to enter the working environment. See [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html).

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Installation of OpenStack CLI on Ubuntu 20.04 Server**

The article [How to install OpenStackClient for Linux on CloudFerro Cloud](../openstackcli/How-to-install-OpenStackClient-for-Linux-on-CloudFerro-Cloud.html) shows how to install the OpenStack client on an Ubuntu server. That Ubuntu may be the desktop operating system, a virtual machine on some other operating system, or an Ubuntu server in the cloud.

Installation on macOS will be similar to the installation on Ubuntu.

No. 3 **Installation of OpenStack CLI on Windows**

The article [How to install OpenStackClient GitBash for Windows on CloudFerro Cloud](../openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-CloudFerro-Cloud.html) shows installation on Windows.

No. 4 **General Instructions for Installation of OpenStack Clients**

There are various ways of installing Python and the required clients. For instance, on macOS, you can install the clients using Python pip or install them natively, using *Homebrew*.

The article [Install the OpenStack command-line clients](https://docs.openstack.org/newton/user-guide/common/cli-install-openstack-command-line-clients.html) gives a systematic introduction to the installation of the OpenStack family of clients on various operating systems.

Once installed, the CLI commands will be identical across various platforms and operating systems.

No. 5 **Connect openstack command to the cloud**

After the successful installation of the **openstack** command, it should be connected to the cloud. Follow this article for technical details: [How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication](../accountmanagement/How-to-activate-OpenStack-CLI-access-to-CloudFerro-Cloud-cloud-using-one-or-two-factor-authentication.html).

Step 1 Install the CLI for Kubernetes on OpenStack Magnum[](#step-1-install-the-cli-for-kubernetes-on-openstack-magnum "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------

In this step, you are going to install clients for commands **openstack** and **coe**, from the OpenStack and Magnum modules, respectively.

Follow Prerequisite No. 2, 3 or 4 to install the main client for OpenStack. Its name is *python-openstackclient* and the installation described there will typically contain a command such as

```
pip install python-openstackclient
```
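
If you prefer to keep the clients isolated from your system Python packages, you can install them inside a virtual environment first. A sketch, assuming Python 3 with the *venv* module is available; the environment name *openstack-cli* is an arbitrary choice:

```shell
# Create and activate an isolated environment for the OpenStack clients.
python3 -m venv openstack-cli
. openstack-cli/bin/activate
pip install --upgrade pip
```
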

If you have installed OpenStackClient using those prerequisite resources, we shall assume that the **openstack** command is available and connected to the cloud.

At the end of the installation from either of the prerequisite articles, install the Magnum client by issuing this command:

```
pip install python-magnumclient
```

Step 2 How to Use the OpenStack Client[](#step-2-how-to-use-the-openstack-client "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

In this step, you are going to start using the OpenStack client you have installed and connected to the cloud.

There are two ways of using the OpenStackClient. If you enter the word **openstack** at the command prompt of the terminal, you will enter a special command line interface, like this:



The benefit is that you do not have to type the **openstack** keyword for every command.

Type **quit** to leave the **openstack** internal command line prompt.

The preferred way, however, is typing the keyword **openstack**, followed by parameters, and running it from the terminal command line.

OpenStack commands may have dozens of parameters, so it is better to compose the command in an independent text editor and then copy and paste it into the terminal.
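
One way to do that is to keep a long command in a small shell script, which you can edit and rerun at will. A sketch; the file name *list-clusters.sh* is an arbitrary choice:

```shell
# Save a frequently used command as an editable, repeatable script.
cat > list-clusters.sh <<'EOF'
#!/bin/sh
openstack coe cluster list
EOF
chmod +x list-clusters.sh

# Run it with:
# ./list-clusters.sh
```
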

The Help Command[](#the-help-command "Permalink to this headline")
-------------------------------------------------------------------

To learn about the available commands and their parameters, type **--help** after the command. If applied to the keyword **openstack** itself, it will write out a very long list of commands, which may come in useful as an orientation. It may start out like this:



This is how it ends:



The colon in the last line means that the output is being shown in a pager. To leave it, type the letter **q** and press Enter on the keyboard.

Prerequisites Nos. 3 and 4 lead to the official OpenStack user documentation.

Here is what happens when you enter a wrong parameter, say, *networks* instead of *network*:

```
openstack networks list
```



You get a list of commands similar to what you just typed.

To list the networks available in the system, use the singular version of the command:

```
openstack network list
```



Step 4 How to Use the Magnum Client[](#step-4-how-to-use-the-magnum-client "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

The OpenStack command for the server is **openstack** but for Magnum, the command is not **magnum** as one would expect, but **coe**, for *container orchestration engine*. Therefore, the commands for clusters will always start with **openstack coe**.

See the cluster commands by entering

```
openstack coe
```

into the command line:



You can see the existing clusters using the following command:

```
openstack coe cluster list
```



This is more or less the same information that you can get from the Horizon interface:



after clicking on **Container Infra** => **Clusters**.

Prerequisite No. 5 offers more technical info about the Magnum client.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

In this tutorial you have

> * installed the *OpenStack* and *Magnum* clients
> * connected them to the server, then used
> * the **openstack** command to access the server in general and
> * **coe** to access the clusters in particular.

The article [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html) explains

* the advantages of using the CLI instead of the Horizon interface, showing
* how to create a cluster template as well as
* how to create a new cluster,

all via the CLI.

How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum[](#how-to-use-command-line-interface-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")
|
||||
=========================================================================================================================================================================================================================
|
||||
|
||||

In this article you will use the Command Line Interface (CLI) to speed up testing and creation of Kubernetes clusters on OpenStack Magnum servers.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * The advantages of using the CLI over the Horizon graphical interface
> * Debugging OpenStack and Magnum commands
> * How to create a new Kubernetes cluster template using the CLI
> * How to create a new Kubernetes cluster using the CLI
> * Reasons why the cluster may fail to create
> * CLI commands to delete a cluster

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Private and public keys**

An SSH key-pair created in the OpenStack dashboard. To create it, follow this article [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html). You will then have a keypair called *sshkey*, which you can use for this tutorial as well.

No. 3 **Command Structure of OpenStack Client Commands**

Here is the manual for OpenStackClient commands: [Command Structure Xena version](https://docs.openstack.org/python-openstackclient/xena/cli/commands.html).

No. 4 **Command List of OpenStack Client Commands**

These are all the commands supported by the Xena release of OpenStackClient: [Xena Command List](https://docs.openstack.org/python-openstackclient/xena/cli/command-list.html).

No. 5 **Documentation for Magnum client**

The commands supported by the Magnum client are documented in the [Magnum User Guide](https://docs.openstack.org/magnum/latest/user/).

No. 6 **How to install OpenStack and Magnum Clients**

The step that directly precedes this article is: [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html).

In that guide, you installed the CLI; in this tutorial, you are going to use it to work with Kubernetes on OpenStack Magnum.

No. 7 **Autohealing of Kubernetes Clusters**

To learn more about autohealing of Kubernetes clusters, follow this official article: [What is Magnum Autohealer?](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/magnum-auto-healer/using-magnum-auto-healer.md).

The Advantages of Using the CLI[](#the-advantages-of-using-the-cli "Permalink to this headline")
-------------------------------------------------------------------------------------------------

You can use the CLI and the Horizon interface interchangeably, but there are at least three advantages to using the CLI.

### Reproduce Commands Through Cut & Paste[](#reproduce-commands-through-cut-paste "Permalink to this headline")

Here is a command to list flavors in the system:

```
openstack flavor list
```

![](../_images/flavor_list.png)

If you have this line stored in a text editor app, you can reproduce it at will. In contrast, to get the list of flavors using Horizon, you would have to click on a series of screen buttons

> **Compute** => **Instances** => **Launch instance** => **Flavor**

and only then get the list of flavors to choose from:

![](../_images/flavors_list.png)

A bonus is that keeping commands in a text editor automatically creates documentation for the server and cluster.

### CLI Commands Can Be Automated[](#cli-commands-can-be-automated "Permalink to this headline")

You can use available automation. The result of the following shell pipeline is the URL that **kubectl** uses to communicate with the Kubernetes cluster:

![](../_images/kubernetes_url.png)

There are two commands pipelined into one:

```
KUBERNETES_URL=$(openstack coe cluster show k8s-cluster |
  awk '/ api_address /{print $4}')
```

The result of the first command

```
openstack coe cluster show k8s-cluster
```

is a series of lines starting with the name of the parameter and followed by the actual value.

![](../_images/cluster_show.png)

The second statement, to the right of the pipe symbol **|**

```
awk '/ api_address /{print $4}'
```

searches for the line containing *api\_address* and extracts its value, *https://64.225.132.135:6443*. The final result is assigned to the shell variable KUBERNETES\_URL, thus automatically setting it up for use by the Kubernetes cluster command **kubectl** when accessing the cloud.
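
The extraction step can be reproduced offline against a sample of the table format that **openstack coe cluster show** prints; the address below is illustrative, not from a real cluster:

```shell
# Sample row in the table format printed by "openstack coe cluster show"
# (the value is illustrative only).
sample_output='| api_address | https://64.225.132.135:6443 |'

# The awk filter matches the line containing " api_address " and prints the
# fourth whitespace-separated field, which is the URL itself
# (fields: "|", "api_address", "|", URL, "|").
KUBERNETES_URL=$(printf '%s\n' "$sample_output" | awk '/ api_address /{print $4}')

echo "$KUBERNETES_URL"   # prints https://64.225.132.135:6443
```

Replacing the sample variable with the real **openstack coe cluster show k8s-cluster** call gives exactly the pipeline shown above.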

### CLI Yields Access to All of the Existing OpenStack and Magnum Parameters[](#cli-yields-access-to-all-of-the-existing-openstack-and-magnum-parameters "Permalink to this headline")

CLI commands offer access to a larger set of parameters than is available through Horizon. For instance, in Horizon the default length of time allowed for creation of a cluster is 60 minutes, while in the CLI you can set it to other values of your choice.

### Debugging OpenStack and Magnum Commands[](#debugging-openstack-and-magnum-commands "Permalink to this headline")

To see what is actually happening behind the scenes when executing client commands, add the parameter **--debug**:

```
openstack coe cluster list --debug
```

The output will be several screens long, consisting of GET and POST web calls, with dozens of parameters shown on screen. (The output is too voluminous to reproduce here.)
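
To skim that stream, you can filter for the request lines. The excerpt below is a hypothetical sample (real debug output depends on your cloud and client version), used here only to show the filtering idea:

```shell
# Hypothetical excerpt of "--debug" output; real endpoints will differ.
debug_log='REQ: curl -g -i -X GET https://identity.example.com/v3
RESP: [200] Content-Type: application/json
REQ: curl -g -i -X GET https://magnum.example.com/v1/clusters
RESP: [200] Content-Type: application/json'

# Keep only the request lines to see which API endpoints are being called,
# e.g.: openstack coe cluster list --debug 2>&1 | grep '^REQ:'
printf '%s\n' "$debug_log" | grep '^REQ:'
```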

How to Enter OpenStack Commands[](#how-to-enter-openstack-commands "Permalink to this headline")
-------------------------------------------------------------------------------------------------

Note

In the forthcoming example, version **fedora-coreos-34.20210904.3.0** of the Fedora images is used. As the system is updated over time, the actual values may differ, for instance **fedora-coreos-35** or **fedora-coreos-33.20210426.3.0**. Use the Horizon command **Compute** => **Images** to see which Fedora images are currently available, then edit and replace as needed.

There are several ways to write down and enter OpenStack commands into the terminal command line interface.

One way is to enter the command **openstack** and press *Enter* on the keyboard. You enter the line mode of the **openstack** command and can enter various OpenStack parameters line after line. This is strictly for manual data entry and is difficult to automate.

![](../_images/openstack_line_mode.png)

Type **quit** and press *Enter* on the keyboard to leave that mode.

The usual way of entering **openstack** parameters is in one long line. Leave spaces between parameters but enter label values *without* any spaces in between. An example may be:

![](../_images/one_long_line.png)

The line breaks and blanks have to be removed manually in this case.

A more elegant way is to use the backslash character, **\**, in the line text. A backslash escapes the character that follows it, so if you place it at the very end of a line, it escapes the end-of-line character and the first and second lines are treated as one continuous line. That is exactly what you want, so here is what an entry could look like with this approach:

```
openstack coe cluster template create kubecluster \
--image "fedora-coreos-34.20210904.3.0" \
--external-network external \
--master-flavor eo1.large \
--flavor eo1.large \
--docker-volume-size 50 \
--network-driver calico \
--docker-storage-driver overlay2 \
--master-lb-enabled \
--volume-driver cinder \
--labels boot_volume_type=,boot_volume_size=50,kube_tag=v1.18.2,availability_zone=nova \
--coe kubernetes -f value -c uuid
```

The end of each line is escaped by a backslash, so all these lines appear as one (long) line to the terminal command line scanner. However, when copying and pasting this into the terminal line, beware of the following situation:

![](../_images/pasted_with_blanks.png)

If blanks are present at the beginning of each line, that will be a problem. Eliminate them by going into any text editor and removing them either manually or through the replace function. What you need to have in the text editor is this:
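
As an alternative to editing by hand, a **sed** one-liner can strip the leading blanks. The sketch below works on an inline sample; in practice you would pipe the clipboard contents or a file through the same filter:

```shell
# A pasted command whose continuation lines begin with unwanted blanks.
pasted='openstack coe cluster template create kubecluster \
   --image "fedora-coreos-34.20210904.3.0" \
   --coe kubernetes'

# Remove leading whitespace from every line; the trailing backslash
# continuations are preserved, so the result can be pasted straight
# into the terminal.
cleaned=$(printf '%s\n' "$pasted" | sed 's/^[[:space:]]*//')

printf '%s\n' "$cleaned"
```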

![](../_images/pasted_without_blanks.png)

Now you can copy it and paste it into the terminal command line:

![](../_images/pasted_into_terminal.png)

Note that the line with **--labels** can become long and its right part may not be visible on screen. Use **\** and a new line to break the long **--labels** line into several shorter ones:

![](../_images/labels_broken_up.png)

Pressing *Enter* on the keyboard activates this entire command and it is accepted by the system, as you can see in the line below the command.

Warning

If you are new to Kubernetes, please start by creating clusters directly from the default cluster template.
Once you get more experience, you can start creating your own cluster templates, and here is how to do it using the CLI.

OpenStack Command for Creation of Cluster[](#openstack-command-for-creation-of-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------

In this step you can create a new cluster using either the default cluster template or any of the templates that you have already created.

Enter

```
openstack coe cluster create -h
```

to see the parameters. Provide all or almost all of the required parameters.

```
usage: openstack coe cluster create
          [-h]
          --cluster-template <cluster-template>
          [--discovery-url <discovery-url>]
          [--docker-volume-size <docker-volume-size>]
          [--labels <KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3...>]
          [--keypair <keypair>]
          [--master-count <master-count>]
          [--node-count <node-count>]
          [--timeout <timeout>]
          [--master-flavor <master-flavor>]
          [--flavor <flavor>]
          <name>
```

Here is what one such command might actually look like:

```
openstack coe cluster create \
--cluster-template k8s-stable-1.23.5 \
--docker-volume-size 50 \
--labels eodata_access_enabled=false,floating-ip-enabled=true \
--merge-labels \
--keypair sshkey \
--master-count 3 \
--node-count 2 \
--timeout 190 \
--master-flavor eo1.large \
--flavor eo1.large \
newcluster
```

Warning

When using the exemplar default cluster template, *k8s-stable-1.23.5*, there is no need to specify the label **master-lb-enabled=true**, as the master load balancer will always be created with the default cluster template. The only way to **not** have the master load balancer created with the default template is to specify the flag **--master-lb-disabled**. Likewise, using **master-lb-enabled=false** with **--merge-labels** applied afterwards also will **not** work, i.e. it will not prevent the master LB from being created.

Some labels provide functionality that is available only through the CLI and not through Horizon.

**How to properly form a cluster with auto healing turned on**

Note

**Prerequisite No. 6** will show you how to enable the command line interface for your cloud server. **Prerequisite No. 7** will give you a formal introduction to the notion of Kubernetes autohealing, as implemented in OpenStack Magnum.

The only way to turn auto healing on and at the same time guarantee that the cluster will be formed normally is to set up the following label:

```
auto_healing_enabled=True
```

Warning

Do not include the above label if you want to create a cluster that does not use auto healing.

Here is a variation of the CLI command to generate a cluster. It uses medium flavors instead of large, has only one master and one worker node, has auto healing turned on, and so on:

```
openstack coe cluster create \
--cluster-template k8s-stable-1.23.5 \
--labels floating-ip-enabled=true,master-lb-enabled=true,auto_healing_enabled=true \
--merge-labels \
--keypair sshkey \
--master-count 1 \
--node-count 1 \
--master-flavor eo1.medium \
--flavor eo1.medium \
newcluster
```

**Execute the command for creation of a cluster**

Copy and paste the above command into the terminal where the OpenStack and Magnum clients are active:

![](../_images/create_cluster_command.png)

How To Check Upon the Status of the Cluster[](#how-to-check-upon-the-status-of-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

The command to show the status of clusters is

```
openstack coe cluster list
```

*newcluster* is in status CREATE\_IN\_PROGRESS, i.e. it is being created under the hood. Repeat the command after a minute or two to see the latest status, which now is CREATE\_FAILED. To see the reason why the creation of the cluster stopped, go to the Horizon interface, list the clusters and click on the name of *newcluster*.
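
Instead of rerunning the list command by hand, you can loop until the status leaves CREATE\_IN\_PROGRESS. The sketch below stubs out the real client call (`openstack coe cluster show newcluster -c status -f value`) with a counter that fails on the third poll, so the loop logic can be shown self-contained:

```shell
polls=0
status="CREATE_IN_PROGRESS"

# Poll until the status is no longer CREATE_IN_PROGRESS. With the real
# client you would replace the if/else with the "openstack coe cluster
# show" call and add "sleep 60" between polls.
while [ "$status" = "CREATE_IN_PROGRESS" ]; do
  polls=$((polls + 1))
  if [ "$polls" -lt 3 ]; then
    status="CREATE_IN_PROGRESS"   # stubbed: still creating
  else
    status="CREATE_FAILED"        # stubbed: creation stopped
  fi
done

echo "Final status after $polls polls: $status"
```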

Under **Stack**, there is a message like this:

```
Resource CREATE failed: OverQuotaClient: resources.secgroup_kube_master: Quota exceeded for resources:
['security_group_rule']. Neutron server returns request_ids: ['req-1aff5045-db64-4075-81df-80611db8cb6c']
```

The quota for the security group rules was exceeded. To verify, execute this command:

```
openstack quota show --default
```

The result may be too cluttered in a normal terminal window, so in this case more information will be available from the Horizon interface:

![](../_images/quotas_overview.png)

Red and orange colors denote danger: you either have to ask support to increase your quotas or delete the instances and clusters that have exceeded them.

Note

It is beyond the scope of this article to describe how to delete elements through the Horizon interface. Make sure that quotas are available before creating a new cluster.

Failure to Create a Cluster[](#failure-to-create-a-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------

There are many reasons why a cluster may fail to create. Maybe the state of the system quotas is not optimal; maybe there is a mismatch between the parameters of the cluster and the parameters of the rest of the cloud. For example, if you base the creation of the cluster on the default cluster template, it will use the Fedora distribution and require 10 GiB of memory. That may clash with *--docker-volume-size* if it was set to be larger than 10 GiB.

The flavors for masters and minions are *eo1.large*; if you want a larger Docker image size, increase the *--master-flavor* size.

The entire cloud may be overloaded and the creation of the cluster may take longer than the default 60 minutes. Set the *--timeout* parameter to 120 or 180 minutes in such cases.

If the creation process failed prematurely, then

> * review system quotas
> * delete the failed cluster(s)
> * review system quotas again
> * change parameters and
> * run the cluster creation command again.

CLI Commands to Delete a Cluster[](#cli-commands-to-delete-a-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------

If the cluster failed to create, it is still taking up system resources. Delete it with a command such as

```
openstack coe cluster delete newcluster
```

List the clusters and you will first see that the status is DELETE\_IN\_PROGRESS; after a while, *newcluster* will disappear.

Now try to delete cluster *largecluster*. There are two of them, so a command such as

```
openstack coe cluster delete largecluster
```

will not be accepted. Instead of the name, enter the *uuid* value:

```
openstack coe cluster delete e80c5815-d20b-4a2b-8588-49cf7a7e1aad
```

Again, the request will be accepted and after a minute or two the required cluster will disappear.
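
When several clusters share a name, the uuids can be looked up programmatically with the machine-readable output of the list command. The table below is an illustrative sample (one uuid is taken from the example above, the others are made up) standing in for the real output of `openstack coe cluster list -f value -c uuid -c name`:

```shell
# Illustrative sample of: openstack coe cluster list -f value -c uuid -c name
cluster_list='e80c5815-d20b-4a2b-8588-49cf7a7e1aad largecluster
f91d6926-e31c-4b3c-9699-5ad08b8f2bbd largecluster
a12b3c4d-0000-1111-2222-333344445555 newcluster'

# Print the uuid of every cluster named "largecluster"; each uuid can then
# be fed to "openstack coe cluster delete <uuid>".
printf '%s\n' "$cluster_list" | awk '$2 == "largecluster" {print $1}'
```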

Now there is only one *largecluster*, so this will work:

```
openstack coe cluster delete largecluster
```

Deleting clusters that were not installed properly has freed up a significant amount of system resources. There are no more orange and red quotas:

![](../_images/quotas_after_delete.png)

In this step you have successfully deleted the clusters whose creation stopped prematurely, thus paving the way to the creation of the next cluster under slightly different circumstances.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

In this tutorial you have used CLI commands to generate cluster templates as well as the clusters themselves. You have also seen how to free up system resources and try again when the cluster creation process fails.

OpenStack and Magnum did the heavy lifting for you, letting you create full-fledged Kubernetes clusters with only a handful of CLI commands. The next step is to start working with the Kubernetes clusters directly. That means installing the **kubectl** command with the article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html) and using it to install the apps that you want to run on Kubernetes clusters.

@ -0,0 +1,257 @@

How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum[](#how-to-create-a-kubernetes-cluster-using-brand-name-openstack-magnum "Permalink to this headline")
=================================================================================================================================================================================

In this tutorial, you will start with an empty Horizon screen and end up running a full Kubernetes cluster.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a new Kubernetes cluster using one of the default cluster templates
> * Visual interpretation of created networks and Kubernetes cluster nodes

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

The resources that you require and use will reflect on the state of your account wallet. Check your account statistics at <https://portal.cloudferro.com/> and, if you are not going to use the cluster any more, remove it altogether to save on resource costs.

Magnum clusters are bound to the user that created them through an impersonation token; in the event of removing that user from the project, the cluster loses authentication to the OpenStack API, making it non-operational. A typical scenario would be for the tenant manager to create user accounts and let them create Kubernetes clusters. Later on, when the cluster is operational, the user would be removed from the project. The cluster would still be present, but the user could not, say, create new clusters, persistent volume claims would be dysfunctional, and so on.

Therefore, good practice when creating new Kubernetes clusters is to create a service account dedicated to creating the Magnum cluster. In essence, devote one account to one Kubernetes cluster, nothing more and nothing less.

No. 2 **Private and public keys**

An SSH key-pair created in the OpenStack dashboard. To create it, follow this article [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html).

The key pair created in that article is called “sshkey”. You will use it as one of the parameters for creation of the Kubernetes cluster.

Step 1 Create New Cluster Screen[](#step-1-create-new-cluster-screen "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Click on **Container Infra** and then on **Clusters**.

![](../_images/container_infra_clusters.png)

There are no clusters yet, so click on the button **+ Create Cluster** on the right side of the screen.

![](../_images/create_cluster_button.png)

On the left side, in blue, are the main options: screens into which you will enter data for the cluster. The three with asterisks, **Details**, **Size**, and **Network**, are mandatory; you must visit them and either enter new values or confirm the offered default values within each screen. When all the values are entered, the **Submit** button in the lower right corner will become active.

**Cluster Name**

This is your first cluster, so name it just *Kubernetes*.

![](../_images/cluster_name.png)

A cluster name cannot contain spaces. Using a name such as *XYZ k8s Production* will result in an error message, while a name such as *XYZ-k8s-Production* won’t.

**Cluster Template**

A cluster template is a blueprint for the base configuration of the cluster, where the version number reflects the Kubernetes version used.

You immediately see how the cluster template is applied:

![](../_images/cluster_template_applied.png)

**Availability Zone**

**nova** is the name of the related module in OpenStack and is the only option offered here.

**Keypair**

Assuming you have used **Prerequisite No. 2**, choose *sshkey*.

![](../_images/keypair_sshkey.png)

**Addon Software - Enable Access to EO Data**

This field is specific to OpenStack systems developed by the [Cloudferro hosting company](https://cloudferro.com/en/). *EODATA* here means **Earth Observation Data** and refers to data gained from scientific satellites monitoring the Earth.

Checking this field will install a network with access to the downloaded satellite data.

If you are just trying to learn about Kubernetes on OpenStack, leave this option unchecked. And vice versa: if you want to go into production and use satellite data, turn it on.

Note

There is a cluster template label called **eodata\_access\_enabled=true** which, if set, has the same effect of creating a network for connecting to the EODATA.

This is what the screen looks like when all the data have been entered:

![](../_images/details_screen_complete.png)

Click on the lower right button **Next** or on the option **Size** in the left main menu of the screen to proceed to the next step of defining a Kubernetes cluster.

Step 2 Define Master and Worker Nodes[](#step-2-define-master-and-worker-nodes "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

In general terms, *master nodes* host the internal infrastructure of the cluster, while *worker nodes* host the K8s applications.

This is how this window looks before entering the data:

![](../_images/size_screen_empty.png)

If there are any fields with default values, such as **Flavor of Master Nodes** and **Flavor of Worker Nodes**, these values were predefined in the cluster template.

**Number of Master Nodes**

![](../_images/number_of_master_nodes.png)

A Kubernetes cluster has *master* and *worker* nodes. In real applications, a typical setup would be running 3 master nodes to ensure High Availability of the cluster’s infrastructure. Here, you want to create your first cluster in a new environment, so settle for just **1** master node.

**Flavor of Master Nodes**

![](../_images/flavor_of_master_nodes.png)

Select **eo1.large** for the master node flavor.

**Number of Worker Nodes**

![](../_images/number_of_worker_nodes.png)

Enter **3**. This is for introductory purposes only; in real life the cluster can consist of many more worker nodes. Cluster sizing guidelines are beyond the scope of this article.

**Flavor of Worker Nodes**

Again, choose **eo1.large**.

**Auto Scaling**

![](../_images/auto_scaling.png)

When there is a lot of demand for the workers’ services, the Kubernetes system can scale to using more worker nodes. Our sample setting is a minimum of 2 and a maximum of 4 worker nodes. With this setting the number of nodes will be dynamically adjusted between these values, based on the ongoing load (the number and resource requests of pods running K8s applications on the cluster).

Here is what the screen **Size** looks like when all the data are entered:

![](../_images/size_screen_complete.png)

To proceed, click on the lower right button **Next** or on the option **Network** in the left main menu.

Step 3 Defining Network and LoadBalancer[](#step-3-defining-network-and-loadbalancer "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

This is the last of the mandatory screens and the blue **Submit** button in the lower right corner is now active. (If it is not, use the screen button **Back** to fix values in the previous screens.)

![](../_images/network_screen.png)

**Enable Load Balancer for Master Nodes**

This option is automatically checked when you select more than one master node. Using multiple master nodes ensures High Availability of the cluster infrastructure, and in that case the Load Balancer is necessary to distribute the traffic between the masters.

If you selected only one master node, which may be relevant in non-production scenarios such as testing, you still have the option to either add or skip the Load Balancer. Note that using a Load Balancer with one master node is still a relevant option, as it allows access to the cluster from outside of the cluster network. Without it, you will need to rely on SSH access to the master.

**Create New Network**

This box comes **turned on**, meaning that the system will create a network just for this cluster. Since Kubernetes clusters need subnets for inter-communication, a related subnetwork will be created first and then used further down the road.

It is strongly recommended to use automatic creation of the network when creating a new cluster.

However, turning the checkbox off reveals an option to use an existing network as well.

**Use an Existing Network**

Using an existing network is a more advanced option. You would need to first create a network dedicated to this cluster in OpenStack, along with the necessary adjustments. Creation of such a custom network is beyond the scope of this article. Note that you should not use the network of another cluster, the project network or the EODATA network.

If you have an existing network and would like to proceed, choose the network and the subnet from the dropdowns below:

![](../_images/existing_network_dropdowns.png)

Both fields have an asterisk next to them, meaning you must specify a concrete value in each of the two fields.

**Cluster API**

The setting “Available on public internet” implies that floating IPs will be assigned to both master and worker nodes. This option is usually redundant and raises security concerns. Unless you have a specific requirement, leave this option set to “private”. You can always assign floating IPs to the required nodes later from the “Compute” section in Horizon.

**Ingress Controller**

Use of ingress is a more advanced feature, related to load balancing the traffic to the Kubernetes applications.

If you are just starting with Kubernetes, you will probably not require this feature immediately, so you can leave this option out.

Step 4 Advanced options[](#step-4-advanced-options "Permalink to this headline")
---------------------------------------------------------------------------------

**Option Management**

![](../_images/option_management.png)

There is just one option in this window, **Auto Healing**, and its field **Automatically Repair Unhealthy Nodes**.

A *node* is a basic unit of a Kubernetes cluster, and the Kubernetes system software will automatically poll the state of each node; if a node is not ready or not available, the system will replace the unhealthy node with a healthy one, provided, of course, that this field is checked.

If this is your first time trying out the formation of Kubernetes clusters, auto healing may not be of interest to you. In production, however, auto healing should always be on.

**Option Advanced**

![](../_images/option_advanced.png)

The option **Advanced** allows for entering so-called *labels*, which are named parameters for the Kubernetes system. Normally, you don’t have to enter anything here.

Labels can change how the cluster creation is performed. There is a set of labels, called the *Template and Workflow Labels*, that the system sets up by default. If this check box is left as is, that is, unchecked, the default labels will be used unchanged. That guarantees that the cluster will be formed with all of the essential parameters in order. Even if you add your own labels, as shown in the image above, everything will still function.

If you **turn on** the field **I do want to override Template and Workflow Labels** and use any of the *Template and Workflow Labels* by name, they will be set up the way you specified. Use this option very rarely, if at all, and only if you are sure of what you are doing.
Step 5 Forming of the Cluster[](#step-5-forming-of-the-cluster "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------
|
||||
|
||||
Once you click on **Submit** button, OpenStack will start creating the Kubernetes cluster for you. It will show a cloud message with green background in the upper right corner of the windows, stating that the creation of the cluster has been started.

Cluster generation usually takes from 10 to 15 minutes. It will be automatically abandoned if it takes longer than 60 minutes.

If there is any problem with creation of the cluster, the system will signal it in various ways. You may see a message in the upper right corner, with a red background, like this:



Just repeat the process; in most cases you will then proceed to the following screen:



Click on the name of the cluster, *Kubernetes*, to see what it looks like if everything went well.



Step 6 Review cluster state[](#step-6-review-cluster-state "Permalink to this headline")
-----------------------------------------------------------------------------------------

Here is what OpenStack Magnum created for you as a result of filling in the data on those three screens:

> * A new network called *Kubernetes*, complete with a subnet, ready for further connections.
> * New instances – virtual machines that serve as nodes.
> * A new external router.
> * New security groups, and of course
> * A fully functioning Kubernetes cluster on top of all these other elements.

You may observe that the number of nodes in the cluster was initially 3, but after a while the cluster auto-scaled itself down to 2. This is expected and is the result of the autoscaler, which detected that our cluster is mostly idle in terms of application load.

There is another way to view our cluster setup and inspect any deviations from the required state. Click on **Network** in the main menu and then on **Network Topology**. You will see a real-time graphical representation of the network. As soon as one of the cluster elements is added, it is shown on screen.



In Horizon’s **Compute** panel you can also see the virtual machines which were created for master and worker nodes:



Node names start with *kubernetes* because that is the name of the cluster in lower case.

Resources tied up by one attempt at creating a cluster are **not** automatically reclaimed when you attempt to create a new cluster. Therefore, several attempts in a row can lead to a stalemate, in which no cluster will be formed until all of the tied-up resources are freed.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You now have a fully operational Kubernetes cluster. You can

* use ready-made Docker images to automate installation of apps,
* activate the Kubernetes dashboard and watch the state of the cluster online

and so on.

Here are some relevant articles:

Read more about ingress here: [Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum](Using-Kubernetes-Ingress-on-CloudFerro-Cloud-OpenStack-Magnum.html)

The article [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html) shows how to use the command line interface to create Kubernetes clusters.

To access your newly created cluster from the command line, see [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).
How to create Kubernetes cluster using Terraform on CloudFerro Cloud[](#how-to-create-kubernetes-cluster-using-terraform-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================

In this article we demonstrate using [Terraform](https://www.terraform.io/) to deploy an OpenStack Magnum Kubernetes cluster on the CloudFerro Cloud cloud.

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting account**

You need an active CloudFerro Cloud account: <https://portal.cloudferro.com/>.

No. 2 **Active CLI session with OpenStackClient for Linux**

You need the OpenStack CLI installed and the respective Python virtual environment sourced. For guidelines see:

[How to install OpenStackClient for Linux on CloudFerro Cloud](../openstackcli/How-to-install-OpenStackClient-for-Linux-on-CloudFerro-Cloud.html)

It shows how to install Python, create and activate a virtual environment, and then connect to the cloud by downloading and activating the proper RC file from the CloudFerro Cloud cloud.

No. 3 **Connect to the cloud via an RC file**

Another article, [How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication](../accountmanagement/How-to-activate-OpenStack-CLI-access-to-CloudFerro-Cloud-cloud-using-one-or-two-factor-authentication.html), deals with connecting to the cloud and covers whichever of the one- or two-factor authentication procedures is enabled on your account. It also covers all the main platforms: Linux, macOS and Windows.

You will use both the Python virtual environment and the downloaded RC file **after** Terraform has been installed.

No. 4 **Familiarity with creating Kubernetes clusters**

You should be familiar with creating Kubernetes clusters in a standard way, e.g. using Horizon or the OpenStack CLI:

[How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

[How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 5 **Terraform operational**

Have Terraform installed locally or on a cloud VM – installation guidelines along with further information can be found in this article:

[Generating and authorizing Terraform using Keycloak user on CloudFerro Cloud](../openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-CloudFerro-Cloud.html)

After you finish working through that article, you will have access to the cloud via an active **openstack** command. Also, special environment (**env**) variables (**OS\_USERNAME**, **OS\_PASSWORD**, **OS\_AUTH\_URL** and others) will be set up so that various programs can use them – Terraform being the prime target here.

Define provider for Terraform[](#define-provider-for-terraform "Permalink to this headline")
---------------------------------------------------------------------------------------------

Terraform uses the notion of a *provider*, which represents your concrete cloud environment and covers authentication. CloudFerro Cloud clouds are built on OpenStack technology, and OpenStack is one of the standard provider types for Terraform.

We need to:

> * instruct Terraform to use OpenStack as a provider type
> * provide credentials which point to our own project and user in the cloud.

Assuming you have worked through Prerequisite No. 2 (download and source the RC file), several OpenStack-related environment variables will be populated in your local system. The ones pointing to your OpenStack environment start with OS, e.g. **OS\_USERNAME**, **OS\_PASSWORD**, **OS\_AUTH\_URL**. When we define OpenStack as the Terraform provider type, Terraform will know to automatically use these **env** variables to authenticate.
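
As a quick sanity check (a sketch, not part of the official guidelines), you can list which **OS\_** variables your current shell actually exports; an empty list means the RC file has not been sourced yet:

```
# Show the names of the exported OS_* variables
# (values are withheld on purpose - they contain credentials)
if env | grep -q '^OS_'; then
  env | grep '^OS_' | cut -d '=' -f 1 | sort
else
  echo "No OS_* variables set - source the RC file first"
fi
```
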

Let’s define the Terraform provider now by creating file **provider.tf** with the following contents:

> **provider.tf**

```
# Define providers
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.35.0"
    }
  }
}

# Configure the OpenStack Provider
provider "openstack" {
  auth_url = "https://keystone.cloudferro.com:5000/v3"
  # the rest of the configuration parameters are taken from environment variables once the RC file is correctly sourced
}
```

The **auth\_url** is the only configuration option that must be provided in the configuration file, even though it is also available among the environment variables.

Having this provider spec allows us to create a cluster in the following steps; it can also be reused to create other resources in your OpenStack environment, e.g. virtual machines, volumes and many others.

Define cluster resource in Terraform[](#define-cluster-resource-in-terraform "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

The second step is to define the exact specification of the resource that we want to create with Terraform. In our case we want to create an OpenStack Magnum cluster. In Terraform terminology, it will be an instance of the **openstack\_containerinfra\_cluster\_v1** resource type. To proceed, create file **cluster.tf** containing the specification of our cluster:

**cluster.tf**

```
# Create resource
resource "openstack_containerinfra_cluster_v1" "k8s-cluster" {
  name                = "k8s-cluster"
  cluster_template_id = "524535ed-9a0f-4b70-966f-6830cdc52604"
  node_count          = 3
  master_count        = 3
  flavor              = "eo1.large"
  master_flavor       = "hmad.medium"
  keypair             = "mykeypair"
  labels = {
    eodata_access_enabled = true
    etcd_volume_size      = 0
  }
  merge_labels = true
}
```

The above setup reflects a cluster with some frequently used customizations:

cluster\_template\_id
: corresponds to the ID of one of the default cluster templates in the WAW3-2 cloud, which is **k8s-localstorage-1.23.16-v1.0.0**. The default templates and their IDs can be looked up in the Horizon UI in the submenu **Cluster Infra** → **Container Templates**.

node\_count, master\_count, flavor, master\_flavor
: correspond intuitively to the count and flavor of worker and master nodes in the cluster.

keypair
: reflects the name of the keypair used in our OpenStack project in the chosen cloud.

labels and merge\_labels
: We use two labels:

eodata\_access\_enabled=true
: ensures that the EODATA network with fast access to satellite images is connected to our cluster nodes,

etcd\_volume\_size=0
: ensures that master nodes are properly provisioned with NVMe local storage.

With this configuration, it is mandatory to also set **merge\_labels=true** to properly apply these labels and avoid having them overwritten by template defaults.

In our example we operate on the WAW3-2 cloud, where flavor **hmad.medium** is available. If using another cloud, adjust the parameters accordingly.

The above configuration reflects a cluster where a *loadbalancer* is placed in front of the master nodes, and where this loadbalancer’s flavor is **HA-large**. Customizing this default, like other more advanced defaults, would require creating a custom Magnum template, which is beyond the scope of this article.

Apply the configurations and create the cluster[](#apply-the-configurations-and-create-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

Once both Terraform configurations described in the previous steps are defined, we can apply them to create our cluster.

The first step is to have both files **provider.tf** and **cluster.tf** available in a dedicated folder. Then **cd** to this folder and type:

```
terraform init
```

This command will initialize our cluster deployment. It will surface any formal errors, including problems with authentication to OpenStack, which might need correcting before moving to the next stage.



As the next step, Terraform will plan the actions it needs to perform to create the resource. Proceed by typing:

```
terraform plan
```

The result is shown below and gives a chance to correct any logical errors in our expected setup:



The last step is to apply the planned changes. Perform this step with the command:

```
terraform apply
```

The output of this last command will initially repeat the plan, then ask you to enter the word **yes** to set Terraform into action.

Upon confirming with **yes**, the action is deployed and the console updates every 10 seconds with a “Still creating…” check until our cluster is created.

The final lines of the output after successfully provisioning the cluster should read similar to the below:



What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Terraform can also be used to deploy additional applications to our cluster, e.g. using the Helm provider for Terraform. Check the Terraform documentation for more details.
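
As a minimal sketch of that idea (the kubeconfig path and the chart are illustrative assumptions, not part of this article's setup), the Helm provider can be declared alongside the OpenStack provider and then used to manage chart releases as ordinary Terraform resources:

```
# Sketch only: point the Helm provider at the kubeconfig of the cluster
# created above (adjust the path to your own file)
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Example Helm release managed by Terraform (chart chosen for illustration)
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}
```
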
How to install Rancher RKE2 Kubernetes on CloudFerro Cloud[](#how-to-install-rancher-rke2-kubernetes-on-brand-name "Permalink to this headline")
=================================================================================================================================================

[RKE2](https://docs.rke2.io/) – Rancher Kubernetes Engine version 2 – is a Kubernetes distribution provided by SUSE. Running a self-managed RKE2 cluster in the CloudFerro Cloud cloud is a viable option, especially for those seeking smooth integration with the Rancher platform and customization options.

An RKE2 cluster can be provisioned from the Rancher GUI. In this article, however, we use Terraform, which enables streamlined, automated cluster creation. We also use the OpenStack Cloud Controller Manager (CCM) to integrate the RKE2 cluster with the wider OpenStack environment. Using a customized version of CCM enables us to take advantage of CloudFerro Cloud cloud-native features. The end result is

> * a provisioned RKE2 cluster
> * running under OpenStack, with
> * an integrated OpenStack Cloud Controller Manager.

We also illustrate the coding techniques used, in case you want to enhance the RKE2 implementation further.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Perform the preliminary setup
>
> > * Create new project
> > * Create application credentials
> > * Have keypair operational
> > * Authenticate to the newly formed project
>
> * Use Terraform configuration for RKE2 from CloudFerro’s GitHub repository
> * Provision an RKE2 cluster
> * Demonstrate the incorporated cloud-native load-balancing
> * Implementation details
> * Further customization

The code is tested on Ubuntu 22.04.

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Terraform available on your local command line**

See [Generating and authorizing Terraform using Keycloak user on CloudFerro Cloud](../openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-CloudFerro-Cloud.html)

No. 3 **Python virtual environment sourced**

[How to install Python virtualenv or virtualenvwrapper on CloudFerro Cloud](../cloud/How-to-install-Python-virtualenv-or-virtualenvwrapper-on-CloudFerro-Cloud.html)

No. 4 **OpenStack CLI installed locally**

When it is installed, you will have access to the **openstack** command and will be able to communicate with the OpenStack cloud:

[How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication](../accountmanagement/How-to-activate-OpenStack-CLI-access-to-CloudFerro-Cloud-cloud-using-one-or-two-factor-authentication.html)

No. 5 **kubectl tool installed locally**

Standard types of **kubectl** installation are described on the [Install Tools page](https://kubernetes.io/docs/tasks/tools/) of the official Kubernetes site.

No. 6 **Available key pair in OpenStack**

[How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html).

No. 7 **Application credentials**

The following article describes how to create and use application credentials using the CLI:

[How to generate or use Application Credentials via CLI on CloudFerro Cloud](../cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-CloudFerro-Cloud.html)

In this article, we shall create application credentials through Horizon, but with a specific selection of user roles.

No. 8 **Projects, roles, users and groups**

Option **Identity** lists available projects, roles, users and groups. See [What is an OpenStack project on CloudFerro Cloud](../cloud/What-is-an-OpenStack-project-on-CloudFerro-Cloud.html)

No. 9 **Experience with Kubernetes and Helm**

To follow this article, you should know your way around Kubernetes in general. Actual experience of using it on the CloudFerro Cloud cloud would be even better. For a series of articles on Kubernetes, see [KUBERNETES](kubernetes.html).

One of the steps of the installation in this article is to create a Helm CRD and use it. This article shows the basics of using Helm: [Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html).

No. 10 **Cloud Controller Manager**

Within a general Kubernetes environment, [the Cloud Controller Manager (CCM)](https://kubernetes.io/docs/concepts/architecture/cloud-controller/) allows Kubernetes to integrate with cloud provider APIs. It abstracts cloud-specific logic, manages and synchronizes resources between Kubernetes and the underlying cloud infrastructure, and provides controllers for Nodes, Routes, Services and Volumes.

Under OpenStack, CCM integrates with OpenStack APIs. The code used here is from the Cloud Controller Manager repository – <https://github.com/kubernetes/cloud-provider-openstack>. It implements the above-mentioned as well as other OpenStack-Kubernetes integrations.

No. 11 **rke2-terraform repository**

You will need to download the following repository

> <https://github.com/CloudFerro/K8s-samples/tree/main/rke2-terraform>

in order to install the Terraform manifests for provisioning RKE2 on CloudFerro Cloud using Terraform.

No. 12 **Customize the cloud configuration for Terraform**

One of the files downloaded from the above link is **variables.tf**. It contains definitions of the region, cluster name and many other variables. The default value for the region is **WAW3-2**, so customize it for your own cloud.


Step 1 Perform the preliminary setup[](#step-1-perform-the-preliminary-setup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Our objective is to create a Kubernetes cluster which runs in the cloud environment. RKE2 software packages will be installed on cloud virtual machines playing the roles of Kubernetes master and worker nodes. Several other OpenStack resources will be created along the way.

As part of the preliminary setup to provision these resources we will:

> * Create a dedicated OpenStack project to isolate all resources dedicated to the cluster
> * Create application credentials
> * Ensure a key pair is enabled for the project
> * Source locally the RC file for this project

Below are the instructions to create the project, the credentials and the key pair, and to source the RC file locally.

### Preparation step 1 Create new project[](#preparation-step-1-create-new-project "Permalink to this headline")

The first step is to create a new project using the Horizon UI. Click on Identity → Projects. Fill in the name of the project on the first tab:



In the second tab, ensure that the user you operate with is added as a project member with the “member”, “load-balancer\_member” and “creator” roles.



Then click on “Create Project”. Once the project is created, switch to the context of this project from the top left menu:



### Preparation step 2 Create application credentials[](#preparation-step-2-create-application-credentials "Permalink to this headline")

The next step is to create an application credential that will be used to authenticate the OpenStack Cloud Controller Manager (used for automated load balancer provisioning). To create one, go to menu **Identity** → **Application Credentials**. Fill in the form as per the example below, passing all available roles (“member”, “load-balancer\_member”, “creator”, “reader”) to this credential. Set the expiry date to a date in the future.



After clicking on **Create Application Credential**, copy both the application ID and the credential secret to a safe place. This window is displayed only once, so the best solution is to download the files **openrc** and **clouds.yaml**, which both contain the required values.



Prerequisite No. 7 contains a complete guide to application credentials.

### Preparation step 3 Keypair operational[](#preparation-step-3-keypair-operational "Permalink to this headline")

Before continuing, ensure you have a keypair available. If you already had a keypair in your main project, it will also be available in the newly created project. If you do not have one yet, create it from the left menu **Project** → **Compute** → **Key Pairs**. For additional details, visit Prerequisite No. 6.

### Preparation step 4 Authenticate to the newly formed project[](#preparation-step-4-authenticate-to-the-newly-formed-project "Permalink to this headline")

Lastly, download the RC file corresponding to the new project from the Horizon GUI, then source this file in your local Linux terminal. See Prerequisite No. 4.

Step 2 Use Terraform configuration for RKE2 from CloudFerro’s GitHub repository[](#step-2-use-terraform-configuration-for-rke2-from-cloudferro-s-github-repository "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

We added the folder **rke2-terraform** to CloudFerro’s [K8s-samples GitHub repository](https://github.com/CloudFerro/K8s-samples/tree/main/rke2-terraform) from Prerequisite No. 11. This project includes configuration files to provision an RKE2 cluster on CloudFerro clouds and can be used as a starter pack for further customization to your specific requirements.



In this section, we briefly introduce this repository, explaining the content and purpose of the specific configuration files. These files are the actual commands to Terraform and are defined in its standard files, with the extension **.tf**.

variables.tf
: Contains key variables that specify the configuration of our cluster, e.g. the **number of worker nodes**, the **cloud region** where the cluster will be placed, and the **name of the cluster**. Most of these variables have default values set and you can modify these defaults directly in the file. The variables with no defaults (secret, sensitive data) should have their values provided separately, via a **tfvars** file, which is explained in the next section.

providers.tf
: Used for declaring and configuring Terraform providers. In our case, we only use the OpenStack provider, which provisions the cloud resources that form the cluster.

main.tf
: Contains the declaration of resources to be created by Terraform. Several OpenStack resources are required to form a cluster, e.g. a network, subnet, router, virtual machines and others. Review the file for details and customize it to your preference.

security-groups.tf
: Contains the declaration of security groups and security group rules used in OpenStack to open specific ports on the virtual machines forming the cluster. Thus, communication from selected sources gets enabled on each VM. Modify the file to customize.

cloud-init-masters.yml.tpl
: and

cloud-init-workers.yml.tpl
: These two are template files used to create *cloud-init* files, which in turn are used for bootstrapping the created virtual machines:
> * ensuring certain packages are installed on these VMs,
> * creating and running scripts on them, etc.

The content of these templates gets populated based on the user-data section of the virtual machine declarations in **main.tf**.

One of the primary functions of each *cloud-init* file is to install RKE2 on both master and worker nodes.
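
As a rough sketch of what such a template boils down to (this is the generic RKE2 server installation procedure from the official RKE2 documentation, not the repository's exact content), a master-node *cloud-init* essentially runs:

```
#cloud-config
runcmd:
  # Install RKE2 in server (master) mode, then enable and start the service
  - curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="server" sh -
  - systemctl enable rke2-server.service
  - systemctl start rke2-server.service
```

Worker nodes use the agent variant (`INSTALL_RKE2_TYPE="agent"` and `rke2-agent.service`) plus a token pointing at the server.
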
Step 3 Provision an RKE2 cluster[](#step-3-provision-an-rke2-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Let’s provision an RKE2 Kubernetes cluster now. This consists of the following steps:

> * Clone the GitHub repository
> * Adjust the defaults in **variables.tf**
> * Create file **terraform.tfvars** with the secrets
> * Initialize, plan and apply the Terraform configurations
> * Use the retrieved **kubeconfig** to access the cluster with **kubectl**

The first step is to clone the GitHub repository. We clone the entire repo but keep just the **rke2-terraform** folder, with the commands below:

```
git clone https://github.com/CloudFerro/K8s-samples ~/K8s-samples
mkdir ~/rke2-terraform
mv ~/K8s-samples/rke2-terraform/* ~/rke2-terraform
rm -rf ~/K8s-samples
cd ~/rke2-terraform
```

As mentioned in Prerequisite No. 12, inspect and, if needed, change the default settings in **variables.tf**, e.g. the name of the cluster, the cloud region or the virtual machine settings.

In our case, we stick to the defaults.

Note

A highly available control plane is currently not covered by this repository. Setting the number of master nodes to a value other than 1 is **not** supported.

### Enter data in file terraform.tfvars[](#enter-data-in-file-terraform-tfvars "Permalink to this headline")

The next step is to create file **terraform.tfvars**, with the following contents:

```
ssh_keypair_name              = "your_ssh_keypair_name"
project_id                    = "your_project_id"
public_key                    = "your_public_key"
application_credential_id     = "your_app_credential_id"
application_credential_secret = "your_app_credential_secret"
```

Get ssh\_keypair\_name
: Choose one from the list shown under **Compute** → **Key Pairs**.

Get project\_id
: The easiest way is to list all of the projects with **Identity** → **Projects**, click on the project name and read the **ID**.

Get public\_key
: Go to **Compute** → **Key Pairs** and click on the name of the keypair you entered for variable **ssh\_keypair\_name**.

Get application\_credential\_id
: Read the application credential **ID** from one of the files **openrc** or **clouds.yaml**.

Get application\_credential\_secret
: The same, only for the secret.
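
As a convenience, the two credential values can be pulled out of **clouds.yaml** with standard shell tools. The sketch below runs against a made-up sample file; point the `grep` commands at your real download instead (the field names match a typical application-credential **clouds.yaml**):

```
# Create a sample clouds.yaml just for demonstration (values are fake)
cat > /tmp/clouds-sample.yaml <<'EOF'
clouds:
  openstack:
    auth:
      auth_url: https://keystone.cloudferro.com:5000/v3
      application_credential_id: "abc123"
      application_credential_secret: "s3cret"
EOF

# Extract the credential ID and secret
grep 'application_credential_id' /tmp/clouds-sample.yaml | sed 's/.*: *//; s/"//g'       # prints: abc123
grep 'application_credential_secret' /tmp/clouds-sample.yaml | sed 's/.*: *//; s/"//g'   # prints: s3cret
```
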

### Run Terraform to provision RKE2 cluster[](#run-terraform-to-provision-rke2-cluster "Permalink to this headline")

This completes the setup part. We can now run the standard Terraform commands – **init**, **plan** and **apply** – to create our RKE2 cluster. The commands should be executed in the order provided below. Type **yes** when asked to confirm the steps planned by Terraform.

```
terraform init
terraform plan
terraform apply
```

The provisioning will take a few minutes (approximately 5-10 minutes for a small cluster). Logs will be printed to the console confirming the creation of each resource. Here is a sample final output from the **terraform apply** command:



As part of the provisioning process, the *kubeconfig* file **kubeconfig.yaml** will be copied to your local working directory. Export the environment variable pointing your local kubectl installation to this *kubeconfig* location (replace the path in the sample command below):

```
export KUBECONFIG=/path_to_your_kubeconfig_file/kubeconfig.yaml
```

Then check whether the cluster is available with:

```
kubectl get nodes
```

We can see that the cluster is provisioned correctly in our case, with both master and worker nodes being **Ready**:



Step 4 Demonstrate cloud-native integration covered by the repo[](#step-4-demonstrate-cloud-native-integration-covered-by-the-repo "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

We can verify the automated provisioning of load balancers and a public floating IP by exposing a service of type LoadBalancer. The following **kubectl** commands deploy and expose an **nginx** server in our RKE2 cluster’s default namespace:

```
kubectl create deployment nginx-deployment --image=nginx:latest
kubectl expose deployment nginx-deployment --type=LoadBalancer --port=80 --target-port=80
```

It takes around 2-3 minutes for the floating IP and load balancer to be provisioned. After this time, run the following command:

```
kubectl get services
```

You should see a result similar to the one below, where EXTERNAL-IP has been properly populated:


|
||||
|
||||
Similarly, you could verify the presence of the created load balancer in the Horizon interface via the left menu: **Project** → **Network** → **LoadBalancers**
|
||||
|
||||

|
||||
|
||||
and **Project** → **Network** → **Floating IPs**:
|
||||
|
||||

|
||||
|
||||
Ultimately, we can check the service is running as a public service in our browser with the assigned floating IP:
|
||||
|
||||

|
||||
|
||||
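The same check can be performed from the command line instead of a browser; replace the placeholder with the EXTERNAL-IP reported by `kubectl get services`:

```
curl http://<EXTERNAL-IP>
```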
Implementation details[](#implementation-details "Permalink to this headline")
-------------------------------------------------------------------------------

Explaining all of the techniques that went into producing the RKE2 repository from Prerequisite No. 11 is out of the scope of this article. However, here is an illustration of how at least one feature was implemented.

Let us examine the **cloud-init-masters.yml.tpl** file, concretely, the part between line numbers 53 and 79:
```
- path: /var/lib/rancher/rke2/server/manifests/rke2-openstack-cloud-controller-manager.yaml
  permissions: "0600"
  owner: root:root
  content: |
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: openstack-cloud-controller-manager
      namespace: kube-system
    spec:
      chart: openstack-cloud-controller-manager
      repo: https://kubernetes.github.io/cloud-provider-openstack
      targetNamespace: kube-system
      bootstrap: True
      valuesContent: |-
        nodeSelector:
          node-role.kubernetes.io/control-plane: "true"
        cloudConfig:
          global:
            auth-url: https://keystone.cloudferro.com:5000
            application-credential-id: "${application_credential_id}"
            application-credential-secret: "${application_credential_secret}"
            region: ${region}
            tenant-id: ${project_id}
          loadBalancer:
            floating-network-id: "${floating_network_id}"
            subnet-id: ${subnet_id}
```
It covers creating a YAML definition of a HelmChart CRD, *rke2-openstack-cloud-controller-manager.yaml*, in location **/var/lib/rancher/rke2/server/manifests/** on the master node. Upon cluster creation, the RKE2 provisioner automatically captures this file and deploys a pod responsible for provisioning such load balancers. This can be verified by checking the pods in the *kube-system* namespace:

```
kubectl get pods -n kube-system
```

One of the entries is the aforementioned pod:

```
NAME                                       READY   STATUS    RESTARTS     AGE
...
openstack-cloud-controller-manager-bz7zt   1/1     Running   1 (4h ago)   26h
...
```
Further customization[](#further-customization "Permalink to this headline")
-----------------------------------------------------------------------------

Depending on your use case, further customization of the provided sample repository will be required to tune the Terraform configurations that provision an RKE2 cluster. We suggest evaluating the following enhancements:

> * Incorporate High Availability of the Control Plane
> * Integrate with CSI Cinder to enable automated provisioning of block storage with Persistent Volume Claims (PVCs)
> * Integrate the NVIDIA device plugin to enable native integration of VMs with vGPUs
> * Implement a node autoscaler to complement the Kubernetes-native Horizontal Pod Autoscaler (HPA)
> * Implement affinity and anti-affinity rules for placement of worker and master nodes

To implement these features, you would need to simultaneously adjust definitions for both Terraform and Kubernetes resources. Covering those steps is, therefore, outside of the scope of this article.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

In this article, you have created a complete Kubernetes solution using an RKE2 cluster as its foundation.

You can also consider creating Kubernetes clusters using Magnum within OpenStack:

[How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)
@ -0,0 +1,339 @@
Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud[](#implementing-ip-whitelisting-for-load-balancers-with-security-groups-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================================================

In this article, we describe how to use Horizon, the CLI and Terraform to secure load balancers for Kubernetes clusters in OpenStack by implementing IP whitelisting.

What Are We Going To Do[](#what-are-we-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------

Introduction[](#introduction "Permalink to this headline")
-----------------------------------------------------------

Load balancers without proper restrictions are vulnerable to unauthorized access. By implementing IP whitelisting, only the specified IP addresses are permitted to access the load balancer. You decide from which IP addresses it is possible to access the load balancers in particular and the Kubernetes cluster in general.

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **List of IP addresses/ranges to whitelist**

This is the list of IP addresses and ranges from which the load balancer should accept traffic.

No. 3 **A preconfigured load balancer**

In OpenStack, each time you create a Kubernetes cluster, the corresponding load balancers are created automatically.

See article [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 4 **OpenStack command operational**

This is necessary for the CLI procedures.

It boils down to sourcing the proper RC file from Horizon. See [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 5 **Python Octavia Client**

To operate load balancers with the CLI, the Python Octavia Client (python-octaviaclient) is required. It is a command-line client for the OpenStack Load Balancing service. Install the load-balancer (Octavia) plugin with the following command from the Terminal window, on Ubuntu 22.04:

```
pip install python-octaviaclient
```

Or, if you have virtualenvwrapper installed:

```
mkvirtualenv python-octaviaclient
pip install python-octaviaclient
```

Depending on the environment, you might need to use variants such as python3, pip3 and so on.

No. 6 **Terraform installed**

You will need Terraform version 1.5.0 or higher to be operational.

For a complete introduction to and installation of Terraform on OpenStack, see article [Generating and authorizing Terraform using Keycloak user on CloudFerro Cloud](../openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-CloudFerro-Cloud.html)

To use Terraform in this capacity, you will need to authenticate to the cloud using application credentials with **unrestricted** access. Check article [How to generate or use Application Credentials via CLI on CloudFerro Cloud](../cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-CloudFerro-Cloud.html)
Horizon: Whitelisting Load Balancers[](#horizon-whitelisting-load-balancers "Permalink to this headline")
----------------------------------------------------------------------------------------------------------

We will whitelist load balancers by restricting the relevant ports in their security groups. In Horizon, use command **Network** → **Load Balancers** to see the list of load balancers:



Let us use the load balancer whose name starts with **gitlab**. There is no direct link from a load balancer to security groups, so we first have to identify the instances which correspond to that load balancer. Use commands **Project** → **Compute** → **Instances** and search for instances containing **gitlab** in their names:



Edit the security groups of those instances – for each instance, go to the **Actions** menu and select **Edit Security Groups**.



Filter by **gitlab**:



Use commands **Project** → **Network** → **Security Groups** to list security groups with **gitlab** in their names:



Choose which one you are going to edit; alternatively, you can create a new security group. Either way, be sure to enter the following data:

> * **Direction**: Ingress
> * **Ether Type**: IPv4
> * **Protocol**: TCP
> * **Port Range**: Specify the port range used by your load balancer.
> * **Remote IP Prefix**: Enter the IP address or CIDR to whitelist.

Save and apply the changes.
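If you are unsure which source addresses a CIDR such as **192.168.1.0/24** actually admits, Python's standard ipaddress module can check membership; the addresses below are illustrative:

```python
import ipaddress

# The whitelist entry as entered in "Remote IP Prefix".
whitelist = ipaddress.ip_network("192.168.1.0/24")

# 192.168.1.42 falls inside the /24; 192.168.2.1 does not.
print(ipaddress.ip_address("192.168.1.42") in whitelist)  # True
print(ipaddress.ip_address("192.168.2.1") in whitelist)   # False
```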
### Verification[](#verification "Permalink to this headline")

To confirm the configuration:

1. Go to the **Instances** section in Horizon.
2. View the security groups applied to the load balancers’ associated instances.
3. Ensure the newly added rule is visible.

CLI: Whitelisting Load Balancers[](#cli-whitelisting-load-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------

The OpenStack CLI provides a command-line method for implementing IP whitelisting.

Be sure to work through Prerequisites Nos. 4 and 5 in order to have the **openstack** command fully operational.
First, display the details of the load balancer:

```
openstack loadbalancer show <LOAD_BALANCER_NAME_OR_ID>
```

Identify the pool associated with the load balancer:

```
openstack loadbalancer pool list
```

Show details of the pool to list its members:

```
openstack loadbalancer pool show <POOL_NAME_OR_ID>
```

Note the IP addresses of the pool members and identify the instances hosting them.
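Alternatively, the pool members and their addresses can be listed directly with a single command (the pool identifier is a placeholder):

```
openstack loadbalancer member list <POOL_NAME_OR_ID>
```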
Create a security group for IP whitelisting:

```
openstack security group create <SECURITY_GROUP_NAME>
```

Add rules to the security group:

```
openstack security group rule create \
  --ingress \
  --ethertype IPv4 \
  --protocol tcp \
  --dst-port <PORT_RANGE> \
  --remote-ip <IP_OR_CIDR> \
  <SECURITY_GROUP_ID>
```

Apply the security group to the instances hosting the pool members:

```
openstack server add security group <INSTANCE_ID> <SECURITY_GROUP_NAME>
```
### Verification[](#id1 "Permalink to this headline")

Verify the applied security group rules:

```
openstack security group show <SECURITY_GROUP_ID>
```

Confirm the security group is attached to the appropriate instances:

```
openstack server show <INSTANCE_ID>
```

Terraform: Whitelisting Load Balancers[](#terraform-whitelisting-load-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------

Terraform is an Infrastructure as Code (IaC) tool that can automate the process of configuring IP whitelisting.

Create a security group and whitelist rule in **main.tf**:
```
# main.tf

# Security Group to Whitelist IPs
resource "openstack_networking_secgroup_v2" "whitelist_secgroup" {
  name        = "loadbalancer_whitelist"
  description = "Security group for load balancer IP whitelisting"
}

# Add Whitelist Rule for Specific IPs
resource "openstack_networking_secgroup_rule_v2" "allow_whitelist" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80 # Replace with actual port range
  port_range_max    = 80
  remote_ip_prefix  = "192.168.1.0/24" # Replace with actual CIDR
  security_group_id = openstack_networking_secgroup_v2.whitelist_secgroup.id
}

# Existing Instances Associated with Pool Members
resource "openstack_compute_instance_v2" "instances" {
  count           = 2 # Adjust to the number of pool member instances
  name            = "pool_member_${count.index + 1}"
  flavor_id       = "m1.small" # Replace with an appropriate flavor
  image_id        = "image-id" # Replace with a valid image ID
  key_pair        = "your-key-pair"
  security_groups = [openstack_networking_secgroup_v2.whitelist_secgroup.name]

  network {
    uuid = "network-uuid" # Replace with the UUID of your network
  }
}

# Associate the Load Balancer with Security Group via Instances
resource "openstack_lb_loadbalancer_v2" "loadbalancer" {
  name          = "my_loadbalancer"
  vip_subnet_id = "subnet-id" # Replace with the subnet ID
  depends_on    = [openstack_compute_instance_v2.instances]
}
```
Initialize and apply the configuration:

```
terraform init
terraform apply
```

**Verification**

Use Terraform and the OpenStack CLI to review the applied state:

```
terraform show
openstack server show <INSTANCE_ID>
openstack security group show <SECURITY_GROUP_ID>
```
State of Security: Before and after whitelisting the balancers[](#state-of-security-before-and-after-whitelisting-the-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:

> * Only specified IPs can access the load balancer.
> * Unauthorized access attempts are denied.

### Verification Tools[](#verification-tools "Permalink to this headline")

Various tools can ensure the protection is installed and active:

livez
: Kubernetes liveness monitoring endpoint.

nmap
: (free) For port scanning and access verification.

curl
: (free) To confirm access control from specific IPs.

Wireshark
: (free) For packet-level analysis.
### Testing with nmap[](#testing-with-nmap "Permalink to this headline")

```
nmap -p <PORT> <LOAD_BALANCER_IP>
```
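The single-port probe that nmap performs here can also be sketched with Python's standard socket module; the host and port below are placeholders for your load balancer's address:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable or timed out
        return False

# Placeholder target - substitute your load balancer's IP and port.
print(port_open("192.0.2.10", 80))
```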
### Testing with http and curl[](#testing-with-http-and-curl "Permalink to this headline")

```
curl http://<LOAD_BALANCER_IP>
```
### Testing with curl and livez[](#testing-with-curl-and-livez "Permalink to this headline")

This would be a typical response before the changes:

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
livez check passed
```

And this would be a typical response after the changes, when queried from a non-whitelisted IP:

```
curl -k https://<KUBE_API_IP>:6443/livez?verbose -m 5
curl: (28) Connection timed out after 5000 milliseconds
```

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Compare with articles:

[Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-CloudFerro-Cloud.html)

[Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-CloudFerro-Cloud.html)
@ -0,0 +1,206 @@
Install GitLab on CloudFerro Cloud Kubernetes[](#install-gitlab-on-brand-name-kubernetes "Permalink to this headline")
=======================================================================================================================

Source control is essential for building professional software. Git has become synonymous with modern source control systems and GitLab is one of the most popular tools based on Git.

GitLab can be deployed as your local instance to ensure privacy of the stored artifacts. It is also a tool of choice for its rich automation capabilities.

In this article, we will install GitLab on a Kubernetes cluster in the CloudFerro Cloud.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Create a Floating IP and associate the A record in DNS
> * Apply preliminary configuration
> * Install GitLab Helm chart
> * Verify the installation

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Understand Helm deployments**

To install GitLab on a Kubernetes cluster, we will use the appropriate Helm chart. The following article explains the procedure:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 3 **Kubernetes cluster without ingress controller already installed**

The Helm chart for installation of the GitLab client will install its own ingress controller, so for the sake of following this article, you should

> * either use a cluster that does **not** have such an ingress controller already installed, or
> * create a new cluster **without** activating option **Ingress Controller** in window **Network**. That option should remain like this:



General explanation of how to create a Kubernetes cluster is here:

[How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

Be sure to use a cluster template for at least version 1.25, like this:



No. 4 **Have your own domain and be able to manage it**

You will manage the records of a domain associated with your GitLab instance at your domain registrar. Alternatively, OpenStack on CloudFerro Cloud hosting lets you manage DNS as a service:

[DNS as a Service on CloudFerro Cloud Hosting](../cloud/DNS-as-a-Service-on-CloudFerro-Cloud-Hosting.html)

No. 5 **Proof of concept vs. production ready version of GitLab client**

In Step 3 below, you will create file **my-values-gitlab.yaml** to define the default configuration of the GitLab client. The values chosen there will provide for a solid quick start, perhaps in the “proof of concept” phase of development. To customize for production, this reference will come in handy: <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v7.11.1/values.yaml?ref_type=tags>
Step 1 Create a Floating IP and associate the A record in DNS[](#step-1-create-a-floating-ip-and-associate-the-a-record-in-dns "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

Our GitLab client will run a web application (GUI) exposed as a Kubernetes service. We will use GitLab’s Helm chart, which will, as part of GitLab’s installation,

> * deploy an ingress (controller and resource) to establish service routing and
> * enable its HTTPS encryption (using CertManager).

We will first create a Floating IP (FIP) using the Horizon GUI. This FIP will later be associated with the ingress controller. To proceed, go to the **Network** tab, then **Floating IPs**, and click on the **Allocate IP to project** button. Fill in a brief description and click **Allocate IP**.



After closing the form, your new floating IP will appear on the list; let us say that, for the sake of this article, its value is **64.225.134.173**. The next step is to create an A record that will associate the subdomain **gitlab.<yourdomain>** with this IP address. It might look like this if you are using DNS as a Service under the OpenStack Horizon UI on your CloudFerro Cloud:


Step 2 Apply preliminary configuration[](#step-2-apply-preliminary-configuration "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

To ensure compatibility with the Kubernetes setup on CloudFerro Cloud clouds, the service accounts provisioned by the GitLab Helm chart must have sufficient access to read scaling metrics. This can be done by creating an appropriate *rolebinding*.

First, create a namespace **gitlab** where we will deploy the Helm chart:

```
kubectl create ns gitlab
```

Then, create a file **gitlab-rolebinding.yaml** with the following contents:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-rolebinding
  namespace: gitlab
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server-aggregated-reader
```

This binds the namespace’s service accounts to the appropriate metrics-reading cluster role. Apply with:

```
kubectl apply -f gitlab-rolebinding.yaml
```
Step 3 Install GitLab Helm chart[](#step-3-install-gitlab-helm-chart "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Now let’s add GitLab’s Helm repository with the following two commands:

```
helm repo add gitlab https://charts.gitlab.io/
helm repo update
```

Next, let’s prepare a configuration file **my-values-gitlab.yaml** to contain our specific configuration settings. They will override the default **values.yaml** configuration.

**my-values-gitlab.yaml**

```
global:
  edition: ce
  hosts:
    domain: mysampledomain.info
    externalIP: 64.225.134.173
certmanager-issuer:
  email: XYZ@XXYYZZ.com
```

Here is a brief explanation of the concrete settings in this piece of code:

global.edition
: **ce** – we are using the free, community edition of GitLab.

global.hosts.domain
: Use your own domain instead of **mysampledomain.info**.

global.hosts.externalIP
: Instead of **64.225.134.173**, place the floating IP for the ingress controller that was created in Step 1.

certmanager-issuer.email
: Instead of **XYZ@XXYYZZ.com**, provide your real email address. It will be stated on our GitLab client’s HTTPS certificates.
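Before installing, you can review the full set of chart defaults that **my-values-gitlab.yaml** overrides; the version pinned here matches the one installed below:

```
helm show values gitlab/gitlab --version 7.11.1
```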
Once all the above conditions are met, we can install the chart to the **gitlab** namespace, with the following command:

```
helm install gitlab gitlab/gitlab --values my-values-gitlab.yaml --namespace gitlab --version 7.11.1
```

Here is what the output of a successful installation may look like:



After this step, several Kubernetes resources will have been created.



Step 4 Verify the installation[](#step-4-verify-the-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------

After a short while, when all the pods are up, we can access GitLab’s service by entering the address **gitlab.<yourdomain>**:


In order to log in to GitLab with your initial user, use **root** as the username and extract the password with the following command:

```
kubectl get secret gitlab-gitlab-initial-root-password -n gitlab -ojsonpath='{.data.password}' | base64 --decode ; echo
```
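The long pipeline above merely reads the Secret and undoes the base64 encoding that Kubernetes applies to stored values. A self-contained sketch of that round trip, with a made-up password value:

```python
import base64

# Kubernetes stores Secret values base64-encoded; decoding is all the
# kubectl pipeline above does. "S3cr3tPass!" is a made-up example value.
encoded = base64.b64encode(b"S3cr3tPass!").decode()
print(encoded)                             # the form stored in the Secret
print(base64.b64decode(encoded).decode())  # the usable password
```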
This takes us to the following screen. From there, we can utilize various features of GitLab:


Errors during the installation[](#errors-during-the-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------

In case you encounter errors during installation from which you cannot recover, it might be worth starting with a fresh installation. Here is the command to delete the chart:

```
helm uninstall gitlab -n gitlab
```

After that, you can restart the procedure from Step 2.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You now have a local instance of GitLab at your disposal. As next steps, you could:

> * Make the installation more robust and secure, e.g. by setting up GitLab’s storage outside of the cluster
> * Configure custom runners
> * Set up additional users, or federate authentication to an external identity provider

These steps are not in the scope of this article; refer to GitLab’s documentation for further guidelines.
@ -0,0 +1,217 @@
Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[](#install-and-run-argo-workflows-on-brand-name-cloud-name-magnum-kubernetes "Permalink to this headline")
================================================================================================================================================================================

[Argo Workflows](https://argoproj.github.io/argo-workflows/) enables running complex job workflows on Kubernetes. It can

> * provide custom logic for managing dependencies between jobs,
> * manage situations where certain steps of the workflow fail,
> * run jobs in parallel to crunch numbers for data processing or machine learning tasks,
> * run CI/CD pipelines,
> * create workflows with directed acyclic graphs (DAG), etc.

Argo applies a microservice-oriented, container-native approach, where each step of a workflow runs as a container.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Authenticate to the cluster
> * Apply preliminary configuration to **PodSecurityPolicy**
> * Install Argo Workflows to the cluster
> * Run Argo Workflows from the cloud
> * Run Argo Workflows locally
> * Run sample workflow with two tasks

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
: You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **kubectl pointed to the Kubernetes cluster**
: If you are creating a new cluster, for the purposes of this article, call it *argo-cluster*. See [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)
Authenticate to the cluster[](#authenticate-to-the-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------

Let us authenticate to *argo-cluster*. Run the following command from your local machine to create a config file in the present working directory:

```
openstack coe cluster config argo-cluster
```

This will output the command to set the KUBECONFIG environment variable pointing to the location of your cluster, e.g.

```
export KUBECONFIG=/home/eouser/config
```

Run this command.

Apply preliminary configuration[](#apply-preliminary-configuration "Permalink to this headline")
-------------------------------------------------------------------------------------------------

OpenStack Magnum by default applies certain security restrictions to pods running on the cluster, in line with the "least privilege" practice. Argo Workflows requires some additional privileges in order to run correctly.

First create a dedicated namespace for Argo Workflows artifacts:

```
kubectl create namespace argo
```

The next step is to create a *RoleBinding* that grants the *magnum:podsecuritypolicy:privileged* ClusterRole to service accounts in the *argo* namespace. Create a file *argo-rolebinding.yaml* with the following contents:

**argo-rolebinding.yaml**

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-rolebinding
  namespace: argo
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: magnum:podsecuritypolicy:privileged
```

and apply with:

```
kubectl apply -f argo-rolebinding.yaml
```

||||
Install Argo Workflows[](#install-argo-workflows "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------
|
||||
|
||||
In order to deploy Argo on the cluster, run the following command:
|
||||
|
||||
```
|
||||
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.4.4/install.yaml
|
||||
|
||||
```
|
||||
|
||||
There is also an Argo CLI available for running jobs from command line. Installing it is outside of scope of this article.
|
||||
|
||||
Run Argo Workflows from the cloud[](#run-argo-workflows-from-the-cloud "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

Normally, you would need to authenticate to the server via a UI login. Here, we are going to switch the authentication mode by applying the following patch to the deployment. (For production, you might need to incorporate a proper authentication mechanism.) Submit the following command:

```
kubectl patch deployment \
  argo-server \
  --namespace argo \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
  "server",
  "--auth-mode=server"
]}]'
```

The Argo service by default gets exposed as a Kubernetes service of *ClusterIP* type, which can be verified by typing the following command:

```
kubectl get services -n argo
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
argo-server   ClusterIP   10.254.132.118   <none>        2746:31294/TCP   1d
```

In order to expose this service to the Internet, convert type *ClusterIP* to *LoadBalancer* by patching the service with the following command:

```
kubectl -n argo patch service argo-server -p '{"spec": {"type": "LoadBalancer"}}'
```

After a couple of minutes a cloud LoadBalancer will be created and the External IP gets populated:

```
kubectl get services -n argo
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
argo-server   LoadBalancer   10.254.132.118   64.225.134.153   2746:31294/TCP   1d
```

The IP in our case is **64.225.134.153**.

Argo is by default served over HTTPS with a self-signed certificate, on port **2746**. So, by typing https://<your-service-external-ip>:2746 you should be able to access the service:



Run sample workflow with two tasks[](#run-sample-workflow-with-two-tasks "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

In order to run a sample workflow, first close the initial pop-ups in the UI. Then go to the top-left icon "Workflows" and click on it; you might then need to press "Continue" in the following pop-up.

The next step is to click the "Submit New Workflow" button in the top left part of the screen, which displays a screen similar to the one below:



Although you can run the workflow provided by Argo as a start, we provide here an alternative minimal example. In order to run it, create a file, which we can call **argo-article.yaml**, and copy it in place of the example YAML manifest:

**argo-article.yaml**

```
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-
  namespace: argo
spec:
  entrypoint: my-workflow
  serviceAccountName: argo
  templates:
  - name: my-workflow
    dag:
      tasks:
      - name: downloader
        template: downloader-tmpl
      - name: processor
        template: processor-tmpl
        dependencies: [downloader]
  - name: downloader-tmpl
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        print("Files downloaded")
  - name: processor-tmpl
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        print("Files processed")
```

This sample mocks a workflow with two tasks/jobs. First the downloader task runs; once it finishes, the processor task does its part. Some highlights about this workflow definition:

> * Both tasks run as containers. So for each task, the **python:alpine3.6** container image is first pulled from the DockerHub registry. Then this container does the simple work of printing a text. In a production workflow, rather than using a script, the code with your logic would be pulled from your container registry as a custom Docker image.
> * The order of execution is defined here using a **DAG** (Directed Acyclic Graph). This allows for specifying task dependencies in the *dependencies* section. In our case the dependency is placed on the processor, so it will only start after the downloader finishes. If we skipped the *dependencies* entry on the processor, it would run in parallel with the downloader.
> * Each task in this sequence runs as a Kubernetes pod. When a task is done, the pod completes, which frees up resources on the cluster.
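
The ordering logic that the DAG expresses can be sketched in a few lines of plain Python (this is only an illustration of dependency resolution, not Argo's actual scheduler):

```python
# Illustrative only: reproduce the execution order implied by the DAG above.
# Each task maps to the set of tasks it depends on, as in the manifest.
from graphlib import TopologicalSorter

dag = {
    "downloader": set(),            # no dependencies
    "processor": {"downloader"},    # dependencies: [downloader]
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # the downloader always comes before the processor
```

Dropping the `"downloader"` entry from the processor's dependency set would make the two tasks independent, mirroring the parallel execution mentioned in the second bullet.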

You can run this sample by clicking the "+Create" button. Once the workflow completes, you should see an outcome as per below:



Also, when clicking on each step, more information is displayed on the right side of the screen. E.g. when clicking on the processor step, we can see its logs in the bottom right part of the screen.

The results show that indeed the message "Files processed" was printed in the container:



What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

For production, consider an alternative authentication mechanism and replacing the self-signed HTTPS certificate with one generated by a Certificate Authority.

Install and run Dask on a Kubernetes cluster in CloudFerro Cloud[](#install-and-run-dask-on-a-kubernetes-cluster-in-brand-name-cloud "Permalink to this headline")
===================================================================================================================================================================

[Dask](https://www.dask.org/) enables scaling computation tasks either as multiple processes on a single machine, or on Dask clusters that consist of multiple worker machines. Dask provides a scalable alternative to popular Python libraries such as NumPy, Pandas or scikit-learn, while keeping a compact and very similar API.

The Dask scheduler, once presented with a computation task, splits it into smaller tasks that can be executed in parallel on the worker nodes/processes.

In this article you will install a Dask cluster on Kubernetes and run Dask worker nodes as Kubernetes pods. As part of the installation, you will get access to a Jupyter instance, where you can run the sample code.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Dask on Kubernetes
> * Access Jupyter and Dask Scheduler dashboard
> * Run a sample computing task
> * Configure Dask cluster on Kubernetes from Python
> * Resolving errors

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Kubernetes cluster on CloudFerro Cloud**

To create a Kubernetes cluster on the cloud, refer to this guide: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 3 **Access to the kubectl command line**

The instructions for activation of **kubectl** are provided in: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 4 **Familiarity with Helm**

For more information on using Helm and installing apps with Helm on Kubernetes, refer to [Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 5 **Python 3 available on your machine**

> Python 3 preinstalled on the working machine.

No. 6 **Basic familiarity with Jupyter and Python scientific libraries**

> We will use [Pandas](https://pandas.pydata.org/docs/user_guide/index.html#user-guide) as an example.

Step 1 Install Dask on Kubernetes[](#step-1-install-dask-on-kubernetes "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

To install Dask as a Helm chart, first add the Dask Helm repository:

```
helm repo add dask https://helm.dask.org/
```

Instead of installing the chart out of the box, let us customize the configuration for convenience. To view all possible configuration values and their defaults, run:

```
helm show values dask/dask
```

Prepare a file *dask-values.yaml* to override some of the defaults:

**dask-values.yaml**

```
scheduler:
  serviceType: LoadBalancer
jupyter:
  serviceType: LoadBalancer
worker:
  replicas: 4
```

This changes the default service type for Jupyter and the scheduler to LoadBalancer, so that they get exposed publicly. Also, the default number of Dask workers is 3, but it is now changed to 4. Each Dask worker pod gets allocated 3 GB RAM and 1 CPU; we keep these defaults.

To deploy the chart, create the namespace *dask* and install into it:

```
helm install dask dask/dask -n dask --create-namespace -f dask-values.yaml
```

Step 2 Access Jupyter and Dask Scheduler dashboard[](#step-2-access-jupyter-and-dask-scheduler-dashboard "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

After the installation step, you can list the Dask services:

```
kubectl get services -n dask
```

There are two services, for Jupyter and for the Dask Scheduler dashboard. Populating the external IPs will take a few minutes:

```
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                       AGE
dask-jupyter     LoadBalancer   10.254.230.230   64.225.128.91    80:32437/TCP                  6m49s
dask-scheduler   LoadBalancer   10.254.41.250    64.225.128.236   8786:31707/TCP,80:31668/TCP   6m49s
```

We can paste the external IPs into the browser to view the services. To access Jupyter, you will first need to pass the login screen; the default password is *dask*. Then you can view the Jupyter instance:



Similarly with the Scheduler dashboard: paste the floating IP into the browser to view it. If you then click on the "Workers" tab, you can see that 4 workers are running on our Dask cluster:



Step 3 Run a sample computing task[](#step-3-run-a-sample-computing-task "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

The installed Jupyter instance already contains Dask and other useful Python libraries. To run a sample job, first activate a notebook by clicking on the icon **Notebook** → **Python 3 (ipykernel)** on the right-hand side of the Jupyter browser screen.

The sample job performs a calculation on a table (dataframe) of 100 million rows and just one column. Each record is filled with a random integer from 1 to 100,000,000, and the task is to calculate the sum of all records.

The code runs the same example with Pandas (single process) and with Dask (parallelized on our cluster), and we will be able to inspect the results.

Copy the following code and paste it into a cell in the Jupyter notebook:

```
import dask.dataframe as dd
import pandas as pd
import numpy as np
import time

data = {'A': np.random.randint(1, 100_000_000, 100_000_000)}
df_pandas = pd.DataFrame(data)
df_dask = dd.from_pandas(df_pandas, npartitions=4)

# Pandas
start_time_pandas = time.time()
result_pandas = df_pandas['A'].sum()
end_time_pandas = time.time()
print(f"Result Pandas: {result_pandas}")
print(f"Computation time Pandas: {end_time_pandas - start_time_pandas:.2f} seconds.")

# Dask
start_time_dask = time.time()
result_dask = df_dask['A'].sum().compute()
end_time_dask = time.time()
print(f"Result Dask: {result_dask}")
print(f"Computation time Dask: {end_time_dask - start_time_dask:.2f} seconds.")
```

Hit play or use the Run option from the main menu to execute the code. After a few seconds, the result will appear below the cell with the code.

Some of the results we could observe for this example:

```
Result Pandas: 4999822570722943
Computation time Pandas: 0.15 seconds.
Result Dask: 4999822570722943
Computation time Dask: 0.07 seconds.
```

Note that these results are not deterministic, and plain Pandas could also perform better case by case. The overhead of distributing work to and collecting results from the Dask workers also needs to be taken into account. Further tuning of Dask performance is beyond the scope of this article.
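
The shape of the computation Dask performs here is a simple map-reduce: partition the data, sum each partition independently, then combine the partial results. A plain-Python sketch of that idea (an illustration only, not Dask's implementation):

```python
# Partition a range of numbers into 4 chunks, sum each chunk independently
# (the per-worker "map" step), then combine the partial sums ("reduce").
data = list(range(1, 101))
partitions = [data[i::4] for i in range(4)]   # 4 partitions, like npartitions=4
partial_sums = [sum(p) for p in partitions]   # each worker sums its partition
total = sum(partial_sums)                     # the scheduler combines the results

assert total == sum(data)                     # same answer as a single-process sum
print(total)  # 5050
```

With 100 million rows, each of the four sums above would run on a separate worker pod, which is where the speedup comes from.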

Step 4 Configure Dask cluster on Kubernetes from Python[](#step-4-configure-dask-cluster-on-kubernetes-from-python "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------

For managing the Dask cluster on Kubernetes we can use a dedicated Python library, *dask-kubernetes*. Using this library, we can reconfigure certain parameters of our Dask cluster.

One way to run *dask-kubernetes* would be from the Jupyter instance, but then we would have to provide a reference to the *kubeconfig* of our cluster. Instead, we install *dask-kubernetes* in our local environment with the following command:

```
pip install dask-kubernetes
```

Once this is done, we can manage the Dask cluster from Python. As an example, let us upscale it to 5 Dask workers. Use *nano* to create the file *scale-cluster.py*:

```
nano scale-cluster.py
```

then insert the following commands:

**scale-cluster.py**

```
from dask_kubernetes import HelmCluster

cluster = HelmCluster(release_name="dask", namespace="dask")
cluster.scale(5)
```

Apply with:

```
python3 scale-cluster.py
```

Using the command

```
kubectl get pods -n dask
```

you can see that the number of workers is now 5:



Or, you can see the current number of worker nodes in the Dask Scheduler dashboard (refresh the screen):



Note that the functionality of *dask-kubernetes* should also be achievable using the Kubernetes API directly; the choice depends on your personal preference.

Resolving errors[](#resolving-errors "Permalink to this headline")
-------------------------------------------------------------------

When running the command

```
python3 scale-cluster.py
```

on WSL version 1, error messages such as these may appear:



The code will work properly; that is, it will increase the number of workers to 5, as required. The error should not appear on WSL version 2 and other Ubuntu distributions.

Install and run NooBaa on a Kubernetes cluster in single- and multi-cloud environments on CloudFerro Cloud[](#install-and-run-noobaa-on-kubernetes-cluster-in-single-and-multicloud-environment-on-brand-name "Permalink to this headline")
============================================================================================================================================================================================================================================

[NooBaa](https://www.noobaa.io/) enables creating an abstracted S3 backend on Kubernetes. Such a backend can be connected to multiple S3 backing stores, e.g. in a multi-cloud setup, allowing for storage expandability or High Availability, among other beneficial features.

In this article you will learn the basics of using NooBaa:

> * how to install it on a Kubernetes cluster
> * how to create a NooBaa bucket backed by S3 object storage in CloudFerro Cloud
> * how to create a NooBaa bucket mirroring data on two different clouds

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install NooBaa in local environment
> * Apply preliminary configuration
> * Install NooBaa on the Kubernetes cluster
> * Create a NooBaa backing store
> * Create a Bucket Class
> * Create an ObjectBucketClaim
> * Connect to NooBaa bucket from S3cmd
> * Testing access to the bucket
> * Create mirroring on clouds WAW3-1 and WAW3-2

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Access to a Kubernetes cluster on the WAW3-1 cloud**

A cluster on the WAW3-1 cloud, where we will run our NooBaa installation - follow the guidelines in [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 3 **Familiarity with using Object Storage on CloudFerro clouds**

More information in [How to use Object Storage on CloudFerro Cloud](../s3/How-to-use-Object-Storage-on-CloudFerro-Cloud.html)

The traditional OpenStack term for object storage containers, seen under main menu option *Object Store*, is *Containers*. We will use the term "bucket" for object storage containers, to differentiate from the container term in the Docker/Kubernetes sense.

No. 4 **kubectl operational**

The **kubectl** CLI tool installed and pointing to your cluster via the KUBECONFIG env. variable - more information in [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 5 **Access to private S3 keys in the WAW3-1 cloud**

You may also use access to the OpenStack CLI to generate and read the private S3 keys - [How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html).

No. 6 **Familiarity with s3cmd for accessing object storage**

For more info on **s3cmd**, see [How to access private object storage using S3cmd or boto3 on CloudFerro Cloud](../s3/How-to-access-private-object-storage-using-S3cmd-or-boto3-on-CloudFerro-Cloud.html).

No. 7 **Access to WAW3-2 cloud**

To mirror data on WAW3-1 and WAW3-2, you will need access to those two clouds.

Install NooBaa in local environment[](#install-noobaa-in-local-environment "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

The first step to work with NooBaa is to install it on our local system. We will download the installer, make it executable and move it to the system path:

```
curl -LO https://github.com/noobaa/noobaa-operator/releases/download/v5.11.0/noobaa-linux-v5.11.0
chmod +x noobaa-linux-v5.11.0
sudo mv noobaa-linux-v5.11.0 /usr/local/bin/noobaa
```

Enter the password for the root user, if required.

After this sequence of steps, it should be possible to run a test command:

```
noobaa help
```

This will result in an output similar to the below:



Apply preliminary configuration[](#apply-preliminary-configuration "Permalink to this headline")
-------------------------------------------------------------------------------------------------

We will need to apply additional configuration on a Magnum cluster to avoid a PodSecurityPolicy exception. For a refresher, see the article [Installing JupyterHub on Magnum Kubernetes cluster in CloudFerro Cloud](Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-CloudFerro-Cloud-cloud.html).

Let's start by creating a dedicated namespace for NooBaa artifacts:

```
kubectl create namespace noobaa
```

Then create a file *noobaa-rolebinding.yaml* with the following contents:

**noobaa-rolebinding.yaml**

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: noobaa-rolebinding
  namespace: noobaa
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: magnum:podsecuritypolicy:privileged
```

and apply with:

```
kubectl apply -f noobaa-rolebinding.yaml
```

Install NooBaa on the Kubernetes cluster[](#install-noobaa-on-the-kubernetes-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

We already have NooBaa available in our local environment, but we still need to install NooBaa on our Kubernetes cluster. NooBaa will use the KUBECONFIG context of **kubectl** (as activated in Prerequisite No. 4), so install NooBaa into the dedicated namespace:

```
noobaa install -n noobaa
```

After a few minutes, this will install NooBaa and provide additional information about the setup. See the status of NooBaa with the command

```
noobaa status -n noobaa
```

It outputs several useful insights about the NooBaa installation, with the key facts available towards the end of this status:

> * NooBaa created a default backing store called *noobaa-default-backing-store*, backed by a block volume created in OpenStack.
> * S3 credentials are provided to access the bucket created with the default backing store. Such a volume-based backing store has its uses, e.g. for utilizing the S3 access method on top of our block storage.

For the purpose of this article, we will not use the default backing store, but rather learn to create a new backing store based on cloud S3 object storage. Such a setup can then easily be extended so that we end up with separate backing stores for different clouds. In the second part of this article you will create one store on the WAW3-1 cloud and another on the WAW3-2 cloud, and they will be available through one abstracted S3 bucket in NooBaa.

Create a NooBaa backing store[](#create-a-noobaa-backing-store "Permalink to this headline")
---------------------------------------------------------------------------------------------

### Step 1. Create object storage bucket on WAW3-1[](#step-1-create-object-storage-bucket-on-waw3-1 "Permalink to this headline")

Now create an object storage bucket on the WAW3-1 cloud:

> * switch to Horizon,
> * use commands **Object Store** –> **Containers** –> **+ Container** to create a new object bucket.



Buckets on the WAW3-1 cloud need to have unique names. In our case, we use the bucket name *noobaademo-waw3-1*, which we will use throughout the article.

Note

You need to create a bucket with a different name and use that generated name to follow along.

### Step 2. Set up EC2 credentials[](#step-2-set-up-ec2-credentials "Permalink to this headline")

If you have properly set up the EC2 (S3) keys for your WAW3-1 object storage, take note of them with the following command:

```
openstack ec2 credentials list
```

### Step 3. Create a new NooBaa backing store[](#step-3-create-a-new-noobaa-backing-store "Permalink to this headline")

With the above in place, we can create a new NooBaa backing store called *custom-bs* by running the command below. Make sure to replace the access key XXXXXX and the secret key YYYYYYY with your own EC2 keys, and the target bucket with your own bucket name:

```
noobaa -n noobaa backingstore create s3-compatible custom-bs --endpoint https://s3.waw3-1.cloudferro.com --signature-version v4 --access-key XXXXXX \
--secret-key YYYYYYY --target-bucket noobaademo-waw3-1
```

Note that the credentials get stored as a Kubernetes secret in the namespace. You can verify that the backing store and the secret got created by running the following commands:

```
kubectl get backingstore -n noobaa
kubectl get secret -n noobaa
```

The naming of the artifacts follows the name of the backing store, in case there are already more such resources available in the namespace.

Also, when viewing the bucket (backing store) in Horizon, we can see that NooBaa populated its folder structure:



### Step 4. Create a Bucket Class[](#step-4-create-a-bucket-class "Permalink to this headline")

Once we have the backing store, the next step is to create a BucketClass (BC). Such a BucketClass serves as a blueprint for NooBaa buckets: it defines

> * which BackingStore(s) these buckets will use, and
> * which placement strategy to use in case of multiple backing stores.

The placement strategy can be *Mirror* or *Spread*. There is also support for using multiple tiers, where data is by default pushed to the first tier and, when this is full, to the next one.
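
For illustration, a *Mirror* bucket class spanning two backing stores might look as follows. This is only a sketch: the store names *store-waw3-1* and *store-waw3-2* are placeholders for backing stores you would create first, one per cloud; in this article we use a single store with *Spread*:

```
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: mirror-bc
  namespace: noobaa
spec:
  placementPolicy:
    tiers:
    - backingStores:
      # data written through this bucket class is mirrored to both stores
      - store-waw3-1
      - store-waw3-2
      placement: Mirror
```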

In order to create a *BucketClass*, prepare the following file *custom-bc.yaml*:

**custom-bc.yaml**

```
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: custom-bc
  namespace: noobaa
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - custom-bs
      placement: Spread
```

Then apply with:

```
kubectl apply -f custom-bc.yaml
```

### Step 5. Create an ObjectBucketClaim[](#step-5-create-an-objectbucketclaim "Permalink to this headline")

As the last step, we create an *ObjectBucketClaim*. This bucket claim utilizes the *noobaa.noobaa.io* storage class, which got deployed with NooBaa, and references the *custom-bc* bucket class created in the previous step. Create a file called *custom-obc.yaml*:

**custom-obc.yaml**

```
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: custom-obc
  namespace: noobaa
spec:
  generateBucketName: my-bucket
  storageClassName: noobaa.noobaa.io
  additionalConfig:
    bucketclass: custom-bc
```

Then apply with:

```
kubectl apply -f custom-obc.yaml
```

### Step 6. Obtain name of the NooBaa bucket[](#step-6-obtain-name-of-the-noobaa-bucket "Permalink to this headline")

As a result, besides the *ObjectBucket* claim resource, a configmap and a secret with the same name, *custom-obc*, also got created. Let's view the configmap with:

```
kubectl get configmap custom-obc -n noobaa -o yaml
```

The result is similar to the following:

```
apiVersion: v1
data:
  BUCKET_HOST: s3.noobaa.svc
  BUCKET_NAME: my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf
  BUCKET_PORT: "443"
  BUCKET_REGION: ""
  BUCKET_SUBREGION: ""
kind: ConfigMap
metadata:
  ...
```

We can see the name of the NooBaa bucket, *my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf*, which is backed by our "physical" WAW3-1 bucket. Store this name for later use in this article.

### Step 7. Obtain secret for the NooBaa bucket[](#step-7-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")
|
||||
|
||||
The secret is also relevant for us as we need to extract the S3 keys to the NooBaa bucket. The access and secret key are base64 encoded in the secret, we can retrieve them decoded with the following commands:
|
||||
|
||||
```
|
||||
kubectl get secret custom-obc -n noobaa -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
|
||||
kubectl get secret custom-obc -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
|
||||
|
||||
```
|
||||
|
||||
Take note of access and secret keys, as we will use them in the next step.
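
The `base64 --decode` filter is what turns the stored values back into usable keys. A quick self-contained illustration of that decoding step, using a made-up key instead of real secret data:

```shell
# Hypothetical base64 string, standing in for the .data.AWS_ACCESS_KEY_ID
# value that kubectl extracts from the secret.
encoded='QUtJQUVYQU1QTEVLRVk='
echo "$encoded" | base64 --decode   # prints AKIAEXAMPLEKEY
```

Against the live cluster, the `kubectl get secret ... -o jsonpath=...` output is piped into this same filter.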

### Step 8. Connect to NooBaa bucket from S3cmd[](#step-8-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")

NooBaa created a few services when it got deployed, which we can verify with the command below:

```
kubectl get services -n noobaa
```

The output should be similar to the one below:

```
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                                    AGE
noobaa-db-pg   ClusterIP      10.254.158.217   <none>           5432/TCP                                                   3h24m
noobaa-mgmt    LoadBalancer   10.254.145.9     64.225.135.152   80:31841/TCP,443:31736/TCP,8445:32063/TCP,8446:32100/TCP   3h24m
s3             LoadBalancer   10.254.244.226   64.225.133.81    80:30948/TCP,443:31609/TCP,8444:30079/TCP,7004:31604/TCP   3h24m
sts            LoadBalancer   10.254.23.154    64.225.135.92    443:31374/TCP                                              3h24m
```

The *s3* service provides the endpoint that can be used to access NooBaa storage (backed by the actual storage in WAW3-1). In our case, this endpoint is at the external IP **64.225.133.81**. Replace it with the value you get from the above command when working through this article.
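
Instead of reading the IP off the table, the external IP can also be extracted with a single query. Against the live cluster, the equivalent would be `kubectl get service s3 -n noobaa -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` (an assumption based on the standard Service schema, not a command from this guide). The snippet below simulates the relevant fragment of the service object, so the extraction is reproducible offline:

```shell
# Simulated fragment of the "s3" Service status, as the API server would return it.
svc_json='{"status":{"loadBalancer":{"ingress":[{"ip":"64.225.133.81"}]}}}'
# Pull out the first ingress IP.
echo "$svc_json" | sed -E 's/.*"ip":"([^"]+)".*/\1/'   # prints 64.225.133.81
```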

### Step 9. Configure S3cmd to access NooBaa[](#step-9-configure-s3cmd-to-access-noobaa "Permalink to this headline")

Now that we have both the endpoint and the keys, we can configure **s3cmd** to access the bucket created by NooBaa. Create a configuration file *noobaa.s3cfg* with the following contents:

```
check_ssl_certificate = False
check_ssl_hostname = False
access_key = XXXXXX
secret_key = YYYYYY
host_base = 64.225.133.81
host_bucket = 64.225.133.81
use_https = True
verbosity = WARNING
signature_v2 = False
```

Then, from the same location, run:

```
s3cmd --configure -c noobaa.s3cfg
```

If **s3cmd** is not installed on your system, see Prerequisite No. 6.

The **s3cmd** command lets you press Enter to confirm each value from the config file, or change a value on the fly if you want something different from the default.

Omitting those questions in the output below, the result should be similar to the following:

```
...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to 'noobaa.s3cfg'
```

### Step 10. Testing access to the bucket[](#step-10-testing-access-to-the-bucket "Permalink to this headline")
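
If you do not already have such a test file at hand, it can be created in one line (a sketch; note that `printf 'xyz\n'` writes 4 bytes, trailing newline included, which matches the upload size reported later in this step):

```shell
# Create the small test file: the string "xyz" plus a trailing newline (4 bytes).
printf 'xyz\n' > xyz.txt
wc -c < xyz.txt
```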

We can upload a test file to NooBaa. In our case, we upload a simple text file *xyz.txt* with the text content “xyz”, using the following command:

```
s3cmd put xyz.txt s3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf -c noobaa.s3cfg
```

The file gets uploaded correctly:

```
upload: 'xyz.txt' -> 's3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf/xyz.txt'  [1 of 1]
 4 of 4   100% in    0s     5.67 B/s  done
```

We can also see in Horizon that a few new folders and files were added to NooBaa. However, we will not see the *xyz.txt* file directly there, because NooBaa applies its own fragmentation techniques to the data.

Connect NooBaa in a multi-cloud setup[](#connect-noobaa-in-a-multi-cloud-setup "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

NooBaa can be used to create an abstracted S3 endpoint connected to two or more cloud S3 endpoints. This can be helpful in scenarios such as replicating the same data in multiple clouds or combining the storage of multiple clouds.

In this section of the article we demonstrate the “mirroring scenario”. We create an S3 NooBaa endpoint replicating (mirroring) data between the WAW3-1 cloud and the WAW3-2 cloud.

Note

To illustrate the process, we are going to create a new set of resources and new S3 buckets, and introduce new naming of the entities. Steps 1 to 9 from above are almost identical, so we shall denote them as **Step 1 Multi-cloud**, **Step 2 Multi-cloud** and so on.

To proceed, first create two additional buckets from the Horizon interface. Adjust the further commands and file contents in this section to reflect these bucket names.

### Step 1 Multi-cloud. Create bucket on WAW3-1[](#step-1-multi-cloud-create-bucket-on-waw3-1 "Permalink to this headline")

Go to the WAW3-1 Horizon interface and create a bucket we call *noobaamirror-waw3-1* (supply your own bucket name here and adhere to it in the rest of the article). It will be available at the endpoint <https://s3.waw3-1.cloudferro.com>.

### Step 1 Multi-cloud. Create bucket on WAW3-2[](#step-1-multi-cloud-create-bucket-on-waw3-2 "Permalink to this headline")

Next, go to the WAW3-2 Horizon interface and create a bucket we call *noobaamirror-waw3-2* (again, supply your own bucket name here and adhere to it in the rest of the article). It will be available at the endpoint <https://s3.waw3-2.cloudferro.com>.

### Step 2 Multi-cloud. Set up EC2 credentials[](#step-2-multi-cloud-set-up-ec2-credentials "Permalink to this headline")

Use an existing pair of EC2 credentials, or first create a new pair, and then use them in the next step.

### Step 3 Multi-cloud. Create backing store mirror-bs1 on WAW3-1[](#step-3-multi-cloud-create-backing-store-mirror-bs1-on-waw3-1 "Permalink to this headline")

Apply the following command to create the *mirror-bs1* backing store (change the bucket name, S3 access key and S3 secret key to your own):

```
noobaa -n noobaa backingstore create s3-compatible mirror-bs1 --endpoint https://s3.waw3-1.cloudferro.com --signature-version v4 --access-key XXXXXX --secret-key YYYYYY --target-bucket noobaamirror-waw3-1
```

### Step 3 Multi-cloud. Create backing store mirror-bs2 on WAW3-2[](#step-3-multi-cloud-create-backing-store-mirror-bs2-on-waw3-2 "Permalink to this headline")

Apply the following command to create the *mirror-bs2* backing store (change the bucket name, S3 access key and S3 secret key to your own):

```
noobaa -n noobaa backingstore create s3-compatible mirror-bs2 --endpoint https://s3.waw3-2.cloudferro.com --signature-version v4 --access-key XXXXXX --secret-key YYYYYY --target-bucket noobaamirror-waw3-2
```

### Step 4 Multi-cloud. Create a Bucket Class[](#step-4-multi-cloud-create-a-bucket-class "Permalink to this headline")

To create a BucketClass called *bc-mirror*, create a file called *bc-mirror.yaml* with the following contents:

**bc-mirror.yaml**

```
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: bc-mirror
  namespace: noobaa
spec:
  placementPolicy:
    tiers:
      - backingStores:
          - mirror-bs1
          - mirror-bs2
        placement: Mirror
```

and apply with:

```
kubectl apply -f bc-mirror.yaml
```

Note

The mirroring is implemented by listing **two** backing stores, *mirror-bs1* and *mirror-bs2*, under the *tiers* option.

### Step 5 Multi-cloud. Create an ObjectBucketClaim[](#step-5-multi-cloud-create-an-objectbucketclaim "Permalink to this headline")

Again, create file *obc-mirror.yaml* for ObjectBucketClaim *obc-mirror*:

**obc-mirror.yaml**

```
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-mirror
  namespace: noobaa
spec:
  generateBucketName: my-bucket
  storageClassName: noobaa.noobaa.io
  additionalConfig:
    bucketclass: bc-mirror
```

and apply with:

```
kubectl apply -f obc-mirror.yaml
```

### Step 6 Multi-cloud. Obtain name of the NooBaa bucket[](#step-6-multi-cloud-obtain-name-of-the-noobaa-bucket "Permalink to this headline")

Extract the bucket name from the configmap:

```
kubectl get configmap obc-mirror -n noobaa -o yaml
```
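
If only the bucket name is needed, a jsonpath query avoids reading the whole YAML. The live-cluster equivalent would be `kubectl get configmap obc-mirror -n noobaa -o jsonpath='{.data.BUCKET_NAME}'` (an assumption based on the standard ConfigMap schema). The snippet below demonstrates the extraction offline, on a simulated configmap fragment:

```shell
# Simulated .data of the obc-mirror configmap.
cm_json='{"data":{"BUCKET_HOST":"s3.noobaa.svc","BUCKET_NAME":"my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff"}}'
# Pull out only the generated bucket name.
echo "$cm_json" | sed -E 's/.*"BUCKET_NAME":"([^"]+)".*/\1/'
```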

### Step 7 Multi-cloud. Obtain secret for the NooBaa bucket[](#step-7-multi-cloud-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")

Extract the S3 keys from the created secret:

```
kubectl get secret obc-mirror -n noobaa -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
kubectl get secret obc-mirror -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
```

### Step 8 Multi-cloud. Connect to NooBaa bucket from S3cmd[](#step-8-multi-cloud-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")

Create an additional config file for **s3cmd**, e.g. *noobaa-mirror.s3cfg*, and update the access key and the secret key to the ones retrieved above, then run:

```
s3cmd --configure -c noobaa-mirror.s3cfg
```

### Step 9 Multi-cloud. Configure S3cmd to access NooBaa[](#step-9-multi-cloud-configure-s3cmd-to-access-noobaa "Permalink to this headline")

To test, upload the *xyz.txt* file, which behind the scenes uploads a copy to both clouds. Be sure to change the bucket name *my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff* to the one retrieved from the configmap:

```
s3cmd put xyz.txt s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff -c noobaa-mirror.s3cfg
```

### Step 10 Multi-cloud. Testing access to the bucket[](#step-10-multi-cloud-testing-access-to-the-bucket "Permalink to this headline")

To verify, delete the “physical” bucket on one of the clouds (e.g. on WAW3-1) from the Horizon interface. With the **s3cmd** command below, you can see that NooBaa still holds the copy from the WAW3-2 cloud:

```
s3cmd ls s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff -c noobaa-mirror.s3cfg
2023-07-21 09:47            4  s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff/xyz.txt
```

Installing HashiCorp Vault on CloudFerro Cloud Magnum[](#installing-hashicorp-vault-on-brand-name-cloud-name-magnum "Permalink to this headline")
==================================================================================================================================================

In Kubernetes, a *Secret* is an object that contains passwords, tokens, keys or any other small pieces of data. Using *Secrets* ensures that the probability of exposing confidential data while creating, running and editing Pods is much smaller. The main problem is that *Secrets* are stored unencrypted in *etcd*, so anyone with

> * API access, as well as anyone who
> * can create a Pod or a Deployment in a namespace

can also retrieve or modify a Secret.

You can apply a number of strategies to improve the security of the cluster, or you can install a specialized solution such as [HashiCorp Vault](https://www.vaultproject.io/). It offers

> * secure storage of all kinds of secrets – passwords, TLS certificates, database credentials, API encryption keys and others,
> * encryption of all of the data,
> * dynamic serving of credentials,
> * granular access policies for users, applications and services,
> * logging and auditing of data usage,
> * revoking or deleting any key or secret,
> * automated secret rotation – for administrators and users alike.

In this article, we shall install HashiCorp Vault within a Magnum Kubernetes cluster on CloudFerro Cloud.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install self-signed TLS certificates with CFSSL
> * Generate certificates to enable encryption of traffic with Vault
> * Install the Consul storage backend for High Availability
> * Install Vault
> * Sealing and unsealing the Vault
> * Unseal Vault
> * Run Vault UI
> * Return livenessProbe to production value
> * Troubleshooting

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Familiarity with kubectl**

You should have an appropriate Kubernetes cluster up and running, with **kubectl** pointing to it: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 3 **Familiarity with deploying Helm charts**

This article will introduce you to Helm charts on Kubernetes:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

Step 1 Install CFSSL[](#step-1-install-cfssl "Permalink to this headline")
---------------------------------------------------------------------------

To ensure that Vault communication with the cluster is encrypted, we need to provide TLS certificates.

We will use self-signed TLS certificates issued by a private Certificate Authority. To generate them, we will use the CFSSL utilities **cfssl** and **cfssljson**.

**cfssl** is a CLI utility. **cfssljson** takes the JSON output from **cfssl** and writes certificates, keys and CSRs (certificate signing requests).

We need to download the binaries of both tools, **cfssl** and **cfssljson**, from <https://github.com/cloudflare/cfssl> and make them executable:

```
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64 -o cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -o cfssljson
chmod +x cfssl
chmod +x cfssljson
```

Then we also need to add them to our path:

```
sudo mv cfssl cfssljson /usr/local/bin
```

Step 2 Generate TLS certificates[](#step-2-generate-tls-certificates "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Before we start, let’s create a dedicated namespace where all Vault-related Kubernetes resources will live:

```
kubectl create namespace vault
```

We will need to issue two sets of certificates. The first will be the root certificate of our Certificate Authority. The second will reference the CA certificate and become the actual Vault certificate.

To create the key request for the CA, we will base it on a JSON file, **ca-csr.json**. Create this file in your favorite editor and, if you want to, substitute the certificate details for your own use case:

**ca-csr.json**

```
{
  "hosts": [
    "cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "PL",
      "L": "Warsaw",
      "O": "MyOrganization"
    }
  ]
}
```

Then issue the command to generate a self-signed root CA certificate:

```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```

You should see output similar to the following:

```
2023/01/02 15:27:36 [INFO] generating a new CA key and certificate from CSR
2023/01/02 15:27:36 [INFO] generate received request
2023/01/02 15:27:36 [INFO] received CSR
2023/01/02 15:27:36 [INFO] generating key: rsa-2048
2023/01/02 15:27:36 [INFO] encoded CSR
2023/01/02 15:27:36 [INFO] signed certificate with serial number 472447709029717049436439292623827313295747809061
```

Also, as a result, three files are generated:

> * the private key (*ca-key.pem*),
> * the CSR (*ca.csr*), and
> * the self-signed certificate (*ca.pem*).

The next step is to create the Vault certificates, which reference the private CA. To do so, first create a configuration file, *ca-config.json*, to override the default configuration. This is especially useful for changing certificate validity:

**ca-config.json**

```
{
  "signing": {
    "default": {
      "expiry": "17520h"
    },
    "profiles": {
      "default": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "17520h"
      }
    }
  }
}
```

Then generate the Vault keys, referencing this file and the CA keys:

```
cfssl gencert \
  -ca ./ca.pem \
  -ca-key ./ca-key.pem \
  -config ca-config.json \
  -profile default \
  -hostname="vault,vault.vault.svc.cluster.local,localhost,127.0.0.1" \
  ca-csr.json | cfssljson -bare vault
```

The result will be the following:

```
2023/01/02 16:19:52 [INFO] generate received request
2023/01/02 16:19:52 [INFO] received CSR
2023/01/02 16:19:52 [INFO] generating key: rsa-2048
2023/01/02 16:19:52 [INFO] encoded CSR
2023/01/02 16:19:52 [INFO] signed certificate with serial number 709743788174272015258726707100830785425213226283
```

Also, another three files get created in your working folder: *vault.pem*, *vault.csr* and *vault-key.pem*.
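
Before loading the certificates into the cluster, it can be worth confirming that the Vault certificate really chains back to the private CA; with the files generated above, that check would be `openssl verify -CAfile ca.pem vault.pem`. The sketch below demonstrates the same check in a self-contained way, generating throwaway material with openssl rather than using the cfssl output:

```shell
# Generate a throwaway CA and a leaf certificate signed by it, then verify the chain.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/O=DemoCA" \
  -keyout "$tmp/ca-key.pem" -out "$tmp/ca.pem" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=vault" \
  -keyout "$tmp/leaf-key.pem" -out "$tmp/leaf.csr" 2>/dev/null
openssl x509 -req -in "$tmp/leaf.csr" -CA "$tmp/ca.pem" -CAkey "$tmp/ca-key.pem" \
  -CAcreateserial -days 1 -out "$tmp/leaf.pem" 2>/dev/null
openssl verify -CAfile "$tmp/ca.pem" "$tmp/leaf.pem"   # expect: .../leaf.pem: OK
rm -rf "$tmp"
```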

The last step is to store the generated keys as Kubernetes TLS secrets in our cluster:

```
kubectl -n vault create secret tls tls-ca --cert ./ca.pem --key ./ca-key.pem
kubectl -n vault create secret tls tls-server --cert ./vault.pem --key ./vault-key.pem
```

The naming of those secrets reflects the Vault Helm chart default names.

Step 3 Install Consul Helm chart[](#step-3-install-consul-helm-chart "Permalink to this headline")
---------------------------------------------------------------------------------------------------

The Consul backend will ensure High Availability of our Vault installation. Consul will live in the namespace we have already created, **vault**.

Here is an override configuration file for the Consul Helm chart, *consul-values.yaml*:

**consul-values.yaml**

```
global:
  datacenter: vault-kubernetes-guide

client:
  enabled: true

server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    maxUnavailable: 0
```

Now add the *hashicorp* repository of Helm charts and verify that *vault* is in it:

```
helm repo add hashicorp https://helm.releases.hashicorp.com
helm search repo hashicorp/vault
```

As the last step, install the Consul chart:

```
helm install consul hashicorp/consul -f consul-values.yaml -n vault
```

This is the report about the success of the installation:

```
NAME: consul
LAST DEPLOYED: Thu Feb  9 18:52:58 2023
NAMESPACE: vault
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!

Your release is named consul.
```

Shortly, several Consul pods will get deployed in the *vault* namespace. Run the following command to verify it:

```
kubectl get pods -n vault
```

Wait until all of the pods are **Running** and then proceed with the next step.

Step 4 Install Vault Helm chart[](#step-4-install-vault-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------

We are now ready to install Vault.

First, let’s provide a file, *vault-values.yaml*, which will override the configuration of the Vault Helm chart. These overrides turn on encryption and High Availability, set a larger initial delay for the *livenessProbe* and expose the UI as a LoadBalancer service type:

**vault-values.yaml**

```
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.14.1"

  resources:
    requests:
      memory: 500Mi
      cpu: 500m
    limits:
      memory: 1000Mi
      cpu: 1000m

server:
  # These Resource Limits are in line with node requirements in the
  # Vault Reference Architecture for a Small Cluster

  image:
    repository: "hashicorp/vault"
    tag: "1.9.2"

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 360

  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/tls.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  # These reflect the Kubernetes vault and ca secrets created
  extraVolumes:
    - type: secret
      name: tls-server
    - type: secret
      name: tls-ca

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 0
        address = "0.0.0.0:8200"
        tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
        tls_key_file = "/vault/userconfig/tls-server/tls.key"
        tls_min_version = "tls12"
      }
      storage "consul" {
        path = "vault"
        address = "consul-consul-server:8500"
      }

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200
```

Then run the installation:

```
helm install vault hashicorp/vault -n vault -f vault-values.yaml
```

As a result, several pods get created:

```
kubectl get pods -n vault
NAME                                                  READY   STATUS    RESTARTS   AGE
consul-consul-client-655fq                            1/1     Running   0          104s
consul-consul-client-dkngt                            1/1     Running   0          104s
consul-consul-client-nnbnl                            1/1     Running   0          104s
consul-consul-connect-injector-8447d8d97b-8hkj8       1/1     Running   0          104s
consul-consul-server-0                                1/1     Running   0          104s
consul-consul-webhook-cert-manager-7c4ccbdd4c-d89bw   1/1     Running   0          104s
vault-0                                               1/1     Running   0          23s
vault-1                                               1/1     Running   0          23s
vault-2                                               1/1     Running   0          23s
vault-agent-injector-6c7cfc768-kv968                  1/1     Running   0          23s
```

Sealing and unsealing the Vault[](#sealing-and-unsealing-the-vault "Permalink to this headline")
-------------------------------------------------------------------------------------------------

Right after the installation, the Vault server starts in a *sealed* state. It knows where and how to access the physical storage but, by design, it lacks the key to decrypt any of it. The only operations you can do while Vault is sealed are to

> * unseal Vault and
> * check the status of the seal.

The reverse process, called *unsealing*, consists of reconstructing the plaintext root key necessary to read the decryption key.

In real life, there would be an administrator who would first generate the so-called *key shares* or *unseal keys*, a set of exactly **five** text strings. They would then disperse these keys among two or more people, so that the secrets would be hard to gather for a potential attacker. To perform the unsealing, at least three out of those five strings would have to be presented to Vault, in any order.

In this article, however, you are both the administrator and the user and can set things up your way. First you will

> * generate the keys and have them available in plain sight, and then you will
> * enter three out of those five strings back into the system.

You will have a limited but sufficient amount of time to enter the keys; the *livenessProbe* delay in file **vault-values.yaml** is 360 seconds, which gives you ample time to enter them.

At the end of the article we show how to interactively set it to **60** seconds, so that the cluster can check the health of the pods more frequently.

Step 5 Unseal Vault[](#step-5-unseal-vault "Permalink to this headline")
-------------------------------------------------------------------------

Three pods in the Kubernetes cluster represent Vault; they are named *vault-0*, *vault-1* and *vault-2*. To make Vault functional, you will have to unseal all three of them.

To start, enter the container in *vault-0*:

```
kubectl -n vault exec -it vault-0 -- sh
```

Then, from inside the pod, generate the keys:

```
vault operator init
```

As a result, you will get the 5 unseal keys and a root token. Save these keys to a notepad, so that you have convenient access to them later:

```
Unseal Key 1: jcJj2ukVBNG5K01PX3UkskPotc+tGAvalG5CqBveS6LN
Unseal Key 2: OBzqfTYL9lmmvuewk85kPxpgc0D/CDVXrY9cdBElA3hJ
Unseal Key 3: M6QysiGixui4SlqB7Jdgv0jaHn8m45V91iabrxRvNo6v
Unseal Key 4: H7T5BHR2isbBSHfu2q4aKG0hvvA13uXlT9799whxmuL+
Unseal Key 5: rtbXv3TqdUeN3luelJa8OOI/CKlILANXxFVkyE/SKv4c

Initial Root Token: s.Pt7xVk5rShSuIJqRPqBFWY5H
```

Then, from within the pod *vault-0*, unseal it by typing:

```
vault operator unseal
```

You will be prompted for a key; paste key 1 from your notepad. Repeat this process 3 times in the *vault-0* pod, each time providing a different key out of the five you have just generated.

This is what the entire process looks like:



On the third attempt, **Initialized** changes to **true** and **Sealed** to **false**:

```
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
...             ...
```

The pod is unsealed.

**Now repeat the same process for the** *vault-1* **and** *vault-2* **pods**.

To stop using the console in *vault-0*, press Ctrl-D on the keyboard. Then enter *vault-1* with the command

```
kubectl -n vault exec -it vault-1 -- sh
```

and unseal it by entering at least three keys. Then follow a similar procedure for *vault-2*. Only when all three pods are unsealed will Vault become active.
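
Since `vault operator unseal` also accepts the key as a command-line argument, the whole round can be scripted instead of typed interactively. The sketch below is illustrative only: the `kubectl` shell function stubs out a live cluster so the loop can run anywhere, and `KEY1`–`KEY3` are placeholders for three of your real unseal keys:

```shell
# Illustrative only: stub kubectl so the loop runs without a cluster.
kubectl() { echo "(would run in cluster): $*"; }
KEY1='first-unseal-key'; KEY2='second-unseal-key'; KEY3='third-unseal-key'
# Feed three of the five keys to each of the three Vault pods.
for pod in vault-0 vault-1 vault-2; do
  for key in "$KEY1" "$KEY2" "$KEY3"; do
    kubectl -n vault exec "$pod" -- vault operator unseal "$key"
  done
done
```

On a real cluster, remove the stub function and substitute the keys from `vault operator init`.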

Step 6 Run Vault UI[](#step-6-run-vault-ui "Permalink to this headline")
-------------------------------------------------------------------------

With our configuration, Vault UI is exposed on port 8200 of a dedicated LoadBalancer that got created.

To check the LoadBalancer, run:

```
kubectl -n vault get svc
```

Check the external IP of the LoadBalancer (it can take a couple of minutes until the external IP is available):

```
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
...
vault-ui   LoadBalancer   10.254.49.9   64.225.129.145   8200:32091/TCP   143m
```

Type the external IP into the browser, specifying HTTPS and port 8200. Because the certificate is self-signed, the site may complain that there is a risk in proceeding. Accept the risk and you will see that Vault UI is available, similar to the image below. To log in, provide the root token which you obtained earlier:



You can now start using the Vault.



Return livenessProbe to production value[](#return-livenessprobe-to-production-value "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

The *livenessProbe* in Kubernetes defines how the system checks the health of the pods, and its *initialDelaySeconds* is the time it waits before the first check. That would normally not be a concern of yours, but if you do not unseal the Vault within that amount of time, the unsealing won’t work. Under normal circumstances, the value would be **60** seconds, so that in case of any disturbance the system reacts within one minute instead of six. But it is very hard to copy and enter three strings in under one minute, as would be required if the value **60** were present in file **vault-values.yaml**. You would almost inevitably see the pods terminated with Kubernetes exit code **137**, meaning that you did not perform the required operations in time.

In file **vault-values.yaml**, the following section defined **360** seconds as the initial delay of the *livenessProbe*:

```
livenessProbe:
  enabled: true
  path: "/v1/sys/health?standbyok=true"
  initialDelaySeconds: 360
```

To return the value of *initialDelaySeconds* to **60**, execute the command:

```
kubectl edit statefulset vault -n vault
```

You can now access the equivalent of file **vault-values.yaml** inside the Kubernetes cluster. The command automatically opens a Vim-like editor; press the **i** key on the keyboard to enter insert mode and change the value:



When done, save and leave Vim with the standard **:wq** command.
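
An alternative to interactive editing is a JSON patch (a sketch, assuming the vault container is the first container in the StatefulSet pod template). Here we only build and syntax-check the patch locally; against a live cluster you would pass it to `kubectl -n vault patch statefulset vault --type=json -p "$patch"`:

```shell
# JSON patch that replaces the liveness probe initial delay with 60 seconds.
# The /containers/0/ index is an assumption about the pod template layout.
patch='[{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds","value":60}]'
# Syntax-check the patch before using it against a cluster.
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
```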
Troubleshooting[](#troubleshooting "Permalink to this headline")
|
||||
-----------------------------------------------------------------
|
||||
|
||||
Check the events, which can point out hints of what needs to be improved:
|
||||
|
||||
```
|
||||
kubectl get events -n vault
|
||||
|
||||
```
|
||||
|
||||
If there are errors and you want to delete Vault installation in order to repeat the process from a clean slate, note that **MutatingWebhookConfiguration** might be left in the default namespace. Delete it prior to trying again:

```
kubectl get MutatingWebhookConfiguration

kubectl delete MutatingWebhookConfiguration consul-consul-connect-injector
kubectl delete MutatingWebhookConfiguration vault-agent-injector-cfg
```

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You now have a Vault server as part of the cluster, and you can also use it from the IP address it was installed on.

Another way to improve Kubernetes security is to secure applications with HTTPS using ingress:

[Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud Cloud](Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-CloudFerro-Cloud-Cloud.html).

Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud Cloud[](#installing-jupyterhub-on-magnum-kubernetes-cluster-in-brand-name-cloud-name-cloud "Permalink to this headline")
================================================================================================================================================================================================

Jupyter notebooks are a popular way of presenting application code, as well as running exploratory experiments and analyses, conveniently from a web browser. From a Jupyter notebook, one can run code, see the generated results in attractive visual form, and often also interact with the generated output.

JupyterHub is an open-source service that creates cloud-based Jupyter notebook servers on demand, enabling users to run their notebooks without being concerned about the setup and required resources.

In this article, we show how to quickly deploy JupyterHub using the Magnum Kubernetes service.

What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Authenticate to the cluster
> * Apply preliminary configuration
> * Run JupyterHub Helm chart installation
> * Retrieve details of JupyterHub service
> * Run JupyterHub on HTTPS

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **kubectl up and running**

For further instructions refer to [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

No. 3 **Helm up and running**

Helm is a package manager for Kubernetes, as explained in the article

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 4 **A registered domain name available**

To see the results of the installation, you should have a registered domain of your own. You will use it in Step 5 to run JupyterHub on HTTPS in a browser.

Step 1 Authenticate to the cluster[](#step-1-authenticate-to-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

First of all, we need to authenticate to the cluster. It may be that you already have a cluster at your disposal and that the config file is already in place; in other words, you are able to execute **kubectl** commands immediately.

You may also create a new cluster and call it, say, *jupyter-cluster*, as explained in Prerequisite No. 2. In that case, run the following command from your local machine to create a config file in the present working directory:

```
openstack coe cluster config jupyter-cluster
```

This will output a command that sets the KUBECONFIG environment variable, which points to the location of the config file for your newly created cluster, e.g.

```
export KUBECONFIG=/home/eouser/config
```

Run this command.

Step 2 Apply preliminary configuration[](#step-2-apply-preliminary-configuration "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

OpenStack Magnum by default applies certain security restrictions to pods running on the cluster, in line with the "least privilege" practice. JupyterHub requires some additional privileges in order to run correctly.

We will start by creating a dedicated namespace for our JupyterHub Helm artifacts:

```
kubectl create namespace jupyterhub
```

The next step is to create a *RoleBinding* that grants the *magnum:podsecuritypolicy:privileged* ClusterRole to the ServiceAccounts that the JupyterHub Helm chart will later deploy in the *jupyterhub* namespace. This role enables the additional privileges for those ServiceAccounts. Create a file *jupyterhub-rolebinding.yaml* with the following contents:

**jupyterhub-rolebinding.yaml**

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jupyterhub-rolebinding
  namespace: jupyterhub
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: magnum:podsecuritypolicy:privileged
```

Then apply with:

```
kubectl apply -f jupyterhub-rolebinding.yaml
```

Step 3 Run Jupyterhub Helm chart installation[](#step-3-run-jupyterhub-helm-chart-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------

To install the Helm chart with the default settings, use the set of commands below. This will

> * download and update the JupyterHub repository, and
> * install the chart to the *jupyterhub* namespace.

```
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update
helm install jupyterhub jupyterhub/jupyterhub --version 2.0.0 --namespace jupyterhub
```

This is the result of a successful Helm chart installation:

![jupterhub_deployed_creodias](../_images/jupterhub_deployed_creodias.png)

Step 4 Retrieve details of your service[](#step-4-retrieve-details-of-your-service "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

Once all the Helm resources get deployed to the *jupyterhub* namespace, we can view their state and definitions using standard **kubectl** commands.

To view the services created by Helm, execute the following command:

```
kubectl get services -n jupyterhub
```

Several resources and a few services have been created. The one most interesting to us is the **proxy-public** service of type LoadBalancer, which exposes JupyterHub to the public network:

```
$ kubectl get services -n jupyterhub
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
hub            ClusterIP      10.254.209.133   <none>           8081/TCP       18d
proxy-api      ClusterIP      10.254.86.239    <none>           8001/TCP       18d
proxy-public   LoadBalancer   10.254.168.141   64.225.131.136   80:31027/TCP   18d
```

The External IP of the proxy-public service will initially be in the *<pending>* state. Re-run the command; after 2-5 minutes, you will see a floating IP assigned to the service. You can then type this IP into the browser.
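Instead of re-running the command manually, you can watch just this one service; the command below keeps printing updates until you interrupt it with Ctrl+C:

```shell
# Re-print the service line whenever its state changes; <pending> will be
# replaced by the floating IP once the LoadBalancer is provisioned.
kubectl get service proxy-public -n jupyterhub --watch
```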

First, you will see the login screen. Provide any combination of dummy login and password; after a moment, JupyterHub loads in the browser:

![jupyterhub_started_creodias](../_images/jupyterhub_started_creodias.png)

JupyterHub is now working over HTTP on a direct IP address and you can use it as is.

Warning

If in the next step you switch JupyterHub to HTTPS, you will not be able to run it as an HTTP service again unless it is relaunched.

Step 5 Run on HTTPS[](#step-5-run-on-https "Permalink to this headline")
-------------------------------------------------------------------------

The JupyterHub Helm chart supports HTTPS deployments natively. Having deployed the chart above, we can simply upgrade it to enable serving over HTTPS. Under the hood, it will generate the certificates using the Let’s Encrypt certificate authority.

To enable HTTPS, prepare a configuration override file, e.g. *jupyter-https-values.yaml*, with the following contents (adjust the email and domain to your own):

**jupyter-https-values.yaml**

```
proxy:
  https:
    enabled: true
    hosts:
      - mysampledomain.info
    letsencrypt:
      contactEmail: [email protected]
```

Then upgrade the chart with the following **upgrade** command:

```
helm upgrade -n jupyterhub jupyterhub jupyterhub/jupyterhub -f jupyter-https-values.yaml
```

As noted in Prerequisite No. 4, you should have a registered domain available so that you can now point it to the address that the LoadBalancer for service **proxy-public** returned above. Please ensure that the records in your domain registrar are correctly associated. Concretely, we have associated the A record set of *mysampledomain.info* with the value **64.225.131.136** (the public IP address of our service). Once this is done, JupyterHub gets served over HTTPS:

![jupyterhub_https_creodias](../_images/jupyterhub_https_creodias.png)

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

For a production environment, replace the dummy authenticator with an alternative authentication mechanism and ensure persistence by, e.g., connecting to a Postgres database. These steps are beyond the scope of this article.
Kubernetes cluster observability with Prometheus and Grafana on CloudFerro Cloud[](#kubernetes-cluster-observability-with-prometheus-and-grafana-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================================

Complex systems deployed on Kubernetes take advantage of multiple Kubernetes resources. Such deployments often consist of a number of namespaces, pods and many other entities, all of which consume cluster resources.

To gain proper insight into how the cluster resources are utilized, and to enable optimizing their use, one needs a functional cluster observability setup.

In this article we present the use of a popular open-source observability stack consisting of Prometheus and Grafana.

What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Prometheus
> * Install Grafana
> * Add Prometheus as a datasource to Grafana
> * Add cluster observability dashboard

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **A cluster created on the cloud**

A Kubernetes cluster must be available. For guidelines on creating a Kubernetes cluster, refer to [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 3 **Familiarity with Helm**

For more information on using Helm and installing apps with Helm on Kubernetes, refer to [Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 4 **Access to kubectl command line**

The instructions for activation of **kubectl** are provided in: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)

1. Install Prometheus with Helm[](#install-prometheus-with-helm "Permalink to this headline")
----------------------------------------------------------------------------------------------

Prometheus is an open-source monitoring and alerting toolkit, widely used in the system administration and DevOps domains. Prometheus comes with a time-series database, which can store metrics generated by a variety of other systems and software tools. It provides a query language called PromQL to efficiently access this data. In our case, we will use Prometheus to access the metrics generated by our Kubernetes cluster.

We will use the Prometheus distribution delivered via Bitnami, so the first step is to add the Bitnami repository to our local Helm repository cache. To do so, type in the following command:

```
helm repo add bitnami https://charts.bitnami.com/bitnami
```

Next, install the Prometheus Helm chart:

```
helm install prometheus bitnami/kube-prometheus
```

With the above commands correctly applied, the result should be similar to the following:

```
NAME: prometheus
LAST DEPLOYED: Thu Nov  2 09:22:38 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kube-prometheus
CHART VERSION: 8.21.2
APP VERSION: 0.68.0
```

Note that we are deploying the Helm chart to the default namespace for simplicity. For production, you might consider using a dedicated namespace.

Behind the scenes, several Prometheus pods are launched by the chart, which can be verified as follows:

```
kubectl get pods

...

NAME                                                            READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0          2/2     Running   0          2m39s
prometheus-kube-prometheus-blackbox-exporter-5cf8597545-22wxc   1/1     Running   0          2m51s
prometheus-kube-prometheus-operator-69584c98f-7wwrg             1/1     Running   0          2m51s
prometheus-kube-state-metrics-db4f67c5c-h77lb                   1/1     Running   0          2m51s
prometheus-node-exporter-8twzf                                  1/1     Running   0          2m51s
prometheus-node-exporter-sc8d7                                  1/1     Running   0          2m51s
prometheus-prometheus-kube-prometheus-prometheus-0              2/2     Running   0          2m39s
```

Similarly, several dedicated Kubernetes services are deployed. The service *prometheus-kube-prometheus-prometheus* exposes the Prometheus dashboard. To access this service in the browser on the default port 9090, type in the following command:

```
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090
```

Then open *localhost:9090* in your browser to see a result similar to the following:

![prometheus_dashboard_creodias](../_images/prometheus_dashboard_creodias.png)

Notice that when you start typing "kube" in the search field, the autocomplete suggests some of the metrics available from our Kubernetes cluster. Along with the Helm chart installation, these metrics got exposed to Prometheus, so they are stored in the Prometheus database and can be queried.

![prometheus_metrics_creodias](../_images/prometheus_metrics_creodias.png)

You can select one of the metrics and hit the **Execute** button to process the query for statistics of this metric. For example, insert the following expression

```
kube_pod_info{namespace="default"}
```

to query for all pods in the default namespace. (Further elaboration on the capabilities of the Prometheus GUI and PromQL syntax is beyond the scope of this article.)
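For illustration, here are a few more queries of the same kind; the metric names come from the kube-state-metrics component installed by the chart:

```
# Total CPU requested per namespace:
sum(kube_pod_container_resource_requests{resource="cpu"}) by (namespace)

# Number of pods currently in the Running phase:
count(kube_pod_status_phase{phase="Running"} == 1)
```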

![prometheus_query_creodias](../_images/prometheus_query_creodias.png)

2. Install Grafana[](#install-grafana "Permalink to this headline")
--------------------------------------------------------------------

The next step is to install Grafana. We already added the Bitnami repository when installing Prometheus, so the Grafana chart is also available from our local cache. We only need to install Grafana.

Note that if you want to keep an active browser session of Prometheus from the previous step, you will need to start another Linux terminal to proceed with the installation below.

By default, the Grafana chart is installed with a random auto-generated admin password. We can override one of the Helm settings to define our own password, in this case *ownpassword*, for simplicity of the demo:

```
helm install grafana bitnami/grafana --set admin.password=ownpassword
```

If you prefer to stick to the defaults, instead of the above command, use the following commands to install the chart and extract the auto-generated password:

```
helm install grafana bitnami/grafana
echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
```
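To see what the extraction one-liner does, here is the same pipeline run on a mock secret manifest (the file name and encoded value below are hypothetical): Kubernetes stores secret data base64-encoded, the jsonpath expression picks out the field, and `base64 -d` recovers the plaintext:

```shell
# Mock of the "grafana-admin" secret as kubectl would return it in JSON form.
cat > /tmp/mock-secret.json <<'EOF'
{"data": {"GF_SECURITY_ADMIN_PASSWORD": "b3ducGFzc3dvcmQ="}}
EOF

# Extract the field (what the jsonpath query does) and decode it.
python3 -c 'import json; print(json.load(open("/tmp/mock-secret.json"))["data"]["GF_SECURITY_ADMIN_PASSWORD"])' | base64 -d
# -> ownpassword
```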

There will be a single pod generated by the chart installation. Wait until this pod is ready before proceeding with the further steps:

```
kubectl get pods

NAME                      READY   STATUS    RESTARTS   AGE
...
grafana-fb6877dbc-5jvjc   1/1     Running   0          65s
...
```

Now, similarly to Prometheus, we can access the Grafana dashboard locally in the browser via the port-forward command:

```
kubectl port-forward svc/grafana 8080:3000
```

Then access the Grafana dashboard by entering *localhost:8080* in the browser:

![grafana_login_creodias](../_images/grafana_login_creodias.png)

Type the login *admin* and the password *ownpassword* (or the auto-generated password you extracted in the earlier step).

3. Add Prometheus as datasource to Grafana[](#add-prometheus-as-datasource-to-grafana "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------

In this step we will set up Grafana to use our Prometheus installation as a datasource.

To proceed, click on the **Home** menu in the upper left corner of the Grafana UI, select **Connections** and then **Data sources**:

![grafana_connections_creodias](../_images/grafana_connections_creodias.png)

Then select **Add data source** and choose *Prometheus* as the datasource type. You will enter the following screen:

![grafana_datasource_creodias](../_images/grafana_datasource_creodias.png)

Change the "Prometheus server URL" field to <http://prometheus-kube-prometheus-prometheus.default.svc.cluster.local:9090>, which is the address of the Prometheus Kubernetes service in charge of exposing the metrics.

Hit the **Save and test** button. If all went well, you will see the following screen:

![grafana_datasource_ok_creodias](../_images/grafana_datasource_ok_creodias.png)

4. Add cluster observability dashboard[](#add-cluster-observability-dashboard "Permalink to this headline")
------------------------------------------------------------------------------------------------------------

We could build a Kubernetes observability dashboard from scratch, but we would much rather utilize one of the open-source dashboards already available.

To proceed, select the **Dashboards** section from the collapsible menu in the top left corner and click **Import**:

![grafana_import_creodias](../_images/grafana_import_creodias.png)

Then, in the *import via grafana.com* field, enter **10000**, which is the ID of the Kubernetes observability dashboard from the *grafana.com* marketplace: <https://grafana.com/grafana/dashboards/10000-kubernetes-cluster-monitoring-via-prometheus/>

![grafana_import_id_creodias](../_images/grafana_import_id_creodias.png)

Another screen appears, as per below. Change the data source to Prometheus and hit the **Import** button:

![grafana_import_options_creodias](../_images/grafana_import_options_creodias.png)

As a result, the Grafana Kubernetes observability dashboard gets populated:

![grafana_dashboard_creodias](../_images/grafana_dashboard_creodias.png)

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You can find and import many other dashboards for Kubernetes observability by browsing <https://grafana.com/grafana/dashboards/>. Some examples are the dashboards with IDs 315, 15758 or 15761, among many others.

The following article shows another approach to creating a Kubernetes dashboard:

[Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum](Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-CloudFerro-Cloud-OpenStack-Magnum.html)
Private container registries with Harbor on CloudFerro Cloud Kubernetes[](#private-container-registries-with-harbor-on-brand-name-kubernetes "Permalink to this headline")
===========================================================================================================================================================================

A fundamental component of the container-based ecosystem is the *container registry*, used for storing and distributing container images. A few popular public container registries serve this purpose in a software-as-a-service model; the most popular is [DockerHub](https://hub.docker.com/).

In this article, we are using [Harbor](https://goharbor.io/), a popular open-source option for running private registries. It is compliant with the [OCI (Open Container Initiative)](https://opencontainers.org/) specification, which makes it suitable for working with standard container images. It ships with multiple enterprise-ready features out of the box.

Benefits of using your own private container registry[](#benefits-of-using-your-own-private-container-registry "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------

When you **deploy your own private container registry**, the benefits include, among others:

> * full control of the storage of your images and the way of accessing them
> * privacy for proprietary and private images
> * customized configuration for logging, authentication etc.

You can also use *role-based access control* on the Harbor project level to specify and enforce which users have permission to publish updated images, to consume the available ones, and so on.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Deploy Harbor private registry with Bitnami-Harbor Helm chart
> * Access Harbor from browser
> * Associate the A record of your domain to Harbor’s IP address
> * Create a project in Harbor
> * Create a Dockerfile for our custom image
> * Ensure trust from our local Docker instance
> * Build our image locally
> * Upload a Docker image to your Harbor instance
> * Download a Docker image from your Harbor instance

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **A cluster on CloudFerro Cloud**

A Kubernetes cluster on the CloudFerro Cloud cloud. Follow the guidelines in this article: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).

No. 3 **kubectl operational**

The **kubectl** CLI tool installed and pointing to your cluster via the KUBECONFIG environment variable. The article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html) provides further guidance.

No. 4 **Familiarity with deploying Helm charts**

See this article:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 5 **Domain purchased from a registrar**

You should own a domain, purchased from any registrar (domain reseller). Obtaining a domain from a registrar is not covered in this article.

No. 6 **Use DNS service in Horizon to link Harbor service to the domain name**

This is optional. Here is the article with detailed information:

[DNS as a Service on CloudFerro Cloud Hosting](../cloud/DNS-as-a-Service-on-CloudFerro-Cloud-Hosting.html)

No. 7 **Docker installed on your machine**

See [How to install and use Docker on Ubuntu 24.04](../cloud/How-to-use-Docker-on-CloudFerro-Cloud.html).

Deploy Harbor private registry with Bitnami-Harbor Helm chart[](#deploy-harbor-private-registry-with-bitnami-harbor-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

The first step to deploy the Harbor private registry is to create a dedicated namespace to host the Harbor artifacts:

```
kubectl create ns harbor
```

Then we add the Bitnami repository to Helm:

```
helm repo add bitnami https://charts.bitnami.com/bitnami
```

We will then prepare a configuration file, which we can use to control various parameters of our deployment. If you want a view of all possible configuration parameters, you can download the default configuration *values.yaml*:

```
helm show values bitnami/harbor > values.yaml
```

You can then see the configuration parameters with

```
cat values.yaml
```

Otherwise, to proceed with the article, use the *nano* editor to create a new file **harbor-values.yaml**

```
nano harbor-values.yaml
```

and paste the following contents:

```
externalURL: mysampledomain.info
nginx:
  tls:
    commonName: mysampledomain.info
adminPassword: Harbor12345
```

These settings deploy the Harbor portal as a service of LoadBalancer type, and the SSL termination is delegated to NGINX, which gets deployed alongside as a Kubernetes pod.

Warning

We use mysampledomain.info for demonstration purposes only. Please replace this with a real domain you own while running the code in this article.

For demonstration we also use a simple password, which can be replaced after the initial login.

Now install the chart with the following command:

```
helm install harbor bitnami/harbor --values harbor-values.yaml -n harbor
```

The output should be similar to the following:

```
NAME: harbor
LAST DEPLOYED: Tue Aug  1 15:48:44 2023
NAMESPACE: harbor-bitnami
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: harbor
CHART VERSION: 16.6.5
APP VERSION: 2.8.1

** Please be patient while the chart is being deployed **

1. Get the Harbor URL:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         Watch the status with: 'kubectl get svc --namespace harbor-bitnami -w harbor'
   export SERVICE_IP=$(kubectl get svc --namespace harbor-bitnami harbor --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
   echo "Harbor URL: http://$SERVICE_IP/"

2. Login with the following credentials to see your Harbor application

   echo Username: "admin"
   echo Password: $(kubectl get secret --namespace harbor-bitnami harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 -d)
```
Access Harbor from browser[](#access-harbor-from-browser "Permalink to this headline")
---------------------------------------------------------------------------------------

With the previous steps followed, you should be able to access the Harbor portal. The following command will display all of the services deployed:

```
kubectl get services -n harbor
```

Here they are:

```
$ kubectl get services -n harbor-bitnami
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                     AGE
harbor                  LoadBalancer   10.254.208.73    64.225.133.148   80:32417/TCP,443:31448/TCP,4443:31407/TCP   4h2m
harbor-chartmuseum      ClusterIP      10.254.11.204    <none>           80/TCP                                      4h2m
harbor-core             ClusterIP      10.254.209.231   <none>           80/TCP                                      4h2m
harbor-jobservice       ClusterIP      10.254.228.203   <none>           80/TCP                                      4h2m
harbor-notary-server    ClusterIP      10.254.189.61    <none>           4443/TCP                                    4h2m
harbor-notary-signer    ClusterIP      10.254.81.205    <none>           7899/TCP                                    4h2m
harbor-portal           ClusterIP      10.254.217.77    <none>           80/TCP                                      4h2m
harbor-postgresql       ClusterIP      10.254.254.0     <none>           5432/TCP                                    4h2m
harbor-postgresql-hl    ClusterIP      None             <none>           5432/TCP                                    4h2m
harbor-redis-headless   ClusterIP      None             <none>           6379/TCP                                    4h2m
harbor-redis-master     ClusterIP      10.254.137.87    <none>           6379/TCP                                    4h2m
harbor-registry         ClusterIP      10.254.2.234     <none>           5000/TCP,8080/TCP                           4h2m
harbor-trivy            ClusterIP      10.254.249.99    <none>           8080/TCP                                    4h2m
```

Explaining the purpose of each of these artifacts is beyond the scope of this article. The key service that interests us at this stage is *harbor*, which got deployed as LoadBalancer type with public IP **64.225.133.148**.
|
||||
|
||||
Associate the A record of your domain to Harbor’s IP address[](#associate-the-a-record-of-your-domain-to-harbor-s-ip-address "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------

The final step is to associate the A record of your domain with Harbor’s IP address.

Create or edit the A record through your domain registrar
: The exact steps will vary from one registrar to another, so explaining them is beyond the scope of this article.

Create or edit the A record through the DNS as a service available in your CloudFerro Cloud account
: This is explained in Prerequisite No. 6. Use commands **DNS** –> **Zones** and select the name of the site you are using instead of *mysampledomain.info*, then click on **Record Sets**. In column **Type**, find the row of type **A - Address record** and click on the **Update** field on the right side to enter or change the value in that row:



In this screenshot, the value **64.225.134.148** is already entered into the **Update** field – you will, of course, supply your own IP address instead.

With the above steps completed, you can access *harbor* from the expected URL, in our case: <https://mysampledomain.info>. Since the chart generated self-signed certificates, you will first need to accept the “Not Secure” warning displayed by the browser:



Note

This warning will vary from one browser to another.

To log in to your instance, use these login details:

> | | |
> | --- | --- |
> | login | admin |
> | password | Harbor12345 |
Create a project in Harbor[](#create-a-project-in-harbor "Permalink to this headline")
---------------------------------------------------------------------------------------

When you log in to Harbor, you enter the **Projects** section:



A *project* in Harbor is a separate space where container images can be placed. An image needs to be placed within the scope of a specific project. As a Harbor admin, you can also apply **Role-Based Access Control** on the Harbor project level, so that only specific users can access or perform certain operations within the scope of a given project.

To create a new project, click on the **New Project** button. In this article, we will upload a public image that can be accessed by anyone, and call the project simply *myproject*:


Create a Dockerfile for our custom image[](#create-a-dockerfile-for-our-custom-image "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

The Harbor service is running and we can use it to upload our Docker images. We will generate a minimal image, so just create an empty folder called *helloharbor*, containing a single file called *Dockerfile*:

```
mkdir helloharbor
cd helloharbor
nano Dockerfile

```

and its contents should be:

**Dockerfile**

```
FROM alpine
CMD ["/bin/sh", "-c", "echo 'Hello Harbor!'"]

```
Ensure trust from our local Docker instance[](#ensure-trust-from-our-local-docker-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

In order to build our Docker image in further steps and upload it to Harbor, we need to ensure that our local Docker instance can communicate with Harbor. To fulfill this objective, proceed as follows:

### Ensure Docker trust - Step 1. Bypass Docker validating the domain certificate[](#ensure-docker-trust-step-1-bypass-docker-validating-the-domain-certificate "Permalink to this headline")

Bypass Docker validating the certificate of the domain where Harbor is running. Docker would not trust this certificate, because it is self-signed. To bypass this validation, create a file called *daemon.json* in the */etc/docker* directory on your local machine:

```
sudo nano /etc/docker/daemon.json

```

You are using **sudo**, so you will be asked to supply the password. Fill the file with this content, then save with **Ctrl-X, Y**:

```
{
  "insecure-registries" : [ "mysampledomain.info" ]
}

```

As always, replace *mysampledomain.info* with your own domain.

For production, you would rather set up a proper HTTPS certificate for the domain.
### Ensure Docker trust - Step 2. Ensure Docker trusts the Harbor’s Certificate Authority[](#ensure-docker-trust-step-2-ensure-docker-trusts-the-harbor-s-certificate-authority "Permalink to this headline")

To do so, we download the **ca.crt** file from our Harbor portal instance, from the **myproject** project view:



The exact way of installing the certificate will depend on the environment you are running Docker on:

Install the certificate on Linux
: Create a nested directory path */etc/docker/certs.d/mysampledomain.info* and copy the **ca.crt** file into that folder:

```
sudo mkdir -p /etc/docker/certs.d/mysampledomain.info
sudo cp ~/ca.crt /etc/docker/certs.d/mysampledomain.info

```

Install the certificate on WSL2 running on Windows 10 or 11
: In WSL2, you would need to upload the certificate to the Windows ROOT CA store with the following sequence:

> * Click on Start and type **Manage Computer Certificates**
> * Right-click on **Trusted Root Certification Authorities**, then **All tasks** and **Import**
> * Browse to the *ca.crt* file location and then keep pressing **Next** to complete the wizard
> * Restart Docker from the Docker Desktop menu
### Ensure Docker trust - Step 3. Restart Docker[](#ensure-docker-trust-step-3-restart-docker "Permalink to this headline")

Restart Docker with:

```
sudo systemctl restart docker

```
Build our image locally[](#build-our-image-locally "Permalink to this headline")
---------------------------------------------------------------------------------

After these steps, we can tag our image and build it locally (from the location where the Dockerfile is placed):

```
docker build -t mysampledomain.info/myproject/helloharbor .

```
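The tag passed to `docker build` above follows the registry path convention `<registry-domain>/<project>/<repository>`; Harbor uses the project segment to decide which project the image lands in. A quick illustration of assembling such a reference, using the values from this article:

```shell
REGISTRY='mysampledomain.info'   # your Harbor domain
PROJECT='myproject'              # the Harbor project created earlier
REPOSITORY='helloharbor'         # the image name

# Assemble the full image reference used by docker build/push/pull
echo "${REGISTRY}/${PROJECT}/${REPOSITORY}"    # prints: mysampledomain.info/myproject/helloharbor
```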
Next, we can log in to the Harbor registry with our *admin* login and *Harbor12345* password:

```
docker login mysampledomain.info

```
Upload a Docker image to your Harbor instance[](#upload-a-docker-image-to-your-harbor-instance "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------

Lastly, push the image to the repo:

```
docker push mysampledomain.info/myproject/helloharbor

```

The result will be similar to the following:


Download a Docker image from your Harbor instance[](#download-a-docker-image-from-your-harbor-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------

To demonstrate downloading images from our Harbor repository, we can first delete the local Docker image we created earlier:

```
docker image rm mysampledomain.info/myproject/helloharbor

```

To verify that it is no longer on our local images list:

```
docker images

```

Then pull from the Harbor remote:

```
docker pull mysampledomain.info/myproject/helloharbor

```
Sealed Secrets on CloudFerro Cloud Kubernetes[](#sealed-secrets-on-brand-name-kubernetes "Permalink to this headline")
=======================================================================================================================

Sealed Secrets improve the security of our Kubernetes deployments by enabling encrypted Kubernetes secrets. This allows storing such secrets in source control and following the GitOps practice of keeping all configuration in code.

In this article we will install tools to work with Sealed Secrets and demonstrate using Sealed Secrets on CloudFerro Cloud.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install the Sealed Secrets controller
> * Install the **kubeseal** command line utility
> * Create a sealed secret
> * Unseal the secret
> * Verify

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Understand Helm deployments**

To install Sealed Secrets on a Kubernetes cluster, we will use the appropriate Helm chart. The following article explains the procedure:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html)

No. 3 **Kubernetes cluster**

A general explanation of how to create a Kubernetes cluster is here:

[How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html)

For a new cluster, using the latest version of the cluster template is always recommended. This article was tested with Kubernetes 1.25.

No. 4 **Access to cluster with kubectl**

[How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html)
Step 1 Install the Sealed Secrets controller[](#step-1-install-the-sealed-secrets-controller "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

In order to use Sealed Secrets, we will first install the Sealed Secrets controller on our Kubernetes cluster. We can use Helm for this purpose; the first step is to add the Helm repository locally with the following command:

```
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets

```

The next step is to install the Sealed Secrets controller chart. We need to install it to the namespace **kube-system**. Note that we also override the name of the controller, so that it corresponds to the default name used by the CLI utility **kubeseal**, which we will install in the following section.

```
helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets

```

The chart installs several resources on our cluster. The key ones are:

> * **SealedSecret Custom Resource Definition (CRD)** - defines the template for sealed secrets that will be created on the cluster
> * The **Sealed Secrets controller pod** running in the kube-system namespace.
Step 2 Install the kubeseal command line utility[](#step-2-install-the-kubeseal-command-line-utility "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------

The kubeseal CLI tool is used for encrypting secrets with the public certificate of the controller. To proceed, install **kubeseal** with the following set of commands:

```
KUBESEAL_VERSION='0.23.0'
wget "https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION:?}/kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz"
tar -xvzf kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal

```
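The `${KUBESEAL_VERSION:?}` syntax used above is standard shell parameter expansion: it substitutes the variable’s value, but aborts the command with an error if the variable is unset or empty, which protects you from building a malformed download URL. A quick local sketch of the behavior:

```shell
KUBESEAL_VERSION='0.23.0'
# The :? expansion succeeds because the variable is set
echo "kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz"

unset KUBESEAL_VERSION
# Now the :? expansion fails inside the subshell, so the fallback branch runs
(echo "${KUBESEAL_VERSION:?is not set}") 2>/dev/null || echo "guard triggered"
```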
You can verify that **kubeseal** was properly installed by running:

```
kubeseal --version

```

which will return a result similar to the following:


Step 3 Create a sealed secret[](#step-3-create-a-sealed-secret "Permalink to this headline")
---------------------------------------------------------------------------------------------

We can use Sealed Secrets to encrypt secrets so that they can be decrypted only by the controller running on the cluster.

A sealed secret needs to be created based off a regular, unencrypted Kubernetes secret. However, we don’t want to commit this base secret to our Kubernetes cluster. We also do not want to create a permanent file with the unencrypted secret contents, to avoid accidentally committing it to source control.

Therefore, we will use **kubectl** to create a regular secret only temporarily, using the **--dry-run=client** parameter. The secret has a key **foo** and value **bar**. **kubectl** outputs this temporary secret; we then pipe the output to the **kubeseal** utility. **kubeseal** seals (encrypts) the secret and saves it to a file called **mysecret.yaml**.

```
kubectl create secret generic mysecret \
  --dry-run=client \
  --from-literal=foo=bar -o yaml | kubeseal \
  --format yaml > mysecret.yaml

```
When we view the file we can see the contents are encrypted and safe to store in source control.
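For orientation, a sealed secret file has roughly the following shape (a sketch of the SealedSecret custom resource; the ciphertext under **encryptedData** is a truncated placeholder here and will differ on every run):

```
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: default
spec:
  encryptedData:
    foo: AgBy8hCi...   # long ciphertext string, safe to commit
  template:
    metadata:
      name: mysecret
      namespace: default
```

Only the controller holding the private key can decrypt the **encryptedData** values back into a regular secret.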
Step 4 Unseal the secret[](#step-4-unseal-the-secret "Permalink to this headline")
-----------------------------------------------------------------------------------

To unseal the secret and make it available and usable in the cluster, we run the following command:

```
kubectl create -f mysecret.yaml

```

This, after a few seconds, generates a regular Kubernetes secret which is readable to our cluster. We can verify this with these two commands:

```
kubectl get secret mysecret -o yaml
echo YmFy | base64 --decode

```

The former command outputs the YAML of the secret, while the latter decodes the value of the data stored under the key **foo**, which yields the expected result: **bar**.
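Kubernetes stores secret values base64-encoded (not encrypted) in the secret’s `data` map, which is why a plain decode step recovers the original value. A quick local illustration of that round trip, independent of the cluster:

```shell
# Encode the value the same way Kubernetes stores it under data.foo
printf '%s' 'bar' | base64              # prints: YmFy

# Decode it back, as done when reading the secret
printf '%s' 'YmFy' | base64 --decode    # prints: bar
```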
The results can also be seen on the screen below:


Step 5 Verify[](#step-5-verify "Permalink to this headline")
-------------------------------------------------------------

The generated secret can be used as a regular Kubernetes secret. To test it, create a file **test-pod.yaml** with the following contents:

**test-pod.yaml**

```
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    env:
    - name: TEST_VAR
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: foo

```
This launches a minimal pod called **nginx**, based on the nginx server container image. In the container inside the pod, we create an environment variable called **TEST\_VAR**. The value of the variable is assigned from our secret **mysecret**, under the available key **foo**. Apply the example with the following command:

```
kubectl apply -f test-pod.yaml

```

Then enter the container inside the **nginx** pod:

```
kubectl exec -it nginx -- sh

```

The command prompt will change to **#**, meaning the commands you enter are executed inside the container. Execute the **printenv** command to see the environment variables. We can see our variable **TEST\_VAR** with the value **bar**, as expected:


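The same mechanics can be tried locally: an environment variable injected into a process is visible to `printenv` inside that process, which is exactly what the in-container check relies on. A local sketch, independent of Kubernetes:

```shell
# Inject TEST_VAR only into the child shell, then read it back with printenv,
# mirroring how the pod's container sees the secret-backed variable
TEST_VAR=bar sh -c 'printenv TEST_VAR'    # prints: bar
```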
What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Sealed Secrets present a viable alternative to secret management with additional tools such as HashiCorp Vault. For more information, see [Installing HashiCorp Vault on CloudFerro Cloud Magnum](Installing-HashiCorp-Vault-on-CloudFerro-Cloud-Magnum.html).
Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum[](#using-dashboard-to-access-kubernetes-cluster-post-deployment-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================================

After the Kubernetes cluster has been created, you can access it through a command line tool, **kubectl**, or through a visual interface called the **Kubernetes dashboard**. The *Dashboard* is a GUI interface to the Kubernetes cluster, much the same as **kubectl** is a CLI interface to it.

This article shows how to install the Kubernetes dashboard.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Deploying the dashboard
> * Creating a sample user
> * Creating secret for admin-user
> * Getting the bearer token for authentication to dashboard
> * Creating a separate terminal window for proxy access
> * Running the dashboard in browser

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

No. 2 **Cluster and kubectl should be already operational**

To set up a cluster and connect it to the **kubectl** tool, see this article: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).

The important intermediary result of that article is a command like this:

```
export KUBECONFIG=/home/user/k8sdir/config

```

Note the exact command that sets the value of the **KUBECONFIG** variable in your case, as you will need it to start a new terminal window from which the dashboard will run.
Step 1 Deploying the Dashboard[](#step-1-deploying-the-dashboard "Permalink to this headline")
-----------------------------------------------------------------------------------------------

Install the dashboard with the following command:

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

```

The result is:


Step 2 Creating a sample user[](#step-2-creating-a-sample-user "Permalink to this headline")
---------------------------------------------------------------------------------------------

Next, you create a bearer token which will serve as an authorization token for the Dashboard. To that end, you will create two local files and “send” them to the cloud using the **kubectl** command. The first file is called *dashboard-adminuser.yaml* and its contents are:

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

```

Use a text editor of your choice to create that file; on macOS or Linux you can use *nano*, like this:

```
nano dashboard-adminuser.yaml

```

Install that file on the Kubernetes cluster with this command:

```
kubectl apply -f dashboard-adminuser.yaml

```

The second file to create is:

```
nano dashboard-clusterolebinding.yaml

```

and its contents should be:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

```

The command to send it to the cloud is:

```
kubectl apply -f dashboard-clusterolebinding.yaml

```
Step 3 Create secret for admin-user[](#step-3-create-secret-for-admin-user "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

We have to manually create a token for the admin user.

Create a file **admin-user-token.yaml**:

```
nano admin-user-token.yaml

```

Enter the following code:

```
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token

```

Apply it with:

```
kubectl apply -f admin-user-token.yaml

```
Step 4 Get the bearer token for authentication to dashboard[](#step-4-get-the-bearer-token-for-authentication-to-dashboard "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------

The final step is to get the bearer token, which is a long string that will authenticate calls to the Dashboard:

```
kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath="{.data.token}" | base64 --decode

```

The bearer token string will be printed on the terminal screen.



Copy it to a text editor; it will be needed after you access the Dashboard UI through an HTTPS call.

Note

If the last character of the bearer token string is *%*, it may be a character that denotes the end of the string but is not a part of it. If you copy the bearer string and it is not recognized, try copying it without this trailing *%* character.
Step 5 Create a separate terminal window for proxy access[](#step-5-create-a-separate-terminal-window-for-proxy-access "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------

We shall now use a proxy server for the Kubernetes API server. The proxy server

> * handles certificates automatically when accessing the Kubernetes API,
> * connects to API extensions or dashboards (like in this article),
> * enables testing of API calls locally before automating them in scripts.

To enable the connection, start a **separate** terminal window and first set up the config command for that window:

```
export KUBECONFIG=/home/user/k8sdir/config

```

*Change that path to point to your own config file on your computer.*

The next command in that new window is:

```
kubectl proxy

```

The server is activated on port **8001**:


Step 6 See the dashboard in browser[](#step-6-see-the-dashboard-in-browser "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

Then enter this address into the browser:

```
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

```



Enter the token, click on **Sign In** and get the Dashboard UI for the Kubernetes cluster.



The Kubernetes Dashboard organizes work with the cluster in a visual and interactive way. For instance, click on *Nodes* on the left side to see the nodes that the *k8s-cluster* has.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You can still use **kubectl** or alternate between it and the **Dashboard**. Either way, you can

> * deploy apps on the cluster,
> * access multiple clusters,
> * create load balancers,
> * access applications in the cluster using port forwarding,
> * use a Service to access an application in a cluster,
> * list container images in the cluster,
> * use Services, Deployments and all other resources in a Kubernetes cluster.
Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum[](#using-kubernetes-ingress-on-brand-name-cloud-name-openstack-magnum "Permalink to this headline")
==================================================================================================================================================================

The Ingress feature in Kubernetes routes traffic from outside of the cluster to the services within the cluster. With Ingress, multiple Kubernetes services can be exposed using a single Load Balancer.

In this article, we will provide insight into how Ingress is implemented on the cloud. We will also demonstrate a practical example of exposing Kubernetes services using Ingress on the cloud. In the end, you will be able to create one or more sites and services running on a Kubernetes cluster. The services you create in this way will

> * run on the same IP address without the need to create an extra LoadBalancer per service, and will also
> * automatically enjoy all of the Kubernetes cluster benefits – reliability, scalability etc.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Create Magnum Kubernetes cluster with NGINX Ingress enabled
> * Build and expose Nginx and Apache webservers for testing
> * Create Ingress Resource
> * Verify that Ingress can access both testing servers

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Basic knowledge of Kubernetes fundamentals**

Basic knowledge of Kubernetes fundamentals will come in handy: cluster creation, pods, deployments, services and so on.

No. 3 **Access to kubectl command**

To install the necessary software (if you haven’t done so already), see article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html).

The net result of following the instructions in that and the related articles will be

> * a cluster formed, healthy and ready to be used, as well as
> * access to the cluster from the local machine (i.e. having the *kubectl* command operational).
Step 1 Create a Magnum Kubernetes cluster with NGINX Ingress enabled[](#step-1-create-a-magnum-kubernetes-cluster-with-nginx-ingress-enabled "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

When we create a Kubernetes cluster on the cloud, we can deploy it with a preconfigured ingress setup. This requires minimal setting up and is described in this help section: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html).

Such a cluster is deployed with an NGINX *ingress controller* and the default *ingress backend*. The role of the controller is to enable the provisioning of the infrastructure, e.g. the (virtual) load balancer. The role of the backend is to provide access to this infrastructure in line with the rules defined by the **ingress resource** (explained later).

We can verify the availability of these artifacts by typing the following command:

```
kubectl get pods -n kube-system

```

The output should be similar to the one below. We see that there is an ingress controller created, and also an ingress backend, both running as pods on our cluster.

```
kubectl get pods -n kube-system
NAME                                                   READY   STATUS    RESTARTS   AGE
...
magnum-nginx-ingress-controller-zxgj8                  1/1     Running   0          65d
magnum-nginx-ingress-default-backend-9dfb4c685-8fjdv   1/1     Running   0          83d
...

```

There is also an ingress class available in the default namespace:

```
kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       7m36s

```
Step 2 Creating services for Nginx and Apache webserver[](#step-2-creating-services-for-nginx-and-apache-webserver "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
You are now going to build and expose two minimal applications:

> * Nginx server
> * Apache webserver

They will both be exposed from a single public IP address using a single default ingress load balancer. The web pages served from each server will be accessible in the browser with a unified routing scheme. In a similar fashion, one could mix and match applications written in a variety of other technologies.

First, let’s create the Nginx server app. For brevity, we use the command line with default settings:

```
kubectl create deployment nginx-web --image=nginx
kubectl expose deployment nginx-web --type=NodePort --port=80
```

Similarly, we create the Apache app:

```
kubectl create deployment apache-web --image=httpd
kubectl expose deployment apache-web --type=NodePort --port=80
```

The above actions result in creating a service for each app, which can be inspected using the command below. Behind each service, there is a deployment and a running pod.

```
kubectl get services
```

You should see an output similar to the following:

```
kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
apache-web   NodePort    10.254.80.182    <none>        80:32660/TCP   75s
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        84d
nginx-web    NodePort    10.254.101.230   <none>        80:32532/TCP   36m
```
The services were created with the type *NodePort*, which is required for them to work with ingress. Therefore, they are not yet exposed under a public IP. The servers are, however, already running and serving their default welcome pages.
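For reference, the *nginx-web* service generated by **kubectl expose** corresponds roughly to the manifest below. This is a sketch, not the literal object the cluster stores; the *nodePort* value is normally auto-assigned from the 30000–32767 range (here we reuse the one from the example output above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
spec:
  type: NodePort
  selector:
    app: nginx-web          # label set by "kubectl create deployment nginx-web"
  ports:
  - port: 80                # cluster-internal port of the service
    targetPort: 80          # port the nginx container listens on
    nodePort: 32532         # port opened on every node; normally auto-assigned
```

You can compare this sketch with the real object via `kubectl get service nginx-web -o yaml`.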
You could verify that by assigning a floating IP to one of the nodes (see [How to Add or Remove Floating IP’s to your VM on CloudFerro Cloud](../networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-CloudFerro-Cloud.html)). Then SSH to the node and run the following command:

```
curl <name-of-node>:<port-number>
```

For example, in the scenario above we see:

```
curl ingress-tqwzjwu2lw7p-node-1:32660
<html><body><h1>It works!</h1></body></html>
```
Step 3 Create Ingress Resource[](#step-3-create-ingress-resource "Permalink to this headline")
-----------------------------------------------------------------------------------------------
To expose the applications to a public IP address, you will need to define an ingress resource. Since both applications will be available from the same IP address, the ingress resource defines the detailed rules of what gets served on which route. In this example, the */apache* route will be served by the Apache service, and all other routes will be served by the Nginx service.

Note

There are multiple ways the routes can be configured; we present here just a fraction of the capability.

Create a YAML file called *my-ingress-resource.yaml* with the following contents:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-web
            port:
              number: 80
      - path: /apache
        pathType: Prefix
        backend:
          service:
            name: apache-web
            port:
              number: 80
```
And deploy with:

```
kubectl apply -f my-ingress-resource.yaml
```

After some time (usually 2 to 5 minutes), verify that a floating IP has been assigned to the ingress:

```
kubectl get ingress
NAME              CLASS   HOSTS   ADDRESS         PORTS   AGE
example-ingress   nginx   *       64.225.130.77   80      3m16s
```

Note

The address **64.225.130.77** is generated randomly and in your case it will be different. Be sure to copy and use the address shown by **kubectl get ingress**.
Step 4 Verify that it works[](#step-4-verify-that-it-works "Permalink to this headline")
-----------------------------------------------------------------------------------------
Paste the ingress floating IP into the browser, followed by one of the example routes. You should see output similar to the screenshots below. Here is the one for the */apache* route:



This screenshot shows what happens on any other route, which defaults to Nginx:


What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You now have two of the most popular web servers installed as services within a Kubernetes cluster. Here are some ideas on how to use this setup:

**Create another service on the same server**
To create another service under the same IP address, repeat the entire procedure with another endpoint name instead of */apache*. Don’t forget to add the appropriate entry to the YAML file.
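For example, if you deployed and exposed a third NodePort service called *tomcat-web* (a hypothetical name, not part of this article's setup), the extra entry under `paths:` in *my-ingress-resource.yaml* could be sketched as:

```yaml
      - path: /tomcat
        pathType: Prefix
        backend:
          service:
            name: tomcat-web   # hypothetical service, exposed with type NodePort
            port:
              number: 80
```

After editing the file, re-run `kubectl apply -f my-ingress-resource.yaml` to pick up the new rule.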
**Add other endpoints for use with Nginx**

You can create other endpoints and use Nginx as the basic server instead of Apache.

**Use images other than nginx and httpd**

There are many sources of containers on the Internet, but the most popular catalog is [Docker Hub](https://hub.docker.com/search?q=). It contains operating system images with the software you want to use preinstalled, which will save you the effort of downloading and testing the installation.

**Microservices**

Instead of putting all of the code and data onto one virtual machine, the Kubernetes way is to deploy multiple custom containers. A typical setup would be like this:

> * pod No. 1 would contain a database, say, MariaDB, as a backend,
> * pod No. 2 could contain phpMyAdmin as a front end to the database,
> * pod No. 3 could contain an installation of WordPress, which is the front end for the site visitor,
> * pod No. 4 could contain your proprietary code for WordPress plugins.

Each of these pods will take code from a specialized image. If you want to edit a part of the code, you just update the relevant Docker image on Docker Hub and redeploy.

**Use DNS to create a domain name for the server**

You can use a DNS service to connect a proper domain name to the IP address used in this article. With the addition of a cert-manager and a free service such as Let’s Encrypt, the ingress can serve HTTPS in a straightforward way.
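Assuming cert-manager is installed on the cluster and a ClusterIssuer named *letsencrypt-prod* exists (both are assumptions, not part of this article's setup, as is the domain name below), the TLS side of the ingress could be sketched like this:

```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed ClusterIssuer name
spec:
  tls:
  - hosts:
    - example.your-domain.com        # hypothetical domain pointing at the ingress IP
    secretName: example-ingress-tls  # cert-manager stores the issued certificate here
```

With a fragment like this merged into the ingress resource, cert-manager would request the certificate and the controller would terminate HTTPS on port 443.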
Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on CloudFerro Cloud OpenStack Magnum[](#volume-based-vs-ephemeral-based-storage-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")
=====================================================================================================================================================================================================================================
Containers in Kubernetes store files on-disk, and if the container crashes, the data will be lost. A new container can replace the old one, but the data will not survive. Another problem appears when containers running in a pod need to share files.

That is why Kubernetes has another type of file storage, called *volumes*. They can be either *persistent* or *ephemeral*, as measured against the lifetime of a pod:

> * Ephemeral volumes are deleted when the pod is deleted, while
> * Persistent volumes continue to exist even if the pod they were attached to no longer exists.

The concept of volumes was first popularized by Docker, where a volume was a directory on disk, or within a container. In CloudFerro Cloud OpenStack hosting, the default Docker storage is configured to use the ephemeral disk of the instance. This can be changed by specifying the Docker volume size during cluster creation, symbolically like this (see below for the full command to generate a new cluster using **--docker-volume-size**):

```
openstack coe cluster create --docker-volume-size 50
```
This means that a persistent volume of 50 GB will be created and attached to each cluster node. Using **--docker-volume-size** is a way to both reserve the space and declare that the storage will be persistent.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to create a cluster when **--docker-volume-size** is used
> * How to create a pod manifest with *emptyDir* as volume
> * How to create a pod with that manifest
> * How to execute *bash* commands in the container
> * How to save a file into persistent storage
> * How to demonstrate that the attached volume is persistent
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

1 **Hosting**

You need a CloudFerro Cloud hosting account with Horizon interface <https://horizon.cloudferro.com>.

2 **Creating clusters with CLI**

The article [How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html) will introduce you to the creation of clusters using a command line interface.

3 **Connect openstack client to the cloud**

Prepare **openstack** and **magnum** clients by executing *Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud* from article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html).

4 **Check available quotas**

Before creating an additional cluster, check the state of the resources with Horizon commands **Compute** => **Overview**.

5 **Private and public keys**

An SSH key pair created in the OpenStack dashboard. To create it, follow this article: [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html). There, a key pair called “sshkey” is created, and you will be able to use it for this tutorial as well.

6 **Types of Volumes**

Types of volumes are described in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/).
Step 1 - Create Cluster Using **--docker-volume-size**[](#step-1-create-cluster-using-docker-volume-size "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------
You are going to create a new cluster called *dockerspace* that will use parameter **--docker-volume-size**, using the following command:

```
openstack coe cluster create dockerspace \
  --cluster-template k8s-1.23.16-cilium-v1.0.3 \
  --keypair sshkey \
  --master-count 1 \
  --node-count 2 \
  --docker-volume-size 50 \
  --master-flavor eo1.large \
  --flavor eo2.large
```

After a few minutes the new cluster **dockerspace** will be created.

Click on **Container Infra** => **Clusters** to show the three clusters in the system: *authenabled*, *k8s-cluster* and *dockerspace*.



Here are their instances (after clicking on **Compute** => **Instances**):



They will have at least two instances each, one for the master and one for the worker node. *dockerspace* has three instances, as it has two worker nodes created with flavor *eo2.large*.

So far so good, nothing out of the ordinary. Click on **Volumes** => **Volumes** to show the list of volumes:



If **--docker-volume-size** were not used, only volumes with *etcd-volume* in their names would appear here, as is the case for clusters *authenabled* and *k8s-cluster*. With it, additional volumes appear, one for each node; *dockerspace* will, therefore, have one volume for the master and two for the worker nodes.

Note the column **Attached To**. All nodes of *dockerspace* use **/dev/vdb** for storage, a fact that will be important later on.

As specified during creation, the *docker-volumes* have a size of 50 GB each.

In this step, you have created a new cluster with Docker volume storage turned on, and then you verified that the main difference lies in the creation of volumes for the cluster.
Step 2 - Create Pod Manifest[](#step-2-create-pod-manifest "Permalink to this headline")
-----------------------------------------------------------------------------------------
To create a pod, you need a file in *yaml* format that defines the parameters of the pod. Use command

```
nano redis.yaml
```

to create a file called *redis.yaml* and copy the following rows into it:

```
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis
  volumes:
  - name: redis-storage
    emptyDir: {}
```

This is how it will look in the terminal:



You are creating a *Pod*, its name will be *redis*, and it will run one container, also called *redis*. The content of that container will be an image called **redis**.

Redis is a well known database and its image is prepared in advance, so it can be pulled directly from a repository. If you were implementing your own application, the best way would be to release it through Docker and pull it from its repository.

The volume mount is called *redis-storage* and its mount path will be */data/redis*. The volume itself is also named *redis-storage* and is of type *emptyDir*.
An *emptyDir* volume is initially empty and is first created when a Pod is assigned to a node. It will exist as long as that Pod is running there; if the Pod is removed, the related data in *emptyDir* will be deleted permanently. However, the data in an *emptyDir* volume is safe across container crashes.
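An *emptyDir* volume also optionally accepts a size limit and can be backed by RAM instead of node disk; a sketch of how the `volumes:` section of *redis.yaml* could look with these options:

```yaml
  volumes:
  - name: redis-storage
    emptyDir:
      sizeLimit: 1Gi     # the pod is evicted if usage exceeds this limit
      # medium: Memory   # uncomment to back the volume with tmpfs (RAM)
```

The RAM-backed variant is fast but counts against the container memory limits, so use it only for small scratch data.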
Besides *emptyDir*, about a dozen other volume types could have been used here: *awsElasticBlockStore*, *azureDisk*, *cinder* and so on.
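On an OpenStack cloud, a Cinder-backed volume would nowadays typically be requested through a PersistentVolumeClaim rather than the legacy in-line *cinder* volume type. A sketch, assuming the cluster ships a Cinder CSI storage class (the name *cinder-csi* is an assumption and may differ in your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
  - ReadWriteOnce                # a Cinder volume attaches to one node at a time
  storageClassName: cinder-csi   # assumed name; check with: kubectl get storageclass
  resources:
    requests:
      storage: 10Gi
```

A pod would then reference the claim with a `persistentVolumeClaim: { claimName: redis-data }` volume instead of `emptyDir: {}`.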
In this step, you have prepared the pod manifest with which you will create the pod in the next step.
Step 3 - Create a Pod on Node **0** of *dockerspace*[](#step-3-create-a-pod-on-node-0-of-dockerspace "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------
In this step you will create a new pod on node **0** of the *dockerspace* cluster.

First see what pods are available in the cluster:

```
kubectl get pods
```

This may produce an error line such as this one:

```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

That will happen in case you did not set up the kubectl parameters as specified in Prerequisites No. 3. You will now set them up for access to *dockerspace*:

```
mkdir dockerspacedir

openstack coe cluster config \
  --dir dockerspacedir \
  --force \
  --output-certs \
  dockerspace
```

First create a new directory, *dockerspacedir*, where the config file for access to the cluster will reside, then execute the **cluster config** command. The output will be a line like this:

```
export KUBECONFIG=/Users/duskosavic/CloudferroDocs/dockerspacedir/config
```

Copy it and run it as a command in the terminal. That will give the **kubectl** app access to the cluster. Create the pod with this command:

```
kubectl apply -f redis.yaml
```

It will read the parameters in the *redis.yaml* file and send them to the cluster.

Here is the command to list all pods, if any:

```
kubectl get pods

NAME    READY   STATUS              RESTARTS   AGE
redis   0/1     ContainerCreating   0          7s
```

Repeat the command after a few seconds and see the difference:

```
kubectl get pods

NAME    READY   STATUS    RESTARTS   AGE
redis   1/1     Running   0          81s
```

In this step, you have created a new pod on cluster *dockerspace* and it is running.

In the next step, you will enter the container and start issuing commands just like you would in any other Linux environment.
Step 4 - Executing *bash* Commands in the Container[](#step-4-executing-bash-commands-in-the-container "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------
In this step, you will start the **bash** shell in the container, which is the equivalent of opening a terminal session inside its operating system:

```
kubectl exec -it redis -- /bin/bash
```

Running **df -h** at the new prompt produces a listing like this:

```
root@redis:/data# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          50G  1.4G   49G   3% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb         50G  1.4G   49G   3% /data
/dev/vda4        32G  4.6G   27G  15% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           3.9G   16K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware
```

This is what it would look like in the terminal:



Note that the prompt changed to

```
root@redis:/data#
```

which means you are now issuing commands within the container itself. You can use **df** to see the volumes and their sizes. Command

```
df -h
```

lists the sizes of file systems in human-readable form (that is what **-h** is short for here, rather than the usual *Help*).

In this step, you have opened a shell in the container's operating system.
Step 5 - Saving a File Into Persistent Storage[](#step-5-saving-a-file-into-persistent-storage "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
In this step you are going to test the longevity of files on persistent storage. You will first

> * save a file into the */data/redis* directory, then
> * kill the Redis process, which in turn will
> * kill the container; finally, you will
> * re-enter the pod,

where you will find the file intact.

Note that **/dev/vdb** is 50 GB in size in the above listing, and connect that to the column **Attached To** in the **Volumes** => **Volumes** listing:



In its turn, the volume is tied to an instance:



That volume is attached to the instance and, being independent of the container, acts as persistent storage for the pod.

Create a file on the *redis* container:

```
cd /data/redis/
echo Hello > test-file
```

Install software to see the **PID** number of the *Redis* process in the container:

```
apt-get update
apt-get install procps
ps aux
```

These are the running processes:



Take the **PID** number of the *Redis* process (here it is **1**) and eliminate it with command

```
kill 1
```

That will first kill the container and then exit its command line.

In this step, you have created a file and killed the container that holds the file. This sets the stage for testing whether files survive a container crash.
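The reason the pod stays alive after the container dies is the pod's restart policy. The *redis.yaml* manifest does not set it explicitly, so it takes the default value; spelled out, the relevant field would look like this:

```yaml
spec:
  restartPolicy: Always   # the default; kubelet restarts the container after a crash
```

This is also why the **RESTARTS** counter of the pod increases by one after the kill, which you can confirm with `kubectl get pods` before re-entering the container.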
Step 6 - Check the File Saved in Previous Step[](#step-6-check-the-file-saved-in-previous-step "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
In this step, you will find out whether the file *test-file* still exists.

Enter the pod again, activate its **bash** shell and see whether the file has survived:

```
kubectl exec -it redis -- /bin/bash
cd redis
ls

test-file
```

Yes, the file *test-file* is still there. The persistent storage for the pod contains it under the path */data/redis*:



In this step, you have entered the pod again and found the file intact. That was expected, as volumes of type *emptyDir* survive container crashes for as long as the pod exists.
What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
*emptyDir* survives container crashes but will disappear when the pod disappears. Other volume types may survive the loss of pods better. For instance:

> * *awsElasticBlockStore* will have the volume unmounted when the pod is gone; being unmounted rather than destroyed, it will persist the data it contains. This type of volume can have pre-populated data and can share the data among pods.
> * *cephfs* can also have pre-populated data and share it among pods, but can additionally be mounted by multiple writers at the same time.

Other constraints may also apply. Some of these volume types require their own servers to be activated first, or require that all nodes on which the Pods run be of the same type, and so on. Prerequisite No. 6 links to the full list of volume types for Kubernetes clusters, so study it and apply it to your own Kubernetes apps.
KUBERNETES[](#kubernetes "Permalink to this headline")
=======================================================