link icon replaced

This commit is contained in:
govardhan
2025-06-19 14:09:10 +05:30
parent 60adbde60c
commit 172f8e2b34
158 changed files with 996 additions and 996 deletions

View File

@@ -1,4 +1,4 @@
Automatic Kubernetes cluster upgrade on CloudFerro Cloud OpenStack Magnum[](#automatic-kubernetes-cluster-upgrade-on-brand-name-openstack-magnum "Permalink to this headline")
Automatic Kubernetes cluster upgrade on CloudFerro Cloud OpenStack Magnum[🔗](#automatic-kubernetes-cluster-upgrade-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================
Warning

View File

@@ -1,4 +1,4 @@
Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum[](#autoscaling-kubernetes-cluster-resources-on-brand-name-openstack-magnum "Permalink to this headline")
Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum[🔗](#autoscaling-kubernetes-cluster-resources-on-brand-name-openstack-magnum "Permalink to this headline")
=======================================================================================================================================================================================
When **autoscaling of Kubernetes clusters** is turned on, the system can
@@ -9,7 +9,7 @@ When **autoscaling of Kubernetes clusters** is turned on, the system can
This article explains various commands to resize or scale the cluster and leads to a command that automatically creates an autoscalable Kubernetes cluster for OpenStack Magnum.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Definitions of horizontal, vertical and nodes scaling
@@ -18,7 +18,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Get cluster template labels from Horizon interface
> * Get cluster template labels from the CLI
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@@ -43,19 +43,19 @@ Step 2 of article [How to Create a Kubernetes Cluster Using CloudFerro Cloud Ope
There are three different autoscaling features that a Kubernetes cloud can offer:
Horizontal Pod Autoscaler[](#horizontal-pod-autoscaler "Permalink to this headline")
Horizontal Pod Autoscaler[🔗](#horizontal-pod-autoscaler "Permalink to this headline")
-------------------------------------------------------------------------------------
Scaling a Kubernetes cluster horizontally means increasing or decreasing the number of running pods, depending on actual demand at run time. Parameters to take into account are CPU and memory usage, as well as the desired minimum and maximum numbers of pod replicas.
Horizontal scaling is also known as “scaling out”; the component that performs it is abbreviated HPA.
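As a minimal sketch, an HPA can be declared with a manifest like the following. The deployment name `myapp` and the thresholds are illustrative assumptions, not values from this article:

```yaml
# Hypothetical HPA: keep average CPU near 80% across 2-10 replicas of "myapp"
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Applying it with `kubectl apply -f` would let Kubernetes add or remove `myapp` pods as CPU load changes.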
Vertical Pod Autoscaler[](#vertical-pod-autoscaler "Permalink to this headline")
Vertical Pod Autoscaler[🔗](#vertical-pod-autoscaler "Permalink to this headline")
---------------------------------------------------------------------------------
Vertical scaling (or “scaling up”, VPA) is adding or subtracting resources to and from an existing machine. If more CPUs are needed, add them. When they are not needed, shut some of them down.
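A VPA is declared similarly; this sketch assumes the Vertical Pod Autoscaler components are already installed in the cluster, and the target name `myapp` is again an illustrative assumption:

```yaml
# Hypothetical VPA: let the recommender resize "myapp" container requests.
# Requires the Vertical Pod Autoscaler CRDs and controllers to be installed.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Auto"
```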
Cluster Autoscaler[](#cluster-autoscaler "Permalink to this headline")
Cluster Autoscaler[🔗](#cluster-autoscaler "Permalink to this headline")
-----------------------------------------------------------------------
HPA and VPA reorganize the usage of resources and the number of pods; however, there may come a time when the size of the system itself prevents it from satisfying demand. The solution is to autoscale the cluster itself, increasing or decreasing the number of nodes on which the pods run.
@@ -64,7 +64,7 @@ Once the number of nodes is adjusted, the pods and other resources need to rebal
All three models of autoscaling can be combined together.
Define Autoscaling When Creating a Cluster[](#define-autoscaling-when-creating-a-cluster "Permalink to this headline")
Define Autoscaling When Creating a Cluster[🔗](#define-autoscaling-when-creating-a-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------
You can define autoscaling parameters while defining a new cluster, using the **Size** window in the cluster creation wizard:
@@ -79,7 +79,7 @@ Warning
If you decide to use NGINX Ingress option while defining a cluster, NGINX ingress will run as 3 replicas on 3 separate nodes. This will override the minimum number of nodes in Magnum autoscaler.
Autoscaling Node Groups at Run Time[](#autoscaling-node-groups-at-run-time "Permalink to this headline")
Autoscaling Node Groups at Run Time[🔗](#autoscaling-node-groups-at-run-time "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
The autoscaler in Magnum uses Node Groups. Node groups can be used to create workers with different flavors. The default-worker node group is automatically created when the cluster is provisioned. Node groups have lower and upper limits of node count. This is the command to print them out for a given cluster:
@@ -143,7 +143,7 @@ the result will now be with a corrected value:
```
How Autoscaling Detects Upper Limit[](#how-autoscaling-detects-upper-limit "Permalink to this headline")
How Autoscaling Detects Upper Limit[🔗](#how-autoscaling-detects-upper-limit "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
The first version of Autoscaling derived the upper limit by taking the current value of the variable *node\_count* and adding 1 to it. If the command to create a cluster were
@@ -179,7 +179,7 @@ Any additional node group must include concrete *max\_node\_count* attribute.
See Prerequisites No. 4 for detailed examples of using the **openstack coe nodegroup** family of commands.
Autoscaling Labels for Clusters[](#autoscaling-labels-for-clusters "Permalink to this headline")
Autoscaling Labels for Clusters[🔗](#autoscaling-labels-for-clusters "Permalink to this headline")
-------------------------------------------------------------------------------------------------
There are three labels for clusters that influence autoscaling:
@@ -198,7 +198,7 @@ List clusters with **Container Infra** => **Cluster** and click on the name of t
If true, it is enabled, the cluster will autoscale.
Create New Cluster Using CLI With Autoscaling On[](#create-new-cluster-using-cli-with-autoscaling-on "Permalink to this headline")
Create New Cluster Using CLI With Autoscaling On[🔗](#create-new-cluster-using-cli-with-autoscaling-on "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------
The command to create a cluster with the CLI must encompass all of the usual parameters as well as **all of the labels** needed for the cluster to function. The peculiarity of the syntax is that the label parameters must form one single string, without any blanks in between.
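A quick way to keep that constraint honest is to build the string in a variable and guard it before use. The label names below are typical Magnum autoscaling labels used as an illustrative assumption; adjust them to your template:

```shell
# Build the --labels argument as one string with no blanks.
# Label names are illustrative; take yours from the cluster template.
LABELS="auto_scaling_enabled=true,min_node_count=1,max_node_count=3"

# Guard against accidental spaces before passing the string to the CLI:
case "$LABELS" in
  *" "*) echo "labels contain blanks" ;;
  *)     echo "labels OK" ;;
esac
```

The variable would then be passed as `--labels "$LABELS"` to the cluster creation command.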
@@ -239,7 +239,7 @@ Three worker node addresses are active: **10.0.0.102**, **10.0.0.27**, and **10.
There is no traffic to the cluster, so autoscaling immediately kicked in. A minute or two after creation finished, the number of worker nodes dropped by one, leaving addresses **10.0.0.27** and **10.0.0.194**: that is autoscaling at work.
Nodegroups With Worker Role Will Be Automatically Autoscaled[](#nodegroups-with-worker-role-will-be-automatically-autoscalled "Permalink to this headline")
Nodegroups With Worker Role Will Be Automatically Autoscaled[🔗](#nodegroups-with-worker-role-will-be-automatically-autoscalled "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------
The autoscaler automatically detects all new nodegroups with the “worker” role assigned.
@@ -317,7 +317,7 @@ openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role
![state_again.png](../_images/state_again.png)
How to Obtain All Labels From Horizon Interface[](#how-to-obtain-all-labels-from-horizon-interface "Permalink to this headline")
How to Obtain All Labels From Horizon Interface[🔗](#how-to-obtain-all-labels-from-horizon-interface "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------
Use **Container Infra** => **Clusters** and click on the cluster name. You will get plain text in the browser; just copy the rows under **Labels** and paste them into the text editor of your choice.
@@ -326,7 +326,7 @@ Use **Container Infra** => **Clusters** and click on the cluster name. You will
In the text editor, manually remove the line endings to make one string without breaks or carriage returns, then paste it back into the command.
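Instead of editing by hand, the join can be done with a one-liner. The file name and label values below are illustrative assumptions standing in for the rows copied from Horizon:

```shell
# Simulate the rows copied from Horizon (hypothetical values):
printf 'auto_scaling_enabled=true\nmin_node_count=1\nmax_node_count=3\n' > labels.txt

# Join all lines into one comma-separated string with no line breaks:
paste -sd, labels.txt
```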
How To Obtain All Labels From the CLI[](#how-to-obtain-all-labels-from-the-cli "Permalink to this headline")
How To Obtain All Labels From the CLI[🔗](#how-to-obtain-all-labels-from-the-cli "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------
There is a special command that will produce the labels of a cluster:
@@ -342,12 +342,12 @@ This is the result:
That is *yaml* format, as specified by the **-f** parameter. The rows represent label values and your next action is to create one long string without line breaks as in the previous example, then form the CLI command.
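When the labels come out as *yaml* rows of the form `key: value`, a short pipeline can turn them into the required single string. The file name and values here are illustrative assumptions:

```shell
# Hypothetical yaml rows, shaped like the output of the labels command above:
printf 'auto_scaling_enabled: "true"\nmin_node_count: "1"\n' > labels.yaml

# Turn each "key: value" row into key=value, then join with commas:
sed -E 's/: *"?([^"]*)"?$/=\1/' labels.yaml | paste -sd, -
```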
Use Labels String When Creating Cluster in Horizon[](#use-labels-string-when-creating-cluster-in-horizon "Permalink to this headline")
Use Labels String When Creating Cluster in Horizon[🔗](#use-labels-string-when-creating-cluster-in-horizon "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------
The long labels string can also be used when creating the cluster manually, i.e. from the Horizon interface. The place to insert those labels is described in *Step 4 Define Labels* in Prerequisites No. 2.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Autoscaling is similar to autohealing of Kubernetes clusters and both bring automation to the table. They also guarantee that the system will autocorrect as long as it is within its basic parameters. Use autoscaling of cluster resources as much as you can!

View File

@@ -1,7 +1,7 @@
Backup of Kubernetes Cluster using Velero[](#backup-of-kubernetes-cluster-using-velero "Permalink to this headline")
Backup of Kubernetes Cluster using Velero[🔗](#backup-of-kubernetes-cluster-using-velero "Permalink to this headline")
=====================================================================================================================
What is Velero[](#what-is-velero "Permalink to this headline")
What is Velero[🔗](#what-is-velero "Permalink to this headline")
---------------------------------------------------------------
[Velero](https://velero.io) is the official open source project from VMware. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed up objects can be restored on the same cluster, or on a new one. Using a package like Velero is essential for any serious development in the Kubernetes cluster.
@@ -10,7 +10,7 @@ In essence, you create object store under OpenStack, either using Horizon or Swi
Velero has its own CLI command system, so it is possible to automate the creation of backups using cron jobs.
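For example, a crontab entry along these lines could take a nightly backup; the binary path and backup-name pattern are assumptions for illustration:

```
# Hypothetical crontab entry: nightly backup at 02:00, kept 7 days (168h).
# Percent signs must be escaped in crontab.
0 2 * * * /usr/local/bin/velero backup create nightly-$(date +\%F) --ttl 168h
```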
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Getting EC2 Client Credentials
@@ -21,7 +21,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Example 1 Basics of Restoring an Application
> * Example 2 Snapshot of Restoring an Application
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@@ -79,7 +79,7 @@ Either way, we shall assume that there is a container called “bucketnew”:
Supply your own unique name while working through this article.
Before Installing Velero[](#before-installing-velero "Permalink to this headline")
Before Installing Velero[🔗](#before-installing-velero "Permalink to this headline")
-----------------------------------------------------------------------------------
We shall install Velero on Ubuntu 22.04; using other Linux distributions would be similar.
@@ -93,7 +93,7 @@ sudo apt update && sudo apt upgrade
It will be necessary to have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
### Installation step 1 Getting EC2 client credentials[](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")
### Installation step 1 Getting EC2 client credentials[🔗](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")
First fetch EC2 credentials from OpenStack. They are necessary to access a private bucket (container). Generate them by executing the following commands:
@@ -105,7 +105,7 @@ openstack ec2 credentials list
Save somewhere the *Access Key* and the *Secret Key*. They will be needed in the next step, in which you set up a Velero configuration file.
### Installation step 2 Adjust the configuration file - “values.yaml”[](#installation-step-2-adjust-the-configuration-file-values-yaml "Permalink to this headline")
### Installation step 2 Adjust the configuration file - “values.yaml”[🔗](#installation-step-2-adjust-the-configuration-file-values-yaml "Permalink to this headline")
Now create or adjust a configuration file for Velero. Use a text editor of your choice to create that file. On macOS or Linux, for example, you can use **nano**, like this:
@@ -462,7 +462,7 @@ schedules:
```
### Installation step 3 Creating namespace[](#installation-step-3-creating-namespace "Permalink to this headline")
### Installation step 3 Creating namespace[🔗](#installation-step-3-creating-namespace "Permalink to this headline")
Velero must be installed in an eponymous namespace, *velero*. This is the command to create it:
@@ -472,7 +472,7 @@ namespace/velero created
```
### Installation step 4 Installing Velero with a Helm chart[](#installation-step-4-installing-velero-with-a-helm-chart "Permalink to this headline")
### Installation step 4 Installing Velero with a Helm chart[🔗](#installation-step-4-installing-velero-with-a-helm-chart "Permalink to this headline")
Here are the commands to install Velero by means of a Helm chart:
@@ -538,7 +538,7 @@ velero-1721031498 Opaque 1 3d1h
```
### Installation step 5 Installing Velero CLI[](#installation-step-5-installing-velero-cli "Permalink to this headline")
### Installation step 5 Installing Velero CLI[🔗](#installation-step-5-installing-velero-cli "Permalink to this headline")
The final step is to install the Velero CLI (Command Line Interface) suitable for working from the terminal on your operating system.
@@ -595,7 +595,7 @@ velero help
```
Working with Velero[](#working-with-velero "Permalink to this headline")
Working with Velero[🔗](#working-with-velero "Permalink to this headline")
-------------------------------------------------------------------------
So far, we have
@@ -645,7 +645,7 @@ This is the result in terminal window:
![three_backups_mybackup.png](../_images/three_backups_mybackup.png)
Example 1 Basics of Restoring an Application[](#example-1-basics-of-restoring-an-application "Permalink to this headline")
Example 1 Basics of Restoring an Application[🔗](#example-1-basics-of-restoring-an-application "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------
Let us now demonstrate how to restore a Kubernetes application. First, clone an example app from GitHub. Execute this:
@@ -704,7 +704,7 @@ nginx-backup New 0 0 <nil> n/a <none>
```
Example 2 Snapshot of restoring an application[](#example-2-snapshot-of-restoring-an-application "Permalink to this headline")
Example 2 Snapshot of restoring an application[🔗](#example-2-snapshot-of-restoring-an-application "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------
Start the sample nginx app:
@@ -748,7 +748,7 @@ Run `velero restore describe nginx-backup-20220728015234` or `velero restore log
```
Delete a Velero backup[](#delete-a-velero-backup "Permalink to this headline")
Delete a Velero backup[🔗](#delete-a-velero-backup "Permalink to this headline")
-------------------------------------------------------------------------------
There are two ways to delete a backup made by Velero.
@@ -769,10 +769,10 @@ Delete all data in object/block storage
will delete the backup resource including all data in object/block storage
Removing Velero from the cluster[](#removing-velero-from-the-cluster "Permalink to this headline")
Removing Velero from the cluster[🔗](#removing-velero-from-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------
### Uninstall Velero[](#uninstall-velero "Permalink to this headline")
### Uninstall Velero[🔗](#uninstall-velero "Permalink to this headline")
To uninstall Velero release:
@@ -781,14 +781,14 @@ helm uninstall velero-1721031498 --namespace velero
```
### To delete Velero namespace[](#to-delete-velero-namespace "Permalink to this headline")
### To delete Velero namespace[🔗](#to-delete-velero-namespace "Permalink to this headline")
```
kubectl delete namespace velero
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Now that Velero is up and running, you can integrate it into your routine. It will be useful in all classical backup scenarios: disaster recovery, cluster and namespace migration, testing and development, application rollbacks, compliance and auditing, and so on. Apart from these broad use cases, Velero will help with specific Kubernetes cluster backup tasks, such as:

View File

@@ -1,9 +1,9 @@
CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image[](#ci-cd-pipelines-with-gitlab-on-brand-name-kubernetes-building-a-docker-image "Permalink to this headline")
CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image[🔗](#ci-cd-pipelines-with-gitlab-on-brand-name-kubernetes-building-a-docker-image "Permalink to this headline")
===================================================================================================================================================================================================
GitLab provides an isolated, private code registry and space for collaboration on code by teams. It also offers a broad range of code deployment automation capabilities. In this article, we will explain how to automate building a Docker image of your app.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Add your public key to GitLab and access GitLab from your command line
@@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Create pipeline to build your app's Docker image using Kaniko
> * Trigger pipeline build
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@@ -52,7 +52,7 @@ See [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud
Here, we use the key pair to connect to the GitLab instance that we previously installed in Prerequisite No. 3.
Step 1 Add your public key to GitLab and access GitLab from your command line[](#step-1-add-your-public-key-to-gitlab-and-access-gitlab-from-your-command-line "Permalink to this headline")
Step 1 Add your public key to GitLab and access GitLab from your command line[🔗](#step-1-add-your-public-key-to-gitlab-and-access-gitlab-from-your-command-line "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GitLab uses SSH-based authentication for access from the command line. To ensure your console uses these keys by default, store them in the **~/.ssh** folder under the names **id\_rsa** (private key) and **id\_rsa.pub** (public key).
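If you do not have such a key pair yet, one can be generated like this (a sketch; skip it when **~/.ssh/id\_rsa** already exists — the scratch directory is only for demonstration):

```shell
# Generate a 4096-bit RSA key pair without a passphrase.
# Writing to a scratch directory here; in practice use ~/.ssh/id_rsa.
mkdir -p ./demo-ssh
ssh-keygen -t rsa -b 4096 -f ./demo-ssh/id_rsa -N "" -q
ls ./demo-ssh
```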
@@ -78,7 +78,7 @@ You should see an output similar to the following:
![image-2024-5-10_16-30-8.png](../_images/image-2024-5-10_16-30-8.png)
Step 2 Create project in GitLab and add sample application code[](#step-2-create-project-in-gitlab-and-add-sample-application-code "Permalink to this headline")
Step 2 Create project in GitLab and add sample application code[🔗](#step-2-create-project-in-gitlab-and-add-sample-application-code "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
We will first add a sample application in GitLab. This is a minimal Python Flask application; its code can be downloaded from this CloudFerro Cloud [GitHub repository accompanying this Knowledge Base](https://github.com/CloudFerro/K8s-samples/tree/main/HelloWorld-Docker-image-Flask).
@@ -135,7 +135,7 @@ When we enter GitLab GUI, we can see that our changes are committed:
![image-2024-4-26_17-57-57.png](../_images/image-2024-4-26_17-57-57.png)
Step 3 Define environment variables with your DockerHub coordinates in GitLab[](#step-3-define-environment-variables-with-your-dockerhub-coordinates-in-gitlab "Permalink to this headline")
Step 3 Define environment variables with your DockerHub coordinates in GitLab[🔗](#step-3-define-environment-variables-with-your-dockerhub-coordinates-in-gitlab "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We want to create a CI/CD pipeline that will, upon a new commit, build a Docker image of our app and push it to Docker Hub container registry. Let us use environment variables in GitLab to enable connection to the Docker registry. Use the following keys and values:
@@ -172,7 +172,7 @@ Scroll down to the section “Variables” and fill in the respective forms. In t
Now that the values of variables are set up, we will use them in our CI/CD pipeline.
Step 4 Create a pipeline to build your app's Docker image using Kaniko[](#step-4-create-a-pipeline-to-build-your-app-s-docker-image-using-kaniko "Permalink to this headline")
Step 4 Create a pipeline to build your app's Docker image using Kaniko[🔗](#step-4-create-a-pipeline-to-build-your-app-s-docker-image-using-kaniko "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The CI/CD pipeline that we are creating in GitLab will have only one job that
@@ -216,7 +216,7 @@ Fill in and save the contents of a standardized configuration file
Build and publish the container image to DockerHub
: The second command builds and publishes the container image to DockerHub.
Step 5 Trigger pipeline build[](#step-5-trigger-pipeline-build "Permalink to this headline")
Step 5 Trigger pipeline build[🔗](#step-5-trigger-pipeline-build "Permalink to this headline")
---------------------------------------------------------------------------------------------
A commit triggers the pipeline to run. After adding the file, publish changes to the repository with the following set of commands:
@@ -236,7 +236,7 @@ Also when browsing our Docker registry, the image is published:
![image-2024-4-29_14-16-12.png](../_images/image-2024-4-29_14-16-12.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Add your unit and integration tests to this pipeline. They can be added as additional steps in the **gitlab-ci.yml** file. A complete reference can be found here: <https://docs.gitlab.com/ee/ci/yaml/>

View File

@@ -1,15 +1,15 @@
Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud[](#configuring-ip-whitelisting-for-openstack-load-balancer-using-horizon-and-cli-on-brand-name "Permalink to this headline")
Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud[🔗](#configuring-ip-whitelisting-for-openstack-load-balancer-using-horizon-and-cli-on-brand-name "Permalink to this headline")
===============================================================================================================================================================================================================================
This guide explains how to configure IP whitelisting (**allowed\_cidrs**) on an existing OpenStack Load Balancer using Horizon and CLI commands. The configuration will limit access to your cluster through the load balancer.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Prepare Your Environment
> * Whitelist the load balancer via the CLI
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@@ -42,11 +42,11 @@ pip install python-octaviaclient
```
### Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")
### Prepare Your Environment[🔗](#prepare-your-environment "Permalink to this headline")
First of all, you have to find the **id** of your load balancer and its listener.
#### Horizon:[](#horizon "Permalink to this headline")
#### Horizon:[🔗](#horizon "Permalink to this headline")
To find a load balancer **id**, go to **Project** >> **Network** >> **Load Balancers** and find the one associated with your cluster (its name will be prefixed with your cluster name).
@@ -56,7 +56,7 @@ Click on load balancer name (in this case `lb-testing-ih347dstxyl2-api_lb_fixed-
![whitelisting_again-2.png](../_images/whitelisting_again-2.png)
#### CLI[](#cli "Permalink to this headline")
#### CLI[🔗](#cli "Permalink to this headline")
To use the CLI to find the listener, you have to know the following two cluster parameters:
@@ -102,7 +102,7 @@ show 2d6b335f-fb05-4496-8593-887f7e2c49cf \
```
### Whitelist the load balancer via the CLI[](#whitelist-the-load-balancer-via-the-cli "Permalink to this headline")
### Whitelist the load balancer via the CLI[🔗](#whitelist-the-load-balancer-via-the-cli "Permalink to this headline")
We now have the listener and the IP addresses which will be whitelisted. This is the command that will set up the whitelisting:
@@ -116,7 +116,7 @@ openstack loadbalancer listener set \
![whitelisting_again-3.png](../_images/whitelisting_again-3.png)
State of Security: Before and After[](#state-of-security-before-and-after "Permalink to this headline")
State of Security: Before and After[🔗](#state-of-security-before-and-after "Permalink to this headline")
--------------------------------------------------------------------------------------------------------
Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:
@@ -124,7 +124,7 @@ Before implementing IP whitelisting, the load balancer accepts traffic from all
> * Only specified IPs can access the load balancer.
> * Unauthorized access attempts are denied.
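What **allowed\_cidrs** enforces can be modeled as a tiny membership check (a toy sketch, not the actual Octavia implementation; the addresses are documentation examples):

```python
import ipaddress

def is_allowed(client_ip: str, allowed_cidrs: list[str]) -> bool:
    """Toy model of the allowed_cidrs check: admit a client only if
    its address falls inside one of the whitelisted CIDR ranges."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

# Hypothetical whitelist covering a single office range:
whitelist = ["203.0.113.0/24"]
print(is_allowed("203.0.113.7", whitelist))   # → True
print(is_allowed("198.51.100.9", whitelist))  # → False
```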
Verification Tools[](#verification-tools "Permalink to this headline")
Verification Tools[🔗](#verification-tools "Permalink to this headline")
-----------------------------------------------------------------------
Various tools can verify that the protection is installed and active:
@@ -141,7 +141,7 @@ curl
Wireshark
: (free): For packet-level analysis.
### Testing using curl and livez[](#testing-using-curl-and-livez "Permalink to this headline")
### Testing using curl and livez[🔗](#testing-using-curl-and-livez "Permalink to this headline")
Here is how we could test it:
@@ -209,7 +209,7 @@ curl: (28) Connection timed out after 5000 milliseconds
Whitelisting blocks traffic from all IP addresses except those allowed by **--allowed-cidr**.
### Testing with nmap[](#testing-with-nmap "Permalink to this headline")
### Testing with nmap[🔗](#testing-with-nmap "Permalink to this headline")
To test with **nmap**:
@@ -218,7 +218,7 @@ nmap -p <PORT> <LOAD_BALANCER_IP>
```
### Testing with curl directly[](#testing-with-curl-directly "Permalink to this headline")
### Testing with curl directly[🔗](#testing-with-curl-directly "Permalink to this headline")
To test with **curl**:
@@ -227,7 +227,7 @@ curl http://<LOAD_BALANCER_IP>
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You can wrap up this procedure with Terraform and apply it to a larger number of load balancers. See [Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-CloudFerro-Cloud.html.md)

View File

@@ -1,9 +1,9 @@
Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud[](#configuring-ip-whitelisting-for-openstack-load-balancer-using-terraform-on-brand-name "Permalink to this headline")
Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud[🔗](#configuring-ip-whitelisting-for-openstack-load-balancer-using-terraform-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================================
This guide explains how to configure IP whitelisting (**allowed\_cidrs**) on an existing OpenStack Load Balancer using Terraform. The configuration will limit access to your cluster through the load balancer.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Get necessary load balancer and cluster data from the Prerequisites
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Run terraform
> * Test and verify that protection of load balancer via whitelisting works
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -48,14 +48,14 @@ You can also use Horizon commands **Identity** > **Application Credentials**
Log in to your account using this unrestricted credential.
Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")
Prepare Your Environment[🔗](#prepare-your-environment "Permalink to this headline")
-----------------------------------------------------------------------------------
Work through the article in Prerequisite No. 2, from which we will derive all the input parameters, using Horizon and CLI commands.
Also, authenticate using the application credential you got from Prerequisite No. 4.
Configure Terraform for whitelisting[](#configure-terraform-for-whitelisting "Permalink to this headline")
Configure Terraform for whitelisting[🔗](#configure-terraform-for-whitelisting "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
Instead of performing the whitelisting procedure manually, we can use Terraform and store the procedure in a remote repo.
@ -136,7 +136,7 @@ resource "openstack_lb_listener_v2" "k8s_api_listener" {
```
Import Existing Load Balancer Listener[](#import-existing-load-balancer-listener "Permalink to this headline")
Import Existing Load Balancer Listener[🔗](#import-existing-load-balancer-listener "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------
Since version 1.5, Terraform can import your resource in a declarative way.
@ -158,7 +158,7 @@ terraform import openstack_lb_listener_v2.k8s_api_listener "<your-listener-id>"
```
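As a declarative alternative to the imperative command above, Terraform 1.5+ also accepts an `import` block. A minimal sketch, generated into *imports.tf*; the listener ID placeholder is kept from the article:

```shell
# Write a declarative import block (Terraform >= 1.5) for the listener.
LISTENER_ID="<your-listener-id>"   # placeholder -- substitute your real ID

cat > imports.tf <<EOF
import {
  to = openstack_lb_listener_v2.k8s_api_listener
  id = "${LISTENER_ID}"
}
EOF

cat imports.tf
```

With this block in place, `terraform plan` and `terraform apply` perform the import, instead of a separate `terraform import` invocation.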
Run Terraform[](#run-terraform "Permalink to this headline")
Run Terraform[🔗](#run-terraform "Permalink to this headline")
-------------------------------------------------------------
**Terraform Execute**
@ -217,7 +217,7 @@ Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
```
Tests[](#tests "Permalink to this headline")
Tests[🔗](#tests "Permalink to this headline")
---------------------------------------------
By default, Magnum LB does not have any access restrictions.
@ -268,7 +268,7 @@ curl: (28) Connection timed out after 5000 milliseconds
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Compare with [Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud](Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-CloudFerro-Cloud.html.md)

View File

@ -1,11 +1,11 @@
Create and access NFS server from Kubernetes on CloudFerro Cloud[](#create-and-access-nfs-server-from-kubernetes-on-brand-name "Permalink to this headline")
Create and access NFS server from Kubernetes on CloudFerro Cloud[🔗](#create-and-access-nfs-server-from-kubernetes-on-brand-name "Permalink to this headline")
=============================================================================================================================================================
In order to enable simultaneous read-write storage for multiple pods running on a Kubernetes cluster, we can use an NFS server.
In this guide we will create an NFS server on a virtual machine, create a file share on this server, and demonstrate accessing it from a Kubernetes pod.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Set up an NFS server on a VM
@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Make the share available
> * Deploy a test pod on the cluster
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -41,7 +41,7 @@ No. 4 **kubectl access to the Kubernetes cloud**
As usual when working with Kubernetes clusters, you will need to use the **kubectl** command: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
1. Set up NFS server on a VM[](#set-up-nfs-server-on-a-vm "Permalink to this headline")
1. Set up NFS server on a VM[🔗](#set-up-nfs-server-on-a-vm "Permalink to this headline")
----------------------------------------------------------------------------------------
As a prerequisite to creating an NFS server on a VM, first create, from the Network tab in Horizon, a security group allowing ingress traffic on port **2049**.
@ -54,7 +54,7 @@ When the VM is created, you can see that it has private address assigned. For th
Set up a floating IP on the VM, just to enable SSH access to it.
2. Set up a share folder on the NFS server[](#set-up-a-share-folder-on-the-nfs-server "Permalink to this headline")
2. Set up a share folder on the NFS server[🔗](#set-up-a-share-folder-on-the-nfs-server "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------
SSH to the VM, then run:
@ -95,7 +95,7 @@ Edit the */etc/exports* file and add the following line:
This indicates that all nodes on the cluster network can access this share, with subfolders, in read-write mode.
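The export entry can be assembled as follows. This is a sketch only: the share path and cluster network CIDR are assumptions, substitute the values from your own setup:

```shell
# Build the /etc/exports line described above.
SHARE_DIR="/mnt/nfs_share"     # assumption: your share folder
CLUSTER_CIDR="10.0.0.0/24"     # assumption: your cluster's private network
EXPORT_LINE="${SHARE_DIR} ${CLUSTER_CIDR}(rw,sync,no_subtree_check)"

echo "${EXPORT_LINE}"          # append this line to /etc/exports
```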
3. Make the share available[](#make-the-share-available "Permalink to this headline")
3. Make the share available[🔗](#make-the-share-available "Permalink to this headline")
--------------------------------------------------------------------------------------
Run the below command to make the share available:
@ -114,7 +114,7 @@ sudo systemctl restart nfs-kernel-server
Exit from the NFS server VM.
4. Deploy a test pod on the cluster[](#deploy-a-test-pod-on-the-cluster "Permalink to this headline")
4. Deploy a test pod on the cluster[🔗](#deploy-a-test-pod-on-the-cluster "Permalink to this headline")
------------------------------------------------------------------------------------------------------
Ensure you can access your cluster with **kubectl**. Have a file *test-pod.yaml* with the following contents:

View File

@ -1,7 +1,7 @@
Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[](#creating-additional-nodegroups-in-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[🔗](#creating-additional-nodegroups-in-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================
The Benefits of Using Nodegroups[](#the-benefits-of-using-nodegroups "Permalink to this headline")
The Benefits of Using Nodegroups[🔗](#the-benefits-of-using-nodegroups "Permalink to this headline")
---------------------------------------------------------------------------------------------------
A *nodegroup* is a group of nodes from a Kubernetes cluster that have the same configuration and run the user's containers. A single cluster can contain multiple nodegroups, so instead of creating several independent clusters, you may create just one and divide its nodes into nodegroups.
@ -18,7 +18,7 @@ Other uses of nodegroup roles also include:
> * if your Kubernetes environment is short on resources, you can create a minimal Kubernetes cluster and later add nodegroups, thus increasing the number of control and worker nodes.
> * Nodes in a group can be created, upgraded and deleted individually, without affecting the rest of the cluster.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * The structure of command **openstack coe nodelist**
@ -31,7 +31,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * How to resize a nodegroup
> * The benefits of using nodegroups in Kubernetes clusters
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -50,7 +50,7 @@ No. 4 **Check available quotas**
Before creating additional node groups, check the state of the resources with Horizon commands **Compute** => **Overview**. See [Dashboard Overview Project Quotas And Flavors Limits on CloudFerro Cloud](../cloud/Dashboard-Overview-Project-Quotas-And-Flavors-Limits-on-CloudFerro-Cloud.html.md).
Nodegroup Subcommands[](#nodegroup-subcommands "Permalink to this headline")
Nodegroup Subcommands[🔗](#nodegroup-subcommands "Permalink to this headline")
-----------------------------------------------------------------------------
Once you create a Kubernetes cluster on OpenStack Magnum, there are five *nodegroup* commands at your disposal:
@ -70,7 +70,7 @@ openstack coe nodegroup update
With this, you can repurpose the cluster to include various images, change volume access, set up max and min values for the number of nodes and so on.
Step 1 Access the Current State of Clusters and Their Nodegroups[](#step-1-access-the-current-state-of-clusters-and-their-nodegroups "Permalink to this headline")
Step 1 Access the Current State of Clusters and Their Nodegroups[🔗](#step-1-access-the-current-state-of-clusters-and-their-nodegroups "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Here is how to list the clusters available in the system:
@ -97,7 +97,7 @@ to list default nodegroups for those two clusters, *kubelbtrue* and *k8s-cluster
The **default-worker** node group cannot be removed or reconfigured, so plan ahead when creating the base cluster.
Step 2 How to Create a New Nodegroup[](#step-2-how-to-create-a-new-nodegroup "Permalink to this headline")
Step 2 How to Create a New Nodegroup[🔗](#step-2-how-to-create-a-new-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
In this step you learn about the parameters available for the **nodegroup create** command. This is the general structure:
@ -146,7 +146,7 @@ Still in Horizon, click on commands **Container Infra** => **Clusters** => **k8s-
![cluster_inside.png](../_images/cluster_inside.png)
Step 3 Using **role** to Filter Nodegroups in the Cluster[](#step-3-using-role-to-filter-nodegroups-in-the-cluster "Permalink to this headline")
Step 3 Using **role** to Filter Nodegroups in the Cluster[🔗](#step-3-using-role-to-filter-nodegroups-in-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------
It is possible to filter node groups according to the role. Here is the command to show only the *test* nodegroup:
@ -162,7 +162,7 @@ Several node groups can share the same role name.
The roles can be used to schedule the nodes when using the **kubectl** command directly on the cluster.
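Assuming the running example from this article (cluster *k8s-cluster*, role *test*), the sketch below composes the filtering command; it only echoes the invocation so you can review it before running it against your project:

```shell
CLUSTER="k8s-cluster"
ROLE="test"

# List only the nodegroups that carry the given role.
CMD="openstack coe nodegroup list --role ${ROLE} ${CLUSTER}"
echo "${CMD}"
```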
Step 4 Show Details of the Nodegroup Created[](#step-4-show-details-of-the-nodegroup-created "Permalink to this headline")
Step 4 Show Details of the Nodegroup Created[🔗](#step-4-show-details-of-the-nodegroup-created "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------
Command **show** presents the details of a nodegroup in various formats: *json*, *table*, *shell*, *value* or *yaml*. The default is *table*, but use parameter **max-width** to limit the number of columns in it:
@ -174,7 +174,7 @@ openstack coe nodegroup show --max-width 80 k8s-cluster testing
![table_testing.png](../_images/table_testing.png)
Step 5 Delete the Existing Nodegroup[](#step-5-delete-the-existing-nodegroup "Permalink to this headline")
Step 5 Delete the Existing Nodegroup[🔗](#step-5-delete-the-existing-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
In this step you shall try to create a nodegroup with a small footprint:
@ -212,7 +212,7 @@ Regardless of the way, the instances will not be deleted immediately, but rather
The default master and worker node groups cannot be deleted but all the others can.
Step 6 Update the Existing Nodegroup[](#step-6-update-the-existing-nodegroup "Permalink to this headline")
Step 6 Update the Existing Nodegroup[🔗](#step-6-update-the-existing-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
In this step you will update the existing nodegroup directly, rather than deleting and re-creating it. The example command is:
@ -226,7 +226,7 @@ Instead of **replace**, it is also possible to use verbs **add** and **delete**.
In the above example, you are setting the minimum number of nodes to 1. (Previously it was **0**, as parameter **min\_node\_count** was not specified and its default value is **0**.)
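A hedged sketch of such an update call, using the article's running example of cluster *k8s-cluster* and nodegroup *testing*; the block only composes and prints the command:

```shell
CLUSTER="k8s-cluster"
NODEGROUP="testing"

# Replace the min_node_count attribute (verbs: add, delete, replace).
CMD="openstack coe nodegroup update ${CLUSTER} ${NODEGROUP} replace min_node_count=1"
echo "${CMD}"
```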
Step 7 Resize the Nodegroup[](#step-7-resize-the-nodegroup "Permalink to this headline")
Step 7 Resize the Nodegroup[🔗](#step-7-resize-the-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------
Resizing the *nodegroup* is similar to resizing the cluster, with the addition of parameter **nodegroup**. Currently, the number of nodes in group *testing* is 2. Make it **1**:
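The resize command itself is elided from this excerpt; a sketch of its likely shape, based on the syntax described above (the block only echoes the command so it can be reviewed first):

```shell
CLUSTER="k8s-cluster"
NODEGROUP="testing"
NODE_COUNT=1

# Resize a specific nodegroup rather than the whole cluster.
CMD="openstack coe cluster resize --nodegroup ${NODEGROUP} ${CLUSTER} ${NODE_COUNT}"
echo "${CMD}"
```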

View File

@ -1,9 +1,9 @@
Default Kubernetes cluster templates in CloudFerro Cloud[](#default-kubernetes-cluster-templates-in-brand-name-cloud "Permalink to this headline")
Default Kubernetes cluster templates in CloudFerro Cloud[🔗](#default-kubernetes-cluster-templates-in-brand-name-cloud "Permalink to this headline")
=========================================================================================================================================================
In this article we shall list Kubernetes cluster templates available on CloudFerro Cloud and explain the differences among them.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * List available templates on your cloud
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Overview and benefits of *localstorage* templates
> * Example of creating *localstorage* template using HMD and HMAD flavors
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -45,7 +45,7 @@ If template name contains “vgpu”, this template can be used to create so-cal
To learn how to set up vGPU in Kubernetes clusters on CloudFerro Cloud cloud, see [Deploying vGPU workloads on CloudFerro Cloud Kubernetes](Deploying-vGPU-workloads-on-CloudFerro-Cloud-Kubernetes.html.md).
Templates available on your cloud[](#templates-available-on-your-cloud "Permalink to this headline")
Templates available on your cloud[🔗](#templates-available-on-your-cloud "Permalink to this headline")
-----------------------------------------------------------------------------------------------------
The exact number of available default Kubernetes cluster templates depends on the cloud you choose to work with.
@ -72,7 +72,7 @@ FRA1-2
The converse is also true: you may select the cloud according to the type of cluster you want to use. For instance, you would have to select the WAW3-1 cloud if you wanted to use vGPU on your cluster.
How to choose a proper template[](#how-to-choose-a-proper-template "Permalink to this headline")
How to choose a proper template[🔗](#how-to-choose-a-proper-template "Permalink to this headline")
-------------------------------------------------------------------------------------------------
**Standard templates**
@ -103,17 +103,17 @@ If the application does not require a great many operations, then a standard tem
You can also dig deeper and choose the template according to the network plugin used.
### Network plugins for Kubernetes clusters[](#network-plugins-for-kubernetes-clusters "Permalink to this headline")
### Network plugins for Kubernetes clusters[🔗](#network-plugins-for-kubernetes-clusters "Permalink to this headline")
Kubernetes cluster templates at CloudFerro Cloud cloud use *calico* or *cilium* plugins for controlling network traffic. Both are [CNI](https://www.cncf.io/projects/kubernetes/) compliant. *Calico* is the default plugin, meaning that if the template name does not specify the plugin, the *calico* driver is used. If the template name specifies *cilium* then, of course, the *cilium* driver is used.
### Calico (the default)[](#calico-the-default "Permalink to this headline")
### Calico (the default)[🔗](#calico-the-default "Permalink to this headline")
[Calico](https://projectcalico.docs.tigera.io/about/about-calico) uses the BGP protocol to move network packets towards the IP addresses of the pods. *Calico* can be faster than its competitors, but its most remarkable feature is support for *network policies*. With those, you can define which pods can send and receive traffic and also manage the security of the network.
*Calico* can apply policies to multiple types of endpoints such as pods, virtual machines and host interfaces. It also supports cryptographic identity. *Calico* policies can be used on their own or together with the Kubernetes network policies.
### Cilium[](#cilium "Permalink to this headline")
### Cilium[🔗](#cilium "Permalink to this headline")
[Cilium](https://cilium.io/) draws its power from a technology called *eBPF*, which exposes programmable hooks to the network stack in the Linux kernel. *eBPF* uses those hooks to reprogram Linux runtime behaviour without any loss of speed or safety. There is also no need to recompile the Linux kernel for it to become aware of events in Kubernetes clusters. In essence, *eBPF* enables Linux to watch over Kubernetes and react appropriately.
@ -125,7 +125,7 @@ With *Cilium*, the relationships amongst various cluster parts are as follows:
Using *Cilium* especially makes sense if you require fine-grained security controls or need to reduce latency in large Kubernetes clusters.
Overview and benefits of *localstorage* templates[](#overview-and-benefits-of-localstorage-templates "Permalink to this headline")
Overview and benefits of *localstorage* templates[🔗](#overview-and-benefits-of-localstorage-templates "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------
Compared to standard templates, the *localstorage* templates may be a better fit for *resource-intensive* apps.
@ -153,7 +153,7 @@ You would use an HMD flavor mainly for the master node(s) in the cluster.
In WAW3-2 cloud, you would use flavors starting with HMAD instead of HMD.
Example parameters to create a new cluster with localstorage and NVMe[](#example-parameters-to-create-a-new-cluster-with-localstorage-and-nvme "Permalink to this headline")
Example parameters to create a new cluster with localstorage and NVMe[🔗](#example-parameters-to-create-a-new-cluster-with-localstorage-and-nvme "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
For a general discussion of parameters, see Prerequisite No. 4. What follows is a simplified example, geared towards creating a cluster using *localstorage*.

View File

@ -1,18 +1,18 @@
Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud[](#deploy-keycloak-on-kubernetes-with-a-sample-app-on-brand-name "Permalink to this headline")
Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud[🔗](#deploy-keycloak-on-kubernetes-with-a-sample-app-on-brand-name "Permalink to this headline")
===================================================================================================================================================================
[Keycloak](https://www.keycloak.org/) is a large Open-Source Identity Management suite capable of handling a wide range of identity-related use cases.
Using Keycloak, it is straightforward to deploy a robust authentication/authorization solution for your applications. After the initial deployment, you can easily configure it to meet new identity-related requirements, e.g. multi-factor authentication, federation to social providers, custom password policies, and many others.
What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headline")
What We Are Going To Do[🔗](#what-we-are-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------
> * Deploy Keycloak on a Kubernetes cluster
> * Configure Keycloak: create a realm, client and a user
> * Deploy a sample Python web application using Keycloak for authentication
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -31,7 +31,7 @@ No. 4 **Familiarity with OpenID Connect (OIDC) terminology**
Certain familiarity with OpenID Connect (OIDC) terminology is required. Some key terms will be briefly explained in this article.
Step 1 Deploy Keycloak on Kubernetes[](#step-1-deploy-keycloak-on-kubernetes "Permalink to this headline")
Step 1 Deploy Keycloak on Kubernetes[🔗](#step-1-deploy-keycloak-on-kubernetes "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
Let's first create a dedicated Kubernetes namespace for Keycloak. This is optional, but good practice:
@ -73,7 +73,7 @@ This is full screen view of the Keycloak window:
![keycloack_full_screen.png](../_images/keycloack_full_screen.png)
Step 2 Create Keycloak realm[](#step-2-create-keycloak-realm "Permalink to this headline")
Step 2 Create Keycloak realm[🔗](#step-2-create-keycloak-realm "Permalink to this headline")
-------------------------------------------------------------------------------------------
In Keycloak terminology, a *realm* is a dedicated space for managing an isolated subset of users, roles and other related entities. Keycloak initially has a master realm used for administration of Keycloak itself.
@ -92,7 +92,7 @@ When the realm is created (and selected), we operate within this realm:
In the upper left corner, instead of **master**, the name of the selected realm, **myrealm**, is now shown.
Step 3 Create and configure Keycloak client[](#step-3-create-and-configure-keycloak-client "Permalink to this headline")
Step 3 Create and configure Keycloak client[🔗](#step-3-create-and-configure-keycloak-client "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------
Clients are entities in Keycloak that can request Keycloak to authenticate users. In practical terms, they can be thought of as representations of individual applications that want to utilize Keycloak-managed authentication/authorization.
@ -132,7 +132,7 @@ Web origins
After hitting **Save**, your client is created. You can then modify the previously selected settings of the created client and add new, more specific ones. There are vast possibilities for further customization depending on your app's specifics; this is, however, beyond the scope of this article.
Step 4 Create a User in Keycloak[](#step-4-create-a-user-in-keycloak "Permalink to this headline")
Step 4 Create a User in Keycloak[🔗](#step-4-create-a-user-in-keycloak "Permalink to this headline")
---------------------------------------------------------------------------------------------------
After creating the Client, we will proceed to create our first User in Keycloak. To do so, click on the Users tab on the left and then **Create New User**:
@ -145,7 +145,7 @@ Next, we will set up password credentials for the newly created user. Select **C
![image2023-6-15_8-43-7.png](../_images/image2023-6-15_8-43-7.png)
Step 5 Retrieve client secret from Keycloak[](#step-5-retrieve-client-secret-from-keycloak "Permalink to this headline")
Step 5 Retrieve client secret from Keycloak[🔗](#step-5-retrieve-client-secret-from-keycloak "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------
Once we have Keycloak set up, we will need to extract the client *secret*, so that Keycloak establishes trust with our application.
@ -160,7 +160,7 @@ Once in tab **Credentials**, the secret will become accessible through field **C
For privacy reasons, in the screenshot above, it is painted yellow. In your case, take note of its value, as in the next step you will need to paste it into the application code.
Step 6 Create a Flask web app utilizing Keycloak authentication[](#step-6-create-a-flask-web-app-utilizing-keycloak-authentication "Permalink to this headline")
Step 6 Create a Flask web app utilizing Keycloak authentication[🔗](#step-6-create-a-flask-web-app-utilizing-keycloak-authentication "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
To build the app, we will use Flask, a lightweight Python-based web framework. Keycloak supports a wide range of other technologies as well. We will use the Flask-OIDC library, which extends Flask with the capability to run OpenID Connect authentication/authorization scenarios.
@ -269,7 +269,7 @@ Note that *app.py* creates 3 routes:
`/logout`
: Entering this route logs the user out.
Step 7 Test the application[](#step-7-test-the-application "Permalink to this headline")
Step 7 Test the application[🔗](#step-7-test-the-application "Permalink to this headline")
-----------------------------------------------------------------------------------------
To test the application, execute the following command from the working directory in which the file *app.py* is placed:

View File

@ -1,11 +1,11 @@
Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud[](#deploying-https-services-on-magnum-kubernetes-in-brand-name-cloud-name-cloud "Permalink to this headline")
Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud[🔗](#deploying-https-services-on-magnum-kubernetes-in-brand-name-cloud-name-cloud "Permalink to this headline")
======================================================================================================================================================================================
Kubernetes makes it very quick to deploy and publicly expose an application, for example using the LoadBalancer service type. Sample deployments which demonstrate such capability are usually served over HTTP. Deploying a production-ready service secured with HTTPS can also be done smoothly by using additional tools.
In this article, we show how to deploy a sample HTTPS-protected service on CloudFerro Cloud cloud.
What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We are Going to Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install Cert Manager's Custom Resource Definitions
@ -15,7 +15,7 @@ What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Associate the domain with NGINX Ingress
> * Create and Deploy an Ingress Resource
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -50,7 +50,7 @@ This is optional. Here is the article with detailed information:
[DNS as a Service on CloudFerro Cloud Hosting](../cloud/DNS-as-a-Service-on-CloudFerro-Cloud-Hosting.html.md)
Step 1 Install Cert Manager's Custom Resource Definitions (CRDs)[](#step-1-install-cert-manager-s-custom-resource-definitions-crds "Permalink to this headline")
Step 1 Install Cert Manager's Custom Resource Definitions (CRDs)[🔗](#step-1-install-cert-manager-s-custom-resource-definitions-crds "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
We assume you have your
@ -92,7 +92,7 @@ Warning
Magnum introduces a few pod security policies (PSP) which provide some extra safety precautions for the cluster, but conflict with the CertManager Helm chart. PodSecurityPolicy is deprecated and scheduled for removal in Kubernetes v1.25, but is still supported in the Kubernetes versions 1.21 to 1.23 available on CloudFerro Cloud cloud. The commands below may produce warnings about deprecation, but the installation should continue nevertheless.
Step 2 Install CertManager Helm chart[](#step-2-install-certmanager-helm-chart "Permalink to this headline")
Step 2 Install CertManager Helm chart[🔗](#step-2-install-certmanager-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------
We assume you have installed Helm according to the article mentioned in Prerequisite No. 5. The result of that article will be the file *my-values.yaml* and, in order to ensure correct deployment of the CertManager Helm chart, we will need to
@ -148,7 +148,7 @@ or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
We see that *cert-manager* is deployed successfully, but also get a hint that a *ClusterIssuer* or an *Issuer* resource has to be installed as well. Our next step is to install a sample service into the cluster and then continue with creation and deployment of an *Issuer*.
Step 3 Create a Deployment and a Service[](#step-3-create-a-deployment-and-a-service "Permalink to this headline")
Step 3 Create a Deployment and a Service[🔗](#step-3-create-a-deployment-and-a-service "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
Let's deploy the NGINX service as a standard example of a Kubernetes app. First we create a standard Kubernetes deployment and then a service of type *NodePort*. Write the following contents to file *my-nginx.yaml*:
@ -199,7 +199,7 @@ kubectl apply -f my-nginx.yaml
```
Step 4 Create and Deploy an Issuer[](#step-4-create-and-deploy-an-issuer "Permalink to this headline")
Step 4 Create and Deploy an Issuer[🔗](#step-4-create-and-deploy-an-issuer "Permalink to this headline")
-------------------------------------------------------------------------------------------------------
Now install an *Issuer*. It is a custom Kubernetes resource representing a Certificate Authority (CA), which ensures that our HTTPS certificates are signed and therefore trusted by browsers. CertManager supports different issuers; in our example we will use Let's Encrypt, which uses the ACME protocol.
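As a sketch of what such a resource could look like (using the Let's Encrypt staging endpoint, which is convenient for testing before switching to production; the e-mail address is a placeholder to replace with your own, and the secret name matches the *letsencrypt-secret* mentioned below):

```shell
# Write a minimal Let's Encrypt staging Issuer to my-nginx-issuer.yaml
cat <<'EOF' > my-nginx-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging server: use it while testing to avoid production rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-secret
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
```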
@ -236,7 +236,7 @@ kubectl apply -f my-nginx-issuer.yaml
As a result, the *Issuer* gets deployed, and a *Secret* called *letsencrypt-secret* with a private key is deployed as well.
Step 5 Associate the Domain with NGINX Ingress[](#step-5-associate-the-domain-with-nginx-ingress "Permalink to this headline")
Step 5 Associate the Domain with NGINX Ingress[🔗](#step-5-associate-the-domain-with-nginx-ingress "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------
To see the site in a browser, your HTTPS certificate will need to be associated with a specific domain. To follow along, you should have a real domain already registered at a domain registrar.
@ -249,7 +249,7 @@ Now, at your domain registrar you need to associate the A record of the domain w
You can also use the DNS command in Horizon to connect the domain name you have with the cluster. See Prerequisite No. 7 for additional details.
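Once the A record is set, you can verify DNS propagation from the command line (a sketch, assuming `dig` is installed and that the NGINX ingress controller runs in the *ingress-nginx* namespace; substitute your own domain):

```shell
# Resolve the domain's A record...
dig +short mysampledomain.eu
# ...and compare the answer with the ingress controller's EXTERNAL-IP
kubectl get svc -n ingress-nginx
```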
Step 6 Create and Deploy an Ingress Resource[](#step-6-create-and-deploy-an-ingress-resource "Permalink to this headline")
Step 6 Create and Deploy an Ingress Resource[🔗](#step-6-create-and-deploy-an-ingress-resource "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------
The final step is to deploy the *Ingress* resource. This will perform the necessary steps to initiate the certificate signing request with the CA and ultimately provide the HTTPS certificate for your service. In order to proceed, place the contents below into file *my-nginx-ingress.yaml*. Replace **mysampledomain.eu** with your domain.
@ -296,7 +296,7 @@ If all works well, the effort is complete and after a couple of minutes we shoul
![image2022-12-9_10-55-8.png](../_images/image2022-12-9_10-55-8.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
The article [Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum](Using-Kubernetes-Ingress-on-CloudFerro-Cloud-OpenStack-Magnum.html.md) shows how to create an HTTP based service or a site.

View File

@ -1,9 +1,9 @@
Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud[](#deploying-helm-charts-on-magnum-kubernetes-clusters-on-brand-name-cloud-name-cloud "Permalink to this headline")
Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud[🔗](#deploying-helm-charts-on-magnum-kubernetes-clusters-on-brand-name-cloud-name-cloud "Permalink to this headline")
==================================================================================================================================================================================================
Kubernetes is a robust and battle-tested environment for running apps and services, yet it can be time-consuming to manually provision all the resources required to run a production-ready deployment. This article introduces [Helm](https://helm.sh/) as a package manager for Kubernetes. With it, you will be able to quickly deploy complex Kubernetes applications, consisting of code, databases, user interfaces and more.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Background - How Helm works
@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Deploy Helm chart on a cluster
> * Customize chart deployment
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -44,7 +44,7 @@ Code samples in this article assume you are running Ubuntu 20.04 LTS or similar
[How to create a Linux VM and access it from Linux command line on CloudFerro Cloud](../cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-CloudFerro-Cloud.html.md)
Background - How Helm works[](#background-how-helm-works "Permalink to this headline")
Background - How Helm works[🔗](#background-how-helm-works "Permalink to this headline")
---------------------------------------------------------------------------------------
A usual sequence of deploying an application on Kubernetes entails:
@ -60,7 +60,7 @@ For each standard deployment of an application on Kubernetes (e.g. a database, a
Helm charts are designed to cover a broad set of use cases required for deploying an application. The application can be then simply launched on a cluster with a few commands within seconds. Some specific customizations for an individual deployment can be then easily adjusted by overriding the default *values.yaml* file.
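For instance, overriding defaults can be done from a file or inline (a sketch with a hypothetical release name *my-redis*; the `auth.enabled` key is an assumption about the chart's *values.yaml*, so check the chart's documented values first):

```shell
# Install a chart with its default values...
helm install my-redis bitnami/redis
# ...or, alternatively, override defaults from a local file:
helm install my-redis bitnami/redis -f my-values.yaml
# ...or override a single value on the command line:
helm install my-redis bitnami/redis --set auth.enabled=false
```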
Install Helm[](#install-helm "Permalink to this headline")
Install Helm[🔗](#install-helm "Permalink to this headline")
-----------------------------------------------------------
You can install Helm on your own development machine. To install, download the installer file from the Helm release page, change the file permissions, and run the installation:
@ -81,7 +81,7 @@ $ helm version
For other operating systems, use the [link to download Helm installation files](https://phoenixnap.com/kb/install-helm) and proceed analogously.
Add a Helm repository[](#add-a-helm-repository "Permalink to this headline")
Add a Helm repository[🔗](#add-a-helm-repository "Permalink to this headline")
-----------------------------------------------------------------------------
Helm charts are distributed using repositories. For example, a single repository can host several Helm charts from a certain provider. For the purpose of this article, we will add the Bitnami repository, which contains their versions of multiple useful Helm charts, e.g. Redis, Grafana, Elasticsearch or others. You can add it using the following command:
@ -102,7 +102,7 @@ The following image shows just a start of all the available apps from *bitnami*
![search_repo.png](../_images/search_repo.png)
Helm chart repositories[](#helm-chart-repositories "Permalink to this headline")
Helm chart repositories[🔗](#helm-chart-repositories "Permalink to this headline")
---------------------------------------------------------------------------------
In the above example, we knew where to find a repository with Helm charts. There are other repositories and they are usually hosted on GitHub or ArtifactHub. Let us have a look at the [apache page in ArtifactHUB](https://artifacthub.io/packages/helm/bitnami/apache):
@ -115,7 +115,7 @@ Click on the DEFAULT VALUES option (yellow highlight) and see contents of the de
In this file (or in the additional tabular information on the chart page), you can check which parameters are enabled for customization and what their default values are.
Check whether kubectl has access to the cluster[](#check-whether-kubectl-has-access-to-the-cluster "Permalink to this headline")
Check whether kubectl has access to the cluster[🔗](#check-whether-kubectl-has-access-to-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------
To proceed further, verify that you have your KUBECONFIG environment variable exported and pointing to a running cluster's *kubeconfig* file (see Prerequisite No. 4). If needed, export this environment variable:
@ -134,7 +134,7 @@ kubectl get nodes
That will serve as the confirmation that you have access to the cluster.
Deploy a Helm chart on a cluster[](#deploy-a-helm-chart-on-a-cluster "Permalink to this headline")
Deploy a Helm chart on a cluster[🔗](#deploy-a-helm-chart-on-a-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------
Now that we know where to find repositories with hundreds of charts to choose from, let's deploy one of them to our cluster.
@ -177,7 +177,7 @@ Note that the floating IP generation can take a couple of minutes to appear. Aft
![apache_ip.png](../_images/apache_ip.png)
Customizing the chart deployment[](#customizing-the-chart-deployment "Permalink to this headline")
Customizing the chart deployment[🔗](#customizing-the-chart-deployment "Permalink to this headline")
---------------------------------------------------------------------------------------------------
We just saw how quick it was to deploy a Helm chart with the default settings. Usually, before running the chart in production, you will need to adjust a few settings to meet your requirements.
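As a sketch of the idea (the release name *apache* and the `service.ports.http` key are assumptions about the deployment and the chart's values layout; consult the DEFAULT VALUES tab on the chart page before relying on them):

```shell
# Upgrade the running release with an overridden value...
helm upgrade apache bitnami/apache --set service.ports.http=8080
# ...and review the values actually applied to the release
helm get values apache
```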
@ -229,7 +229,7 @@ We can see that the application is now exposed to a new port 8080, which can be
![trag_8080.png](../_images/trag_8080.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Deploy other useful services using Helm charts: [Argo Workflows](https://artifacthub.io/packages/helm/bitnami/argo-workflows), [JupyterHub](https://artifacthub.io/packages/helm/jupyterhub/jupyterhub), [Vault](https://artifacthub.io/packages/helm/hashicorp/vault) amongst many others that are available.

View File

@ -1,4 +1,4 @@
Deploying vGPU workloads on CloudFerro Cloud Kubernetes[](#deploying-vgpu-workloads-on-brand-name-kubernetes "Permalink to this headline")
Deploying vGPU workloads on CloudFerro Cloud Kubernetes[🔗](#deploying-vgpu-workloads-on-brand-name-kubernetes "Permalink to this headline")
===========================================================================================================================================
Utilizing GPUs (Graphics Processing Units) presents a highly efficient alternative for fast, highly parallel processing of demanding computational tasks such as image processing, machine learning and many others.
@ -7,7 +7,7 @@ In cloud environment, virtual GPU units (vGPU) are available with certain Virtua
We will present three alternative ways of adding vGPU capability to your Kubernetes cluster, based on your required scenario. For each, you should be able to verify the vGPU installation and test it by running a vGPU workload.
What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * **Scenario No. 1** - Add vGPU nodes as a nodegroup on a non-GPU Kubernetes clusters created **after** June 21st 2023
@ -17,7 +17,7 @@ What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this h
> * Test vGPU workload
> * Add non-GPU nodegroup to a GPU-first cluster
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -44,7 +44,7 @@ No. 4 **Familiarity with the notion of nodegroups**
[Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-CloudFerro-Cloud-OpenStack-Magnum.html.md).
vGPU flavors per cloud[](#vgpu-flavors-per-cloud "Permalink to this headline")
vGPU flavors per cloud[🔗](#vgpu-flavors-per-cloud "Permalink to this headline")
-------------------------------------------------------------------------------
Below is the list of GPU flavors in each cloud, applicable for use with the Magnum Kubernetes service.
@ -80,7 +80,7 @@ FRA1-2
> | **vm.l40s.2** | 8 | 29.8 GB | 80 GB | Yes |
> | **vm.l40s.8** | 32 | 119.22 GB | 320 GB | Yes |
Hardware comparison between RTX A6000 and NVIDIA L40S[](#hardware-comparison-between-rtx-a6000-and-nvidia-l40s "Permalink to this headline")
Hardware comparison between RTX A6000 and NVIDIA L40S[🔗](#hardware-comparison-between-rtx-a6000-and-nvidia-l40s "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------
The NVIDIA L40S is designed for 24x7 enterprise data center operations and optimized to deploy at scale. As compared to A6000, NVIDIA L40S is better for
@ -90,7 +90,7 @@ The NVIDIA L40S is designed for 24x7 enterprise data center operations and optim
> * real-time ray tracing applications and is
> * faster for in memory-intensive tasks.
Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[](#id1 "Permalink to this table")
Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[🔗](#id1 "Permalink to this table")
| Specification | NVIDIA RTX A60001 | NVIDIA L40S1 |
| --- | --- | --- |
@ -103,7 +103,7 @@ Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[](#id1 "Permalink to th
| **Performance** | Strong performance for diverse workloads | Superior AI and machine learning performance |
| **Use Cases** | 3D rendering, video editing, AI development | Data center, large-scale AI, enterprise applications |
Scenario 1 - Add vGPU nodes as a nodegroup on a non-GPU Kubernetes clusters created after June 21st 2023[](#scenario-1-add-vgpu-nodes-as-a-nodegroup-on-a-non-gpu-kubernetes-clusters-created-after-june-21st-2023 "Permalink to this headline")
Scenario 1 - Add vGPU nodes as a nodegroup on a non-GPU Kubernetes clusters created after June 21st 2023[🔗](#scenario-1-add-vgpu-nodes-as-a-nodegroup-on-a-non-gpu-kubernetes-clusters-created-after-june-21st-2023 "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
In order to create a new nodegroup, called **gpu**, with one node of a vGPU flavor, say **vm.a6000.2**, we can use the following Magnum CLI command:
@ -140,7 +140,7 @@ We get:
The result is that a new nodegroup called **gpu** is created in the cluster and that it is using the GPU flavor.
Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023[](#scenario-2-add-vgpu-nodes-as-nodegroups-on-non-gpu-kubernetes-clusters-created-before-june-21st-2023 "Permalink to this headline")
Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023[🔗](#scenario-2-add-vgpu-nodes-as-nodegroups-on-non-gpu-kubernetes-clusters-created-before-june-21st-2023 "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The instructions are the same as in the previous scenario, with the exception of adding an additional label:
@ -194,7 +194,7 @@ openstack coe nodegroup list $CLUSTER_ID_OLDER --max-width 120
```
Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup[](#scenario-3-create-a-new-gpu-first-kubernetes-cluster-with-vgpu-enabled-default-nodegroup "Permalink to this headline")
Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup[🔗](#scenario-3-create-a-new-gpu-first-kubernetes-cluster-with-vgpu-enabled-default-nodegroup "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
To create a new vGPU-enabled cluster, you can use the usual Horizon commands, selecting one of the existing templates with **vgpu** in their names:
@ -226,7 +226,7 @@ openstack coe cluster create k8s-gpu-with_template \
```
### Verify the vGPU installation[](#verify-the-vgpu-installation "Permalink to this headline")
### Verify the vGPU installation[🔗](#verify-the-vgpu-installation "Permalink to this headline")
You can verify that vGPU-enabled nodes were properly added to your cluster by checking the **nvidia-device-plugin** deployed to the **nvidia-device-plugin** namespace. The command to list the contents of the namespace is:
@ -282,7 +282,7 @@ kubectl describe node k8s-gpu-with-template-lfs5335ymxcn-node-0 | grep 'Taints'
```
### Run test vGPU workload[](#run-test-vgpu-workload "Permalink to this headline")
### Run test vGPU workload[🔗](#run-test-vgpu-workload "Permalink to this headline")
We can run a sample workload on vGPU. To do so, create a YAML manifest file **vgpu-pod.yaml**, with the following contents:
@ -339,7 +339,7 @@ Done
```
Add non-GPU nodegroup to a GPU-first cluster[](#add-non-gpu-nodegroup-to-a-gpu-first-cluster "Permalink to this headline")
Add non-GPU nodegroup to a GPU-first cluster[🔗](#add-non-gpu-nodegroup-to-a-gpu-first-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------
We refer to GPU-first clusters as the ones created with the **worker\_type=gpu** flag. For example, in a cluster created with Scenario No. 3, the default nodegroup consists of vGPU nodes.
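To add CPU-only workers to such a cluster, a nodegroup with a non-GPU flavor can be created in the same way as before (a sketch; the nodegroup name **cpu** and the flavor **eo2a.large** are examples, so substitute a non-GPU flavor available in your cloud):

```shell
# Create a CPU-only nodegroup called "cpu" in the GPU-first cluster
openstack coe nodegroup create $CLUSTER_ID cpu \
  --node-count 1 \
  --flavor eo2a.large

# Verify that the new nodegroup appears next to the default GPU one
openstack coe nodegroup list $CLUSTER_ID --max-width 120
```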

View File

@ -1,9 +1,9 @@
Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster[](#enable-kubeapps-app-launcher-on-brand-name-magnum-kubernetes-cluster "Permalink to this headline")
Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster[🔗](#enable-kubeapps-app-launcher-on-brand-name-magnum-kubernetes-cluster "Permalink to this headline")
=================================================================================================================================================================================
[Kubeapps](https://kubeapps.dev/) app-launcher enables quick deployments of applications on your Kubernetes cluster, with convenient graphical user interface. In this article we provide guidelines for creating Kubernetes cluster with Kubeapps feature enabled, and deploying sample applications.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Brief background - deploying applications on Kubernetes
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Launch sample application from Kubeapps
> * Current limitations
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -37,14 +37,14 @@ No. 5 **Access to CloudFerro clouds**
Kubeapps is available on one of the clouds: WAW3-2, FRA1-2, WAW3-1.
Background[](#background "Permalink to this headline")
Background[🔗](#background "Permalink to this headline")
-------------------------------------------------------
Deploying complex applications on Kubernetes becomes notably more efficient and convenient with Helm. Adding to this convenience, **Kubeapps**, an app-launcher with a Graphical User Interface (GUI), provides a user-friendly starting point for application management. This GUI allows you to deploy and manage applications on your K8s cluster, limiting the need for deep command-line expertise.
The Kubeapps app-launcher can be enabled at cluster creation time. It will run as a local service, accessible from a browser.
Create Kubernetes cluster with Kubeapps quick-launcher enabled[](#create-kubernetes-cluster-with-kubeapps-quick-launcher-enabled "Permalink to this headline")
Create Kubernetes cluster with Kubeapps quick-launcher enabled[🔗](#create-kubernetes-cluster-with-kubeapps-quick-launcher-enabled "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Creating a Kubernetes cluster with Kubeapps enabled follows the generic guideline described in Prerequisite No. 2.
@ -67,7 +67,7 @@ Inserting these labels is shown in the image below:
![image-2024-2-13_13-15-17.png](../_images/image-2024-2-13_13-15-17.png)
Access Kubeapps service locally from your browser[](#access-kubeapps-service-locally-from-your-browser "Permalink to this headline")
Access Kubeapps service locally from your browser[🔗](#access-kubeapps-service-locally-from-your-browser "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------
Once the cluster is created, access the Linux console. You should have the **kubectl** command line tool available, as specified in Prerequisite No. 3.
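To reach the dashboard locally, a port-forward along these lines can be used (a sketch, assuming Kubeapps was deployed as a service called *kubeapps* in the *kubeapps* namespace; adjust the names to your installation):

```shell
# Forward the Kubeapps dashboard to localhost, then open
# http://127.0.0.1:8080 in a browser
kubectl port-forward -n kubeapps svc/kubeapps 8080:80
```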
@ -100,7 +100,7 @@ You can now operate Kubeapps:
![image-2024-2-13_15-48-38.png](../_images/image-2024-2-13_15-48-38.png)
Launch sample application from Kubeapps[](#launch-sample-application-from-kubeapps "Permalink to this headline")
Launch sample application from Kubeapps[🔗](#launch-sample-application-from-kubeapps "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------
Clicking on “Catalog” exposes a long list of applications available for download from the Kubeapps app store.
@ -136,7 +136,7 @@ The results will be similar to this:
![image-2024-2-13_16-29-35.png](../_images/image-2024-2-13_16-29-35.png)
Current limitations[](#current-limitations "Permalink to this headline")
Current limitations[🔗](#current-limitations "Permalink to this headline")
-------------------------------------------------------------------------
Both Kubeapps and the Helm charts deployed by this launcher are open-source projects, which are continuously evolving. The versions installed on CloudFerro Cloud provide a snapshot of this development, as a convenience feature.

View File

@ -1,11 +1,11 @@
GitOps with Argo CD on CloudFerro Cloud Kubernetes[](#gitops-with-argo-cd-on-brand-name-kubernetes "Permalink to this headline")
GitOps with Argo CD on CloudFerro Cloud Kubernetes[🔗](#gitops-with-argo-cd-on-brand-name-kubernetes "Permalink to this headline")
=================================================================================================================================
Argo CD is a continuous deployment tool for Kubernetes, designed with GitOps and Infrastructure as Code (IaC) principles in mind. It automatically ensures that the state of applications deployed on a Kubernetes cluster is always in sync with a dedicated Git repository in which we define that desired state.
In this article we will demonstrate installing Argo CD on a Kubernetes cluster and deploying an application using this tool.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install Argo CD
@ -14,7 +14,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Create and deploy Argo CD application resource
> * View the deployed resources
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -47,7 +47,7 @@ No. 7 **Access to exemplary Flask application**
You should have access to the [example Flask application](https://github.com/CloudFerro/K8s-samples/tree/main/Flask-K8s-deployment), to be downloaded from GitHub in the article. It will serve as an example of a minimal application and by changing it, we will demonstrate that Argo CD is capturing those changes in a continual manner.
Step 1 Install Argo CD[](#step-1-install-argo-cd "Permalink to this headline")
Step 1 Install Argo CD[🔗](#step-1-install-argo-cd "Permalink to this headline")
-------------------------------------------------------------------------------
Let's install Argo CD first, under the following assumptions:
@ -74,7 +74,7 @@ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/st
```
Step 2 Access Argo CD from your browser[](#step-2-access-argo-cd-from-your-browser "Permalink to this headline")
Step 2 Access Argo CD from your browser[🔗](#step-2-access-argo-cd-from-your-browser "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------
By default, the Argo CD web application is not accessible from the browser. To enable this, change the applicable service from the **ClusterIP** to the **LoadBalancer** type with the command:
@ -110,7 +110,7 @@ After typing in your credentials to the login form, you get transferred to the f
![image-2024-5-14_15-34-0.png](../_images/image-2024-5-14_15-34-0.png)
Step 3 Create a Git repository[](#step-3-create-a-git-repository "Permalink to this headline")
Step 3 Create a Git repository[🔗](#step-3-create-a-git-repository "Permalink to this headline")
-----------------------------------------------------------------------------------------------
You need to create a Git repository first. The state of the application on your Kubernetes cluster will be synced to the state of this repo. It is recommended that this be a separate repository from your application code, to avoid triggering the CI pipelines whenever we change the configuration.
@ -123,7 +123,7 @@ Create the repository first, we call ours **argocd-sample**. While filling in th
In that view, the project URL will be pre-filled, corresponding to the URL of your GitLab instance. In the place denoted with a blue rectangle, you should enter your user name; usually it will be **root**, but it can be anything else. If there already are some users defined in GitLab, their names will appear in a drop-down menu.
Step 4 Download Flask application[](#step-4-download-flask-application "Permalink to this headline")
Step 4 Download Flask application[🔗](#step-4-download-flask-application "Permalink to this headline")
-----------------------------------------------------------------------------------------------------
The next goal is to download two yaml files to a folder called **ArgoCD-sample** and its subfolder **deployment**.
@ -146,7 +146,7 @@ rm K8s-samples/ -rf
Files **deployment.yaml** and **service.yaml** deploy a sample Flask application on Kubernetes and expose it as a service. These are typical minimal examples for deployment and service and can be obtained from the CloudFerro Kubernetes samples repository.
Step 5 Push your app deployment configurations[](#step-5-push-your-app-deployment-configurations "Permalink to this headline")
Step 5 Push your app deployment configurations[🔗](#step-5-push-your-app-deployment-configurations "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------
Then you need to upload the **deployment.yaml** and **service.yaml** files to the remote repository. Since you are using Git, you perform the upload by *syncing* your local repo with the remote. First initialize the repo locally, then push the files to your remote with the following commands (replace with your own Git repository instance):
@ -165,7 +165,7 @@ As a result, at this point, we have the two files available in remote repository
![image-2024-5-17_11-20-27.png](../_images/image-2024-5-17_11-20-27.png)
Step 6 Create Argo CD application resource[](#step-6-create-argo-cd-application-resource "Permalink to this headline")
Step 6 Create Argo CD application resource[🔗](#step-6-create-argo-cd-application-resource "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------
Argo CD configuration for a specific application is defined using an application custom resource. Such a resource connects a Kubernetes cluster with a repository where deployment configurations are stored.
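A sketch of such a resource is shown below (the `repoURL` is a placeholder for your own GitLab instance; the application name *flask-sample* and the target namespace are assumptions to adapt to your setup):

```shell
# Write a minimal Argo CD Application manifest to application.yaml
cat <<'EOF' > application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: flask-sample
  namespace: argocd
spec:
  project: default
  source:
    # Repository holding deployment.yaml and service.yaml
    repoURL: https://gitlab.example.com/root/argocd-sample.git
    targetRevision: HEAD
    path: deployment
  destination:
    # The cluster Argo CD itself runs on
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
EOF
```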
@ -227,7 +227,7 @@ spec.destination.server
spec.destination.namespace
: The namespace in the cluster where the application will be deployed.
Step 7 Deploy Argo CD application[](#step-7-deploy-argo-cd-application "Permalink to this headline")
Step 7 Deploy Argo CD application[🔗](#step-7-deploy-argo-cd-application "Permalink to this headline")
-----------------------------------------------------------------------------------------------------
After creating the **application.yaml** file, the next step is to commit it and push it to the remote repo. We can do this with the following commands:
@ -246,7 +246,7 @@ kubectl apply -f application.yaml
```
Step 8 View the deployed resources[](#step-8-view-the-deployed-resources "Permalink to this headline")
Step 8 View the deployed resources[🔗](#step-8-view-the-deployed-resources "Permalink to this headline")
-------------------------------------------------------------------------------------------------------
After performing the steps above, switch views to the Argo CD UI. We can see that our application appears on the list of applications and that the state to be applied on the cluster was properly captured from the Git repo. It will take a few minutes to complete the deployment of resources on the cluster:
@ -263,7 +263,7 @@ After clicking on the applications box, we can also see the details of all th
With the default settings, Argo CD will poll the Git repository every 3 minutes to capture the desired state of the cluster. If any changes in the repo are detected, the applications on the cluster will be automatically relaunched with the new configuration applied.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
* test applying changes to the deployment in the repository (e.g. commit a deployment with different image in the container spec), verify ArgoCD capturing the change and changing the cluster state

View File

@ -1,4 +1,4 @@
HTTP Request-based Autoscaling on K8S using Prometheus and Keda on CloudFerro Cloud[](#http-request-based-autoscaling-on-k8s-using-prometheus-and-keda-on-brand-name "Permalink to this headline")
HTTP Request-based Autoscaling on K8S using Prometheus and Keda on CloudFerro Cloud[🔗](#http-request-based-autoscaling-on-k8s-using-prometheus-and-keda-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================
The Kubernetes pod autoscaler (HPA) natively utilizes CPU and RAM metrics as the default triggers for increasing or decreasing the number of pods. While this is often sufficient, there can be use cases where scaling on custom metrics is preferred.
@ -11,7 +11,7 @@ Note
We will use *NGINX web server* to demonstrate the app, and *NGINX ingress* to deploy it and collect metrics. Note that *NGINX web server* and *NGINX ingress* are two separate pieces of software, with two different purposes.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install NGINX ingress on Magnum cluster
@ -23,7 +23,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Deploy KEDA ScaledObject
> * Test with Locust
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -47,7 +47,7 @@ This article will introduce you to Helm charts on Kubernetes:
[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html.md)
Install NGINX ingress on Magnum cluster[](#install-nginx-ingress-on-magnum-cluster "Permalink to this headline")
Install NGINX ingress on Magnum cluster[🔗](#install-nginx-ingress-on-magnum-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------
Please type in the following commands to download the *ingress-nginx* Helm repo and then install the chart. Note we are using a custom namespace *ingress-nginx* as well as setting the options to enable Prometheus metrics.
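The commands themselves are elided in this hunk; the sketch below shows their likely shape. The chart repo URL and the `controller.metrics.*` values are assumptions based on the public *ingress-nginx* chart, and the `helm` calls are commented out because they need a live cluster:

```shell
# Values file enabling the controller's Prometheus metrics endpoint
cat > ingress-values.yaml <<'EOF'
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
EOF
# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm repo update
# helm install ingress-nginx ingress-nginx/ingress-nginx \
#   --namespace ingress-nginx --create-namespace -f ingress-values.yaml
grep -c 'prometheus\.io' ingress-values.yaml   # → 2
```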
@ -77,7 +77,7 @@ ingress-nginx-controller LoadBalancer 10.254.118.18 64.225.135.67 80:315
We get **64.225.135.67**. Instead of that value, use the EXTERNAL-IP value you get in your terminal after running the above command.
Install Prometheus[](#install-prometheus "Permalink to this headline")
Install Prometheus[🔗](#install-prometheus "Permalink to this headline")
-----------------------------------------------------------------------
In order to install Prometheus, please apply the following command on your cluster:
@ -89,7 +89,7 @@ kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
Note that this is Prometheus installation customized for NGINX Ingress and already installs to the *ingress-nginx* namespace by default, so no need to provide the namespace flag or create one.
Install Keda[](#install-keda "Permalink to this headline")
Install Keda[🔗](#install-keda "Permalink to this headline")
-----------------------------------------------------------
With below steps, create a separate namespace for Keda artifacts, download the repo and install the Keda-Core chart:
@ -104,7 +104,7 @@ helm install keda kedacore/keda --version 2.3.0 --namespace keda
```
Deploy a sample app[](#deploy-a-sample-app "Permalink to this headline")
Deploy a sample app[🔗](#deploy-a-sample-app "Permalink to this headline")
-------------------------------------------------------------------------
With the above steps completed, we can deploy a simple application. It will be an NGINX web server, serving a simple “Welcome to nginx!” page. Note that we create a deployment and then expose it as a service of type ClusterIP. Create a file *app-deployment.yaml* in your favorite editor:
@ -154,7 +154,7 @@ kubectl apply -f app-deployment.yaml -n ingress-nginx
We are deploying this application into the *ingress-nginx* namespace, which also hosts the ingress installation and Prometheus. For production scenarios you might want better isolation between application and infrastructure; that is, however, beyond the scope of this article.
Deploy our app ingress[](#deploy-our-app-ingress "Permalink to this headline")
Deploy our app ingress[🔗](#deploy-our-app-ingress "Permalink to this headline")
-------------------------------------------------------------------------------
Our application is already running and exposed in our cluster, but we want to also expose it publicly. For this purpose we will use NGINX ingress, which will also act as a proxy to register the request metrics. Create a file *app-ingress.yaml* with the following contents:
@ -204,7 +204,7 @@ After typing the IP address with the prefix (replace with your own floating IP w
![welcome_nginx.png](../_images/welcome_nginx.png)
Access Prometheus dashboard[](#access-prometheus-dashboard "Permalink to this headline")
Access Prometheus dashboard[🔗](#access-prometheus-dashboard "Permalink to this headline")
-----------------------------------------------------------------------------------------
To access the Prometheus dashboard, we can port-forward the running prometheus-server to our localhost. This can be useful for troubleshooting. We have the *prometheus-server* running as a *NodePort* service, which can be verified as shown below:
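A hedged sketch of the port-forward, plus a request-rate query assembled locally; the service name and the metric name `nginx_ingress_controller_requests` are assumptions for a default NGINX ingress setup, not taken from this article:

```shell
# Verify the service, then forward the Prometheus UI/API to localhost.
# Both calls need a live cluster, so they are commented out; run the
# port-forward in a separate terminal and leave it open:
# kubectl get svc prometheus-server -n ingress-nginx
# kubectl port-forward service/prometheus-server 9090:9090 -n ingress-nginx

# Assemble an example PromQL query URL locally:
query='sum(rate(nginx_ingress_controller_requests[1m]))'
url="http://localhost:9090/api/v1/query?query=${query}"
echo "$url"
# curl -s "$url"   # works only while the port-forward is running
```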
@ -231,7 +231,7 @@ Then enter *localhost:9090* in your browser, you will see the Prometheus dashboa
![prometheus-dashboard_9090.png](../_images/prometheus-dashboard_9090.png)
Deploy KEDA ScaledObject[](#deploy-keda-scaledobject "Permalink to this headline")
Deploy KEDA ScaledObject[🔗](#deploy-keda-scaledobject "Permalink to this headline")
-----------------------------------------------------------------------------------
Keda ScaledObject is a custom resource which will enable scaling our application based on custom metrics. In the YAML manifest we define what will be scaled (the nginx deployment), what are the conditions for scaling, and the definition and configuration of the trigger, in this case Prometheus. Prepare a file *scaled-object.yaml* with the following contents:
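The manifest itself is elided in this hunk. The sketch below is an illustrative reconstruction: every name, address, and threshold is an assumption rather than the article's exact content, and the field spelling `cooldownPeriod` follows the KEDA v2 spec:

```shell
# Hypothetical scaled-object.yaml scaling an 'nginx' deployment on a
# Prometheus request-rate query
cat > scaled-object.yaml <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nginx-scaledobject
spec:
  scaleTargetRef:
    name: nginx                # the deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  cooldownPeriod: 300          # seconds before scaling back down
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.ingress-nginx.svc.cluster.local:9090
        metricName: nginx_requests
        query: sum(rate(nginx_ingress_controller_requests[1m]))
        threshold: "10"
EOF
# kubectl apply -f scaled-object.yaml -n ingress-nginx
grep -q 'type: prometheus' scaled-object.yaml && echo ok
```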
@ -271,7 +271,7 @@ kubectl apply -f scaled-object.yaml -n ingress-nginx
```
Test with Locust[](#test-with-locust "Permalink to this headline")
Test with Locust[🔗](#test-with-locust "Permalink to this headline")
-------------------------------------------------------------------
We can now test whether the scaling works as expected. We will use *Locust* for this, which is a load testing tool. To quickly deploy *Locust* as LoadBalancer service type, enter the following commands:
@ -321,7 +321,7 @@ nginx-85b98978db-6zcdw 1/1 Running 0
```
Cooling down[](#cooling-down "Permalink to this headline")
Cooling down[🔗](#cooling-down "Permalink to this headline")
-----------------------------------------------------------
After hitting “Stop” in Locust, the pods will scale down to one replica, in line with the value of *coolDownPeriod* parameter, which is defined in the Keda ScaledObject. Its default value is 300 seconds. If you want to change it, use command
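The command itself is cut off by this hunk; a hedged sketch of such a change follows, where the object name, namespace, and `kubectl patch` usage are assumptions:

```shell
# Patch fragment lowering the cool-down period to 120 seconds
cat > cooldown-patch.yaml <<'EOF'
spec:
  cooldownPeriod: 120
EOF
# Applying it needs a live cluster, so the call is commented out:
# kubectl patch scaledobject nginx-scaledobject -n ingress-nginx \
#   --type merge --patch-file cooldown-patch.yaml
grep cooldownPeriod cooldown-patch.yaml
```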

View File

@ -1,15 +1,15 @@
How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum[](#how-to-access-kubernetes-cluster-post-deployment-using-kubectl-on-brand-name-openstack-magnum "Permalink to this headline")
How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum[🔗](#how-to-access-kubernetes-cluster-post-deployment-using-kubectl-on-brand-name-openstack-magnum "Permalink to this headline")
===================================================================================================================================================================================================================================
In this tutorial, you start with a freshly installed Kubernetes cluster on a CloudFerro Cloud OpenStack server and connect the main Kubernetes tool, **kubectl**, to the cloud.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * How to connect **kubectl** to the OpenStack Magnum server
> * How to access clusters with **kubectl**
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -53,7 +53,7 @@ No. 4 **Connect openstack client to the cloud**
Prepare **openstack** and **magnum** clients by executing *Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud* from article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html.md).
The Plan[](#the-plan "Permalink to this headline")
The Plan[🔗](#the-plan "Permalink to this headline")
---------------------------------------------------
> * Follow up the steps listed in Prerequisite No. 2 and install **kubectl** on the platform of your choice.
@ -62,7 +62,7 @@ The Plan[](#the-plan "Permalink to this headline")
You are then going to connect **kubectl** to the Cloud.
Step 1 Create directory to download the certificates[](#step-1-create-directory-to-download-the-certificates "Permalink to this headline")
Step 1 Create directory to download the certificates[🔗](#step-1-create-directory-to-download-the-certificates "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------
Create a new directory called *k8sdir* into which the certificates will be downloaded:
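The directory creation, together with the download step it prepares for, can be sketched as follows; the cluster name is a placeholder, and the `openstack coe cluster config` call assumes the clients are already connected to the cloud:

```shell
mkdir -p k8sdir
# Download certificates and the config file into it (commented out because it
# needs a connected OpenStack/Magnum client and an existing cluster):
# openstack coe cluster config --dir k8sdir k8s-cluster
ls -d k8sdir
```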
@ -90,7 +90,7 @@ Note
In Linux, a file may or may not have an extension, while on Windows, it must have an extension.
Step 2A Download Certificates From the Server using the CLI commands[](#step-2a-download-certificates-from-the-server-using-the-cli-commands "Permalink to this headline")
Step 2A Download Certificates From the Server using the CLI commands[🔗](#step-2a-download-certificates-from-the-server-using-the-cli-commands "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
You will use command
@ -158,7 +158,7 @@ This is the entire procedure in terminal window:
![download_config_cli.png](../_images/download_config_cli.png)
Step 2B Download Certificates From the Server using Horizon commands[](#step-2b-download-certificates-from-the-server-using-horizon-commands "Permalink to this headline")
Step 2B Download Certificates From the Server using Horizon commands[🔗](#step-2b-download-certificates-from-the-server-using-horizon-commands "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
You can download the config file from Horizon directly to your computer. First list the clusters with command **Container Infra** -> **Clusters**, find the cluster and click on the rightmost drop-down menu in its column:
@ -185,7 +185,7 @@ export KUBECONFIG=/home/dusko/k8sdir/k8s-cluster_config-1.yaml
Depending on your environment, you may need to open a new terminal window to make the above command work.
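A minimal sketch of the export, using the file name from this article with `$HOME` substituted for the literal home directory:

```shell
# Point kubectl at the downloaded config file
export KUBECONFIG="$HOME/k8sdir/k8s-cluster_config-1.yaml"
echo "$KUBECONFIG"
# kubectl get nodes   # needs the real config file and cluster access
```

To avoid repeating the export in every new terminal window, the line can be appended to your shell startup file (e.g. *~/.bashrc*).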
Step 3 Verify That kubectl Has Access to the Cloud[](#step-3-verify-that-kubectl-has-access-to-the-cloud "Permalink to this headline")
Step 3 Verify That kubectl Has Access to the Cloud[🔗](#step-3-verify-that-kubectl-has-access-to-the-cloud "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------
See basic data about the cluster with the following command:
@ -219,7 +219,7 @@ kubectl options
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
With **kubectl** operational, you can

View File

@ -1,4 +1,4 @@
How To Create API Server LoadBalancer for Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[](#how-to-create-api-server-loadbalancer-for-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
How To Create API Server LoadBalancer for Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[🔗](#how-to-create-api-server-loadbalancer-for-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================================
Load balancer can be understood both as
@ -8,7 +8,7 @@ Load balancer can be understood both as
There is an option to create a load balancer while creating the Kubernetes cluster, but you can also create the cluster without one. This article will show you how to access the cluster even if you did not specify a load balancer at creation time.
What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headline")
What We Are Going To Do[🔗](#what-we-are-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------
> * Create a cluster called NoLoadBalancer with one master node and no load balancer
@ -18,7 +18,7 @@ What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headlin
> * Use parameter **insecure-skip-tls-verify=true** to override server security
> * Verify that **kubectl** is working normally, which means that you have full access to the Kubernetes cluster
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -37,7 +37,7 @@ No. 4 **Connect to the Kubernetes Cluster in Order to Use kubectl**
Article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md) will show you how to connect your local machine to the existing Kubernetes cluster.
How To Enable or Disable Load Balancer for Master Nodes[](#how-to-enable-or-disable-load-balancer-for-master-nodes "Permalink to this headline")
How To Enable or Disable Load Balancer for Master Nodes[🔗](#how-to-enable-or-disable-load-balancer-for-master-nodes "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------
A default state for the Kubernetes cluster in CloudFerro Cloud OpenStack Magnum hosting is to have no load balancer set up in advance. You can decide to have a load balancer created together with the basic Kubernetes cluster by checking on option **Enable Load Balancer for Master Nodes** in window **Network** when creating a cluster through Horizon interface. (See **Prerequisite No. 3** for the complete procedure.)
@ -58,7 +58,7 @@ Regardless of the number of master nodes you have specified, checking this field
If you accept the default state of **unchecked**, no load balancer will be created. However, without any load balancer “in front” of the cluster, the cluster API is exposed only within the Kubernetes network. You save the cost of the load balancer, but the direct connection from your local machine to the cluster is lost.
One Master Node, No Load Balancer and the Problem It All Creates[](#one-master-node-no-load-balancer-and-the-problem-it-all-creates "Permalink to this headline")
One Master Node, No Load Balancer and the Problem It All Creates[🔗](#one-master-node-no-load-balancer-and-the-problem-it-all-creates "Permalink to this headline")
------------------------------------------------------------------------------------------------------------------------------------------------------------------
To show exactly what the problem is, use
@ -75,7 +75,7 @@ kubectl get nodes
but it will not work. If there were a load balancer “in front of the cluster”, it would; here there is none, so it will not. The rest of this article will show you how to make it work anyway, using the fact that the master node of the cluster has its own load balancer for kube-api.
Step 1 Create a Cluster With One Master Node and No Load Balancer[](#step-1-create-a-cluster-with-one-master-node-and-no-load-balancer "Permalink to this headline")
Step 1 Create a Cluster With One Master Node and No Load Balancer[🔗](#step-1-create-a-cluster-with-one-master-node-and-no-load-balancer "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Create cluster *NoLoadBalancer* as explained in Prerequisite No. 3. Let there be
@ -107,7 +107,7 @@ Addresses starting with 10.0… are usually reserved for local networks, meaning
![nodes_address.png](../_images/nodes_address.png)
Step 2 Create Floating IP for Master Node[](#step-2-create-floating-ip-for-master-node "Permalink to this headline")
Step 2 Create Floating IP for Master Node[🔗](#step-2-create-floating-ip-for-master-node "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------
Here are the instances that serve as nodes for that cluster:
@ -126,7 +126,7 @@ This is the result:
The IP address is **64.225.135.112**; you are going to use it later on to change the *config* file for access to the Kubernetes cluster.
Step 3 **Create config File for Kubernetes Cluster**[](#step-3-create-config-file-for-kubernetes-cluster "Permalink to this headline")
Step 3 **Create config File for Kubernetes Cluster**[🔗](#step-3-create-config-file-for-kubernetes-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------
You are now going to connect to *NoLoadBalancer* cluster in spite of it not having a load balancer from the very start. To that end, create a config file to connect to the cluster, with the following command:
@ -163,7 +163,7 @@ server: https://10.0.0.54:6443
```
Step 4 Swap Existing Floating IP Address for the Network Address[](#step-4-swap-existing-floating-ip-address-for-the-network-address "Permalink to this headline")
Step 4 Swap Existing Floating IP Address for the Network Address[🔗](#step-4-swap-existing-floating-ip-address-for-the-network-address "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Now back to Horizon interface and execute commands **Compute** -> **Instances** to see the addresses for master node of the *NoLoadBalancer* cluster:
@ -193,7 +193,7 @@ The line should look like this:
Save the edited file. In case of **nano**, those will be commands `Control-x`, `Y` and pressing `Enter` on the keyboard.
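The same edit can be scripted instead of done in **nano**. The sketch below uses the internal address and floating IP from this article's example, applied to a local stand-in for the config file:

```shell
# Start from a config line that still points at the internal address
printf 'server: https://10.0.0.54:6443\n' > config
# Swap the internal address for the floating IP of the master node
sed -i 's|https://10.0.0.54:6443|https://64.225.135.112:6443|' config
cat config   # → server: https://64.225.135.112:6443
```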
Step 4 Add Parameter insecure-skip-tls-verify=true to Make kubectl Work[](#step-4-add-parameter-insecure-skip-tls-verify-true-to-make-kubectl-work "Permalink to this headline")
Step 4 Add Parameter insecure-skip-tls-verify=true to Make kubectl Work[🔗](#step-4-add-parameter-insecure-skip-tls-verify-true-to-make-kubectl-work "Permalink to this headline")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Try again to activate kubectl and again it will fail. To make it work, add parameter **insecure-skip-tls-verify=true**:
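The parameter can be passed on the command line or written into the cluster entry of the config file. Both forms are sketched below with placeholder names; note that, per the usual kubeconfig rules, `certificate-authority-data` must be removed from the entry for the flag to be accepted:

```shell
# One-off, as a kubectl flag (needs a live cluster, so commented out):
# kubectl get nodes --insecure-skip-tls-verify=true

# Or persist it in the config file's cluster entry (illustrative snippet):
cat > cluster-entry.yaml <<'EOF'
clusters:
  - name: noloadbalancer
    cluster:
      insecure-skip-tls-verify: true
      server: https://64.225.135.112:6443
EOF
grep insecure cluster-entry.yaml
```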

View File

@ -1,7 +1,7 @@
How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon[](#how-to-install-openstack-and-magnum-clients-for-command-line-interface-to-brand-name-horizon "Permalink to this headline")
How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon[🔗](#how-to-install-openstack-and-magnum-clients-for-command-line-interface-to-brand-name-horizon "Permalink to this headline")
=================================================================================================================================================================================================================================
How To Issue Commands to the OpenStack and Magnum Servers[](#how-to-issue-commands-to-the-openstack-and-magnum-servers "Permalink to this headline")
How To Issue Commands to the OpenStack and Magnum Servers[🔗](#how-to-issue-commands-to-the-openstack-and-magnum-servers "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------
There are three ways of working with Kubernetes clusters within the OpenStack Magnum and Horizon modules:
@ -18,14 +18,14 @@ CLI commands are issued from desktop computer or server in the cloud. This appro
Both the Horizon and the CLI use HTTPS requests internally and in an interactive manner. You can, however, write your own software to automate and/or change the state of the server, in real time.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * How to install the CLI OpenStack and Magnum clients
> * How to connect the CLI to the Horizon server
> * Basic examples of using OpenStack and Magnum clients
Notes On Python Versions and Environments for Installation[](#notes-on-python-versions-and-environments-for-installation "Permalink to this headline")
Notes On Python Versions and Environments for Installation[🔗](#notes-on-python-versions-and-environments-for-installation "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------
OpenStack is written in Python, so you first need to install a working Python environment and then install the OpenStack clients. The official documentation may still reference Python 2.7, but you will most likely install a 3.x version of Python; during installation, adjust the Python version numbers mentioned in the documentation accordingly.
@ -42,7 +42,7 @@ Note
If you decide to install Python and the OpenStack clients on a virtual machine, you will need SSH keys in order to be able to enter the working environment. See [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html.md).
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -71,7 +71,7 @@ No. 5 **Connect openstack command to the cloud**
After the successful installation of **openstack** command, it should be connected to the cloud. Follow this article for technical details: [How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication](../accountmanagement/How-to-activate-OpenStack-CLI-access-to-CloudFerro-Cloud-cloud-using-one-or-two-factor-authentication.html.md).
Step 1 Install the CLI for Kubernetes on OpenStack Magnum[](#step-1-install-the-cli-for-kubernetes-on-openstack-magnum "Permalink to this headline")
Step 1 Install the CLI for Kubernetes on OpenStack Magnum[🔗](#step-1-install-the-cli-for-kubernetes-on-openstack-magnum "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------
In this step, you are going to install clients for commands **openstack** and **coe**, from modules OpenStack and Magnum, respectively.
@ -92,7 +92,7 @@ pip install python-magnumclient
```
Step 2 How to Use the OpenStack Client[](#step-2-how-to-use-the-openstack-client "Permalink to this headline")
Step 2 How to Use the OpenStack Client[🔗](#step-2-how-to-use-the-openstack-client "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------
In this step, you are going to start using the OpenStack client you have installed and connected to the cloud.
@ -109,7 +109,7 @@ The preferred way, however, is typing the keyword **openstack**, followed by par
OpenStack commands may have dozens of parameters, so it is better to compose the command in a separate text editor and then copy and paste it into the terminal.
The Help Command[](#the-help-command "Permalink to this headline")
The Help Command[🔗](#the-help-command "Permalink to this headline")
-------------------------------------------------------------------
To learn about the available commands and their parameters, type **help** after the command. If applied to the keyword **openstack** itself, it will write out a very long list of commands, which may come useful as an orientation. It may start out like this:
@ -144,7 +144,7 @@ openstack network list
![network_list.png](../_images/network_list.png)
Step 4 How to Use the Magnum Client[](#step-4-how-to-use-the-magnum-client "Permalink to this headline")
Step 4 How to Use the Magnum Client[🔗](#step-4-how-to-use-the-magnum-client "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
The OpenStack command for the server is **openstack**, but for Magnum the command is not **magnum**, as one might expect, but **coe**, for *container orchestration engine*. Therefore, the commands for clusters will always start with **openstack coe**.
@ -177,7 +177,7 @@ after clicking on **Container Infra** => **Clusters**.
Prerequisite No. 5 offers more technical info about the Magnum client.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
In this tutorial you have

View File

@ -1,9 +1,9 @@
How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum[](#how-to-use-command-line-interface-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")
How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum[🔗](#how-to-use-command-line-interface-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")
=========================================================================================================================================================================================================================
In this article you will use the Command Line Interface (CLI) to speed up testing and creation of Kubernetes clusters on OpenStack Magnum servers.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * The advantages of using CLI over the Horizon graphical interface
@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Reasons why the cluster may fail to create
> * CLI commands to delete a cluster
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -46,12 +46,12 @@ No. 7 **Autohealing of Kubernetes Clusters**
To learn more about autohealing of Kubernetes clusters, follow this official article [What is Magnum Autohealer?](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/magnum-auto-healer/using-magnum-auto-healer.md).
The Advantages of Using the CLI[](#the-advantages-of-using-the-cli "Permalink to this headline")
The Advantages of Using the CLI[🔗](#the-advantages-of-using-the-cli "Permalink to this headline")
-------------------------------------------------------------------------------------------------
You can use the CLI and Horizon interface interchangeably, but there are at least three advantages to using the CLI.
### Reproduce Commands Through Cut & Paste[](#reproduce-commands-through-cut-paste "Permalink to this headline")
### Reproduce Commands Through Cut & Paste[🔗](#reproduce-commands-through-cut-paste "Permalink to this headline")
Here is a command to list flavors in the system
@ -72,7 +72,7 @@ and only then get the list of flavors to choose from:
A bonus is that keeping commands in a text editor automatically creates documentation for the server and cluster.
### CLI Commands Can Be Automated[](#cli-commands-can-be-automated "Permalink to this headline")
### CLI Commands Can Be Automated[🔗](#cli-commands-can-be-automated "Permalink to this headline")
You can use available automation. The result of the following Ubuntu pipeline is the URL that **kubectl** uses to communicate with the Kubernetes cluster:
@ -106,11 +106,11 @@ awk '/ api_address /{print $4}')
is searching for the line starting with *api\_address* and extracting its value *https://64.225.132.135:6443*. The final result is exported to the system variable KUBERNETES\_URL, thus automatically setting it up for use by Kubernetes cluster command **kubectl** when accessing the cloud.
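The extraction can be reproduced offline against a captured line of `openstack coe cluster show` output, using the address from this article's example:

```shell
# Simulated row from the cluster-show table
sample='| api_address          | https://64.225.132.135:6443          |'
# Fields split on whitespace: $1='|', $2='api_address', $3='|', $4=the URL
KUBERNETES_URL=$(printf '%s\n' "$sample" | awk '/ api_address /{print $4}')
export KUBERNETES_URL
echo "$KUBERNETES_URL"   # → https://64.225.132.135:6443
```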
### CLI Yields Access to All of the Existing OpenStack and Magnum Parameters[](#cli-yields-access-to-all-of-the-existing-openstack-and-magnum-parameters "Permalink to this headline")
### CLI Yields Access to All of the Existing OpenStack and Magnum Parameters[🔗](#cli-yields-access-to-all-of-the-existing-openstack-and-magnum-parameters "Permalink to this headline")
CLI commands offer access to a larger set of parameters than is available through Horizon. For instance, in Horizon the default length of time allowed for creation of a cluster is 60 minutes, while in the CLI you can set it to a value of your choice.
### Debugging OpenStack and Magnum Commands[](#debugging-openstack-and-magnum-commands "Permalink to this headline")
### Debugging OpenStack and Magnum Commands[🔗](#debugging-openstack-and-magnum-commands "Permalink to this headline")
To see what is actually happening behind the scenes, when executing client commands, add parameter **debug**:
@ -121,7 +121,7 @@ openstack coe cluster list --debug
The output will be several screens long, consisting of GET and POST web calls, with dozens of parameters shown on screen. (The output is too voluminous to reproduce here.)
How to Enter OpenStack Commands[](#how-to-enter-openstack-commands "Permalink to this headline")
How to Enter OpenStack Commands[🔗](#how-to-enter-openstack-commands "Permalink to this headline")
-------------------------------------------------------------------------------------------------
Note
@ -183,7 +183,7 @@ Warning
If you are new to Kubernetes, please at first create clusters directly from the default cluster template.
Once you have more experience, you can start creating your own cluster templates; here is how to do it using the CLI.
OpenStack Command for Creation of Cluster[](#openstack-command-for-creation-of-cluster "Permalink to this headline")
OpenStack Command for Creation of Cluster[🔗](#openstack-command-for-creation-of-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------
In this step you can create a new cluster using either the default cluster template or any of the templates that you have already created.
@ -265,7 +265,7 @@ Copy and paste the above command into the terminal where OpenStack and Magnum cl
![cli_newcluster.png](../_images/cli_newcluster.png)
How To Check Upon the Status of the Cluster[](#how-to-check-upon-the-status-of-the-cluster "Permalink to this headline")
How To Check Upon the Status of the Cluster[🔗](#how-to-check-upon-the-status-of-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------
The command to show the status of clusters is
@ -302,7 +302,7 @@ Note
It is out of the scope of this article to describe how to delete elements through the Horizon interface. Make sure that sufficient quotas are available before creating a new cluster.
Failure to Create a Cluster[](#failure-to-create-a-cluster "Permalink to this headline")
Failure to Create a Cluster[🔗](#failure-to-create-a-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------
There are many reasons why a cluster may fail to create. Maybe the state of system quotas is not optimal, maybe there is a mismatch between the parameters of the cluster and the parameters in the rest of the cloud. For example, if you base the creation of the cluster on the default cluster template, it will use a Fedora distribution and require 10 GiB of memory. That may clash with *docker-volume-size* if it was set up to be larger than 10 GiB.
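A quick way to diagnose such a failure is to ask Magnum for the recorded failure reason before deleting and retrying. A sketch (the cluster name is a placeholder):

```shell
# When a cluster ends up in CREATE_FAILED state, Magnum records why;
# print only the status and the reason fields
openstack coe cluster show my-cluster -f value -c status -c status_reason
```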
@ -319,7 +319,7 @@ If the creation process failed prematurely, then
> * change parameters and
> * run the cluster creation command again.
CLI Commands to Delete a Cluster[](#cli-commands-to-delete-a-cluster "Permalink to this headline")
CLI Commands to Delete a Cluster[🔗](#cli-commands-to-delete-a-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------
If the cluster failed to create, it is still taking up system resources. Delete it with a command such as
@ -360,7 +360,7 @@ Deleting clusters that were not installed properly has freed up a significant am
In this step you have successfully deleted the clusters whose creation stopped prematurely, thus paving the way for the creation of the next cluster under slightly different circumstances.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
In this tutorial, you have used the CLI commands to generate cluster templates as well as clusters themselves. You have also seen how to free up system resources and try again when the cluster creation process fails.


@ -1,15 +1,15 @@
How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum[](#how-to-create-a-kubernetes-cluster-using-brand-name-openstack-magnum "Permalink to this headline")
How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum[🔗](#how-to-create-a-kubernetes-cluster-using-brand-name-openstack-magnum "Permalink to this headline")
=================================================================================================================================================================================
In this tutorial, you will start with an empty Horizon screen and end up running a full Kubernetes cluster.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Creating a new Kubernetes cluster using one of the default cluster templates
> * Visual interpretation of created networks and Kubernetes cluster nodes
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -28,7 +28,7 @@ An SSH key-pair created in OpenStack dashboard. To create it, follow this articl
The key pair created in that article is called “sshkey”. You will use it as one of the parameters for creation of the Kubernetes cluster.
Step 1 Create New Cluster Screen[](#step-1-create-new-cluster-screen "Permalink to this headline")
Step 1 Create New Cluster Screen[🔗](#step-1-create-new-cluster-screen "Permalink to this headline")
---------------------------------------------------------------------------------------------------
Click on **Container Infra** and then on **Clusters**.
@ -85,7 +85,7 @@ This is what the screen looks like when all the data have been entered:
Click on lower right button **Next** or on option **Size** from the left main menu of the screen to proceed to the next step of defining a Kubernetes cluster.
Step 2 Define Master and Worker Nodes[](#step-2-define-master-and-worker-nodes "Permalink to this headline")
Step 2 Define Master and Worker Nodes[🔗](#step-2-define-master-and-worker-nodes "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------
In general terms, *master nodes* are used to host the internal infrastructure of the cluster, while the *worker nodes* are used to host the K8s applications.
@ -130,7 +130,7 @@ Here is what the screen **Size** looks like when all the data are entered:
To proceed, click on lower right button **Next** or on option **Network** from the left main menu.
Step 3 Defining Network and LoadBalancer[](#step-3-defining-network-and-loadbalancer "Permalink to this headline")
Step 3 Defining Network and LoadBalancer[🔗](#step-3-defining-network-and-loadbalancer "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
This is the last of mandatory screens and the blue **Submit** button in the lower right corner is now active. (If it is not, use screen button **Back** to fix values in previous screens.)
@ -171,7 +171,7 @@ Use of ingress is a more advanced feature, related to load balancing the traffic
If you are just starting with Kubernetes, you will probably not require this feature immediately, so you can leave this option out.
Step 4 Advanced options[](#step-4-advanced-options "Permalink to this headline")
Step 4 Advanced options[🔗](#step-4-advanced-options "Permalink to this headline")
---------------------------------------------------------------------------------
**Option Management**
@ -194,7 +194,7 @@ Labels can change how the cluster creation is performed. There is a set of label
If you **turn on** the field **I do want to override Template and Workflow Labels** and if you use any of the *Template and Workflow Labels* by name, they will be set up the way you specified. Use this option very rarely, if at all, and only if you are sure of what you are doing.
Step 5 Forming of the Cluster[](#step-5-forming-of-the-cluster "Permalink to this headline")
Step 5 Forming of the Cluster[🔗](#step-5-forming-of-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------
Once you click on the **Submit** button, OpenStack will start creating the Kubernetes cluster for you. It will show a message with a green background in the upper right corner of the window, stating that the creation of the cluster has been started.
@ -213,7 +213,7 @@ Click on the name of the cluster, *Kubernetes*, and see what it will look like i
![creation_in_progress2.png](../_images/creation_in_progress2.png)
Step 6 Review cluster state[](#step-6-review-cluster-state "Permalink to this headline")
Step 6 Review cluster state[🔗](#step-6-review-cluster-state "Permalink to this headline")
-----------------------------------------------------------------------------------------
Here is what OpenStack Magnum created for you as the result of filling in the data in those three screens:
@ -238,7 +238,7 @@ Node names start with *kubernetes* because that is the name of the cluster in lo
Resources tied up by one attempt to create a cluster are **not** automatically reclaimed when you again attempt to create a new cluster. Therefore, several attempts in a row will lead to a stalemate, in which no cluster can be formed until all of the tied-up resources are freed.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You now have a fully operational Kubernetes cluster. You can


@ -1,9 +1,9 @@
How to create Kubernetes cluster using Terraform on CloudFerro Cloud[](#how-to-create-kubernetes-cluster-using-terraform-on-brand-name "Permalink to this headline")
How to create Kubernetes cluster using Terraform on CloudFerro Cloud[🔗](#how-to-create-kubernetes-cluster-using-terraform-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================
In this article we demonstrate using [Terraform](https://www.terraform.io/) to deploy an OpenStack Magnum Kubernetes cluster on CloudFerro Cloud.
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting account**
@ -40,7 +40,7 @@ Have Terraform installed locally or on a cloud VM - installation guidelines alon
After you finish working through that article, you will have access to the cloud via an active **openstack** command. Also, special environmental (**env**) variables (**OS\_USERNAME**, **OS\_PASSWORD**, **OS\_AUTH\_URL** and others) will be set up so that various programs can use them, Terraform being the prime target here.
Define provider for Terraform[](#define-provider-for-terraform "Permalink to this headline")
Define provider for Terraform[🔗](#define-provider-for-terraform "Permalink to this headline")
---------------------------------------------------------------------------------------------
Terraform uses the notion of *provider*, which represents your concrete cloud environment and covers authentication. CloudFerro Cloud clouds are built complying with OpenStack technology and OpenStack is one of the standard types of providers for Terraform.
@ -80,7 +80,7 @@ The **auth\_url** is the only configuration option that shall be provided in the
Having this provider spec allows us to create a cluster in the following steps, but can also be reused to create other resources in your OpenStack environment e.g. virtual machines, volumes and many others.
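As a sketch, a minimal provider file along those lines could look as follows; the endpoint URL below is an assumption, so take the actual value of **OS\_AUTH\_URL** from your RC file:

```hcl
# provider.tf - minimal sketch; only auth_url is set explicitly,
# the remaining credentials come from the environment variables
# exported by sourcing the RC file
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  auth_url = "https://keystone.cloudferro.com:5000/v3" # assumed endpoint
}
```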
Define cluster resource in Terraform[](#define-cluster-resource-in-terraform "Permalink to this headline")
Define cluster resource in Terraform[🔗](#define-cluster-resource-in-terraform "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
The second step is to define the exact specification of the resource that we want to create with Terraform. In our case, we want to create an OpenStack Magnum cluster. In Terraform terminology, it will be an instance of the **openstack\_containerinfra\_cluster\_v1** resource type. To proceed, create a file **cluster.tf** which contains the specification of our cluster:
@ -132,7 +132,7 @@ In our example we operate on WAW3-2 cloud, where flavor **hmad.medium** is avail
The above configuration reflects a cluster where a *loadbalancer* is placed in front of the master nodes, and where this loadbalancer's flavor is **HA-large**. Customizing this default, similarly to other more advanced defaults, would require creating a custom Magnum template, which is beyond the scope of this article.
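For illustration only, a minimal **cluster.tf** of this kind might be sketched as below; the cluster name, template name, key pair and node counts are placeholders, and the actual file from the article may differ:

```hcl
# cluster.tf - illustrative sketch with placeholder values

# Look up an existing Magnum cluster template by name (assumed name)
data "openstack_containerinfra_clustertemplate_v1" "template" {
  name = "k8s-latest"
}

resource "openstack_containerinfra_cluster_v1" "k8s_cluster" {
  name                = "my-terraform-cluster"
  cluster_template_id = data.openstack_containerinfra_clustertemplate_v1.template.id
  master_count        = 1
  node_count          = 2
  keypair             = "sshkey"
}
```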
Apply the configurations and create the cluster[](#apply-the-configurations-and-create-the-cluster "Permalink to this headline")
Apply the configurations and create the cluster[🔗](#apply-the-configurations-and-create-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------
Once both Terraform configurations described in previous steps are defined, we can apply them to create our cluster.
@ -174,7 +174,7 @@ The final lines of the output after successfully provisioning the cluster, shoul
![image-2024-6-18_18-1-53.png](../_images/image-2024-6-18_18-1-53.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Terraform can also be used to deploy additional applications to our cluster, e.g. using the Helm provider for Terraform. Check the Terraform documentation for more details.


@ -1,4 +1,4 @@
How to install Rancher RKE2 Kubernetes on CloudFerro Cloud[](#how-to-install-rancher-rke2-kubernetes-on-brand-name "Permalink to this headline")
How to install Rancher RKE2 Kubernetes on CloudFerro Cloud[🔗](#how-to-install-rancher-rke2-kubernetes-on-brand-name "Permalink to this headline")
=================================================================================================================================================
[RKE2](https://docs.rke2.io/) - Rancher Kubernetes Engine version 2 - is a Kubernetes distribution provided by SUSE. Running a self-managed RKE2 cluster on CloudFerro Cloud is a viable option, especially for those seeking smooth integration with the Rancher platform and customization options.
@ -11,7 +11,7 @@ An RKE2 cluster can be provisioned from Rancher GUI. However, in this article we
We also illustrate the coding techniques used, in case you want to enhance the RKE2 implementation further.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Perform the preliminary setup
@ -29,7 +29,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
The code is tested on Ubuntu 22.04.
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -96,7 +96,7 @@ One of the files downloaded from the above link will be **variables.tf**. It con
![customize_the_cloud.png](../_images/customize_the_cloud.png)
Step 1 Perform the preliminary setup[](#step-1-perform-the-preliminary-setup "Permalink to this headline")
Step 1 Perform the preliminary setup[🔗](#step-1-perform-the-preliminary-setup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
Our objective is to create a Kubernetes cluster which runs in the cloud environment. RKE2 software packages will be installed on cloud virtual machines playing the roles of Kubernetes master and worker nodes. Also, several other OpenStack resources will be created along the way.
@ -110,7 +110,7 @@ As part of the preliminary setup to provision these resources we will:
Here we provide the instructions to create the project, the credentials and the key pair, and to source the RC file locally.
### Preparation step 1 Create new project[](#preparation-step-1-create-new-project "Permalink to this headline")
### Preparation step 1 Create new project[🔗](#preparation-step-1-create-new-project "Permalink to this headline")
The first step is to create a new project using the Horizon UI. Click on Identity → Projects. Fill in the name of the project on the first tab:
@ -124,7 +124,7 @@ Then click on “Create Project”. Once the project is created, switch to the c
![image-2024-7-23_16-5-28.png](../_images/image-2024-7-23_16-5-28.png)
### Preparation step 2 Create application credentials[](#preparation-step-2-create-application-credentials "Permalink to this headline")
### Preparation step 2 Create application credentials[🔗](#preparation-step-2-create-application-credentials "Permalink to this headline")
The next step is to create an application credential that will be used to authenticate the OpenStack Cloud Controller Manager (used for automated load balancer provisioning). To create one, go to menu **Identity** → **Application Credentials**. Fill in the form as per the example below, passing all available roles (“member”, “load-balancer\_member”, “creator”, “reader”) to this credential. Set the expiry date to a date in the future.
@ -136,15 +136,15 @@ After clicking on **Create Application Credential**, copy both application ID an
Prerequisite No. 7 contains a complete guide to application credentials.
### Preparation step 3 Keypair operational[](#preparation-step-3-keypair-operational "Permalink to this headline")
### Preparation step 3 Keypair operational[🔗](#preparation-step-3-keypair-operational "Permalink to this headline")
Before continuing, ensure you have a keypair available. If you already had a keypair in your main project, this keypair will be available also for the newly created project. If you do not have one yet, create it from the left menu **Project** → **Compute** → **Key Pairs**. For additional details, visit Prerequisite No. 6.
### Preparation step 4 Authenticate to the newly formed project[](#preparation-step-4-authenticate-to-the-newly-formed-project "Permalink to this headline")
### Preparation step 4 Authenticate to the newly formed project[🔗](#preparation-step-4-authenticate-to-the-newly-formed-project "Permalink to this headline")
Lastly, download the RC file corresponding to the new project from Horizon GUI, then source this file in your local Linux terminal. See Prerequisite No. 4.
Step 2 Use Terraform configuration for RKE2 from CloudFerros GitHub repository[](#step-2-use-terraform-configuration-for-rke2-from-cloudferro-s-github-repository "Permalink to this headline")
Step 2 Use Terraform configuration for RKE2 from CloudFerros GitHub repository[🔗](#step-2-use-terraform-configuration-for-rke2-from-cloudferro-s-github-repository "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We added the folder **rke2-terraform** to CloudFerro's [K8s-samples GitHub repository](https://github.com/CloudFerro/K8s-samples/tree/main/rke2-terraform), from Prerequisite No. 11. This project includes configuration files to provision an RKE2 cluster on CloudFerro clouds and can be used as a starter pack for further customization to your specific requirements.
@ -178,7 +178,7 @@ cloud-init-workers.yml.tpl
One of the primary functions of each *cloud-init* file is to install rke2 on both master and worker nodes.
Step 3 Provision an RKE2 cluster[](#step-3-provision-an-rke2-cluster "Permalink to this headline")
Step 3 Provision an RKE2 cluster[🔗](#step-3-provision-an-rke2-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------
Let's provision an RKE2 Kubernetes cluster now. This will consist of the following steps:
@ -208,7 +208,7 @@ Note
A highly available control plane is currently not covered by this repository. Also, setting the number of master nodes to a value other than 1 is **not** supported.
### Enter data in file terraform.tfvars[](#enter-data-in-file-terraform-tfvars "Permalink to this headline")
### Enter data in file terraform.tfvars[🔗](#enter-data-in-file-terraform-tfvars "Permalink to this headline")
The next step is to create file **terraform.tfvars**, with the following contents:
@ -236,7 +236,7 @@ Get application\_credential\_id
Get application\_credential\_secret
: The same, only for secret.
### Run Terraform to provision RKE2 cluster[](#run-terraform-to-provision-rke2-cluster "Permalink to this headline")
### Run Terraform to provision RKE2 cluster[🔗](#run-terraform-to-provision-rke2-cluster "Permalink to this headline")
This completes the setup part. We can now run the standard Terraform commands - **init**, **plan** and **apply** - to create our RKE2 cluster. The commands should be executed in the order provided below. Type **yes** when asked to confirm the steps planned by Terraform.
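The standard workflow, run from the directory containing the **.tf** files, is:

```shell
terraform init    # download the OpenStack provider and initialize the project
terraform plan    # preview the resources Terraform will create
terraform apply   # provision the RKE2 cluster; type "yes" to confirm
```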
@ -269,7 +269,7 @@ We can see that the cluster is provisioned correctly in our case, with both mast
![image-2024-7-24_14-9-18.png](../_images/image-2024-7-24_14-9-18.png)
Step 4 Demonstrate cloud-native integration covered by the repo[](#step-4-demonstrate-cloud-native-integration-covered-by-the-repo "Permalink to this headline")
Step 4 Demonstrate cloud-native integration covered by the repo[🔗](#step-4-demonstrate-cloud-native-integration-covered-by-the-repo "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
We can verify the automated provisioning of load balancers and of a public floating IP by exposing a service of type LoadBalancer. The following **kubectl** commands will deploy and expose an **nginx** server in our RKE2 cluster's default namespace:
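A minimal sketch of such commands is below; the deployment name and image are our choices for illustration, not mandated by the repository:

```shell
# Deploy nginx and expose it through a LoadBalancer service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80

# Watch until EXTERNAL-IP changes from <pending> to a floating IP
kubectl get service nginx --watch
```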
@ -303,7 +303,7 @@ Ultimately, we can check the service is running as a public service in our brows
![image-2024-7-24_15-30-26.png](../_images/image-2024-7-24_15-30-26.png)
Implementation details[](#implementation-details "Permalink to this headline")
Implementation details[🔗](#implementation-details "Permalink to this headline")
-------------------------------------------------------------------------------
Explaining all of the techniques that went into the production of the RKE2 repository from Prerequisite No. 11 is out of the scope of this article. However, here is an illustration of how at least one feature was implemented.
@ -366,7 +366,7 @@ openstack-cloud-controller-manager-bz7zt 1/1 Running 1 (4
```
Further customization[](#further-customization "Permalink to this headline")
Further customization[🔗](#further-customization "Permalink to this headline")
-----------------------------------------------------------------------------
Depending on your use case, further customization to the provided sample repository will be required to tune the Terraform configurations to provision an RKE2 cluster. We suggest evaluating the following enhancements:
@ -379,7 +379,7 @@ Depending on your use case, further customization to the provided sample reposit
To implement these features, you would need to simultaneously adjust the definitions of both Terraform and Kubernetes resources. Covering those steps is, therefore, outside the scope of this article.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
In this article, you have created a proper Kubernetes solution using RKE2 cluster as a foundation.


@ -1,17 +1,17 @@
Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud[](#implementing-ip-whitelisting-for-load-balancers-with-security-groups-on-brand-name "Permalink to this headline")
Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud[🔗](#implementing-ip-whitelisting-for-load-balancers-with-security-groups-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================================================
In this article we describe how to use commands in Horizon, CLI and Terraform to secure load balancers for Kubernetes clusters in OpenStack by implementing IP whitelisting.
What Are We Going To Do[](#what-are-we-going-to-do "Permalink to this headline")
What Are We Going To Do[🔗](#what-are-we-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------
Introduction[](#introduction "Permalink to this headline")
Introduction[🔗](#introduction "Permalink to this headline")
-----------------------------------------------------------
Load balancers without proper restrictions are vulnerable to unauthorized access. By implementing IP whitelisting, only specified IP addresses are permitted to access the load balancer. You decide from which IP addresses the load balancers in particular, and the Kubernetes cluster in general, can be accessed.
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -61,7 +61,7 @@ For complete introduction and installation of Terrafom on OpenStack see article
To use Terraform in this capacity, you will need to authenticate to the cloud using application credentials with **unrestricted** access. Check article [How to generate or use Application Credentials via CLI on CloudFerro Cloud](../cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-CloudFerro-Cloud.html.md)
Horizon: Whitelisting Load Balancers[](#horizon-whitelisting-load-balancers "Permalink to this headline")
Horizon: Whitelisting Load Balancers[🔗](#horizon-whitelisting-load-balancers "Permalink to this headline")
----------------------------------------------------------------------------------------------------------
We will whitelist load balancers by restricting the relevant ports in their security groups. In Horizon, use command **Network** > **Load Balancers** to see the list of load balancers:
@ -94,7 +94,7 @@ Choose which one you are going to edit; alternatively, you can create a new secu
Save and apply the changes.
### Verification[](#verification "Permalink to this headline")
### Verification[🔗](#verification "Permalink to this headline")
To confirm the configuration:
@ -102,7 +102,7 @@ To confirm the configuration:
2. View the security groups applied to the load balancer's associated instances.
3. Ensure the newly added rule is visible.
CLI: Whitelisting Load Balancers[](#cli-whitelisting-load-balancers "Permalink to this headline")
CLI: Whitelisting Load Balancers[🔗](#cli-whitelisting-load-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------
The OpenStack CLI provides a command-line method for implementing IP whitelisting.
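As an illustrative sketch, the following commands create a dedicated security group and allow HTTPS traffic from a single trusted range only; the group name, port and CIDR are placeholders to be replaced with your own values:

```shell
# Create a security group for the whitelisted traffic
openstack security group create lb-whitelist

# Allow TCP/443 only from the trusted address range
openstack security group rule create \
  --protocol tcp \
  --dst-port 443 \
  --remote-ip 203.0.113.0/24 \
  lb-whitelist
```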
@ -159,7 +159,7 @@ openstack server add security group <INSTANCE_ID> <SECURITY_GROUP_NAME>
```
### Verification[](#id1 "Permalink to this headline")
### Verification[🔗](#id1 "Permalink to this headline")
Verify the applied security group rules:
@ -175,7 +175,7 @@ openstack server show <INSTANCE_ID>
```
Terraform: Whitelisting Load Balancers[](#terraform-whitelisting-load-balancers "Permalink to this headline")
Terraform: Whitelisting Load Balancers[🔗](#terraform-whitelisting-load-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------
Terraform is an Infrastructure as Code (IaC) tool that can automate the process of configuring IP whitelisting.
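A minimal sketch of such a configuration, using the **openstack\_networking\_secgroup\_v2** and **openstack\_networking\_secgroup\_rule\_v2** resource types from the OpenStack provider, could be as follows (the group name, port and CIDR are placeholders):

```hcl
# Security group allowing HTTPS only from one trusted CIDR (sketch)
resource "openstack_networking_secgroup_v2" "lb_whitelist" {
  name        = "lb-whitelist"
  description = "Allow LB traffic from trusted addresses only"
}

resource "openstack_networking_secgroup_rule_v2" "allow_trusted" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "203.0.113.0/24" # replace with your own CIDR
  security_group_id = openstack_networking_secgroup_v2.lb_whitelist.id
}
```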
@ -243,7 +243,7 @@ openstack security group show <SECURITY_GROUP_ID>
```
State of Security: Before and after whitelisting the balancers[](#state-of-security-before-and-after-whitelisting-the-balancers "Permalink to this headline")
State of Security: Before and after whitelisting the balancers[🔗](#state-of-security-before-and-after-whitelisting-the-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:
@ -251,7 +251,7 @@ Before implementing IP whitelisting, the load balancer accepts traffic from all
> * Only specified IPs can access the load balancer.
> * Unauthorized access attempts are denied.
### Verification Tools[](#verification-tools "Permalink to this headline")
### Verification Tools[🔗](#verification-tools "Permalink to this headline")
Various tools can be used to verify that the protection is in place and active:
@ -267,21 +267,21 @@ curl
Wireshark
: (free): For packet-level analysis.
### Testing with nmap[](#testing-with-nmap "Permalink to this headline")
### Testing with nmap[🔗](#testing-with-nmap "Permalink to this headline")
```
nmap -p <PORT> <LOAD_BALANCER_IP>
```
### Testing with http and curl[](#testing-with-http-and-curl "Permalink to this headline")
### Testing with http and curl[🔗](#testing-with-http-and-curl "Permalink to this headline")
```
curl http://<LOAD_BALANCER_IP>
```
### Testing with curl and livez[](#testing-with-curl-and-livez "Permalink to this headline")
### Testing with curl and livez[🔗](#testing-with-curl-and-livez "Permalink to this headline")
This would be a typical response before changes:
@ -329,7 +329,7 @@ curl: (28) Connection timed out after 5000 milliseconds
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Compare with articles:


@ -1,4 +1,4 @@
Install GitLab on CloudFerro Cloud Kubernetes[](#install-gitlab-on-brand-name-kubernetes "Permalink to this headline")
Install GitLab on CloudFerro Cloud Kubernetes[🔗](#install-gitlab-on-brand-name-kubernetes "Permalink to this headline")
=======================================================================================================================
Source control is essential for building professional software. Git has become synonymous with modern source control, and GitLab is one of the most popular tools based on Git.
@ -7,7 +7,7 @@ GitLab can be deployed as your local instance to ensure privacy of the stored ar
In this article, we will install GitLab on a Kubernetes cluster on CloudFerro Cloud.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Create a Floating IP and associate the A record in DNS
@ -15,7 +15,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Install GitLab Helm chart
> * Verify the installation
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -55,7 +55,7 @@ No. 5 **Proof of concept vs. production ready version of GitLab client**
In Step 3 below, you will create file **my-values-gitlab.yaml** to define the default configuration of the GitLab client. The values chosen there will provide for a solid quick-start, perhaps in the “proof of concept” phase of development. To customize for production, this reference will come in handy: <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v7.11.1/values.yaml?ref_type=tags>
Step 1 Create a Floating IP and associate the A record in DNS[](#step-1-create-a-floating-ip-and-associate-the-a-record-in-dns "Permalink to this headline")
Step 1 Create a Floating IP and associate the A record in DNS[🔗](#step-1-create-a-floating-ip-and-associate-the-a-record-in-dns "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Our GitLab client will run a web application (GUI) exposed as a Kubernetes service. We will use GitLab's Helm chart, which will, as part of GitLab's installation,
@ -71,7 +71,7 @@ After closing the form, your new floating IP will appear on the list and let us
![a_record_in_dns.png](../_images/a_record_in_dns.png)
Step 2 Apply preliminary configuration[](#step-2-apply-preliminary-configuration "Permalink to this headline")
Step 2 Apply preliminary configuration[🔗](#step-2-apply-preliminary-configuration "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------
To ensure compatibility with the Kubernetes setup on CloudFerro Cloud clouds, the Service Accounts provisioned by the GitLab Helm chart must have sufficient access to read scaling metrics. This can be done by creating an appropriate *rolebinding*.
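A rolebinding of this general shape accomplishes that. The sketch below is illustrative only — the namespace, subject and role names are assumptions, not the article's exact manifest:

```yaml
# Hedged sketch of a rolebinding granting GitLab's service accounts
# read access to cluster metrics; names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-rolebinding
  namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view        # a built-in read-only ClusterRole
subjects:
- kind: Group
  name: system:serviceaccounts:gitlab   # all service accounts in the gitlab namespace
  apiGroup: rbac.authorization.k8s.io
```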
@ -109,7 +109,7 @@ kubectl apply -f gitlab-rolebinding.yaml
```
Step 3 Install GitLab Helm chart[](#step-3-install-gitlab-helm-chart "Permalink to this headline")
Step 3 Install GitLab Helm chart[🔗](#step-3-install-gitlab-helm-chart "Permalink to this headline")
---------------------------------------------------------------------------------------------------
Now let's download GitLab's Helm repository with the following two commands:
@ -164,7 +164,7 @@ After this step, there will be several Kubernetes resources created.
![gitlab_get_pods.png](../_images/gitlab_get_pods.png)
Step 4 Verify the installation[](#step-4-verify-the-installation "Permalink to this headline")
Step 4 Verify the installation[🔗](#step-4-verify-the-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------
After a short while, when all the pods are up, we can access GitLab's service by entering the address **gitlab.<yourdomain>**:
@ -182,7 +182,7 @@ This takes us to the following screen. From there we can utilize various feature
![image-2024-5-6_14-25-36.png](../_images/image-2024-5-6_14-25-36.png)
Errors during the installation[](#errors-during-the-installation "Permalink to this headline")
Errors during the installation[🔗](#errors-during-the-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------
If you encounter errors during installation from which you cannot recover, it might be worth starting with a fresh installation. Here is the command to delete the chart:
@ -194,7 +194,7 @@ helm uninstall gitlab -n gitlab
After that, you can restart the procedure from Step 2.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You now have a local instance of GitLab at your disposal. As next steps you could:

View File

@ -1,4 +1,4 @@
Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[](#install-and-run-argo-workflows-on-brand-name-cloud-name-magnum-kubernetes "Permalink to this headline")
Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[🔗](#install-and-run-argo-workflows-on-brand-name-cloud-name-magnum-kubernetes "Permalink to this headline")
================================================================================================================================================================================
[Argo Workflows](https://argoproj.github.io/argo-workflows/) enable running complex job workflows on Kubernetes. It can
@ -11,7 +11,7 @@ Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[](#insta
Argo applies a microservice-oriented, container-native approach, where each step of a workflow runs as a container.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Authenticate to the cluster
@ -21,7 +21,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Run Argo Workflows locally
> * Run sample workflow with two tasks
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -30,7 +30,7 @@ No. 1 **Account**
No. 2 **kubectl pointed to the Kubernetes cluster**
: If you are creating a new cluster, for the purposes of this article, call it *argo-cluster*. See [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
Authenticate to the cluster[](#authenticate-to-the-cluster "Permalink to this headline")
Authenticate to the cluster[🔗](#authenticate-to-the-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------
Let us authenticate to *argo-cluster*. From your local machine, run the following command to create a config file in the present working directory:
@ -49,7 +49,7 @@ export KUBECONFIG=/home/eouser/config
Run this command.
Apply preliminary configuration[](#apply-preliminary-configuration "Permalink to this headline")
Apply preliminary configuration[🔗](#apply-preliminary-configuration "Permalink to this headline")
-------------------------------------------------------------------------------------------------
OpenStack Magnum by default applies certain security restrictions to pods running on the cluster, in line with the “least privilege” practice. Argo Workflows will require some additional privileges in order to run correctly.
@ -89,7 +89,7 @@ kubectl apply -f argo-rolebinding.yaml
```
Install Argo Workflows[](#install-argo-workflows "Permalink to this headline")
Install Argo Workflows[🔗](#install-argo-workflows "Permalink to this headline")
-------------------------------------------------------------------------------
In order to deploy Argo on the cluster, run the following command:
@ -101,7 +101,7 @@ kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/dow
There is also an Argo CLI available for running jobs from the command line. Installing it is outside the scope of this article.
Run Argo Workflows from the cloud[](#run-argo-workflows-from-the-cloud "Permalink to this headline")
Run Argo Workflows from the cloud[🔗](#run-argo-workflows-from-the-cloud "Permalink to this headline")
-----------------------------------------------------------------------------------------------------
Normally, you would need to authenticate to the server via a UI login. Here, we are going to switch the authentication mode by applying the following patch to the deployment. (For production, you might need to incorporate a proper authentication mechanism.) Submit the following command:
@ -149,7 +149,7 @@ Argo is by default served on HTTPS with a self-signed certificate, on port **274
![image2023-2-15_16-41-49.png](../_images/image2023-2-15_16-41-49.png)
Run sample workflow with two tasks[](#run-sample-workflow-with-two-tasks "Permalink to this headline")
Run sample workflow with two tasks[🔗](#run-sample-workflow-with-two-tasks "Permalink to this headline")
-------------------------------------------------------------------------------------------------------
In order to run a sample workflow, first close the initial pop-ups in the UI. Then go to the top-left icon “Workflows” and click on it; you might then need to press “Continue” in the following pop-up.
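For reference, a two-task workflow of this kind can be expressed as a DAG manifest. The sketch below is illustrative only — the template names, container image and messages are assumptions, not the article's exact workflow:

```yaml
# Hedged sketch of a minimal two-task Argo workflow: "process" runs
# only after "download" completes, each step in its own container.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: two-task-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: download
        template: echo
        arguments:
          parameters:
          - name: msg
            value: "Files downloaded"
      - name: process
        dependencies: [download]        # runs after "download" succeeds
        template: echo
        arguments:
          parameters:
          - name: msg
            value: "Files processed"
  - name: echo
    inputs:
      parameters:
      - name: msg
    container:
      image: alpine:3.18
      command: [echo, "{{inputs.parameters.msg}}"]
```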
@ -211,7 +211,7 @@ The results show that indeed the message “Files processed” was printed in th
![image2023-2-15_18-13-51.png](../_images/image2023-2-15_18-13-51.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
For production, consider an alternative authentication mechanism and replace the self-signed HTTPS certificates with ones issued by a Certificate Authority.

View File

@ -1,4 +1,4 @@
Install and run Dask on a Kubernetes cluster in CloudFerro Cloud cloud[](#install-and-run-dask-on-a-kubernetes-cluster-in-brand-name-cloud "Permalink to this headline")
Install and run Dask on a Kubernetes cluster in CloudFerro Cloud cloud[🔗](#install-and-run-dask-on-a-kubernetes-cluster-in-brand-name-cloud "Permalink to this headline")
=========================================================================================================================================================================
[Dask](https://www.dask.org/) enables scaling computation tasks either as multiple processes on a single machine, or on Dask clusters that consist of multiple worker machines. Dask provides a scalable alternative to popular Python libraries such as NumPy, Pandas or scikit-learn, while still using a compact and very similar API.
@ -7,7 +7,7 @@ Dask scheduler, once presented with a computation task, splits it into smaller t
In this article you will install a Dask cluster on Kubernetes and run Dask worker nodes as Kubernetes pods. As part of the installation, you will get access to a Jupyter instance, where you can run the sample code.
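The scheduler's split-and-combine idea can be illustrated in plain Python, independent of Dask itself. This is only an analogy — it uses the standard library's thread pool in place of Dask workers, and all names are illustrative:

```python
# Analogy only -- not Dask: split one large task into partitions,
# process them in parallel, then combine the partial results,
# the way a Dask scheduler distributes work across its workers.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "worker" handles one partition of the data.
    return sum(chunk)

def distributed_sum(data, n_chunks=4):
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        # The scheduler's job: farm partitions out, then reduce the results.
        return sum(pool.map(partial_sum, chunks))

print(distributed_sum(list(range(1_000_000))))  # → 499999500000
```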
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install Dask on Kubernetes
@ -16,7 +16,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Configure Dask cluster on Kubernetes from Python
> * Resolving errors
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -43,7 +43,7 @@ No. 6 **Basic familiarity with Jupyter and Python scientific libraries**
> We will use [Pandas](https://pandas.pydata.org/docs/user_guide/index.html#user-guide) as an example.
Step 1 Install Dask on Kubernetes[](#step-1-install-dask-on-kubernetes "Permalink to this headline")
Step 1 Install Dask on Kubernetes[🔗](#step-1-install-dask-on-kubernetes "Permalink to this headline")
-----------------------------------------------------------------------------------------------------
To install Dask as a Helm chart, first download the Dask Helm repository:
@ -83,7 +83,7 @@ helm install dask dask/dask -n dask --create-namespace -f dask-values.yaml
```
Step 2 Access Jupyter and Dask Scheduler dashboard[](#step-2-access-jupyter-and-dask-scheduler-dashboard "Permalink to this headline")
Step 2 Access Jupyter and Dask Scheduler dashboard[🔗](#step-2-access-jupyter-and-dask-scheduler-dashboard "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------
After the installation step, you can access Dask services:
@ -110,7 +110,7 @@ Similarly, with the Scheduler Dashboard, paste the floating IP into the browser to
![image2023-8-8_14-4-40.png](../_images/image2023-8-8_14-4-40.png)
Step 3 Run a sample computing task[](#step-3-run-a-sample-computing-task "Permalink to this headline")
Step 3 Run a sample computing task[🔗](#step-3-run-a-sample-computing-task "Permalink to this headline")
-------------------------------------------------------------------------------------------------------
The installed Jupyter instance already contains Dask and other useful Python libraries. To run a sample job, first activate a notebook by clicking the **Notebook** → **Python3 (ipykernel)** icon on the right-hand side of the Jupyter instance browser screen.
@ -161,7 +161,7 @@ Computation time Dask: 0.07 seconds.
Note that these results are not deterministic; plain Pandas could also perform better on a case-by-case basis. The overhead of distributing work to and collecting results from Dask workers also needs to be taken into account. Further tuning of Dask's performance is beyond the scope of this article.
Step 4 Configure Dask cluster on Kubernetes from Python[](#step-4-configure-dask-cluster-on-kubernetes-from-python "Permalink to this headline")
Step 4 Configure Dask cluster on Kubernetes from Python[🔗](#step-4-configure-dask-cluster-on-kubernetes-from-python "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------
For managing the Dask cluster on Kubernetes we can use a dedicated Python library *dask-kubernetes*. Using this library, we can reconfigure certain parameters of our Dask cluster.
@ -216,7 +216,7 @@ Or, you can see the current number of worker nodes in the Dask Scheduler dashboa
Note that the functionality of *dask-kubernetes* should also be achievable using the Kubernetes API directly; the choice will depend on your personal preference.
Resolving errors[](#resolving-errors "Permalink to this headline")
Resolving errors[🔗](#resolving-errors "Permalink to this headline")
-------------------------------------------------------------------
When running command

View File

@ -1,4 +1,4 @@
Install and run NooBaa on Kubernetes cluster in single- and multicloud-environment on CloudFerro Cloud[](#install-and-run-noobaa-on-kubernetes-cluster-in-single-and-multicloud-environment-on-brand-name "Permalink to this headline")
Install and run NooBaa on Kubernetes cluster in single- and multicloud-environment on CloudFerro Cloud[🔗](#install-and-run-noobaa-on-kubernetes-cluster-in-single-and-multicloud-environment-on-brand-name "Permalink to this headline")
========================================================================================================================================================================================================================================
[NooBaa](https://www.noobaa.io/) enables creating an abstracted S3 backend on Kubernetes. Such a backend can be connected to multiple S3 backing stores, e.g. in a multi-cloud setup, allowing for storage expandability and High Availability, among other benefits.
@ -9,7 +9,7 @@ In this article you will learn the basics of using NooBaa
> * how to create a NooBaa bucket backed by S3 object storage in the CloudFerro Cloud cloud
> * how to create a NooBaa bucket mirroring data on two different clouds
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install NooBaa in local environment
@ -22,7 +22,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Testing access to the bucket
> * Create mirroring on clouds WAW3-1 and WAW3-2
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -55,7 +55,7 @@ No. 7 **Access to WAW3-2 cloud**
To mirror data on WAW3-1 and WAW3-2, you will need access to those two clouds.
Install NooBaa in local environment[](#install-noobaa-in-local-environment "Permalink to this headline")
Install NooBaa in local environment[🔗](#install-noobaa-in-local-environment "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
The first step to work with NooBaa is to install it on our local system. We will download the installer, make it executable and move it to the system path:
@ -80,7 +80,7 @@ This will result in an output similar to the below:
![install_noobaa_locally.png](../_images/install_noobaa_locally.png)
Apply preliminary configuration[](#apply-preliminary-configuration "Permalink to this headline")
Apply preliminary configuration[🔗](#apply-preliminary-configuration "Permalink to this headline")
-------------------------------------------------------------------------------------------------
We will need to apply additional configuration on a Magnum cluster to avoid a PodSecurityPolicy exception. For a refresher, see the article [Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud Cloud](Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-CloudFerro-Cloud-cloud.html.md).
@ -120,7 +120,7 @@ kubectl apply -f noobaa-rolebinding.yaml
```
Install NooBaa on the Kubernetes cluster[](#install-noobaa-on-the-kubernetes-cluster "Permalink to this headline")
Install NooBaa on the Kubernetes cluster[🔗](#install-noobaa-on-the-kubernetes-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
We already have NooBaa available in our local environment, but we still need to install NooBaa on our Kubernetes cluster. NooBaa will use the KUBECONFIG context used by **kubectl** (as activated in Prerequisite No. 4), so install NooBaa in a dedicated namespace:
@ -144,10 +144,10 @@ It outputs several useful insights about the NooBaa installation, with the “ke
For the purpose of this article, we will not use the default backing store, but rather learn to create a new backing store based on cloud S3 object storage. Such a setup can then easily be extended so that we end up with separate backing stores for different clouds. In the second part of this article you will create one store on the WAW3-1 cloud and another on the WAW3-2 cloud; both will be available through one abstracted S3 bucket in NooBaa.
Create a NooBaa backing store[](#create-a-noobaa-backing-store "Permalink to this headline")
Create a NooBaa backing store[🔗](#create-a-noobaa-backing-store "Permalink to this headline")
---------------------------------------------------------------------------------------------
### Step 1. Create object storage bucket on WAW3-1[](#step-1-create-object-storage-bucket-on-waw3-1 "Permalink to this headline")
### Step 1. Create object storage bucket on WAW3-1[🔗](#step-1-create-object-storage-bucket-on-waw3-1 "Permalink to this headline")
Now create an object storage bucket on WAW3-1 cloud:
@ -162,7 +162,7 @@ Note
You need to create a bucket with a different name and use this generated name to follow along.
### Step 2. Set up EC2 credentials[](#step-2-set-up-ec2-credentials "Permalink to this headline")
### Step 2. Set up EC2 credentials[🔗](#step-2-set-up-ec2-credentials "Permalink to this headline")
If you have properly set up the EC2 (S3) keys for your WAW3-1 object storage, take note of them with the following command:
@ -171,7 +171,7 @@ openstack ec2 credentials list
```
### Step 3. Create a new NooBaa backing store[](#step-3-create-a-new-noobaa-backing-store "Permalink to this headline")
### Step 3. Create a new NooBaa backing store[🔗](#step-3-create-a-new-noobaa-backing-store "Permalink to this headline")
With the above in place, we can create a new NooBaa backing store called *custom-bs* by running the command below. Make sure to replace the access-key XXXXXX and the secret-key YYYYYYY with your own EC2 keys and the *bucket* with your own bucket name:
@ -195,7 +195,7 @@ Also, when viewing the bucket in Horizon (backing store), we can see NooBaa popu
![image2023-7-20_11-58-22.png](../_images/image2023-7-20_11-58-22.png)
### Step 4. Create a Bucket Class[](#step-4-create-a-bucket-class "Permalink to this headline")
### Step 4. Create a Bucket Class[🔗](#step-4-create-a-bucket-class "Permalink to this headline")
When we have the backing store, the next step is to create a BucketClass (BC). Such BucketClass serves as a blueprint for NooBaa buckets: it defines
@ -232,7 +232,7 @@ kubectl apply -f custom-bc.yaml
```
### Step 5. Create an ObjectBucketClaim[](#step-5-create-an-objectbucketclaim "Permalink to this headline")
### Step 5. Create an ObjectBucketClaim[🔗](#step-5-create-an-objectbucketclaim "Permalink to this headline")
As the last step, we create an *ObjectBucketClaim*. This bucket claim utilizes the *noobaa.noobaa.io* storage class which got deployed with NooBaa, and references the *custom-bc* bucket class created in the previous step. Create a file called *custom-obc.yaml*:
@ -259,7 +259,7 @@ kubectl apply -f custom-obc.yaml
```
### Step 6. Obtain name of the NooBaa bucket[](#step-6-obtain-name-of-the-noobaa-bucket "Permalink to this headline")
### Step 6. Obtain name of the NooBaa bucket[🔗](#step-6-obtain-name-of-the-noobaa-bucket "Permalink to this headline")
As a result, besides the *ObjectBucket* claim resource, a configmap and a secret with the same name *custom-obc* were also created in NooBaa. Let's view the configmap with:
@ -286,7 +286,7 @@ metadata:
We can see the name of the NooBaa bucket *my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf*, which is backed by our “physical” WAW3-1 bucket. Store this name for later use in this article.
### Step 7. Obtain secret for the NooBaa bucket[](#step-7-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")
### Step 7. Obtain secret for the NooBaa bucket[🔗](#step-7-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")
The secret is also relevant for us, as we need to extract the S3 keys to the NooBaa bucket. The access and secret keys are base64-encoded in the secret; we can retrieve them decoded with the following commands:
@ -298,7 +298,7 @@ kubectl get secret custom-obc -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KE
Take note of access and secret keys, as we will use them in the next step.
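Behind the scenes, this is a plain base64 decode — values under `.data` in a Kubernetes Secret are base64-encoded, not encrypted. A quick sketch (the value below is a made-up placeholder, not a real credential):

```python
# Values stored in a Kubernetes Secret's .data fields are base64-encoded;
# decoding them is a single step. The encoded string here is a placeholder.
import base64

encoded = "QUtJQUlPU0ZPRE5ON0VYQU1QTEU="   # as returned by the jsonpath query
access_key = base64.b64decode(encoded).decode("utf-8")
print(access_key)  # → AKIAIOSFODNN7EXAMPLE
```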
### Step 8. Connect to NooBaa bucket from S3cmd[](#step-8-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")
### Step 8. Connect to NooBaa bucket from S3cmd[🔗](#step-8-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")
NooBaa created a few services when it was deployed, which we can verify with the command below:
@ -320,7 +320,7 @@ sts LoadBalancer 10.254.23.154 64.225.135.92 443:31374/TCP
The “s3” service provides the endpoint that can be used to access NooBaa storage (backed by the actual storage in WAW3-1). In our case, this endpoint is **64.225.133.81**. Replace it with the value you get from the above command when working through this article.
### Step 9. Configure S3cmd to access NooBaa[](#step-9-configure-s3cmd-to-access-noobaa "Permalink to this headline")
### Step 9. Configure S3cmd to access NooBaa[🔗](#step-9-configure-s3cmd-to-access-noobaa "Permalink to this headline")
Now that we have both the endpoint and the keys, we can configure **s3cmd** to access the bucket created by NooBaa. Create a configuration file *noobaa.s3cfg* with the following contents:
@ -362,7 +362,7 @@ Configuration saved to 'noobaa.s3cfg'
```
### Step 10. Testing access to the bucket[](#step-10-testing-access-to-the-bucket "Permalink to this headline")
### Step 10. Testing access to the bucket[🔗](#step-10-testing-access-to-the-bucket "Permalink to this headline")
We can upload a test file to NooBaa. In our case, we upload a simple text file *xyz.txt* with text content “xyz”, using the following command:
@ -381,7 +381,7 @@ upload: 'xyz.txt' -> 's3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf/xyz.tx
We can also see in Horizon that a few new folders and files were added to NooBaa. However, we will not see the *xyz.txt* file directly there, because NooBaa applies its own fragmentation techniques on the data.
Connect NooBaa in a multi-cloud setup[](#connect-noobaa-in-a-multi-cloud-setup "Permalink to this headline")
Connect NooBaa in a multi-cloud setup[🔗](#connect-noobaa-in-a-multi-cloud-setup "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------
NooBaa can be used to create an abstracted S3 endpoint, connected to two or more cloud S3 endpoints. This can be helpful in scenarios such as replicating the same data across multiple clouds or combining the storage of multiple clouds.
@ -394,19 +394,19 @@ To illustrate the process, we are going to create a new set of resources, new S3 bu
To proceed, first create two additional buckets from the Horizon interface. Adjust the remaining commands and file contents in this section to reflect these bucket names.
### Step 1 Multi-cloud. Create bucket on WAW3-1[](#step-1-multi-cloud-create-bucket-on-waw3-1 "Permalink to this headline")
### Step 1 Multi-cloud. Create bucket on WAW3-1[🔗](#step-1-multi-cloud-create-bucket-on-waw3-1 "Permalink to this headline")
Go to the WAW3-1 Horizon interface and create a bucket we call *noobaamirror-waw3-1* (supply your own bucket name here and adhere to it in the rest of the article). It will be available at the endpoint <https://s3.waw3-1.cloudferro.com>.
### Step 1 Multi-cloud. Create bucket on WAW3-2[](#step-1-multi-cloud-create-bucket-on-waw3-2 "Permalink to this headline")
### Step 1 Multi-cloud. Create bucket on WAW3-2[🔗](#step-1-multi-cloud-create-bucket-on-waw3-2 "Permalink to this headline")
Next, go to the WAW3-2 Horizon interface and create a bucket we call *noobaamirror-waw3-2* (again, supply your own bucket name here and adhere to it in the rest of the article). It will be available at the endpoint <https://s3.waw3-2.cloudferro.com>.
### Step 2 Multi-cloud. Set up EC2 credentials[](#step-2-multi-cloud-set-up-ec2-credentials "Permalink to this headline")
### Step 2 Multi-cloud. Set up EC2 credentials[🔗](#step-2-multi-cloud-set-up-ec2-credentials "Permalink to this headline")
Use the existing pair of EC2 credentials or first create a new pair and then use them in the next step.
### Step 3 Multi-cloud. Create backing store mirror-bs1 on WAW3-1[](#step-3-multi-cloud-create-backing-store-mirror-bs1-on-waw3-1 "Permalink to this headline")
### Step 3 Multi-cloud. Create backing store mirror-bs1 on WAW3-1[🔗](#step-3-multi-cloud-create-backing-store-mirror-bs1-on-waw3-1 "Permalink to this headline")
Apply the following command to create the *mirror-bs1* backing store (replace the bucket name, S3 access key and S3 secret key with your own):
@ -415,7 +415,7 @@ noobaa -n noobaa backingstore create s3-compatible mirror-bs1 --endpoint https:/
```
### Step 3 Multi-cloud. Create backing store mirror-bs2 on WAW3-2[](#step-3-multi-cloud-create-backing-store-mirror-bs2-on-waw3-2 "Permalink to this headline")
### Step 3 Multi-cloud. Create backing store mirror-bs2 on WAW3-2[🔗](#step-3-multi-cloud-create-backing-store-mirror-bs2-on-waw3-2 "Permalink to this headline")
Apply the following command to create the *mirror-bs2* backing store (replace the bucket name, S3 access key and S3 secret key with your own):
@ -424,7 +424,7 @@ noobaa -n noobaa backingstore create s3-compatible mirror-bs2 --endpoint https:/
```
### Step 4 Multi-cloud. Create a Bucket Class[](#step-4-multi-cloud-create-a-bucket-class "Permalink to this headline")
### Step 4 Multi-cloud. Create a Bucket Class[🔗](#step-4-multi-cloud-create-a-bucket-class "Permalink to this headline")
To create a BucketClass called *bc-mirror*, create a file called *bc-mirror.yaml* with the following contents:
@ -459,7 +459,7 @@ Note
The mirroring is implemented by listing **two** backing stores, *mirror-bs1* and *mirror-bs2*, under the *tiers* option.
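Schematically, the relevant part of such a manifest has the shape below. This is a hedged sketch assuming NooBaa's usual `placementPolicy` layout — verify the exact field names against your NooBaa version:

```yaml
# Illustrative fragment: a BucketClass that mirrors data across two
# backing stores by listing both in a single tier with Mirror placement.
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: bc-mirror
  namespace: noobaa
spec:
  placementPolicy:
    tiers:
    - placement: Mirror       # every object is written to both stores
      backingStores:
      - mirror-bs1            # WAW3-1-backed store
      - mirror-bs2            # WAW3-2-backed store
```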
### Step 5 Multi-cloud. Create an ObjectBucketClaim[](#step-5-multi-cloud-create-an-objectbucketclaim "Permalink to this headline")
### Step 5 Multi-cloud. Create an ObjectBucketClaim[🔗](#step-5-multi-cloud-create-an-objectbucketclaim "Permalink to this headline")
Again, create file *obc-mirror.yaml* for ObjectBucketClaim *obc-mirror*:
@ -486,7 +486,7 @@ kubectl apply -f obc-mirror
```
### Step 6 Multi-cloud. Obtain name of the NooBaa bucket[](#step-6-multi-cloud-obtain-name-of-the-noobaa-bucket "Permalink to this headline")
### Step 6 Multi-cloud. Obtain name of the NooBaa bucket[🔗](#step-6-multi-cloud-obtain-name-of-the-noobaa-bucket "Permalink to this headline")
Extract bucket name from the configmap:
@ -495,7 +495,7 @@ kubectl get configmap obc-mirror -n noobaa -o yaml
```
### Step 7 Multi-cloud. Obtain secret for the NooBaa bucket[](#step-7-multi-cloud-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")
### Step 7 Multi-cloud. Obtain secret for the NooBaa bucket[🔗](#step-7-multi-cloud-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")
Extract S3 keys from the created secret:
@ -505,7 +505,7 @@ kubectl get secret obc-mirror -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KE
```
### Step 8 Multi-cloud. Connect to NooBaa bucket from S3cmd[](#step-8-multi-cloud-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")
### Step 8 Multi-cloud. Connect to NooBaa bucket from S3cmd[🔗](#step-8-multi-cloud-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")
Create an additional config file for s3cmd, e.g. *noobaa-mirror.s3cfg*, and update the access key, the secret key and the bucket name to the ones retrieved above:
@ -514,7 +514,7 @@ s3cmd --configure -c noobaa-mirror.s3cfg
```
### Step 9 Multi-cloud. Configure S3cmd to access NooBaa[](#step-9-multi-cloud-configure-s3cmd-to-access-noobaa "Permalink to this headline")
### Step 9 Multi-cloud. Configure S3cmd to access NooBaa[🔗](#step-9-multi-cloud-configure-s3cmd-to-access-noobaa "Permalink to this headline")
To test, upload the *xyz.txt* file, which behind the scenes uploads a copy to both clouds. Be sure to change the bucket name *my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff* to the one retrieved from the configmap:
@ -523,7 +523,7 @@ s3cmd put xyz.txt s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff -c noobaa-
```
### Step 10 Multi-cloud. Testing access to the bucket[](#step-10-multi-cloud-testing-access-to-the-bucket "Permalink to this headline")
### Step 10 Multi-cloud. Testing access to the bucket[🔗](#step-10-multi-cloud-testing-access-to-the-bucket "Permalink to this headline")
To verify, delete the “physical” bucket on one of the clouds (e.g. from WAW3-1) from the Horizon interface. With the **s3cmd** command below you can see that NooBaa will still hold the copy from WAW3-2 cloud:

View File

@ -1,4 +1,4 @@
Installing HashiCorp Vault on CloudFerro Cloud Magnum[](#installing-hashicorp-vault-on-brand-name-cloud-name-magnum "Permalink to this headline")
Installing HashiCorp Vault on CloudFerro Cloud Magnum[🔗](#installing-hashicorp-vault-on-brand-name-cloud-name-magnum "Permalink to this headline")
==================================================================================================================================================
In Kubernetes, a *Secret* is an object that contains passwords, tokens, keys or other small pieces of data. Using *Secrets* greatly reduces the risk of exposing confidential data while creating, running and editing Pods. The main problem is that *Secrets* are stored unencrypted in *etcd*, so anyone with
@ -20,7 +20,7 @@ You can apply a number of strategies to improve the security of the cluster or y
In this article, we shall install HashiCorp Vault within a Magnum Kubernetes cluster, on CloudFerro Cloud cloud.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install self-signed TLS certificates with CFSSL
@ -33,7 +33,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Return livenessProbe to production value
> * Troubleshooting
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -50,7 +50,7 @@ This article will introduce you to Helm charts on Kubernetes:
[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html.md)
Step 1 Install CFSSL[](#step-1-install-cfssl "Permalink to this headline")
Step 1 Install CFSSL[🔗](#step-1-install-cfssl "Permalink to this headline")
---------------------------------------------------------------------------
To ensure that Vault communication with the cluster is encrypted, we need to provide TLS certificates.
@ -76,7 +76,7 @@ sudo mv cfssl cfssljson /usr/local/bin
```
Step 2 Generate TLS certificates[](#step-2-generate-tls-certificates "Permalink to this headline")
Step 2 Generate TLS certificates[🔗](#step-2-generate-tls-certificates "Permalink to this headline")
---------------------------------------------------------------------------------------------------
Before we start, let's create a dedicated namespace where all Vault-related Kubernetes resources will live:
@ -194,7 +194,7 @@ kubectl -n vault create secret tls tls-server --cert ./vault.pem --key ./vault-k
The naming of those secrets reflects the Vault Helm chart default names.
Step 3 Install Consul Helm chart[](#step-3-install-consul-helm-chart "Permalink to this headline")
Step 3 Install Consul Helm chart[🔗](#step-3-install-consul-helm-chart "Permalink to this headline")
---------------------------------------------------------------------------------------------------
The Consul backend will ensure High Availability of our Vault installation. Consul will live in a namespace that we have already created, **vault**.
@ -257,7 +257,7 @@ kubectl get pods -n vault
Wait until all of the pods are **Running** and then proceed with the next step.
Step 4 Install Vault Helm chart[](#step-4-install-vault-helm-chart "Permalink to this headline")
Step 4 Install Vault Helm chart[🔗](#step-4-install-vault-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------
We are now ready to install Vault.
@ -372,7 +372,7 @@ vault-agent-injector-6c7cfc768-kv968 1/1 Running 0
```
Sealing and unsealing the Vault[](#sealing-and-unsealing-the-vault "Permalink to this headline")
Sealing and unsealing the Vault[🔗](#sealing-and-unsealing-the-vault "Permalink to this headline")
-------------------------------------------------------------------------------------------------
Right after the installation, Vault server starts in a *sealed* state. It knows where and how to access the physical storage but, by design, it is lacking the key to decrypt any of it. The only operations you can do when Vault is sealed are to
@ -393,7 +393,7 @@ You will have a limited but sufficient amount of time to enter the keys; the val
At the end of the article we show how to interactively set it to **60** seconds, so that the cluster can check health of the pods more frequently.
Step 5 Unseal Vault[](#step-5-unseal-vault "Permalink to this headline")
Step 5 Unseal Vault[🔗](#step-5-unseal-vault "Permalink to this headline")
-------------------------------------------------------------------------
Three nodes in the Kubernetes cluster represent Vault and are named *vault-0*, *vault-1*, *vault-2*. To make the Vault functional, you will have to unseal all three of them.
@ -463,7 +463,7 @@ kubectl -n vault exec -it vault-1 -- sh
and unseal it by entering at least three keys. Then follow a similar procedure for *vault-2*. Only when all three pods are unsealed will the Vault become active.
Step 6 Run Vault UI[](#step-6-run-vault-ui "Permalink to this headline")
Step 6 Run Vault UI[🔗](#step-6-run-vault-ui "Permalink to this headline")
-------------------------------------------------------------------------
With our configuration, Vault UI is exposed on port 8200 of a dedicated LoadBalancer that got created.
@ -492,7 +492,7 @@ You can now start using the Vault.
![start_using_vault.png](../_images/start_using_vault.png)
Return livenessProbe to production value[](#return-livenessprobe-to-production-value "Permalink to this headline")
Return livenessProbe to production value[🔗](#return-livenessprobe-to-production-value "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
*livenessProbe* in Kubernetes is the time in which the system checks the health of the nodes. That would normally not be a concern of yours but if you do not unseal the Vault within that amount of time, the unsealing won't work. Under normal circumstances, the value would be **60** seconds so that in case of any disturbance, the system would react within one minute instead of six. But it is very hard to copy and enter three strings under one minute as would happen if the value of **60** were present in file **vault-values.yaml**. You would almost inevitably see Kubernetes error **137**, meaning that you did not perform the required operations in time.
@ -520,7 +520,7 @@ You can now access the equivalent of file **vault-values.yaml** inside the Kuber
When done, save and leave Vim with the standard **:w** and **:q** syntax.
Troubleshooting[](#troubleshooting "Permalink to this headline")
Troubleshooting[🔗](#troubleshooting "Permalink to this headline")
-----------------------------------------------------------------
Check the events, which can point out hints of what needs to be improved:
@ -540,7 +540,7 @@ kubectl delete MutatingWebhookConfiguration vault-agent-injector-cfg
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Now you have Vault server as a part of the cluster and you can also use it from the IP address it got installed to.

View File

@ -1,4 +1,4 @@
Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud Cloud[](#installing-jupyterhub-on-magnum-kubernetes-cluster-in-brand-name-cloud-name-cloud "Permalink to this headline")
Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud Cloud[🔗](#installing-jupyterhub-on-magnum-kubernetes-cluster-in-brand-name-cloud-name-cloud "Permalink to this headline")
================================================================================================================================================================================================
Jupyter notebooks are a popular method of presenting application code, as well as running exploratory experiments and analysis, conveniently, from a web browser. From a Jupyter notebook, one can run code, see the generated results in attractive visual form, and often also interact with the generated output.
@ -7,7 +7,7 @@ JupyterHub is an open-source service that creates cloud-based Jupyter notebook s
It is straightforward to quickly deploy JupyterHub using Magnum Kubernetes service, which we present in this article.
What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We are Going to Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Authenticate to the cluster
@ -15,7 +15,7 @@ What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Retrieve details of Jupyterhub service
> * Run Jupyterhub on HTTPS
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -36,7 +36,7 @@ No. 4 **A registered domain name available**
To see the results of the installation, you should have a registered domain of your own. You will use it in Step 5 to run JupyterHub on HTTPS in a browser.
Step 1 Authenticate to the cluster[](#step-1-authenticate-to-the-cluster "Permalink to this headline")
Step 1 Authenticate to the cluster[🔗](#step-1-authenticate-to-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------
First of all, we need to authenticate to the cluster. It may so happen that you already have a cluster at your disposal and that the config file is already in place. In other words, you are able to execute the **kubectl** command immediately.
@ -57,7 +57,7 @@ export KUBECONFIG=/home/eouser/config
Run this command.
Step 2 Apply preliminary configuration[](#step-2-apply-preliminary-configuration "Permalink to this headline")
Step 2 Apply preliminary configuration[🔗](#step-2-apply-preliminary-configuration "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------
OpenStack Magnum by default applies certain security restrictions for pods running on the cluster, in line with “least privileges” practice. JupyterHub will require some additional privileges in order to run correctly.
@ -97,7 +97,7 @@ kubectl apply -f jupyterhub-rolebinding.yaml
```
Step 3 Run Jupyterhub Helm chart installation[](#step-3-run-jupyterhub-helm-chart-installation "Permalink to this headline")
Step 3 Run Jupyterhub Helm chart installation[🔗](#step-3-run-jupyterhub-helm-chart-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
To install the Helm chart with the default settings, use the below set of commands. This will
@ -116,7 +116,7 @@ This is the result of successful Helm chart installation:
![installation_done.png](../_images/installation_done.png)
Step 4 Retrieve details of your service[](#step-4-retrieve-details-of-your-service "Permalink to this headline")
Step 4 Retrieve details of your service[🔗](#step-4-retrieve-details-of-your-service "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------
Once all the Helm resources get deployed to the *jupyterhub* namespace, we can view their state and definitions using standard **kubectl** commands.
@ -151,7 +151,7 @@ Warning
If in the next step you start running a JupyterHub on HTTPS, you will not be able to run it as an HTTP service unless it has been relaunched.
Step 5 Run on HTTPS[](#step-5-run-on-https "Permalink to this headline")
Step 5 Run on HTTPS[🔗](#step-5-run-on-https "Permalink to this headline")
-------------------------------------------------------------------------
JupyterHub Helm chart enables HTTPS deployments natively. Once we deployed the chart above, we can simply upgrade the chart to enable serving it on HTTPS. Under the hood, it will generate the certificates using Let's Encrypt certificate authority.
@ -182,7 +182,7 @@ As noted in Prerequisite No. 4, you should have an available registered domain s
![image2023-2-6_15-25-4.png](../_images/image2023-2-6_15-25-4.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
For the production environment: replace the dummy authenticator with an alternative authentication mechanism, ensure persistence by e.g. connecting to a Postgres database. These steps are beyond the scope of this article.

View File

@ -1,4 +1,4 @@
Kubernetes cluster observability with Prometheus and Grafana on CloudFerro Cloud[](#kubernetes-cluster-observability-with-prometheus-and-grafana-on-brand-name "Permalink to this headline")
Kubernetes cluster observability with Prometheus and Grafana on CloudFerro Cloud[🔗](#kubernetes-cluster-observability-with-prometheus-and-grafana-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================================
Complex systems deployed on Kubernetes take advantage of multiple Kubernetes resources. Such deployments often consist of a number of namespaces, pods and many other entities, which contribute to consuming the cluster resources.
@ -7,7 +7,7 @@ To allow proper insight into how the cluster resources are utilized, and enable
In this article we will present the use of a popular open-source observability stack consisting of Prometheus and Grafana.
What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install Prometheus
@ -15,7 +15,7 @@ What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this h
> * Access Prometheus as datasource to Grafana
> * Add cluster observability dashboard
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -34,7 +34,7 @@ No. 4 **Access to kubectl command line**
The instructions for activation of **kubectl** are provided in: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
1. Install Prometheus with Helm[](#install-prometheus-with-helm "Permalink to this headline")
1. Install Prometheus with Helm[🔗](#install-prometheus-with-helm "Permalink to this headline")
----------------------------------------------------------------------------------------------
Prometheus is an open-source monitoring and alerting toolkit, widely used in System Administration and DevOps domains. Prometheus comes with a timeseries database, which can store metrics generated by a variety of other systems and software tools. It provides a query language called PromQL to efficiently access this data. In our case, we will use Prometheus to get access to the metrics generated by our Kubernetes cluster.
@ -115,7 +115,7 @@ to query for all pods in the default namespace. (Further elaboration about the c
![image2023-11-7_13-40-1.png](../_images/image2023-11-7_13-40-1.png)
2. Install Grafana[](#install-grafana "Permalink to this headline")
2. Install Grafana[🔗](#install-grafana "Permalink to this headline")
--------------------------------------------------------------------
The next step is to install Grafana. We already added the Bitnami repository when installing Prometheus, so Grafana repository was also added to our local cache. We only need to install Grafana.
@ -162,7 +162,7 @@ Then access the Grafana dashboard by entering *localhost:8080* in the browser:
Type the login: *admin* and the password *ownpassword* (or the auto-generated password you extracted in the earlier step).
3. Add Prometheus as datasource to Grafana[](#add-prometheus-as-datasource-to-grafana "Permalink to this headline")
3. Add Prometheus as datasource to Grafana[🔗](#add-prometheus-as-datasource-to-grafana "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------
In this step we will setup Grafana to use our Prometheus installation as a datasource.
@ -181,7 +181,7 @@ Hit the **Save and test** button. If all went well, you will see the following s
![image2023-11-7_15-1-59.png](../_images/image2023-11-7_15-1-59.png)
4. Add cluster observability dashboard[](#add-cluster-observability-dashboard "Permalink to this headline")
4. Add cluster observability dashboard[🔗](#add-cluster-observability-dashboard "Permalink to this headline")
------------------------------------------------------------------------------------------------------------
We could be building a Kubernetes observability dashboard from scratch, but we would much rather utilize one of the open-source dashboards already available.
@ -202,7 +202,7 @@ As the result, the Grafana Kubernetes observability dashboard gets populated:
![image2023-11-7_15-38-40.png](../_images/image2023-11-7_15-38-40.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You can find and import many other dashboards for Kubernetes observability by browsing <https://grafana.com/grafana/dashboards/>. Some examples are dashboards with IDs: 315, 15758, 15761 or many more.

View File

@ -1,11 +1,11 @@
Private container registries with Harbor on CloudFerro Cloud Kubernetes[](#private-container-registries-with-harbor-on-brand-name-kubernetes "Permalink to this headline")
Private container registries with Harbor on CloudFerro Cloud Kubernetes[🔗](#private-container-registries-with-harbor-on-brand-name-kubernetes "Permalink to this headline")
===========================================================================================================================================================================
A fundamental component of the container-based ecosystem are *container registries*, used for storing and distributing container images. There are a few popular public container registries, which serve this purpose in a software-as-a-service model and the most popular is [DockerHub](https://hub.docker.com/).
In this article, we are using [Harbor](https://goharbor.io/), which is a popular open-source option for running private registries. It is compliant with [OCI (Open Container Initiative)](https://opencontainers.org/), which makes it suitable to work with standard container images. It ships with multiple enterprise-ready features out of the box.
Benefits of using your own private container registry[](#benefits-of-using-your-own-private-container-registry "Permalink to this headline")
Benefits of using your own private container registry[🔗](#benefits-of-using-your-own-private-container-registry "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------
When you **deploy your own private container registry**, the benefits would be, amongst others:
@ -16,7 +16,7 @@ When you **deploy your own private container registry**, the benefits would be,
You can also use *Role-based access control* on Harbor project level to specify and enforce which users have permission to publish updated images, to consume the available ones and so on.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Deploy Harbor private registry with Bitnami-Harbor Helm chart
@ -29,7 +29,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Upload a Docker image to your Harbor instance
> * Download a Docker image from your Harbor instance
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@ -64,7 +64,7 @@ No. 7 **Docker installed on your machine**
See [How to install and use Docker on Ubuntu 24.04](../cloud/How-to-use-Docker-on-CloudFerro-Cloud.html.md).
Deploy Harbor private registry with Bitnami-Harbor Helm chart[](#deploy-harbor-private-registry-with-bitnami-harbor-helm-chart "Permalink to this headline")
Deploy Harbor private registry with Bitnami-Harbor Helm chart[🔗](#deploy-harbor-private-registry-with-bitnami-harbor-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------
The first step to deploy Harbor private registry is to create a dedicated namespace to host Harbor artifacts:
@ -158,7 +158,7 @@ APP VERSION: 2.8.1
```
Access Harbor from browser[](#access-harbor-from-browser "Permalink to this headline")
Access Harbor from browser[🔗](#access-harbor-from-browser "Permalink to this headline")
---------------------------------------------------------------------------------------
With the previous steps followed, you should be able to access the Harbor portal. The following command will display all of the services deployed:
@ -191,7 +191,7 @@ harbor-trivy ClusterIP 10.254.249.99 <none> 8080/TC
Explaining the purpose of several artifacts is beyond the scope of this article. The key service that is interesting to us at this stage is *harbor*, which got deployed as LoadBalancer type with public IP **64.225.134.148**.
Associate the A record of your domain to Harbor's IP address[](#associate-the-a-record-of-your-domain-to-harbor-s-ip-address "Permalink to this headline")
Associate the A record of your domain to Harbor's IP address[🔗](#associate-the-a-record-of-your-domain-to-harbor-s-ip-address "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------
The final step is to associate the A record of your domain to the Harbor's IP address.
@ -221,7 +221,7 @@ To log in to your instance, use these as the login details
> | login | admin |
> | password | Harbor12345 |
Create a project in Harbor[](#create-a-project-in-harbor "Permalink to this headline")
Create a project in Harbor[🔗](#create-a-project-in-harbor "Permalink to this headline")
---------------------------------------------------------------------------------------
When you log in to Harbor, you enter the **Projects** section:
@ -234,7 +234,7 @@ To create a new project, click on **New Project** button. In this article, we wi
![image2023-8-2_16-44-28.png](../_images/image2023-8-2_16-44-28.png)
Create a Dockerfile for our custom image[](#create-a-dockerfile-for-our-custom-image "Permalink to this headline")
Create a Dockerfile for our custom image[🔗](#create-a-dockerfile-for-our-custom-image "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
The Harbor service is running and we can use it to upload our Docker images. We will generate a minimal image, so just create an empty folder, called *helloharbor*, with a single Docker file (called *Dockerfile*)
@ -256,12 +256,12 @@ CMD ["/bin/sh", "-c", "echo 'Hello Harbor!'"]
```
Ensure trust from our local Docker instance[](#ensure-trust-from-our-local-docker-instance "Permalink to this headline")
Ensure trust from our local Docker instance[🔗](#ensure-trust-from-our-local-docker-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------
In order to build our Docker image in further steps and upload this image to Harbor, we need to ensure communication of our local Docker instance with Harbor. To fulfill this objective, proceed as follows:
### Ensure Docker trust - Step 1. Bypass Docker validating the domain certificate[](#ensure-docker-trust-step-1-bypass-docker-validating-the-domain-certificate "Permalink to this headline")
### Ensure Docker trust - Step 1. Bypass Docker validating the domain certificate[🔗](#ensure-docker-trust-step-1-bypass-docker-validating-the-domain-certificate "Permalink to this headline")
Bypass Docker validating the domain certificate pointing to the domain where Harbor is running. Docker would not trust this certificate, because it is self-signed. To bypass this validation, create a file called *daemon.json* in */etc/docker* directory on your local machine:
@ -290,7 +290,7 @@ As always, replace *mysampledomain.info* with your own domain.
For production, you would rather set up proper HTTPS certificate for the domain.
### Ensure Docker trust - Step 2. Ensure Docker trusts the Harbor's Certificate Authority[](#ensure-docker-trust-step-2-ensure-docker-trusts-the-harbor-s-certificate-authority "Permalink to this headline")
### Ensure Docker trust - Step 2. Ensure Docker trusts the Harbor's Certificate Authority[🔗](#ensure-docker-trust-step-2-ensure-docker-trusts-the-harbor-s-certificate-authority "Permalink to this headline")
To do so, we download the **ca.crt** file from our Harbor portal instance from the **myproject** project view:
@ -315,7 +315,7 @@ Install the certificate on WSL2 running on Windows 10 or 11
> * Browse to the *ca.crt* file location and then keep pressing **Next** to complete the wizard
> * Restart Docker from Docker Desktop menu
### Ensure Docker trust - Step 3. Restart Docker[](#ensure-docker-trust-step-3-restart-docker "Permalink to this headline")
### Ensure Docker trust - Step 3. Restart Docker[🔗](#ensure-docker-trust-step-3-restart-docker "Permalink to this headline")
Restart Docker with:
@ -324,7 +324,7 @@ sudo systemctl restart docker
```
Build our image locally[](#build-our-image-locally "Permalink to this headline")
Build our image locally[🔗](#build-our-image-locally "Permalink to this headline")
---------------------------------------------------------------------------------
After these steps, we can tag our image and build it locally (from the location where Dockerfile is placed):
@ -341,7 +341,7 @@ docker login mysampledomain.info
```
Upload a Docker image to your Harbor instance[](#upload-a-docker-image-to-your-harbor-instance "Permalink to this headline")
Upload a Docker image to your Harbor instance[🔗](#upload-a-docker-image-to-your-harbor-instance "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
Lastly, push the image to the repo:
@ -355,7 +355,7 @@ The result will be similar to the following:
![image2023-8-3_15-11-48.png](../_images/image2023-8-3_15-11-48.png)
Download a Docker image from your Harbor instance[](#download-a-docker-image-from-your-harbor-instance "Permalink to this headline")
Download a Docker image from your Harbor instance[🔗](#download-a-docker-image-from-your-harbor-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------
To demonstrate downloading images from our Harbor repository, we can first delete the local Docker image we created earlier.

View File

@ -1,11 +1,11 @@
Sealed Secrets on CloudFerro Cloud Kubernetes[](#sealed-secrets-on-brand-name-kubernetes "Permalink to this headline")
Sealed Secrets on CloudFerro Cloud Kubernetes[🔗](#sealed-secrets-on-brand-name-kubernetes "Permalink to this headline")
=======================================================================================================================
Sealed Secrets improve security of our Kubernetes deployments by enabling encrypted Kubernetes secrets. This allows storing such secrets in source control and following GitOps practices of storing all configuration in code.
In this article we will install tools to work with Sealed Secrets and demonstrate using Sealed Secrets on CloudFerro Cloud cloud.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Install the Sealed Secrets controller
@ -14,7 +14,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Unseal the secret
> * Verify
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -39,7 +39,7 @@ No. 4 **Access to cluster with kubectl**
[How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
Step 1 Install the Sealed Secrets controller[](#step-1-install-the-sealed-secrets-controller "Permalink to this headline")
Step 1 Install the Sealed Secrets controller[🔗](#step-1-install-the-sealed-secrets-controller "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------
In order to use Sealed Secrets we will first install the Sealed Secrets controller to our Kubernetes cluster. We can use Helm for this purpose and the first step is to download the Helm repository. To add the repo locally use the following command:
@ -61,7 +61,7 @@ The chart downloads several resources to our cluster. The key ones are:
> * **SealedSecret Custom Resource Definition (CRD)** - defines the template for sealed secrets that will be created on the cluster
> * The **SealedSecrets controller pod** running in the kube-system namespace.
Step 2 Install the kubeseal command line utility[](#step-2-install-the-kubeseal-command-line-utility "Permalink to this headline")
Step 2 Install the kubeseal command line utility[🔗](#step-2-install-the-kubeseal-command-line-utility "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------
Kubeseal CLI tool is used for encrypting secrets using the public certificate of the controller. To proceed, install **kubeseal** with the following set of commands:
@ -85,7 +85,7 @@ which will return result similar to the following:
![image-2024-5-23_17-16-2.png](../_images/image-2024-5-23_17-16-2.png)
Step 3 Create a sealed secret[](#step-3-create-a-sealed-secret "Permalink to this headline")
Step 3 Create a sealed secret[🔗](#step-3-create-a-sealed-secret "Permalink to this headline")
---------------------------------------------------------------------------------------------
We can use Sealed Secrets to encrypt the secrets, which can be decrypted only by the controller running on the cluster.
@ -104,7 +104,7 @@ kubectl create secret generic mysecret \
When we view the file we can see the contents are encrypted and safe to store in source control.
Step 4 Unseal the secret[](#step-4-unseal-the-secret "Permalink to this headline")
Step 4 Unseal the secret[🔗](#step-4-unseal-the-secret "Permalink to this headline")
-----------------------------------------------------------------------------------
To unseal the secret and make it available and usable in the cluster, we perform the following command:
@ -128,7 +128,7 @@ The results can also be seen on the below screen:
![image-2024-5-23_17-39-37.png](../_images/image-2024-5-23_17-39-37.png)
Step 5 Verify[](#step-5-verify "Permalink to this headline")
Step 5 Verify[🔗](#step-5-verify "Permalink to this headline")
-------------------------------------------------------------
The generated secret can be used as a regular Kubernetes secret. To test, create a file **test-pod.yaml** with the following contents:
@ -171,7 +171,7 @@ The command prompt will change to **#**, meaning the command you enter is execut
![image-end-of-article.png](../_images/image-end-of-article.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Sealed Secrets present a viable alternative to secret management using additional tools such as HashiCorp-Vault. For additional information, see [Installing HashiCorp Vault on CloudFerro Cloud Magnum](Installing-HashiCorp-Vault-on-CloudFerro-Cloud-Magnum.html.md).

View File

@ -1,11 +1,11 @@
Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum[](#using-dashboard-to-access-kubernetes-cluster-post-deployment-on-brand-name-openstack-magnum "Permalink to this headline")
Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum[🔗](#using-dashboard-to-access-kubernetes-cluster-post-deployment-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================================
After the Kubernetes cluster has been created, you can access it through the command line tool, **kubectl**, or through a visual interface called the **Kubernetes dashboard**. The *Dashboard* is a GUI interface to the Kubernetes cluster, much as **kubectl** is a CLI interface to it.
This article shows how to install Kubernetes dashboard.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Deploying the dashboard
@@ -15,7 +15,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Creating a separate terminal window for proxy access
> * Running the dashboard in browser
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Hosting**
@@ -35,7 +35,7 @@ export KUBECONFIG=/home/user/k8sdir/config
Note the exact command which in your case sets the value of the **KUBECONFIG** variable, as you will need it to start a new terminal window from which the dashboard will run.
Step 1 Deploying the Dashboard[](#step-1-deploying-the-dashboard "Permalink to this headline")
Step 1 Deploying the Dashboard[🔗](#step-1-deploying-the-dashboard "Permalink to this headline")
-----------------------------------------------------------------------------------------------
Install it with the following command:
@@ -49,7 +49,7 @@ The result is
![dashboard_installed.png](../_images/dashboard_installed.png)
Step 2 Creating a sample user[](#step-2-creating-a-sample-user "Permalink to this headline")
Step 2 Creating a sample user[🔗](#step-2-creating-a-sample-user "Permalink to this headline")
---------------------------------------------------------------------------------------------
Next, you create a bearer token which will serve as an authorization token for the Dashboard. To that end, you will create two local files and “send” them to the cloud using the **kubectl** command. The first file is called *dashboard-adminuser.yaml* and its contents are
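In the upstream Dashboard recipe, this first file is a plain ServiceAccount; its likely shape (assuming the default *kubernetes-dashboard* namespace) is:

```yaml
# Sketch of dashboard-adminuser.yaml following the upstream Dashboard
# example; account name and namespace are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
```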
@@ -109,7 +109,7 @@ kubectl apply -f dashboard-clusterolebinding.yaml
```
Step 3 Create secret for admin-user[](#step-3-create-secret-for-admin-user "Permalink to this headline")
Step 3 Create secret for admin-user[🔗](#step-3-create-secret-for-admin-user "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
We have to manually create a token for the admin user.
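The usual way to do this is a Secret of type *kubernetes.io/service-account-token* annotated with the service account name; a sketch of such a manifest (resource names are assumptions):

```yaml
# Sketch of admin-user-token.yaml; names follow the upstream Dashboard
# example and are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
```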
@@ -142,7 +142,7 @@ kubectl apply -f admin-user-token.yaml
```
Step 4 Get the bearer token for authentication to dashboard[](#step-4-get-the-bearer-token-for-authentication-to-dashboard "Permalink to this headline")
Step 4 Get the bearer token for authentication to dashboard[🔗](#step-4-get-the-bearer-token-for-authentication-to-dashboard "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------
The final step is to get the bearer token, which is a long string that will authenticate calls to Dashboard:
@@ -162,7 +162,7 @@ Note
If the last character of the bearer token string is *%*, it may be a character that denotes the end of the string but is not part of it. If you copy the bearer string and it is not recognized, try copying it without the trailing *%* character.
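If you prefer to sanitize the copied token in the shell rather than by hand, a single trailing *%* can be stripped with parameter expansion (the token value below is a placeholder):

```shell
# Placeholder token with a stray trailing % picked up while copying.
TOKEN='eyJhbGciOiJSUzI1NiJ9.payload.signature%'
# ${TOKEN%\%} removes one literal % from the end, if present.
TOKEN="${TOKEN%\%}"
echo "$TOKEN"   # prints the token without the trailing %
```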
Step 5 Create a separate terminal window for proxy access[](#step-5-create-a-separate-terminal-window-for-proxy-access "Permalink to this headline")
Step 5 Create a separate terminal window for proxy access[🔗](#step-5-create-a-separate-terminal-window-for-proxy-access "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------
We shall now use a proxy server for the Kubernetes API server. The proxy server
@@ -191,7 +191,7 @@ The server is activated on port **8001**:
![starting_to_server.png](../_images/starting_to_server.png)
Step 6 See the dashboard in browser[](#step-6-see-the-dashboard-in-browser "Permalink to this headline")
Step 6 See the dashboard in browser[🔗](#step-6-see-the-dashboard-in-browser "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
Then enter this address into the browser:
@@ -209,7 +209,7 @@ Enter the token, click on **Sign In** and get the Dashboard UI for the Kubernete
The Kubernetes Dashboard organizes working with the cluster in a visual and interactive way. For instance, click on *Nodes* on the left side to see the nodes that the *k8s-cluster* has.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You can keep using **kubectl** or alternate between it and the **Dashboard**. Either way, you can

View File

@@ -1,4 +1,4 @@
Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum[](#using-kubernetes-ingress-on-brand-name-cloud-name-openstack-magnum "Permalink to this headline")
Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum[🔗](#using-kubernetes-ingress-on-brand-name-cloud-name-openstack-magnum "Permalink to this headline")
==================================================================================================================================================================
The Ingress feature in Kubernetes routes traffic from outside the cluster to the services within it. With Ingress, multiple Kubernetes services can be exposed using a single Load Balancer.
@@ -8,7 +8,7 @@ In this article, we will provide insight into how Ingress is implemented on the
> * run on the same IP address without the need to create an extra LoadBalancer per service, and will also
> * automatically enjoy all of the Kubernetes cluster benefits: reliability, scalability etc.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Create Magnum Kubernetes cluster with NGINX Ingress enabled
@@ -16,7 +16,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Create Ingress Resource
> * Verify that Ingress can access both testing servers
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@@ -36,7 +36,7 @@ The net result of following instructions in that and the related articles will b
> * a cluster formed, healthy and ready to be used, as well as
> * enabling access to the cluster from the local machine (i.e. having *kubectl* command operational).
Step 1 Create a Magnum Kubernetes cluster with NGINX Ingress enabled[](#step-1-create-a-magnum-kubernetes-cluster-with-nginx-ingress-enabled "Permalink to this headline")
Step 1 Create a Magnum Kubernetes cluster with NGINX Ingress enabled[🔗](#step-1-create-a-magnum-kubernetes-cluster-with-nginx-ingress-enabled "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
When we create a Kubernetes cluster on the cloud, we can deploy it with a preconfigured ingress setup. This requires minimal setup and is described in this help section: [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html.md).
@@ -71,7 +71,7 @@ nginx k8s.io/ingress-nginx <none> 7m36s
```
Step 2 Creating services for Nginx and Apache webserver[](#step-2-creating-services-for-nginx-and-apache-webserver "Permalink to this headline")
Step 2 Creating services for Nginx and Apache webserver[🔗](#step-2-creating-services-for-nginx-and-apache-webserver "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------
You are now going to build and expose two minimal applications:
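Such a minimal application is typically a Deployment plus a NodePort Service; a sketch for the Nginx half (the Apache pair is analogous; all names and the image are illustrative, not the article's exact manifests):

```yaml
# Illustrative Deployment + NodePort Service pair for the nginx test app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
```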
@@ -132,7 +132,7 @@ curl ingress-tqwzjwu2lw7p-node-1:32660
```
Step 3 Create Ingress Resource[](#step-3-create-ingress-resource "Permalink to this headline")
Step 3 Create Ingress Resource[🔗](#step-3-create-ingress-resource "Permalink to this headline")
-----------------------------------------------------------------------------------------------
To expose the applications to a public IP address, you will need to define an Ingress Resource. Since both applications will be available from the same IP address, the Ingress resource defines the detailed rules of what gets served on which route. In this example, the */apache* route will be served by the Apache service, and all other routes will be served by the Nginx service.
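The routing rules just described could be sketched as the following Ingress resource (service names and ports are assumptions):

```yaml
# Sketch of an Ingress routing /apache to the Apache service and
# everything else to the Nginx service; names and ports are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /apache
        pathType: Prefix
        backend:
          service:
            name: apache
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```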
@@ -192,7 +192,7 @@ Note
The address **64.225.130.77** is generated randomly and in your case it will be different. Be sure to copy and use the address shown by **kubectl get ingress**.
Step 4 Verify that it works[](#step-4-verify-that-it-works "Permalink to this headline")
Step 4 Verify that it works[🔗](#step-4-verify-that-it-works "Permalink to this headline")
-----------------------------------------------------------------------------------------
Copy the ingress floating IP into the browser, followed by some example routes. You should see output similar to the one below. Here is the screenshot for the */apache* route:
@@ -203,7 +203,7 @@ This screenshot shows what happens on any other route: it defaults to Nginx:
![any_other_route.png](../_images/any_other_route.png)
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You now have two of the most popular web servers installed as services within a Kubernetes cluster. Here are some ideas how to use this setup:

View File

@@ -1,4 +1,4 @@
Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on CloudFerro Cloud OpenStack Magnum[](#volume-based-vs-ephemeral-based-storage-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")
Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on CloudFerro Cloud OpenStack Magnum[🔗](#volume-based-vs-ephemeral-based-storage-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")
=====================================================================================================================================================================================================================================
Containers in Kubernetes store files on disk; if the container crashes, the data is lost. A new container can replace the old one, but the data will not survive. Another problem appears when containers running in a pod need to share files.
@@ -17,7 +17,7 @@ openstack coe cluster create --docker-volume-size 50
This means that a persistent volume of 50 GB will be created and attached to the pod. Using **docker-volume-size** is a way to both reserve the space and declare that the storage will be persistent.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * How to create a cluster when **docker-volume-size** is used
@@ -27,7 +27,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * How to save a file into persistent storage
> * How to demonstrate that the attached volume is persistent
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
1 **Hosting**
@@ -54,7 +54,7 @@ An SSH key-pair created in OpenStack dashboard. To create it, follow this articl
Types of volumes are described in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/).
Step 1 - Create Cluster Using **docker-volume-size**[](#step-1-create-cluster-using-docker-volume-size "Permalink to this headline")
Step 1 - Create Cluster Using **docker-volume-size**[🔗](#step-1-create-cluster-using-docker-volume-size "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------
You are going to create a new cluster called *dockerspace* that uses the **docker-volume-size** parameter, with the following command:
@@ -95,7 +95,7 @@ As specified during creation, *docker-volumes* have a size of 50 GB each.
In this step, you have created a new cluster with docker storage turned on and then verified that the main difference lies in the creation of volumes for the cluster.
Step 2 - Create Pod Manifest[](#step-2-create-pod-manifest "Permalink to this headline")
Step 2 - Create Pod Manifest[🔗](#step-2-create-pod-manifest "Permalink to this headline")
-----------------------------------------------------------------------------------------
To create a pod, you need a file in *yaml* format that defines the parameters of the pod. Use the command
@@ -141,7 +141,7 @@ Besides *emptyDir*, about a dozen other volume types could have been used here:
In this step, you have prepared pod manifest with which you will create the pod in the next step.
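A minimal manifest of the kind prepared here, using *emptyDir* (the image, names and mount path are illustrative, not the article's exact manifest):

```yaml
# Illustrative pod manifest with an emptyDir volume; files written under
# /cache survive container crashes for the lifetime of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```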
Step 3 - Create a Pod on Node **0** of *dockerspace*[](#step-3-create-a-pod-on-node-0-of-dockerspace "Permalink to this headline")
Step 3 - Create a Pod on Node **0** of *dockerspace*[🔗](#step-3-create-a-pod-on-node-0-of-dockerspace "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------
In this step you will create a new pod on node **0** of *dockerspace* cluster.
@@ -213,7 +213,7 @@ In this step, you have created a new pod on cluster *dockerspace* and it is runn
In the next step, you will enter the container and start issuing commands just like you would in any other Linux environment.
Step 4 - Executing *bash* Commands in the Container[](#step-4-executing-bash-commands-in-the-container "Permalink to this headline")
Step 4 - Executing *bash* Commands in the Container[🔗](#step-4-executing-bash-commands-in-the-container "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------
In this step, you will start the **bash** shell in the container, which in Linux is equivalent to starting the operating system:
@@ -263,7 +263,7 @@ lists sizes of files and directories in a human fashion (the usual meaning of pa
In this step, you have activated the container operating system.
Step 5 - Saving a File Into Persistent Storage[](#step-5-saving-a-file-into-persistent-storage "Permalink to this headline")
Step 5 - Saving a File Into Persistent Storage[🔗](#step-5-saving-a-file-into-persistent-storage "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
In this step you are going to test the longevity of files on persistent storage. You will first
@@ -317,7 +317,7 @@ That will first kill the container and then exit its command line.
In this step, you have created a file and killed the container that contains it. This lays the groundwork for testing whether files survive a container crash.
Step 6 - Check the File Saved in Previous Step[](#step-6-check-the-file-saved-in-previous-step "Permalink to this headline")
Step 6 - Check the File Saved in Previous Step[🔗](#step-6-check-the-file-saved-in-previous-step "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
In this step, you will find out whether the file *test-file* is still existing.
@@ -339,7 +339,7 @@ Yes, the file *test-file* is still there. The persistent storage for the pod con
In this step, you have entered the pod again and found that the file has survived intact. That was expected, as volumes of type *emptyDir* survive container crashes as long as the pod exists.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
*emptyDir* survives container crashes but disappears when the pod disappears. Other volume types may survive the loss of pods better. For instance:

View File

@@ -4,36 +4,36 @@
* [Automatic Kubernetes cluster upgrade on CloudFerro Cloud OpenStack Magnum](Automatic-Kubernetes-cluster-upgrade-on-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum](Autoscaling-Kubernetes-Cluster-Resources-on-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [Installation step 1 Getting EC2 client credentials[](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")](Backup-of-Kubernetes-Cluster-using-Velero.html.md)
* [Installation step 1 Getting EC2 client credentials[🔗](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")](Backup-of-Kubernetes-Cluster-using-Velero.html.md)
* [CICD pipelines with GitLab on CloudFerro Cloud Kubernetes building a Docker image](CICD-pipelines-with-GitLab-on-CloudFerro-Cloud-Kubernetes-building-a-Docker-image.html.md)
* [Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-CloudFerro-Cloud.html.md)
* [Prepare Your Environment[🔗](#prepare-your-environment "Permalink to this headline")](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-CloudFerro-Cloud.html.md)
* [Required for Load Balancer v2 API](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-CloudFerro-Cloud.html.md)
* [Create and access NFS server from Kubernetes on CloudFerro Cloud](Create-and-access-NFS-server-from-Kubernetes-on-CloudFerro-Cloud.html.md)
* [Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [Network plugins for Kubernetes clusters[](#network-plugins-for-kubernetes-clusters "Permalink to this headline")](Default-Kubernetes-cluster-templates-in-CloudFerro-Cloud-Cloud.html.md)
* [Network plugins for Kubernetes clusters[🔗](#network-plugins-for-kubernetes-clusters "Permalink to this headline")](Default-Kubernetes-cluster-templates-in-CloudFerro-Cloud-Cloud.html.md)
* [Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud](Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-CloudFerro-Cloud.html.md)
* [export KUBECONFIG=<your-kubeconfig-file-location>](Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-CloudFerro-Cloud-Cloud.html.md)
* [Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html.md)
* [Verify the vGPU installation[](#verify-the-vgpu-installation "Permalink to this headline")](Deploying-vGPU-workloads-on-CloudFerro-Cloud-Kubernetes.html.md)
* [Verify the vGPU installation[🔗](#verify-the-vgpu-installation "Permalink to this headline")](Deploying-vGPU-workloads-on-CloudFerro-Cloud-Kubernetes.html.md)
* [Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster](Enable-Kubeapps-app-launcher-on-CloudFerro-Cloud-Magnum-Kubernetes-cluster.html.md)
* [GitOps with Argo CD on CloudFerro Cloud Kubernetes](GitOps-with-Argo-CD-on-CloudFerro-Cloud-Kubernetes.html.md)
* [name of the deployment, must be in the same namespace as ScaledObject](HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-CloudFerro-Cloud.html.md)
* [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [How To Create API Server LoadBalancer for Kubernetes Cluster On CloudFerro Cloud OpenStack Magnum](How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html.md)
* [Reproduce Commands Through Cut & Paste[](#reproduce-commands-through-cut-paste "Permalink to this headline")](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [Reproduce Commands Through Cut & Paste[🔗](#reproduce-commands-through-cut-paste "Permalink to this headline")](How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum](How-to-Create-a-Kubernetes-Cluster-Using-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [Define providers](How-to-create-Kubernetes-cluster-using-Terraform-on-CloudFerro-Cloud.html.md)
* [Preparation step 1 Create new project[](#preparation-step-1-create-new-project "Permalink to this headline")](How-to-install-Rancher-RKE2-Kubernetes-on-CloudFerro-Cloud-cloud.html.md)
* [Verification[](#verification "Permalink to this headline")](Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-CloudFerro-Cloud.html.md)
* [Preparation step 1 Create new project[🔗](#preparation-step-1-create-new-project "Permalink to this headline")](How-to-install-Rancher-RKE2-Kubernetes-on-CloudFerro-Cloud-cloud.html.md)
* [Verification[🔗](#verification "Permalink to this headline")](Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-CloudFerro-Cloud.html.md)
* [Install GitLab on CloudFerro Cloud Kubernetes](Install-GitLab-on-CloudFerro-Cloud-Kubernetes.html.md)
* [Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes](Install-and-run-Argo-Workflows-on-CloudFerro-Cloud-Magnum-Kubernetes.html.md)
* [Pandas](Install-and-run-Dask-on-a-Kubernetes-cluster-in-CloudFerro-Cloud-cloud.html.md)
* [Step 1. Create object storage bucket on WAW3-1[](#step-1-create-object-storage-bucket-on-waw3-1 "Permalink to this headline")](Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-CloudFerro-Cloud.html.md)
* [Step 1. Create object storage bucket on WAW3-1[🔗](#step-1-create-object-storage-bucket-on-waw3-1 "Permalink to this headline")](Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-CloudFerro-Cloud.html.md)
* [Vault Helm Chart Value Overrides](Installing-HashiCorp-Vault-on-CloudFerro-Cloud-Magnum.html.md)
* [Installing JupyterHub on Magnum Kubernetes cluster in CloudFerro Cloud cloud](Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-CloudFerro-Cloud-cloud.html.md)
* [Kubernetes cluster observability with Prometheus and Grafana on CloudFerro Cloud](Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-CloudFerro-Cloud.html.md)
* [Ensure Docker trust - Step 1. Bypass Docker validating the domain certificate[](#ensure-docker-trust-step-1-bypass-docker-validating-the-domain-certificate "Permalink to this headline")](Private-container-registries-with-Harbor-on-CloudFerro-Cloud-Kubernetes.html.md)
* [Ensure Docker trust - Step 1. Bypass Docker validating the domain certificate[🔗](#ensure-docker-trust-step-1-bypass-docker-validating-the-domain-certificate "Permalink to this headline")](Private-container-registries-with-Harbor-on-CloudFerro-Cloud-Kubernetes.html.md)
* [Sealed Secrets on CloudFerro Cloud Kubernetes](Sealed-Secrets-on-CloudFerro-Cloud-Kubernetes.html.md)
* [Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum](Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)
* [Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum](Using-Kubernetes-Ingress-on-CloudFerro-Cloud-OpenStack-Magnum.html.md)