link icon replaced

@ -1,4 +1,4 @@
-Adding and editing Organization[](#adding-and-editing-organization "Permalink to this headline")
+Adding and editing Organization[🔗](#adding-and-editing-organization "Permalink to this headline")
=================================================================================================

After logging into <https://portal.cloudferro.com/>, press the **Organization** button on the left menu bar.
@ -1,4 +1,4 @@
-Wallets and Contracts Management[](#wallets-and-contracts-management "Permalink to this headline")
+Wallets and Contracts Management[🔗](#wallets-and-contracts-management "Permalink to this headline")
===================================================================================================

After logging into <https://portal.cloudferro.com/>, press the **Wallets/Contracts** button on the left menu bar:
@ -1,4 +1,4 @@
-Cookie consent on CloudFerro Cloud[](#cookie-consent-on-brand-name "Permalink to this headline")
+Cookie consent on CloudFerro Cloud[🔗](#cookie-consent-on-brand-name "Permalink to this headline")
=================================================================================================

A *cookie* is a small text file that your browser stores in your local environment and later uses to track or recognize your activities on the site.
@ -8,7 +8,7 @@ Cookies are an essential tool for the remote site to deliver the best possible u

> * the site itself (if it uses its own cookies in a way that is detrimental to the user),
> * by many other sites that see available cookies and decide to gather reconnaissance about your surfing activities.

-Introducing Cookiebot site[](#introducing-cookiebot-site "Permalink to this headline")
+Introducing Cookiebot site[🔗](#introducing-cookiebot-site "Permalink to this headline")
---------------------------------------------------------------------------------------

CloudFerro Cloud uses [Cookiebot](https://www.cookiebot.com/) software to manage **cookie consent** from the user. It will show you all the cookies that your browser is storing, and you will be able to choose which types of cookies CloudFerro Cloud should take into account. Both Cookiebot and the CloudFerro Cloud site are [GDPR compliant](https://gdpr-info.eu/); however, CloudFerro Cloud also has its own [Privacy Policy](https://cloudferro.com/privacy-policy/) in effect.
@ -19,7 +19,7 @@ Note

You can directly interfere with cookies from your browser, operating system, network or VPN access software. This boils down to detecting, showing, hiding, tracking or removing access to certain types of cookies, and so on. These methods are, however, out of the scope of this article.

-Cookiebot window[](#cookiebot-window "Permalink to this headline")
+Cookiebot window[🔗](#cookiebot-window "Permalink to this headline")
-------------------------------------------------------------------

This is the Cookiebot window on CloudFerro Cloud:
@ -35,12 +35,12 @@ You will see it when visiting one of these sites for the first time:

Cookiebot is interactive and you can change your cookie preferences while using the site. If the consent for using cookies was withdrawn, you are also going to see the same starting Cookiebot window when visiting these sites after the change.

-Option Allow all[](#option-allow-all "Permalink to this headline")
+Option Allow all[🔗](#option-allow-all "Permalink to this headline")
-------------------------------------------------------------------

Clicking the **Allow all** button will do what it says: the site will record **all** types of cookies and, consequently, track your behaviour completely. This option will unleash the full power of the site and you will always be able to use all of its capabilities. For you as the user, it is also the easiest and fastest way of dealing with cookies on the site.
-Details view of available cookies[](#details-view-of-available-cookies "Permalink to this headline")
+Details view of available cookies[🔗](#details-view-of-available-cookies "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

To see the cookies that you can give your consent to, click on **Details**.

@ -51,7 +51,7 @@ There are five types of cookies and you may need to scroll down to see them all.

When shown for the first time, the left button will be labelled **Deny**. Choosing it will turn off all of the cookies apart from the **Necessary cookie** type, which by default cannot be turned off. If you do not accept that default, refrain from using the site.
-### Necessary cookies[](#necessary-cookies "Permalink to this headline")
+### Necessary cookies[🔗](#necessary-cookies "Permalink to this headline")

This is the most basic type of cookie and the site presumes you have already given consent to it. That is why the check button to the right of the row is already set to “ON”. Technically, you can try to remove the consent by clicking on that button, but you will be met with a message like this:

@ -61,7 +61,7 @@ You can also see additional details about that cookie type and the cookies it co
-### The number of cookies shown per category[](#the-number-of-cookies-shown-per-category "Permalink to this headline")
+### The number of cookies shown per category[🔗](#the-number-of-cookies-shown-per-category "Permalink to this headline")

The number of cookies that Cookiebot shows may vary widely, and will increase if the sites you visit are using:

@ -79,28 +79,28 @@ Some large content sites may use up to 30-40 cookies per visitor – that alone

If you delete some or all cookies, perhaps using the browser of your choice, the numbers Cookiebot shows will drop to almost zero (but with each visit to another site or sites, they are almost certain to grow again).
-### Preferences cookie type[](#preferences-cookie-type "Permalink to this headline")
+### Preferences cookie type[🔗](#preferences-cookie-type "Permalink to this headline")

Enabling this cookie permits the site to store preferences such as your preferred language or the region you are in.

-### Statistics cookie type[](#statistics-cookie-type "Permalink to this headline")
+### Statistics cookie type[🔗](#statistics-cookie-type "Permalink to this headline")

Used for storing anonymized statistics. Although your data are stored in the background of the site, these cookies will not be revealed to third parties (unless required by law).
-### Marketing cookie type[](#marketing-cookie-type "Permalink to this headline")
+### Marketing cookie type[🔗](#marketing-cookie-type "Permalink to this headline")

Used to create user profiles for sending advertising. If you opt out of this cookie type, you may miss some new features of the site or, eventually, miss out on promotional campaigns, sales offers and so on.

-### Unclassified cookie type[](#unclassified-cookie-type "Permalink to this headline")
+### Unclassified cookie type[🔗](#unclassified-cookie-type "Permalink to this headline")

All other types of cookies, if any, that have not yet been classified.
-How to give consent to cookie types[](#how-to-give-consent-to-cookie-types "Permalink to this headline")
+How to give consent to cookie types[🔗](#how-to-give-consent-to-cookie-types "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

Click the toggle buttons on the right side of the form window and, when you finish selecting, click **Allow selection** to confirm, or click **Allow all** to activate all of them.

-### About cookie consent[](#about-cookie-consent "Permalink to this headline")
+### About cookie consent[🔗](#about-cookie-consent "Permalink to this headline")

This option explains what cookies are and also provides links to [Privacy Policy](https://cloudferro.com/privacy-policy/) and, more specifically, to [Cookie Policy](https://cloudferro.com/cookie-policy/).
@ -108,7 +108,7 @@ This option explains what cookies are and also provides links to [Privacy Policy

You can still change cookie consent by clicking on **Customize**, which will lead you back to the **Details** tab (already explained above).

-Selecting the cookies preferences[](#selecting-the-cookies-preferences "Permalink to this headline")
+Selecting the cookies preferences[🔗](#selecting-the-cookies-preferences "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

Once you click either the **Allow selection** or **Allow all** button, the form will disappear and your selection will be fixed. To change it, click the Cookiebot icon in the lower left corner of the browser window.
@ -123,7 +123,7 @@ Clicking on **Withdraw your consent**, all types of cookies will be annulled exc

The **Change your consent** button will lead to the **Details** tab we already discussed, where you will be able to edit your cookie preferences.

-### What the consent data look like[](#what-the-consent-data-look-like "Permalink to this headline")
+### What the consent data look like[🔗](#what-the-consent-data-look-like "Permalink to this headline")

To see what your consent data look like, click on **Show details**:

@ -133,12 +133,12 @@ Each consent you give to the site, generates a unique consent ID, which, togethe

The cookie is saved on backend servers for 12 months. It is also saved in your browser, so that the website can automatically read and respect your consent on all subsequent page requests.
-Troubleshooting[](#troubleshooting "Permalink to this headline")
+Troubleshooting[🔗](#troubleshooting "Permalink to this headline")
-----------------------------------------------------------------

You can see the contents of the cookie file through various browser options, and also through a file viewer on your desktop computer. It is quite possible (but not at all advisable) to delete the cookie file outside of the browser. In particular, deleting the entire cookie by force will also delete the **necessary** part of the cookie. You may then lose access to the site, be forced to contact [Helpdesk and Support](Help-Desk-And-Support.html.md), and so on.

-Setting up cookies on CloudFerro Cloud subdomains[](#setting-up-cookies-on-brand-name-subdomains "Permalink to this headline")
+Setting up cookies on CloudFerro Cloud subdomains[🔗](#setting-up-cookies-on-brand-name-subdomains "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

Cookiebot procedures are exactly the same on subdomains and on the dashboard.
@ -1,4 +1,4 @@
-Editing profile[](#editing-profile "Permalink to this headline")
+Editing profile[🔗](#editing-profile "Permalink to this headline")
=================================================================

After logging into <https://portal.cloudferro.com/>, press the **My Profile** button on the left menu bar.
@ -1,4 +1,4 @@
-Forgotten Password[](#forgotten-password "Permalink to this headline")
+Forgotten Password[🔗](#forgotten-password "Permalink to this headline")
=======================================================================

Go to the login page and click the **Forgot Password** button.
@ -1,4 +1,4 @@
-Helpdesk and Support[](#helpdesk-and-support "Permalink to this headline")
+Helpdesk and Support[🔗](#helpdesk-and-support "Permalink to this headline")
===========================================================================

After logging into <https://portal.cloudferro.com/>, press the **Tickets** button on the left menu bar to create or manage your tickets.
@ -1,7 +1,7 @@
-How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication[](#how-to-activate-openstack-cli-access-to-brand-name-cloud-using-one-or-two-factor-authentication "Permalink to this headline")
+How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication[🔗](#how-to-activate-openstack-cli-access-to-brand-name-cloud-using-one-or-two-factor-authentication "Permalink to this headline")
========================================================================================================================================================================================================================================

-One-factor and two-factor authentication for activating command line access to the cloud[](#one-factor-and-two-factor-authentication-for-activating-command-line-access-to-the-cloud "Permalink to this headline")
+One-factor and two-factor authentication for activating command line access to the cloud[🔗](#one-factor-and-two-factor-authentication-for-activating-command-line-access-to-the-cloud "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

To log into a site, you usually provide a user name and email address during the creation of the account and then use those same data to enter the site. You provide those data once, which is why this is called “one-factor” authentication. Two-factor authentication requires the same but considers it to be only the first step; on the CloudFerro Cloud cloud, the second step is
@ -11,7 +11,7 @@ To log into a site, you usually provide user name and email address during the c

Cloud parameters for authentication and, later, OpenStack CLI access are found in a so-called *RC file*. This article will help you download it and use it, first to authenticate and then to access the cloud using OpenStack CLI commands.

-What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
+What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to download the RC file
@ -23,7 +23,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

> * Testing the connection
> * Resolving errors

-Prerequisites[](#prerequisites "Permalink to this headline")
+Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -50,10 +50,10 @@ Install and run WSL (Linux under Windows)

Install OpenStackClient on Linux
: [How to install OpenStackClient for Linux on CloudFerro Cloud](../openstackcli/How-to-install-OpenStackClient-for-Linux-on-CloudFerro-Cloud.html.md).

-How to download the RC file[](#how-to-download-the-rc-file "Permalink to this headline")
+How to download the RC file[🔗](#how-to-download-the-rc-file "Permalink to this headline")
-----------------------------------------------------------------------------------------
-### Location of the link to RC file[](#location-of-the-link-to-rc-file "Permalink to this headline")
+### Location of the link to RC file[🔗](#location-of-the-link-to-rc-file "Permalink to this headline")

**Click on account name**

@ -75,7 +75,7 @@ Navigate to **API Access** -> **Download OpenStack RC File**. Depending on the c

Option **OpenStack clouds.yaml File** is out of the scope of this article.
-### Which OpenStack RC file to download[](#which-openstack-rc-file-to-download "Permalink to this headline")
+### Which OpenStack RC file to download[🔗](#which-openstack-rc-file-to-download "Permalink to this headline")

Choose the appropriate option, depending on the type of account:

@ -92,7 +92,7 @@ By way of example, let the downloaded RC file name be **cloud\_00734\_1-openrc-2

> * rename it and
> * move it to the folder in which you are going to activate it.
-The contents of the downloaded RC file[](#the-contents-of-the-downloaded-rc-file "Permalink to this headline")
+The contents of the downloaded RC file[🔗](#the-contents-of-the-downloaded-rc-file "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

The RC file sets up *environment variables* which the OpenStack CLI client uses to authenticate to the cloud. By convention, these variables are in upper case and start with **OS\_**: **OS\_TENANT\_ID**, **OS\_PROJECT\_NAME** etc. For example, in the case of one-factor authentication, the RC file will ask for the password and store it in a variable called **OS\_PASSWORD**.
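As a minimal sketch of such a file (every value below is a placeholder, not a real cloud parameter), a non-2FA RC file boils down to a series of **export** statements plus a password prompt:

```shell
#!/usr/bin/env bash
# Illustrative non-2FA RC file -- all values are placeholders.
export OS_AUTH_URL="https://keystone.example.com/v3"   # Identity (Keystone) endpoint
export OS_PROJECT_ID="0123456789abcdef"                # a.k.a. OS_TENANT_ID
export OS_PROJECT_NAME="cloud_00000_1"
export OS_USERNAME="user@example.com"
export OS_USER_DOMAIN_NAME="cloud_00000"
export OS_IDENTITY_API_VERSION=3
export OS_INTERFACE="public"
# The real file prompts for the password instead of hard-coding it:
# read -sr OS_PASSWORD_INPUT
# export OS_PASSWORD="$OS_PASSWORD_INPUT"
```

Your downloaded file will contain your own project and user values in place of these.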
@ -103,7 +103,7 @@ Below is an example content of an RC file which does not use 2FA:

A file which supports 2FA will have additional pieces of code for providing the second factor of authentication.
-How to activate the downloaded RC file[](#how-to-activate-the-downloaded-rc-file "Permalink to this headline")
+How to activate the downloaded RC file[🔗](#how-to-activate-the-downloaded-rc-file "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

The activation procedure will depend on the operating system you are working with:

@ -131,7 +131,7 @@ Note that in both cases **./** means “use the file in this very folder you alr

See Prerequisite No. 3, which describes in more detail how to run **.sh** files using various scenarios on Windows.
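On Linux or WSL, activation boils down to sourcing the file in the current shell. A minimal sketch (the file name is the example one used in this article; adjust it to your download):

```shell
# "source" executes the RC file in the CURRENT shell, so the OS_* variables
# it exports stay set after it finishes ("." is an equivalent spelling).
RC=./cloud_00734_1-openrc.sh   # adjust to your (possibly renamed) RC file
if [ -f "$RC" ]; then
  source "$RC"
  echo "sourced $RC"
else
  echo "RC file not found in this folder"
fi
```

Running the file as a normal program (`bash ./cloud_00734_1-openrc.sh`) would set the variables in a child shell that exits immediately, which is why sourcing is required.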
-### Running with one-factor authentication[](#running-with-one-factor-authentication "Permalink to this headline")
+### Running with one-factor authentication[🔗](#running-with-one-factor-authentication "Permalink to this headline")

The activated **.sh** file will run in a terminal window (the user name is grayed out for privacy reasons):
@ -141,7 +141,7 @@ Enter the password, either by typing it in or by pasting it in the way your term

If your account has only one-factor authentication, this is all you need to do to start running commands from the command line.
-### Two-factor authentication[](#two-factor-authentication "Permalink to this headline")
+### Two-factor authentication[🔗](#two-factor-authentication "Permalink to this headline")

If your file supports two-factor authentication, the terminal will first require the password, exactly as in the case of one-factor authentication. Then you will get a prompt for the second factor, which usually comes in the shape of a six-digit one-time password:
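The six-digit code is a TOTP (time-based one-time password, RFC 6238) derived from a shared secret and the current time, so any TOTP implementation fed the same secret produces the same codes as your phone app. As an illustration, assuming the **oathtool** utility from the oath-toolkit package is available (the Base32 secret below is a made-up example, not a real one):

```shell
# Print the current six-digit TOTP code for a (made-up) Base32 secret.
# Guarded so the sketch degrades gracefully when oathtool is absent.
if command -v oathtool >/dev/null 2>&1; then
  oathtool --totp -b "JBSWY3DPEHPK3PXP"
else
  echo "oathtool not installed (package: oath-toolkit)"
fi
```

In normal use you do not need any of this: your authenticator app generates the code for you.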
@ -166,14 +166,14 @@ This six-digit number will be regenerated every thirty seconds. Enter the latest

-Duration of life for environment variables set by sourcing the RC file[](#duration-of-life-for-environment-variables-set-by-sourcing-the-rc-file "Permalink to this headline")
+Duration of life for environment variables set by sourcing the RC file[🔗](#duration-of-life-for-environment-variables-set-by-sourcing-the-rc-file "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

When you source the file, environment variables are set for your current shell only. To prove it, open two terminal windows and source the RC file in one of them but not in the other; you will not be able to authenticate from that second terminal window.

That is why you will need to activate your RC file each time you start a new terminal session. Once authenticated, and while that terminal window is open, you can use it to issue OpenStack CLI commands at will.
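A quick way to see this scoping in action (the URL is a placeholder): export a variable the way the RC file would, then ask a shell started with a clean environment, standing in for a brand-new terminal window, whether it can see it:

```shell
export OS_AUTH_URL="https://keystone.example.com/v3"   # as the RC file would
echo "this shell:  ${OS_AUTH_URL:-unset}"
# A new terminal window starts its own shell that never saw the export;
# simulate that with an empty environment:
env -i /bin/sh -c 'echo "fresh shell: ${OS_AUTH_URL:-unset}"'
```

The second line prints `unset`, which is exactly why each new terminal needs the RC file sourced again.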
-Testing the connection[](#testing-the-connection "Permalink to this headline")
+Testing the connection[🔗](#testing-the-connection "Permalink to this headline")
-------------------------------------------------------------------------------

If you have not already done so, install the OpenStack client using one of the links in Prerequisite No. 3. To verify access, execute the following command, which lists the flavors available in the CloudFerro Cloud cloud:
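For example, in the same terminal in which you sourced the RC file (guarded here so the sketch also reports a missing client):

```shell
# Lists compute flavors visible to your project; requires a sourced RC file.
if command -v openstack >/dev/null 2>&1; then
  openstack flavor list || echo "command failed - check that the RC file was sourced in this terminal"
else
  echo "OpenStackClient not installed - see Prerequisite No. 3"
fi
```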
@ -187,10 +187,10 @@ You should get output similar to this:

-Resolving errors[](#resolving-errors "Permalink to this headline")
+Resolving errors[🔗](#resolving-errors "Permalink to this headline")
-------------------------------------------------------------------

-### jq not installed[](#jq-not-installed "Permalink to this headline")
+### jq not installed[🔗](#jq-not-installed "Permalink to this headline")

**jq** is an app for parsing JSON input. In this context, it serves to process the output from the server. It comes installed on most Linux distros. If you do not have it installed on your computer, you may get a message like this:
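A quick way to check whether **jq** is present and working (the sample JSON is arbitrary):

```shell
if command -v jq >/dev/null 2>&1; then
  # Extract a field from a throwaway JSON document:
  echo '{"name": "eo1.xsmall", "vcpus": 1}' | jq -r .name
else
  echo "jq is missing - install it, e.g. 'sudo apt install jq' on Debian/Ubuntu"
fi
```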
@ -200,7 +200,7 @@ To resolve, [download from the official support page and follow the directions t

If you are using Git Bash on Windows and run into this error, Step 6 of the Git Bash article from **Prerequisite No. 3** has the proper instructions for installing **jq**.

-### 2FA accounts: entering a wrong password and/or six-digit code[](#fa-accounts-entering-a-wrong-password-and-or-six-digit-code "Permalink to this headline")
+### 2FA accounts: entering a wrong password and/or six-digit code[🔗](#fa-accounts-entering-a-wrong-password-and-or-six-digit-code "Permalink to this headline")

If you enter a wrong six-digit code, you will get the following error:
@ -215,7 +215,7 @@ Call to Keycloak failed with code 401 and message

If that is the case, simply activate the RC file again as before and type in the correct credentials.

-### 2FA accounts: lost Internet connection[](#fa-accounts-lost-internet-connection "Permalink to this headline")
+### 2FA accounts: lost Internet connection[🔗](#fa-accounts-lost-internet-connection "Permalink to this headline")

Activating a 2FA RC file requires access to the CloudFerro Cloud account service, because it involves not only setting variables but also obtaining an appropriate token.
@ -230,7 +230,7 @@ It will be followed by an empty line and you will be returned to your command pr

To resolve this issue, connect to the Internet and try to activate the RC file again. If you are certain that you have an Internet connection, it could mean that the CloudFerro Cloud account service is down. If no downtime was announced for it, please contact CloudFerro Cloud customer support: [Helpdesk and Support](Help-Desk-And-Support.html.md).

-### Non-2FA accounts: entering a wrong password[](#non-2fa-accounts-entering-a-wrong-password "Permalink to this headline")
+### Non-2FA accounts: entering a wrong password[🔗](#non-2fa-accounts-entering-a-wrong-password "Permalink to this headline")

If your account does not have two-factor authentication and you entered a wrong password, you will **not** get an error. However, if you try to execute a command like **openstack flavor list**, you will get an error similar to this:
@ -243,7 +243,7 @@ Instead of **x** characters, you will see a string of characters.

To resolve, activate your file again and enter the correct password.

-### Using the wrong file[](#using-the-wrong-file "Permalink to this headline")
+### Using the wrong file[🔗](#using-the-wrong-file "Permalink to this headline")

If you have 2FA enabled for your account but have tried to activate the non-2FA version of the RC file, executing, say, the command **openstack flavor list** will give you the following error:
@ -254,7 +254,7 @@ Unrecognized schema in response body. (HTTP 401)

If that is the case, download the correct file if needed and use it.

-What To Do Next[](#what-to-do-next "Permalink to this headline")
+What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

With the appropriate version of the RC file activated, you should be able to create and use
@ -1,9 +1,9 @@
-How to buy credits using Pay Per Use wallet on CloudFerro Cloud[](#how-to-buy-credits-using-pay-per-use-wallet-on-brand-name "Permalink to this headline")
+How to buy credits using Pay Per Use wallet on CloudFerro Cloud[🔗](#how-to-buy-credits-using-pay-per-use-wallet-on-brand-name "Permalink to this headline")
===========================================================================================================================================================

In this article you will learn how to use a PPU (Pay Per Use) wallet in order to cover the expenses of your account at CloudFerro Cloud.

-What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
+What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Check for the correct tax ID or VAT number

@ -12,7 +12,7 @@ What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this h

> * Choose payment method
> * Check payment reports

-Prerequisites[](#prerequisites "Permalink to this headline")
+Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

@ -42,7 +42,7 @@ FIXED-TERM (Fixed Term Contract)

In case you have not entered **organization** data yet, see the article [Adding and editing Organization](Adding-Editing-Organizations.html.md).

-Step 1 Check for the correct tax ID or VAT number[](#step-1-check-for-the-correct-tax-id-or-vat-number "Permalink to this headline")
+Step 1 Check for the correct tax ID or VAT number[🔗](#step-1-check-for-the-correct-tax-id-or-vat-number "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------
Field **Company tax ID / VAT number** must be filled in with correct data.

@ -55,7 +55,7 @@ Without it, you won’t be able to make an order. An error like this one will ap

-Step 2 Select PPU as your way of payment[](#step-2-select-ppu-as-your-way-of-payment "Permalink to this headline")
+Step 2 Select PPU as your way of payment[🔗](#step-2-select-ppu-as-your-way-of-payment "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
On this link, you choose the actual contract type: <https://ecommerce.cloudferro.com/>

@ -64,7 +64,7 @@ On this link, you choose the actual contract type: <https://ecommerce.cloudferro

Click on **Buy now** (assuming you will choose Pay Per Use); otherwise, click on **Choose Fixed term** to opt for **Fixed term payments**.

-Step 3 Define how many credits for PPU service[](#step-3-define-how-many-credits-for-ppu-service "Permalink to this headline")
+Step 3 Define how many credits for PPU service[🔗](#step-3-define-how-many-credits-for-ppu-service "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------
Either by clicking the **Buy now** button or by visiting the following link directly: <https://ecommerce.cloudferro.com/checkout/pay-per-use/>, you will start the process of paying for PPU.

@ -77,7 +77,7 @@ Let’s say that you want to buy for 250 units, where each unit costs 1 Euro.

If you have only one wallet, the **default wallet** will be offered automatically. If, however, you have several wallets, choose the proper one for this order.

-Step 4 Choose payment method[](#step-4-choose-payment-method "Permalink to this headline")
+Step 4 Choose payment method[🔗](#step-4-choose-payment-method "Permalink to this headline")
-------------------------------------------------------------------------------------------
Check whether the information about your organization is correct and proceed to payment.

@ -100,7 +100,7 @@ If you chose direct bank transfer, scroll down to the payment section and click

-Step 5 Check payment reports[](#step-5-check-payment-reports "Permalink to this headline")
+Step 5 Check payment reports[🔗](#step-5-check-payment-reports "Permalink to this headline")
-------------------------------------------------------------------------------------------
Check whether the invoice amount matches the actual balance. The invoice is in the upper right corner, next to the eye icon marked with a red line.

@ -115,7 +115,7 @@ Check your wallet as well: <https://portal.cloudferro.com/panel/orders/pay-per-u

-What To Do Next[](#what-to-do-next "Permalink to this headline")
+What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

There are two ways of reaching us in case of any problems:
@ -1,9 +1,9 @@
-How to manage TOTP authentication on CloudFerro Cloud[](#how-to-manage-totp-authentication-on-brand-name "Permalink to this headline")
+How to manage TOTP authentication on CloudFerro Cloud[🔗](#how-to-manage-totp-authentication-on-brand-name "Permalink to this headline")
=======================================================================================================================================

In order to use your CloudFerro Cloud account, you need to set a password and an additional factor of authentication. For the latter, the TOTP algorithm is used. In this article you will learn how to manage your TOTP configuration.

-What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
+What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
* Important information about TOTP

@ -12,7 +12,7 @@ What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this h

* Adding a new TOTP secret key
* Contacting customer support

-Prerequisites[](#prerequisites "Permalink to this headline")
+Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -1,11 +1,11 @@
How to start using dashboard services on CloudFerro Cloud[](#how-to-start-using-dashboard-services-on-brand-name "Permalink to this headline")
How to start using dashboard services on CloudFerro Cloud[🔗](#how-to-start-using-dashboard-services-on-brand-name "Permalink to this headline")
===============================================================================================================================================

When you try to use the CloudFerro Cloud dashboard at <https://portal.cloudferro.com/>, you will see advice on the order of operations to start using the dashboard properly.

![Image](instruction.png)

Step 1 Set up the organization[](#step-1-set-up-the-organization "Permalink to this headline")
Step 1 Set up the organization[🔗](#step-1-set-up-the-organization "Permalink to this headline")
-----------------------------------------------------------------------------------------------

1. Go to the organization, add it by providing the name, details and a valid EU VAT number/TAX ID assigned to your country.
@ -16,7 +16,7 @@ The option to use is **Configuration** -> **Organization**.

See article [Adding and editing Organization](Adding-Editing-Organizations.html.md).

Step 2 Enable payment options[](#step-2-enable-payment-options "Permalink to this headline")
Step 2 Enable payment options[🔗](#step-2-enable-payment-options "Permalink to this headline")
---------------------------------------------------------------------------------------------

Go to the [eCommerce site](https://ecommerce.cloudferro.com/) and top up your wallet with the required funds.
@ -25,7 +25,7 @@ Go to the [eCommerce site](https://ecommerce.cloudferro.com/) and top up your wa

See article [How to buy credits using Pay Per Use wallet on CloudFerro Cloud](How-to-buy-credits-using-pay-per-use-wallet-on-CloudFerro-Cloud.html.md).

Step 3 Activate the project[](#step-3-activate-the-project "Permalink to this headline")
Step 3 Activate the project[🔗](#step-3-activate-the-project "Permalink to this headline")
-----------------------------------------------------------------------------------------

Go to “Cloud projects” and activate the project in the cloud/region you are interested in. The options to choose are **Billing and Reporting** -> **Cloud projects/Wallets**.
@ -38,7 +38,7 @@ You may want to work with all these clouds at the same time, maybe with differen

It is up to you to activate all these clouds at once… or just one… or anything in between. The regions/clouds you activate in the dashboard can be seen in the Horizon dashboard, in the menu.

Step 4 Start using the chosen cloud in Horizon[](#step-4-start-using-the-chosen-cloud-in-horizon "Permalink to this headline")
Step 4 Start using the chosen cloud in Horizon[🔗](#step-4-start-using-the-chosen-cloud-in-horizon "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

To start using the services, choose the proper **Cloud Panel** from the **Management Interfaces**.

@ -1,4 +1,4 @@
Inviting new user to your Organization[](#inviting-new-user-to-your-organization "Permalink to this headline")
Inviting new user to your Organization[🔗](#inviting-new-user-to-your-organization "Permalink to this headline")
===============================================================================================================

Important

@ -1,4 +1,4 @@
Privacy policy for clients[](#privacy-policy-for-clients "Permalink to this headline")
Privacy policy for clients[🔗](#privacy-policy-for-clients "Permalink to this headline")
=======================================================================================

If you are not redirected, [click here](https://cloudferro.com/cloudferro-privacy-policy-for-clients/).
@ -1,4 +1,4 @@
Registration and Setting up an Account[](#registration-and-setting-up-an-account "Permalink to this headline")
Registration and Setting up an Account[🔗](#registration-and-setting-up-an-account "Permalink to this headline")
===============================================================================================================

Go to the <https://portal.cloudferro.com/> site and press the **CREATE ACCOUNT** button.

@ -1,4 +1,4 @@
Removing user from Organization[](#removing-user-from-organization "Permalink to this headline")
Removing user from Organization[🔗](#removing-user-from-organization "Permalink to this headline")
=================================================================================================

After logging into <https://portal.cloudferro.com/> press the **Sub-accounts** button on the left bar menu to check the list of members of your Organization.

@ -1,4 +1,4 @@
Services[](#services "Permalink to this headline")
Services[🔗](#services "Permalink to this headline")
===================================================

After logging into <https://portal.cloudferro.com/> press the **Active services** button on the left bar menu.
@ -9,7 +9,7 @@ In this tab you are able to filter your services by Project or by Product.

You can also check what type of contract or billing mode is assigned to your services. For more details please visit /accountmanagement/Accounts-and-Projects-Management.

How to change assigned contract[](#how-to-change-assigned-contract "Permalink to this headline")
How to change assigned contract[🔗](#how-to-change-assigned-contract "Permalink to this headline")
-------------------------------------------------------------------------------------------------

**PAY PER USE** - the user can assign a wallet to a specific project in the **Accounts** tab

@ -1,21 +1,21 @@
Tenant manager users and roles on CloudFerro Cloud[](#tenant-manager-users-and-roles-on-brand-name "Permalink to this headline")
Tenant manager users and roles on CloudFerro Cloud[🔗](#tenant-manager-users-and-roles-on-brand-name "Permalink to this headline")
=================================================================================================================================

Differences between OpenStack User Roles and Tenant Manager’s Roles[](#differences-between-openstack-user-roles-and-tenant-manager-s-roles "Permalink to this headline")
Differences between OpenStack User Roles and Tenant Manager’s Roles[🔗](#differences-between-openstack-user-roles-and-tenant-manager-s-roles "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------

An OpenStack role is a personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges. OpenStack roles are defined for each user and each project independently.

A Tenant Manager role, on the other hand, defines whether a user should have the ability to manage an organization via the Tenant Manager or have access to OpenStack.

What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * The difference between User Roles and Tenant Manager Role
> * The three basic roles an organization administrator can assign
> * Show how to add a **member+** role, which can have access to OpenStack and be used for managing projects

Users and Roles in the Tenant Manager[](#users-and-roles-in-the-tenant-manager "Permalink to this headline")
Users and Roles in the Tenant Manager[🔗](#users-and-roles-in-the-tenant-manager "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

After logging into <https://portal.cloudferro.com/> click on the **Sub-accounts** button on the left bar menu.
@ -33,7 +33,7 @@ As an *organization administrator* you can assign one of the following roles to
> * **member** - default user with basic privileges.
> * **member+** - the same as **member** but has OpenStack access and can manage projects.

Adding member+ user to your project in OpenStack using Horizon interface[](#adding-member-user-to-your-project-in-openstack-using-horizon-interface "Permalink to this headline")
Adding member+ user to your project in OpenStack using Horizon interface[🔗](#adding-member-user-to-your-project-in-openstack-using-horizon-interface "Permalink to this headline")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Users with the role of **member+** have access to OpenStack and can be enabled to manage your organization projects. They cannot, however, manage the organization itself.
@ -62,7 +62,7 @@ To add a **member+** user to the project, follow these steps:

**7.** The next time the user logs into OpenStack Horizon at <https://horizon.cloudferro.com>, the suitable access to the project will be granted.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

The article [Inviting new user to your Organization](Inviting-New-User.html.md) shows how to invite a new user.

@ -1,4 +1,4 @@
Two-Factor Authentication to CloudFerro Cloud site using mobile application[](#two-factor-authentication-to-brand-name-site-using-mobile-application "Permalink to this headline")
Two-Factor Authentication to CloudFerro Cloud site using mobile application[🔗](#two-factor-authentication-to-brand-name-site-using-mobile-application "Permalink to this headline")
===================================================================================================================================================================================

Warning
@ -27,7 +27,7 @@ You will first have to install one of the following two mobile applications, for

We can use “mobile authenticator” as a generic term for a mobile app that can help authenticate with the account.

Which One to Use – FreeOTP or Google Authenticator?[](#which-one-to-use-freeotp-or-google-authenticator "Permalink to this headline")
Which One to Use – FreeOTP or Google Authenticator?[🔗](#which-one-to-use-freeotp-or-google-authenticator "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------

You can use FreeOTP with Google accounts instead of the Google Authenticator app.
@ -44,7 +44,7 @@ Warning

If you lose access to QR codes and cannot log into the Horizon site for CloudFerro Cloud, ask Support service to help you by sending email to the following address: [support@cloudferro.com](mailto:support@cloudferro.com).

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to start using the mobile authenticator
@ -52,7 +52,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * How to set up FreeOTP app and connect it to your CloudFerro Cloud account
> * How to get a new code each time you want to enter the site

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

Use only one of the four possible combinations for two apps and two app stores.
@ -79,7 +79,7 @@ You should install the authenticator app **before** trying to log into the Cloud

You are now going to download, install and use the FreeOTP app to authenticate to the CloudFerro Cloud site.

Step 1 Download and Install FreeOTP from the App Store[](#step-1-download-and-install-freeotp-from-the-app-store "Permalink to this headline")
Step 1 Download and Install FreeOTP from the App Store[🔗](#step-1-download-and-install-freeotp-from-the-app-store "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------

Using the App Store icon from the desktop of your iOS device, locate the app called **freeotp**. A screen like this will appear:
@ -102,7 +102,7 @@ Note

FreeOTP can also use tokens to secure access to the remote site. The CloudFerro Cloud site uses a QR code, so that is what you will use in this tutorial. (Both “token” and “QR scan” denote a secure connection to the site, but use different techniques in the process.)

Step 2 Scan QR and Create Brand[](#step-2-scan-qr-and-create-brand "Permalink to this headline")
Step 2 Scan QR and Create Brand[🔗](#step-2-scan-qr-and-create-brand "Permalink to this headline")
-------------------------------------------------------------------------------------------------

Select a brand, which means select an icon that will make your tokens stand out graphically. If you employ this app only to access CloudFerro Cloud, you may select whichever icon you want.
@ -129,7 +129,7 @@ The QR code will appear on screen when you first try to log into the CloudFerro

[](../_images/eefa_qr_screen_creodias.png)

Step 3 Create a Six-digit Code to Enter Into the Login Screen[](#step-3-create-a-six-digit-code-to-enter-into-the-login-screen "Permalink to this headline")
Step 3 Create a Six-digit Code to Enter Into the Login Screen[🔗](#step-3-create-a-six-digit-code-to-enter-into-the-login-screen "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

Finally, you will see a row within the FreeOTP app, with the icon you chose and with the code that will appear automatically. For instance, the code is **289582** and that is the code that you need to enter when the site asks you for *One-time code*.
@ -146,7 +146,7 @@ Tapping on any of these will produce the six-digit code that you have to type in

You are now ready to log into the CloudFerro Cloud site using the two-factor authentication.

How to Start Using the Mobile Authenticator With Your Account[](#how-to-start-using-the-mobile-authenticator-with-your-account "Permalink to this headline")
How to Start Using the Mobile Authenticator With Your Account[🔗](#how-to-start-using-the-mobile-authenticator-with-your-account "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

Use the usual link <https://horizon.cloudferro.com> to log into your CloudFerro Cloud account and choose CloudFerro Cloud in the input menu.
@ -171,7 +171,7 @@ You can use the field **Device Name** to remind yourself on which device was the

Click on **Submit** and you will be brought back to the **Sign in** screen from the beginning:

Logging Into the Site Once the Two-Factor Authentication is Installed[](#logging-into-the-site-once-the-two-factor-authentication-is-installed "Permalink to this headline")
Logging Into the Site Once the Two-Factor Authentication is Installed[🔗](#logging-into-the-site-once-the-two-factor-authentication-is-installed "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Here is the workflow in one place, with all of the screens repeated for easy reference.
@ -200,7 +200,7 @@ Note

If the FreeOTP app is in the foreground on the mobile device while you are submitting the username and password, the app will react automatically and the proper six-digit code will appear on its own on the authenticator device.

### What To Do Next[](#what-to-do-next "Permalink to this headline")
### What To Do Next[🔗](#what-to-do-next "Permalink to this headline")

As mentioned in the beginning, you can use your computer for two-factor authentication – see article [Two-Factor Authentication to CloudFerro Cloud site using KeePassXC on desktop](Using-KeePassXC-for-Two-Factor-Authentication-on-CloudFerro-Cloud.html.md).

@ -1,4 +1,4 @@
Two-Factor Authentication to CloudFerro Cloud site using KeePassXC on desktop[](#two-factor-authentication-to-brand-name-site-using-keepassxc-on-desktop "Permalink to this headline")
Two-Factor Authentication to CloudFerro Cloud site using KeePassXC on desktop[🔗](#two-factor-authentication-to-brand-name-site-using-keepassxc-on-desktop "Permalink to this headline")
=======================================================================================================================================================================================

Please see article [Two-Factor Authentication to CloudFerro Cloud site using mobile application](Two-Factor-Authentication-for-CloudFerro-Cloud-Site.html.md) if you want to use a smartphone app for the TOTP two-factor authentication.
@ -15,7 +15,7 @@ If you already have KeePassXC installed and configured, skip to Step 3 Adding En

The following instructions are for Ubuntu. If you use a different operating system, please [refer to the appropriate documentation](https://keepassxc.org/download/).

Step 1 Install KeePassXC[](#step-1-install-keepassxc "Permalink to this headline")
Step 1 Install KeePassXC[🔗](#step-1-install-keepassxc "Permalink to this headline")
-----------------------------------------------------------------------------------

Install KeePassXC before logging in to the CloudFerro Cloud website. Open the terminal, type the following command and press Enter:
@ -25,7 +25,7 @@ sudo apt update && sudo apt upgrade -y && sudo apt install -y keepassxc

```

Step 2 Configure KeePassXC[](#step-2-configure-keepassxc "Permalink to this headline")
Step 2 Configure KeePassXC[🔗](#step-2-configure-keepassxc "Permalink to this headline")
---------------------------------------------------------------------------------------

Launch KeePassXC. During its first run, you will see the following window:
@ -54,7 +54,7 @@ Click **Done**.

Choose the name for the file containing your secrets and its location. Click **Save**.

Step 3 Add the entry for your account[](#step-3-add-the-entry-for-your-account "Permalink to this headline")
Step 3 Add the entry for your account[🔗](#step-3-add-the-entry-for-your-account "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

Your database should now be operational. Let’s create the entry containing your username, password and TOTP for the CloudFerro Cloud. Click **Add a new entry** (the fourth button on the toolbar, marked with the red rectangle on the screenshot below).
@ -71,12 +71,12 @@ Click **OK** to save the entry.

If the option **Automatically save after every change** in the **General** section of the application settings is enabled, you do not have to save. If not, press CTRL+S to save the database.

Step 4 Configure TOTP[](#step-4-configure-totp "Permalink to this headline")
Step 4 Configure TOTP[🔗](#step-4-configure-totp "Permalink to this headline")
-----------------------------------------------------------------------------

Now we need to obtain your TOTP key.

### Method 1: During account creation[](#method-1-during-account-creation "Permalink to this headline")
### Method 1: During account creation[🔗](#method-1-during-account-creation "Permalink to this headline")

After having created an account on <https://horizon.cloudferro.com> but before first login, you will receive the **Mobile Authenticator Setup** prompt, as in the following image:

@ -113,7 +113,7 @@ The window with the code will look like this:

Type your 6-digit code from the above window to the text field **One-time-code** on the CloudFerro Cloud website and choose what you would like to call your device containing the TOTP key. Please make sure that you do it before that key expires. If the key expires, you will get another one and you should type it instead. Click **Submit**. You should now be able to proceed with your login process.

### Method 2: After another method of TOTP has already been configured[](#method-2-after-another-method-of-totp-has-already-been-configured "Permalink to this headline")
### Method 2: After another method of TOTP has already been configured[🔗](#method-2-after-another-method-of-totp-has-already-been-configured "Permalink to this headline")

If the method of TOTP authentication you are currently using allows you to extract the **secret** key (or you have it backed up somewhere), you should be able to use that same **secret key** for KeePassXC as well.

@ -121,7 +121,7 @@ If no other options remain, contact CloudFerro Cloud customer support for assist

Either way, eventually you should get your secret key. Enter it in KeePassXC the same way as explained in **Method 1** above - in the **Key:** text field. If that secret key is already added and configured for your account, no further action should be necessary. If not and you are in the process of configuring it, paste the 6-digit TOTP code from KeePassXC in the same way as you entered the code from your other device during account setup.

Step 5 Login using TOTP[](#step-5-login-using-totp "Permalink to this headline")
Step 5 Login using TOTP[🔗](#step-5-login-using-totp "Permalink to this headline")
---------------------------------------------------------------------------------

Each time you log in, type your credentials normally. After that you will see the following text field:
@ -130,7 +130,7 @@ Each time you login, type your credentials normally. After that you will see the

Generate your TOTP code as explained before (left-click the appropriate entry in KeePassXC and press CTRL+Shift+T) and type that code in the text field **One-time code** in your browser. If you want to simply copy your code to your clipboard, press CTRL+T while your entry is highlighted (remember that depending on settings it will disappear from your clipboard, so make sure that you paste it in time). Each code lasts only 30 seconds, so if you only have a few seconds remaining on your current code, you might want to wait until the new one is generated. Now you should be signed in.
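The 30-second lifetime mentioned above comes from the TOTP time step: a new code is generated every time the Unix time crosses a multiple of 30 seconds. A small illustrative helper (an assumption for illustration, not part of KeePassXC) shows how many seconds the current code has left:

```python
import time

STEP = 30  # TOTP time step in seconds


def seconds_remaining(now=None):
    """Seconds until the current TOTP window ends and a new code is generated."""
    t = int(time.time() if now is None else now)
    return STEP - (t % STEP)


# At t=59 the window that started at t=30 has 1 second left.
print(seconds_remaining(59))
```

If this returns a small number, waiting for the next window before typing the code avoids it expiring mid-login.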

Additional information[](#additional-information "Permalink to this headline")
Additional information[🔗](#additional-information "Permalink to this headline")
-------------------------------------------------------------------------------

You can find additional information about using KeePassXC in its [official documentation](https://keepassxc.org/docs/).
@ -1,9 +1,9 @@
Block storage and object storage performance limits on CloudFerro Cloud[](#block-storage-and-object-storage-performance-limits-on-brand-name "Permalink to this headline")
Block storage and object storage performance limits on CloudFerro Cloud[🔗](#block-storage-and-object-storage-performance-limits-on-brand-name "Permalink to this headline")
===========================================================================================================================================================================

On CloudFerro Cloud, there are performance limits for **HDD**, **NVMe (SSD)**, and **Object Storage** to ensure stable operation and protect against accidental DDoS attacks.

Current limits[](#current-limits "Permalink to this headline")
Current limits[🔗](#current-limits "Permalink to this headline")
---------------------------------------------------------------

Block HDD

@ -1,4 +1,4 @@
DNS as a Service on CloudFerro Cloud Hosting[](#dns-as-a-service-on-brand-name-cloud-name-hosting "Permalink to this headline")
DNS as a Service on CloudFerro Cloud Hosting[🔗](#dns-as-a-service-on-brand-name-cloud-name-hosting "Permalink to this headline")
================================================================================================================================

DNS as a Service (DNSaaS) provides functionality of managing configuration of user’s domains. Managing configuration means that the user is capable of creating, updating and deleting the following DNS records:
@ -22,7 +22,7 @@ DNS records management is performed on the level of an OpenStack project.

Since the purpose of DNSaaS is to deal with external domain names, the internal name resolution (name resolution for private IP addresses within user’s projects) is not covered by this documentation.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Domain delegation in registrar’s system
@ -33,7 +33,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Managing records
> * Limitations in OpenStack DNSaaS

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -71,7 +71,7 @@ Or, you might connect from a Linux based computer to the cloud:

In both cases, the article will contain a section to connect a floating IP to the newly created VM. The generated IP address will vary, but for the sake of concreteness we shall assume that it is **64.225.133.254**. You will enter that value later in this article, to create a record set for the site or service you are making.

Step 1 Delegate domain to your registrar’s system[](#step-1-delegate-domain-to-your-registrar-s-system "Permalink to this headline")
Step 1 Delegate domain to your registrar’s system[🔗](#step-1-delegate-domain-to-your-registrar-s-system "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------

The configuration of the domain name in your registrar’s system must point to the NS records of CloudFerro name servers. It can be achieved in two ways:
@ -102,7 +102,7 @@ Configure glue records for your domain, so that they point to the following IP a
|
||||
| secondary name server | ns2.exampledomain.com | 91.212.141.102 |
|
||||
| secondary name server | ns3.exampledomain.com | 91.212.141.86 |
|
||||
|
||||
Step 2 Zone configuration[](#step-2-zone-configuration "Permalink to this headline")
|
||||
Step 2 Zone configuration[🔗](#step-2-zone-configuration "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------
|
||||
|
||||
Zone configuration is defining parameters for the main domain name you have purchased.
|
||||
@ -121,7 +121,7 @@ Here is what the parameters mean:
|
||||
|
||||
After submitting, your domain should be served by OpenStack.
|
||||
|
||||
Step 3 Checking the presence of the domain on the Internet[](#step-3-checking-the-presence-of-the-domain-on-the-internet "Permalink to this headline")
|
||||
Step 3 Checking the presence of the domain on the Internet[🔗](#step-3-checking-the-presence-of-the-domain-on-the-internet "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
It usually takes from 24 up to 48 hours for the domain name to propagate through the Internet so it will **not** be available right away. Rarely, domain name starts resolving in matters of minutes and hours instead of days, so it pays to try the domain address in your browser an hour or two after configuring the zone for the domain.
|
||||
@ -170,7 +170,7 @@ curl exampledomain.com
|
||||
|
||||
Specify **A** to see the propagation of the domain itself and specify **NS** to see the propagation of nameservers across the Internet.

Step 4 Adding new record for the domain[](#step-4-adding-new-record-for-the-domain "Permalink to this headline")
Step 4 Adding new record for the domain[🔗](#step-4-adding-new-record-for-the-domain "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

To add a new record to the domain, click **Create Record Set** next to the domain name and fill in the required fields. The most important entry connects the domain name to the IP address you have. To configure the web server address in **exampledomain.com** so that it resolves to **64.225.133.254**, the Floating IP address of your server, fill the form as follows:
@ -206,7 +206,7 @@ Note
Each time a domain or server name is added or edited, add a dot ‘.’ at the end of the entry.
For example: **exampledomain.com.** or **mail.exampledomain.com.**.
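In zone-file notation, records carrying such fully qualified names could look as follows (an illustrative fragment; the mail record is hypothetical and the IP address is the example one used above):

```
exampledomain.com.       3600 IN A 64.225.133.254
mail.exampledomain.com.  3600 IN A 64.225.133.254
```

Both names end with the trailing dot, which marks them as fully qualified.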

Step 5 Adding records for subdomains[](#step-5-adding-records-for-subdomains "Permalink to this headline")
Step 5 Adding records for subdomains[🔗](#step-5-adding-records-for-subdomains "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Defining subdomains is similar except that, normally, the subdomain would propagate within minutes instead of days.
@ -225,7 +225,7 @@ www.exampledomain.com. 3600 IN A 64.225.133.254

```

Step 6 Managing records[](#step-6-managing-records "Permalink to this headline")
Step 6 Managing records[🔗](#step-6-managing-records "Permalink to this headline")
---------------------------------------------------------------------------------

Anytime you want to review, edit or delete records in your domain, visit the OpenStack dashboard, **Project** → **DNS** → **Zones**. After clicking the domain name of your interest, choose the **Record Sets** tab and see the list of all records:
@ -234,7 +234,7 @@ Anytime you want to review, edit or delete records in your domain, visit OpenSta

From this screen you can update or delete records.
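The same review and deletion can be done from the command line with the OpenStack client's `recordset` subcommands; a sketch, assuming the client with DNS (Designate) support is installed and your cloud credentials are loaded:

```shell
# List all record sets in the zone:
openstack recordset list exampledomain.com.

# Show one record set in detail:
openstack recordset show exampledomain.com. www.exampledomain.com.

# Delete a record set:
openstack recordset delete exampledomain.com. www.exampledomain.com.
```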

Limitations[](#limitations "Permalink to this headline")
Limitations[🔗](#limitations "Permalink to this headline")
---------------------------------------------------------

There are the following limitations in OpenStack DNSaaS:
@ -245,7 +245,7 @@ There are the following limitations in OpenStack DNSaaS:
> > + you are unable to delegate subdomains to external servers
> * Even though you are able to configure reverse DNS for your domain, this configuration will have no effect, since reverse DNS for CloudFerro Cloud IP pools is managed on DNS servers other than OpenStack DNSaaS.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Once an OpenStack object has a floating IP address, you can use the DNS service to propagate a domain name and, thus, create a service or a site. There are several situations in which you can create a floating IP address:

@ -1,4 +1,4 @@
Dashboard Overview – Project Quotas And Flavors Limits on CloudFerro Cloud[](#dashboard-overview-project-quotas-and-flavors-limits-on-brand-name "Permalink to this headline")
Dashboard Overview – Project Quotas And Flavors Limits on CloudFerro Cloud[🔗](#dashboard-overview-project-quotas-and-flavors-limits-on-brand-name "Permalink to this headline")
===============================================================================================================================================================================

While using the CloudFerro Cloud platform, one of the first things you will spot is the “Limit Summary”. Each project is restricted by preset quotas. This prevents system capacity from being exhausted without notification and guarantees free resources.

@ -1,4 +1,4 @@
How To Create a New Linux VM With NVIDIA Virtual GPU in the OpenStack Dashboard Horizon on CloudFerro Cloud[](#how-to-create-a-new-linux-vm-with-nvidia-virtual-gpu-in-the-openstack-dashboard-horizon-on-brand-name "Permalink to this headline")
How To Create a New Linux VM With NVIDIA Virtual GPU in the OpenStack Dashboard Horizon on CloudFerro Cloud[🔗](#how-to-create-a-new-linux-vm-with-nvidia-virtual-gpu-in-the-openstack-dashboard-horizon-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================================================================

You can create a Linux virtual machine with NVIDIA RTX A6000 as the additional graphics card. The card contains
@ -9,7 +9,7 @@ You can create Linux virtual machine with NVIDIA RTX A6000 as the additional gra

There are four variants, using 6, 12, 24, or 48 GB of VGPU RAM. You will be able to select the particular model by choosing a proper flavor when creating the instance in Horizon (see below).

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to create an instance with NVIDIA support
@ -19,7 +19,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * use the console within Horizon interface and
> * verify that you are using the NVIDIA vGPU.

Step 1 Create New Instance with NVIDIA Image Support[](#step-1-create-new-instance-with-nvidia-image-support "Permalink to this headline")
Step 1 Create New Instance with NVIDIA Image Support[🔗](#step-1-create-new-instance-with-nvidia-image-support "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------

To define a new instance, use the following series of commands:
@ -58,7 +58,7 @@ Click on the Next button and get to the following screen:

You will now choose one of the four models of the RTX A6000 card.

Step 2 Select Card Model / Flavor[](#step-2-select-card-model-flavor "Permalink to this headline")
Step 2 Select Card Model / Flavor[🔗](#step-2-select-card-model-flavor "Permalink to this headline")
---------------------------------------------------------------------------------------------------

The four available flavors, **RTXA6000-6C**, **RTXA6000-12C**, **RTXA6000-24C**, and **RTXA6000-48C**, are described in this table:
@ -77,7 +77,7 @@ Yellow triangles in the listing mean that you cannot select that row as one of t

In the situation above, select *vm.a6000.2* and continue going through the usual motions of selecting instance elements to finish the procedure.

Step 3 Finish Creating the Instance[](#step-3-finish-creating-the-instance "Permalink to this headline")
Step 3 Finish Creating the Instance[🔗](#step-3-finish-creating-the-instance "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

Click “Networks” and then choose the desired networks.
@ -100,7 +100,7 @@ Note

If you want to make your VM accessible from the Internet, see this article: [How to Add or Remove Floating IP’s to your VM on CloudFerro Cloud](../networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-CloudFerro-Cloud.html.md)

Step 4 Issue Commands from the Console[](#step-4-issue-commands-from-the-console "Permalink to this headline")
Step 4 Issue Commands from the Console[🔗](#step-4-issue-commands-from-the-console "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

Open the drop-down menu and choose “Console”.
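Inside the console, two commonly used checks confirm that the vGPU is present; a sketch, assuming the NVIDIA driver has already been installed in the VM (`nvidia-smi` ships with the driver):

```shell
# The virtual card should be visible on the PCI bus:
lspci | grep -i nvidia

# With the driver installed, report the vGPU model and its memory:
nvidia-smi
```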

@ -1,4 +1,4 @@
How to access the VM from OpenStack console on CloudFerro Cloud[](#how-to-access-the-vm-from-openstack-console-on-brand-name "Permalink to this headline")
How to access the VM from OpenStack console on CloudFerro Cloud[🔗](#how-to-access-the-vm-from-openstack-console-on-brand-name "Permalink to this headline")
===========================================================================================================================================================

Once you have created a virtual machine in OpenStack, you will need to perform various administrative tasks such as:
@ -20,21 +20,21 @@ and so on. There are three ways to enter the back end part of virtual machines:

As these images are only used for automatic creation of Kubernetes instances, you have to enter them using Kubernetes methods. That boils down to using the **kubectl exec** command (see below).
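The general shape of such a command is as follows; the pod name and namespace are placeholders, not values taken from this article:

```shell
# Open an interactive shell inside a running pod:
kubectl exec -it <pod-name> --namespace <namespace> -- /bin/bash
```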

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Use console for Linux based virtual machines
> * Use console for Windows based virtual machines
> * Use console for Fedora based virtual machines

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

Using console for administrative tasks within Linux based VMs[](#using-console-for-administrative-tasks-within-linux-based-vms "Permalink to this headline")
Using console for administrative tasks within Linux based VMs[🔗](#using-console-for-administrative-tasks-within-linux-based-vms "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

1. Go to <https://horizon.cloudferro.com> and select your authentication method:
@ -74,7 +74,7 @@ Attention

Google Chrome seems to work slowly while using the OpenStack console. Firefox works well.

Using console to perform administrative tasks within Fedora VMs[](#using-console-to-perform-administrative-tasks-within-fedora-vms "Permalink to this headline")
Using console to perform administrative tasks within Fedora VMs[🔗](#using-console-to-perform-administrative-tasks-within-fedora-vms "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

For normal VMs, choose either Ubuntu- or CentOS-based images while creating a VM – but not Fedora. It is meant only for automatic creation of instances that belong to Kubernetes clusters. Such instances will have either the word *master* or *node* in their names. Here is a typical series of instances that belong to two different clusters, called *vault* and *k8s-23*:
@ -102,7 +102,7 @@ This article shows an example of an **exec** command to enter the VM and, later,

[Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on CloudFerro Cloud OpenStack Magnum](../kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-CloudFerro-Cloud-OpenStack-Magnum.html.md)

### Performing administrative tasks within Windows based VMs[](#performing-administrative-tasks-within-windows-based-vms "Permalink to this headline")
### Performing administrative tasks within Windows based VMs[🔗](#performing-administrative-tasks-within-windows-based-vms "Permalink to this headline")

In case of **Windows**, set a new password for the *Administrator* profile.

@ -1,4 +1,4 @@
How to clone existing and configured VMs on CloudFerro Cloud[](#how-to-clone-existing-and-configured-vms-on-brand-name "Permalink to this headline")
How to clone existing and configured VMs on CloudFerro Cloud[🔗](#how-to-clone-existing-and-configured-vms-on-brand-name "Permalink to this headline")
=====================================================================================================================================================

The simplest way to create a snapshot of your machine is using “Horizon”, the graphical interface of the OpenStack dashboard.

@ -1,4 +1,4 @@
How to create Windows VM on OpenStack Horizon and access it via web console on CloudFerro Cloud[](#how-to-create-windows-vm-on-openstack-horizon-and-access-it-via-web-console-on-brand-name "Permalink to this headline")
How to create Windows VM on OpenStack Horizon and access it via web console on CloudFerro Cloud[🔗](#how-to-create-windows-vm-on-openstack-horizon-and-access-it-via-web-console-on-brand-name "Permalink to this headline")
===========================================================================================================================================================================================================================

This article provides a straightforward way of creating a functional Windows VM on the CloudFerro Cloud, using the Horizon graphical interface.
@ -10,7 +10,7 @@ The idea is to

all from your Internet browser.

What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Accessing the Launch Instance menu
@ -22,14 +22,14 @@ What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this h
> * Launching virtual machine
> * Setting **Administrator** password

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

Step 1: Access the Launch Instance menu[](#step-1-access-the-launch-instance-menu "Permalink to this headline")
Step 1: Access the Launch Instance menu[🔗](#step-1-access-the-launch-instance-menu "Permalink to this headline")
----------------------------------------------------------------------------------------------------------------

In the Horizon dashboard, navigate to **Compute -> Instances**. Click **Launch Instance** at the top of the **Instances** section:
@ -40,7 +40,7 @@ You should get the following window:

Step 2: Choose the instance name[](#step-2-choose-the-instance-name "Permalink to this headline")
Step 2: Choose the instance name[🔗](#step-2-choose-the-instance-name "Permalink to this headline")
--------------------------------------------------------------------------------------------------

In the window which appeared, enter the name you wish to give to your instance in the **Instance Name** text field. In this example, we use **test-windows-vm** as the name:
@ -49,7 +49,7 @@ In the window which appeared, enter the name you wish to give to your instance i

Click **Next >**.

Step 3: Choose source[](#step-3-choose-source "Permalink to this headline")
Step 3: Choose source[🔗](#step-3-choose-source "Permalink to this headline")
----------------------------------------------------------------------------

The default value in the drop-down menu **Select Boot Source** is **Image**, meaning that you will choose from one of the images that are present in your version of Horizon. If another value is selected, revert to **Image**.
@ -72,7 +72,7 @@ Click **Next >**.

If you allocate the wrong image by mistake, you can remove it from the **Allocated** section by clicking **↓** next to its name.

Step 4: Choose flavor[](#step-4-choose-flavor "Permalink to this headline")
Step 4: Choose flavor[🔗](#step-4-choose-flavor "Permalink to this headline")
----------------------------------------------------------------------------

In this step you will choose the flavor of your virtual machine. Flavors manage access to resources such as VCPUs, RAM and storage.
@ -119,7 +119,7 @@ Note

In the examples that follow, we use two networks, one with a name starting with **cloud\_** and the other with a name starting with **eodata\_**. The former network should always be present in the account, but the latter may or may not be present. If you do not have a network whose name starts with **eodata\_**, you may create it or use any other network that you already have and want to use.

Step 5: Attach networks to your virtual machine[](#step-5-attach-networks-to-your-virtual-machine "Permalink to this headline")
Step 5: Attach networks to your virtual machine[🔗](#step-5-attach-networks-to-your-virtual-machine "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------

The next step contains the list of networks available to you:
@ -135,7 +135,7 @@ Allocate both of them and click **Next >**.

The next step is called **Network Ports**. In it, simply click **Next >** without doing anything else.

Step 6: Choose security groups[](#step-6-choose-security-groups "Permalink to this headline")
Step 6: Choose security groups[🔗](#step-6-choose-security-groups "Permalink to this headline")
----------------------------------------------------------------------------------------------

Security groups control Internet traffic for your virtual machine.
@ -146,7 +146,7 @@ Group **allow\_ping\_ssh\_icmp\_rdp** exposes your VM to various types of networ

Step 7: Launch your virtual machine[](#step-7-launch-your-virtual-machine "Permalink to this headline")
Step 7: Launch your virtual machine[🔗](#step-7-launch-your-virtual-machine "Permalink to this headline")
--------------------------------------------------------------------------------------------------------

Other steps from the **Launch Instance** window are optional. Once you have done the previous steps of this article, click the **Launch Instance** button:
@ -159,7 +159,7 @@ Your virtual machine should appear in the **Instances** section of the Horizon d

Once the **Status** is **Active**, the virtual machine should be running. The next step involves setting up access to it.

Step 8: Set the Administrator password[](#step-8-set-the-administrator-password "Permalink to this headline")
Step 8: Set the Administrator password[🔗](#step-8-set-the-administrator-password "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------

Once your instance has **Active** status, click on its name:
@ -194,7 +194,7 @@ Click **OK**.

Wait until you see the standard Windows desktop.

Step 9: Update Windows[](#step-9-update-windows "Permalink to this headline")
Step 9: Update Windows[🔗](#step-9-update-windows "Permalink to this headline")
------------------------------------------------------------------------------

Once the Windows virtual machine is up and running, you should update its operating system to get the latest security fixes. Click **Start**, and then **Settings**:
@ -211,7 +211,7 @@ You should now see **Windows Update** screen, which can look like this:

Follow the appropriate prompts to update your operating system.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

If you want to access your virtual machine remotely using RDP (Remote Desktop Protocol), you should consider increasing its security by using a bastion host. The following article contains more information: [Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on CloudFerro Cloud](../windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-CloudFerro-Cloud.html.md)

@ -1,4 +1,4 @@
How to create a Linux VM and access it from Linux command line on CloudFerro Cloud[](#how-to-create-a-linux-vm-and-access-it-from-linux-command-line-on-brand-name "Permalink to this headline")
How to create a Linux VM and access it from Linux command line on CloudFerro Cloud[🔗](#how-to-create-a-linux-vm-and-access-it-from-linux-command-line-on-brand-name "Permalink to this headline")
=================================================================================================================================================================================================

Creating a virtual machine in the CloudFerro Cloud allows you to perform computations without having to engage your own infrastructure. In this article you will create a Linux based virtual machine and access it remotely from a Linux command line on a desktop or laptop.
@ -9,7 +9,7 @@ Note

This article only covers the basics of creating a VM - it does not cover topics such as use of NVIDIA hardware or creating a volume during the creation of a VM.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a Linux virtual machine in the CloudFerro Cloud using the **Launch Instance** command from the Horizon Dashboard
@ -31,7 +31,7 @@ For external access
> * Attach a floating IP to the instance so that it can be found on the Internet and, finally,
> * Use SSH to connect to that virtual machine from another Linux based system

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -56,7 +56,7 @@ Alternatively, you can also create a key pair directly in the Horizon:

[How to create key pair in OpenStack Dashboard on CloudFerro Cloud](How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html.md).

Options for creation of a Virtual Machine (VM)[](#options-for-creation-of-a-virtual-machine-vm "Permalink to this headline")
Options for creation of a Virtual Machine (VM)[🔗](#options-for-creation-of-a-virtual-machine-vm "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------

Creation of a virtual machine is divided into 11 sections, four of which are mandatory (denoted by an asterisk at the end of the name of the option). In addition to those four (**Details**, **Source**, **Flavor**, and **Networks**), we shall define **Security Groups** and **Key Pairs**. The rest of the options to launch an instance are out of the scope of this article.
@ -67,7 +67,7 @@ In OpenStack terminology, a *virtual machine* is also an *instance*. *Instance*

The window to create a virtual machine is called **Launch Instance**. You will enter all the data about an instance into that window and its options.

Step 1 Start the Launch Instance window and name the virtual machine[](#step-1-start-the-launch-instance-window-and-name-the-virtual-machine "Permalink to this headline")
Step 1 Start the Launch Instance window and name the virtual machine[🔗](#step-1-start-the-launch-instance-window-and-name-the-virtual-machine "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In the Horizon dashboard go to **Compute** -> **Instances** and click **Launch Instance**. You should get the following window:
@ -78,7 +78,7 @@ Type the name for your virtual machine in the **Instance Name** text field.

Click **Next** or choose the **Source** option on the left side menu.

Step 2 Define the source of the virtual machine[](#step-2-define-the-source-of-the-virtual-machine "Permalink to this headline")
Step 2 Define the source of the virtual machine[🔗](#step-2-define-the-source-of-the-virtual-machine "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

The **Source** window appears:
@ -103,7 +103,7 @@ Also, make sure that in the section **Create New Volume** option **No** is selec

Click **Next** or click the **Flavor** button to define the flavor of the instance.

Step 3 Define the flavor of the instance[](#step-3-define-the-flavor-of-the-instance "Permalink to this headline")
Step 3 Define the flavor of the instance[🔗](#step-3-define-the-flavor-of-the-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

You should now see the following form:
@ -134,7 +134,7 @@ Another possible explanation might be that your quota is too low for creating a

Click **Next** or click **Networks** to define networks.

Step 4 Define networks for the virtual machine[](#step-4-define-networks-for-the-virtual-machine "Permalink to this headline")
Step 4 Define networks for the virtual machine[🔗](#step-4-define-networks-for-the-virtual-machine "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

You should now see the following window:
@ -149,7 +149,7 @@ Choose that network and also choose any other network that you want to access th

These were the obligatory options. Since you want to access the instance through an SSH connection, you will also need to define **Security Groups** and a **Key Pair**.

Step 5 Define security groups for VM[](#step-5-define-security-groups-for-vm "Permalink to this headline")
Step 5 Define security groups for VM[🔗](#step-5-define-security-groups-for-vm "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Security groups control network traffic to and from your virtual machine.
@ -165,7 +165,7 @@ By default, you have access to two groups:

Enable both of these rules. One of the open ports in **allow\_ping\_ssh\_icmp\_rdp** is 22, which is a prerequisite for SSH access.

Step 6 Create a key pair for SSH access[](#step-6-create-a-key-pair-for-ssh-access "Permalink to this headline")
Step 6 Create a key pair for SSH access[🔗](#step-6-create-a-key-pair-for-ssh-access "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

To use SSH to connect your local Linux computer to the cloud Linux “computer”, you will need to provide one public and one private key. (Keys are random strings, usually hundreds of characters long.)
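If you prefer to generate the pair locally instead of in Horizon, OpenSSH's `ssh-keygen` does it in one command; a minimal sketch (the file name is an arbitrary choice, and the empty passphrase is used here only to keep the example non-interactive — for real use, set a passphrase):

```shell
# Create a 4096-bit RSA key pair: "mykey" (private) and "mykey.pub" (public):
ssh-keygen -t rsa -b 4096 -N "" -f mykey -q
```

The content of **mykey.pub** is what you upload to Horizon; the private key never leaves your machine.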
@ -184,7 +184,7 @@ If you haven’t created your key pair yet, please follow Prerequisite No. 4.

In any case, make sure that your uploaded key is in the **Allocated** section.

Step 7 Create the instance[](#step-7-create-the-instance "Permalink to this headline")
Step 7 Create the instance[🔗](#step-7-create-the-instance "Permalink to this headline")
---------------------------------------------------------------------------------------

Once you have set everything up, click **Launch Instance**.
@ -205,7 +205,7 @@ In Step 4 you have attached a network with the name that starts with **cloud\_**

Just like on the above screenshot, under the **IP Address** header you will see network addresses which both start with **10.**. This means that they are local network addresses. If you want to access your instance remotely, it must have a static IP address. The way to add it is to attach a so-called *floating IP* address to the instance.

Step 8 Attach a Floating IP to the instance[](#step-8-attach-a-floating-ip-to-the-instance "Permalink to this headline")
Step 8 Attach a Floating IP to the instance[🔗](#step-8-attach-a-floating-ip-to-the-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

Here is how to create and attach a floating IP to your instance: [How to Add or Remove Floating IP’s to your VM on CloudFerro Cloud](../networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-CloudFerro-Cloud.html.md).
@ -216,7 +216,7 @@ Once you have added the floating IP, you will see it in the Horizon dashboard un

The floating IP address in that article is **64.225.132.0**. Your address will vary.

Step 9 Connecting to your virtual machine using SSH[](#step-9-connecting-to-your-virtual-machine-using-ssh "Permalink to this headline")
Step 9 Connecting to your virtual machine using SSH[🔗](#step-9-connecting-to-your-virtual-machine-using-ssh "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------

The following article has information about connecting to a virtual machine using SSH: [How to connect to your virtual machine via SSH in Linux on CloudFerro Cloud](../networking/How-to-connect-to-your-virtual-machine-via-SSH-in-Linux-on-CloudFerro-Cloud.html.md).
@ -230,7 +230,7 @@ ssh [email protected]

The IP address in that article is **64.225.132.99** and is different from the address from the previous article. Instead of IP addresses used in these articles (**64.225.132.99** and **64.225.132.0**), enter the IP address of your instance which you saw after doing Step 8.
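Put together, the connection command has this shape; the user name shown here, **eouser**, is an assumption about the default account on these Linux images (verify it against your image's documentation), and the IP address is a placeholder you replace with your own floating IP:

```shell
# Replace the key path, user name and IP address with your own values:
ssh -i /path/to/private_key eouser@64.225.132.99
```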

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

CloudFerro Cloud cloud can be used for general hosting needs, such as

@ -1,4 +1,4 @@
How to create a Linux VM and access it from Windows desktop on CloudFerro Cloud[](#how-to-create-a-linux-vm-and-access-it-from-windows-desktop-on-brand-name "Permalink to this headline")
How to create a Linux VM and access it from Windows desktop on CloudFerro Cloud[🔗](#how-to-create-a-linux-vm-and-access-it-from-windows-desktop-on-brand-name "Permalink to this headline")
===========================================================================================================================================================================================

Creating a virtual machine in a CloudFerro Cloud cloud allows you to perform computations without having to engage your own infrastructure. In this article, you will create a Linux-based virtual machine and access it remotely using PuTTY on Windows.
@ -9,7 +9,7 @@ Note

This article only covers the basics of creating a VM - it does not cover topics such as the use of NVIDIA hardware or creating a volume during the creation of a VM.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a Linux virtual machine in CloudFerro Cloud cloud using command **Launch Instance** from Horizon Dashboard
@ -37,7 +37,7 @@ After that, you will connect to a VM using PuTTY:
> * Save PuTTY configuration
> * Connect to a VM

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -66,7 +66,7 @@ You need to have an SSH key pair. It consists of a public and private key. You c

This article contains information about configuring PuTTY using one such key pair.

Options for creation of a Virtual Machine (VM)[](#options-for-creation-of-a-virtual-machine-vm "Permalink to this headline")
Options for creation of a Virtual Machine (VM)[🔗](#options-for-creation-of-a-virtual-machine-vm "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------

Creation of a virtual machine is divided into 11 sections, four of which are mandatory (denoted by an asterisk at the end of the option name). In addition to those four (**Details**, **Source**, **Flavor**, and **Networks**), we shall define **Security Groups** and **Key Pairs**. The rest of the options for launching an instance are out of the scope of this article.
@ -77,7 +77,7 @@ In OpenStack terminology, a *virtual machine* is also an *instance*. *Instance*

The window to create a virtual machine is called **Launch Instance**. You will enter all the data about an instance into that window.

Step 1 Start the Launch Instance window and name the virtual machine[](#step-1-start-the-launch-instance-window-and-name-the-virtual-machine "Permalink to this headline")
Step 1 Start the Launch Instance window and name the virtual machine[🔗](#step-1-start-the-launch-instance-window-and-name-the-virtual-machine "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In the Horizon dashboard go to **Compute** -> **Instances** and click **Launch Instance**. You should get the following window:
@ -88,7 +88,7 @@ Type the name for your virtual machine in the **Instance Name** text field.

Click **Next** or click the **Source** option in the left-side menu.

Step 2 Define the source of the virtual machine[](#step-2-define-the-source-of-the-virtual-machine "Permalink to this headline")
Step 2 Define the source of the virtual machine[🔗](#step-2-define-the-source-of-the-virtual-machine "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

The **Source** window appears:
@ -113,7 +113,7 @@ Also, make sure that in the section **Create New Volume** option **No** is selec

Click **Next** or click the **Flavor** button to define the flavor of the instance.

Step 3 Define the flavor of the instance[](#step-3-define-the-flavor-of-the-instance "Permalink to this headline")
Step 3 Define the flavor of the instance[🔗](#step-3-define-the-flavor-of-the-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

You should now see the following form:
@ -146,7 +146,7 @@ Another possible cause might be that your quota is too low for creating a VM wit

Click **Next** or click **Networks** to define networks.

Step 4 Define networks for the virtual machine[](#step-4-define-networks-for-the-virtual-machine "Permalink to this headline")
Step 4 Define networks for the virtual machine[🔗](#step-4-define-networks-for-the-virtual-machine "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

You should now see a window to choose one or several networks that you want your VM to work with:
@ -164,7 +164,7 @@ Other networks may be present in the system.

These were the mandatory options. Since you want to access the instance through an SSH connection, you will need to define **Security Groups** and **Key Pair**.

Step 5 Define security groups for VM[](#step-5-define-security-groups-for-vm "Permalink to this headline")
Step 5 Define security groups for VM[🔗](#step-5-define-security-groups-for-vm "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Security groups control network traffic to and from your virtual machine.
@ -180,7 +180,7 @@ By default, you have access to two groups:

Enable both of these groups. One of the open ports in **allow\_ping\_ssh\_icmp\_rdp** is 22, which is a prerequisite for SSH access.

Step 6 Create a key pair for SSH access[](#step-6-create-a-key-pair-for-ssh-access "Permalink to this headline")
Step 6 Create a key pair for SSH access[🔗](#step-6-create-a-key-pair-for-ssh-access "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

To use SSH to connect your local Linux computer to the cloud Linux “computer”, you will need to provide one public and one private key. (Keys are random strings, usually hundreds of characters long.)
@ -199,7 +199,7 @@ If you haven’t created your key pair yet, please follow Prerequisite No. 5.

In any case, make sure that your uploaded key is in the **Allocated** section.

Step 7 Create the instance[](#step-7-create-the-instance "Permalink to this headline")
Step 7 Create the instance[🔗](#step-7-create-the-instance "Permalink to this headline")
---------------------------------------------------------------------------------------

Once you have set everything up, click **Launch Instance**.
@ -220,7 +220,7 @@ In Step 4 you have attached a network with the name that starts with **cloud\_**

Just like on the screenshot above, under the **IP Address** header you will see network addresses which both start with **10.**, meaning that they are local network addresses. If you want to access your instance remotely, it must have a static IP address. The way to add one is to attach a so-called *floating IP* address to the instance.

Step 8 Attach a Floating IP to the instance[](#step-8-attach-a-floating-ip-to-the-instance "Permalink to this headline")
Step 8 Attach a Floating IP to the instance[🔗](#step-8-attach-a-floating-ip-to-the-instance "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

Here is how to create and attach a floating IP to your instance: [How to Add or Remove Floating IP’s to your VM on CloudFerro Cloud](../networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-CloudFerro-Cloud.html.md).
@ -231,7 +231,7 @@ Once you have added the floating IP, you will see it in the Horizon dashboard un

The floating IP address in that article is **64.225.132.0**. Your address will vary.

Step 9 Convert your SSH key[](#step-9-convert-your-ssh-key "Permalink to this headline")
Step 9 Convert your SSH key[🔗](#step-9-convert-your-ssh-key "Permalink to this headline")
-----------------------------------------------------------------------------------------

If you followed Prerequisite No. 5, you should have an SSH key pair on your local computer - public and private.
@ -274,7 +274,7 @@ Close the **PuTTY Key Generator** window. Your saved file should look like this:

Of course, your file will probably have a different name than the one on the screenshot above.

Step 10 Configure PuTTY[](#step-10-configure-putty "Permalink to this headline")
Step 10 Configure PuTTY[🔗](#step-10-configure-putty "Permalink to this headline")
---------------------------------------------------------------------------------

Run **PuTTY** from your **Start** menu. The following window should appear:
@ -297,7 +297,7 @@ The location of your key should appear in the **Private key file for authenticat



Step 11 Save the session settings[](#step-11-save-the-session-settings "Permalink to this headline")
Step 11 Save the session settings[🔗](#step-11-save-the-session-settings "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

To save these settings for future use, return to the **Session** category in which you typed the floating IP of your virtual machine. Choose the name of your session and type it in the text field found in the **Load, save or delete a stored session** section:
@ -308,7 +308,7 @@ Click **Save**. Your saved session should appear on the list:



Step 12 Connect to your virtual machine[](#step-12-connect-to-your-virtual-machine "Permalink to this headline")
Step 12 Connect to your virtual machine[🔗](#step-12-connect-to-your-virtual-machine "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

To connect to your virtual machine, click **Open**. If you are connecting to that machine for the first time, you should receive the following alert:
@ -331,7 +331,7 @@ You should now be connected to your virtual machine and be able to execute comma



Using your saved PuTTY session to simplify login[](#using-your-saved-putty-session-to-simplify-login "Permalink to this headline")
Using your saved PuTTY session to simplify login[🔗](#using-your-saved-putty-session-to-simplify-login "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------

In order to use your saved session, open **PuTTY**. In the **Load, save or delete a stored session** section, click the name of your saved session on the list and click **Load**.
@ -342,7 +342,7 @@ All your settings, including the floating IP of your VM should now be provided:

You can now start your session as explained in Step 12 above.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

CloudFerro Cloud cloud can be used for general hosting needs, such as

@ -1,9 +1,9 @@
How to create a VM using the OpenStack CLI client on CloudFerro Cloud cloud[](#how-to-create-a-vm-using-the-openstack-cli-client-on-brand-name-cloud "Permalink to this headline")
How to create a VM using the OpenStack CLI client on CloudFerro Cloud cloud[🔗](#how-to-create-a-vm-using-the-openstack-cli-client-on-brand-name-cloud "Permalink to this headline")
===================================================================================================================================================================================

This article will cover creating a virtual machine on CloudFerro Cloud cloud using the OpenStack CLI client exclusively. It contains basic information to get you started.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * The **openstack** command to create a VM
@ -19,7 +19,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Adding a floating IP to the existing VM
> * Using SSH to access the VM

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

@ -1,4 +1,4 @@
How to create instance snapshot using Horizon on CloudFerro Cloud[](#how-to-create-instance-snapshot-using-horizon-on-brand-name "Permalink to this headline")
How to create instance snapshot using Horizon on CloudFerro Cloud[🔗](#how-to-create-instance-snapshot-using-horizon-on-brand-name "Permalink to this headline")
===============================================================================================================================================================

In this article, you will learn how to create an instance snapshot on CloudFerro Cloud cloud, using the Horizon dashboard.
@ -12,7 +12,7 @@ Instance snapshots allow you to archive the state of the virtual machine. You ca

We cover both types of storage for instances, *ephemeral* and *persistent*.

The plan[](#the-plan "Permalink to this headline")
The plan[🔗](#the-plan "Permalink to this headline")
---------------------------------------------------

In reality, you will be using the procedures described in this article with already existing instances.
@ -25,7 +25,7 @@ It goes without saying that after following a section about one type of virtual

Or you can keep them and use them to create an instance using one of the articles mentioned in What To Do Next.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Create snapshot of instance which uses ephemeral storage
@ -44,7 +44,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> + What happens if there are multiple volumes?
> * Downloading an instance snapshot

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

@ -1,4 +1,4 @@
How to create key pair in OpenStack Dashboard on CloudFerro Cloud[](#how-to-create-key-pair-in-openstack-dashboard-on-brand-name "Permalink to this headline")
How to create key pair in OpenStack Dashboard on CloudFerro Cloud[🔗](#how-to-create-key-pair-in-openstack-dashboard-on-brand-name "Permalink to this headline")
===============================================================================================================================================================

Open **Compute -> Key Pairs**

@ -1,4 +1,4 @@
How to create new Linux VM in OpenStack Dashboard Horizon on CloudFerro Cloud[](#how-to-create-new-linux-vm-in-openstack-dashboard-horizon-on-brand-name "Permalink to this headline")
How to create new Linux VM in OpenStack Dashboard Horizon on CloudFerro Cloud[🔗](#how-to-create-new-linux-vm-in-openstack-dashboard-horizon-on-brand-name "Permalink to this headline")
=======================================================================================================================================================================================

Go to **Project → Compute → Instances**.
@ -43,7 +43,7 @@ Open the drop-down menu and choose **“Console”**.



Fig. 2 Click on the black terminal area (to activate access to the console). Type: **eoconsole** and hit Enter.[](#id1 "Permalink to this image")
Fig. 2 Click on the black terminal area (to activate access to the console). Type: **eoconsole** and hit Enter.[🔗](#id1 "Permalink to this image")



@ -1,4 +1,4 @@
How to fix unresponsive console issue on CloudFerro Cloud[](#how-to-fix-unresponsive-console-issue-on-brand-name "Permalink to this headline")
How to fix unresponsive console issue on CloudFerro Cloud[🔗](#how-to-fix-unresponsive-console-issue-on-brand-name "Permalink to this headline")
===============================================================================================================================================

When you create a new virtual machine, the first thing you might want to do is to have a look at the console panel and check whether the instance has booted correctly.

@ -1,4 +1,4 @@
How to generate and manage EC2 credentials on CloudFerro Cloud[](#how-to-generate-and-manage-ec2-credentials-on-brand-name "Permalink to this headline")
How to generate and manage EC2 credentials on CloudFerro Cloud[🔗](#how-to-generate-and-manage-ec2-credentials-on-brand-name "Permalink to this headline")
=========================================================================================================================================================

EC2 credentials are used for accessing private S3 buckets on CloudFerro Cloud cloud. This article covers how to generate and manage a pair of EC2 credentials so that you will be able to mount those buckets both
@ -10,7 +10,7 @@ Warning

A pair of EC2 credentials usually provides access to secret data, so share it only with trusted individuals.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -1,11 +1,11 @@
How to generate or use Application Credentials via CLI on CloudFerro Cloud[](#how-to-generate-or-use-application-credentials-via-cli-on-brand-name "Permalink to this headline")
How to generate or use Application Credentials via CLI on CloudFerro Cloud[🔗](#how-to-generate-or-use-application-credentials-via-cli-on-brand-name "Permalink to this headline")
=================================================================================================================================================================================

You can authenticate your applications to *keystone* by creating application credentials for them. It is also possible to delegate a subset of role assignments on a project to an application credential, granting the same or restricted authorization to a project for the app.

With application credentials, apps authenticate with the “application credential ID” and a “secret” string which is not the user’s password. Thanks to this, the user’s password is not embedded in the application’s configuration, which is especially important for users whose identities are managed by an external system such as LDAP or a single sign-on system.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -40,7 +40,7 @@ jq --version # Check the installed jq version

```

Step 1 CLI Commands for Application Credentials[](#step-1-cli-commands-for-application-credentials "Permalink to this headline")
Step 1 CLI Commands for Application Credentials[🔗](#step-1-cli-commands-for-application-credentials "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

Command
@ -75,7 +75,7 @@ Note

The **--help** option will produce a *vim*-like output, so type **q** on the keyboard to get back to the usual terminal line.

Step 2 The Simplest Way to Create a New Application Credential[](#step-2-the-simplest-way-to-create-a-new-application-credential "Permalink to this headline")
Step 2 The Simplest Way to Create a New Application Credential[🔗](#step-2-the-simplest-way-to-create-a-new-application-credential "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------

The simplest way to generate a new application credential is just to define the name – the rest of the parameters will be defined automatically for you. The following command uses name **cred2**:
@ -89,7 +89,7 @@ The new application credential will be both formed and shown on the screen:



Step 3 Using All Parameters to Create a New Application Credential[](#step-3-using-all-parameters-to-create-a-new-application-credential "Permalink to this headline")
Step 3 Using All Parameters to Create a New Application Credential[🔗](#step-3-using-all-parameters-to-create-a-new-application-credential "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Here is the meaning of related parameters:
@ -144,7 +144,7 @@ The result is:

The name of the new application credential will be **foo-dev-member4**, it will be used by role **\_member\_**, and so on. The part of the command starting with **| jq -r** prints only the values of credentials **id** and **secret**, as you have to enter those values into the *clouds.yml* file in order to activate the recognition part of the process.
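As a minimal illustration of that **jq** filtering step (the JSON below is a hypothetical sample shaped like the client's JSON output, not real credentials; **jq** is assumed to be installed, as the prerequisites require):

```shell
# Hypothetical sample of the JSON the CLI returns; the id/secret field names
# match those discussed in the article.
SAMPLE='{"id": "abc123", "secret": "s3cr3t", "name": "foo-dev-member4"}'
# Extract just the two values you need for clouds.yml.
CRED_ID=$(printf '%s' "$SAMPLE" | jq -r '.id')
CRED_SECRET=$(printf '%s' "$SAMPLE" | jq -r '.secret')
echo "$CRED_ID"
echo "$CRED_SECRET"
```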

Step 4 Enter id and secret into clouds.yml[](#step-4-enter-id-and-secret-into-clouds-yml "Permalink to this headline")
Step 4 Enter id and secret into clouds.yml[🔗](#step-4-enter-id-and-secret-into-clouds-yml "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------

You are now going to store the values of **id** and **secret** that the cloud has sent to you. Once stored, future **openstack** commands will use these values to authenticate to the cloud without using any kind of password.
@ -227,7 +227,7 @@ This is how it should look in the editor:

Save it with **Ctrl**-**X**, then press **Y** and Enter.
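For orientation, a *clouds.yml* entry using application credentials might look like the following sketch. The cloud name, the auth URL and the credential values are placeholders; the key names (`auth_type: v3applicationcredential`, `application_credential_id`, `application_credential_secret`) are the standard openstacksdk ones:

```yaml
clouds:
  mycloud:                       # hypothetical cloud name
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.com:5000/v3   # placeholder endpoint
      application_credential_id: "abc123"              # the id value returned by the CLI
      application_credential_secret: "s3cr3t"          # the secret value returned by the CLI
    region_name: WAW3-2          # example region
```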

Step 5 Gain access to the cloud by specifying OS\_CLOUD or --os-cloud[](#step-5-gain-access-to-the-cloud-by-specifying-os-cloud-or-os-cloud "Permalink to this headline")
Step 5 Gain access to the cloud by specifying OS\_CLOUD or --os-cloud[🔗](#step-5-gain-access-to-the-cloud-by-specifying-os-cloud-or-os-cloud "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Application credentials give access to all of the activated regions, so you have to specify which one to use. Specify it as a value of parameter **--os-region-name**, for instance, WAW3-2 or WAW4-1 (or whichever region you have).
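For example, a session might select the cloud and region like this (the cloud name **mycloud** is an assumed entry in *clouds.yml*; the final command line is only echoed here as a sketch, since running it requires cloud access):

```shell
# Select the cloud entry from clouds.yml via the environment.
export OS_CLOUD=mycloud
# Compose the command that would list servers in a chosen region.
CMD="openstack --os-region-name WAW3-2 server list"
echo "$CMD"
```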
@ -268,7 +268,7 @@ If you had two or more clouds defined in the *clouds.yml* file, then using **--o

In both cases, you can access the cloud without specifying the password, which was the goal in the first place.

Environment variable-based storage[](#environment-variable-based-storage "Permalink to this headline")
Environment variable-based storage[🔗](#environment-variable-based-storage "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

You can export the credentials as environment variables. This increases security, especially in virtual machines. Also, automation tools can use them dynamically.
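A sketch of this approach, using the standard keystoneauth environment variable names (the credential values are placeholders; in practice you would also export the auth URL for your cloud):

```shell
# Placeholder credential values -- substitute the id and secret returned by the CLI.
export OS_AUTH_TYPE=v3applicationcredential
export OS_APPLICATION_CREDENTIAL_ID="abc123"
export OS_APPLICATION_CREDENTIAL_SECRET="s3cr3t"
echo "$OS_AUTH_TYPE"
```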
@ -294,7 +294,7 @@ source ~/.bashrc

This method is useful for scripted deployments, temporary sessions, and when you don’t want credentials stored in files.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Here are some articles that use application credentials:

@ -1,4 +1,4 @@
How to install Python virtualenv or virtualenvwrapper on CloudFerro Cloud[](#how-to-install-python-virtualenv-or-virtualenvwrapper-on-brand-name "Permalink to this headline")
How to install Python virtualenv or virtualenvwrapper on CloudFerro Cloud[🔗](#how-to-install-python-virtualenv-or-virtualenvwrapper-on-brand-name "Permalink to this headline")
===============================================================================================================================================================================

Virtualenv is a tool with which you can create isolated Python environments. It is mainly used to get rid of problems with dependencies and versions.

@ -1,7 +1,7 @@
How to start a VM from a snapshot on CloudFerro Cloud[](#how-to-start-a-vm-from-a-snapshot-on-brand-name "Permalink to this headline")
How to start a VM from a snapshot on CloudFerro Cloud[🔗](#how-to-start-a-vm-from-a-snapshot-on-brand-name "Permalink to this headline")
=======================================================================================================================================

a) Volume Snapshot[](#a-volume-snapshot "Permalink to this headline")
a) Volume Snapshot[🔗](#a-volume-snapshot "Permalink to this headline")
----------------------------------------------------------------------

1. Choose the desired virtual machine (booted from Volume) and click on the “Create snapshot” button.
@ -38,7 +38,7 @@ a) Volume Snapshot[](#a-volume-snapshot "Permalink to this headline")



b) Image Snapshot[](#b-image-snapshot "Permalink to this headline")
b) Image Snapshot[🔗](#b-image-snapshot "Permalink to this headline")
--------------------------------------------------------------------

1. Choose the desired virtual machine (booted from Glance image) and click on the “Create snapshot” button.

@ -1,9 +1,9 @@
How to start a VM from instance snapshot using Horizon dashboard on CloudFerro Cloud[](#how-to-start-a-vm-from-instance-snapshot-using-horizon-dashboard-on-brand-name "Permalink to this headline")
How to start a VM from instance snapshot using Horizon dashboard on CloudFerro Cloud[🔗](#how-to-start-a-vm-from-instance-snapshot-using-horizon-dashboard-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================================================

In this article, you will learn how to create a virtual machine from an instance snapshot using Horizon dashboard.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

@ -1,4 +1,4 @@
How to transfer volumes between domains and projects using Horizon dashboard on CloudFerro Cloud[](#how-to-transfer-volumes-between-domains-and-projects-using-horizon-dashboard-on-brand-name "Permalink to this headline")
How to transfer volumes between domains and projects using Horizon dashboard on CloudFerro Cloud[🔗](#how-to-transfer-volumes-between-domains-and-projects-using-horizon-dashboard-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================================================================

Volumes in OpenStack can be used to store data. They are visible to virtual machines as drives.
@ -9,14 +9,14 @@ This article covers changing the assignment of a volume to a project. This allow

The *source* project and *destination* project must both be on the same cloud (for example WAW3-2). They can (but don’t have to) belong to different users from different domains and organizations.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Initializing transfer of volume
> * Accepting transfer of volume
> * Cancelling transfer of volume

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -47,7 +47,7 @@ To access each of these projects directly (if possible), depending on the circum

If you don’t have direct access to any of these projects, you probably can request their members to execute commands mentioned in this article.
|
||||
|
||||
Step 1: Initializing transfer of volume[](#step-1-initializing-transfer-of-volume "Permalink to this headline")
|
||||
Step 1: Initializing transfer of volume[🔗](#step-1-initializing-transfer-of-volume "Permalink to this headline")
|
||||
----------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Perform this step in the *source* project.
|
||||
@ -86,7 +86,7 @@ Your volume should now have the following **Status**: **Awaiting Transfer**.
|
||||
|
||||
Note that after initializing the transfer, the volume cannot be connected to any virtual machine until the transfer is accepted or cancelled. To learn how to cancel the transfer (if you, say, accidentally chose the wrong volume), see section **Cancelling transfer of volume** near the end of the article.
Step 2: Accepting transfer of volume[](#step-2-accepting-transfer-of-volume "Permalink to this headline")
Step 2: Accepting transfer of volume[🔗](#step-2-accepting-transfer-of-volume "Permalink to this headline")
----------------------------------------------------------------------------------------------------------

Perform this step in the *destination* project.
@ -107,7 +107,7 @@ The volume should now be visible on the list:
![image4](../_images/volume-transfers-17_creodias.png)

Cancelling transfer of volume[](#cancelling-transfer-of-volume "Permalink to this headline")
Cancelling transfer of volume[🔗](#cancelling-transfer-of-volume "Permalink to this headline")
---------------------------------------------------------------------------------------------

If you, say, accidentally initiated a transfer for the wrong volume and nobody has accepted that transfer, it can be cancelled.
@ -138,7 +138,7 @@ After cancelling, your volume should now once again have status **Available**:
![image6](../_images/volume-transfers-10_creodias.png)

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Now that the volume has been transferred, you might want to connect it to a virtual machine. This article explains how to do that: [How to move data volume between two VMs using OpenStack Horizon on CloudFerro Cloud](../datavolume/How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-CloudFerro-Cloud.html.md)
@ -1,9 +1,9 @@
How to upload custom image to CloudFerro Cloud cloud using OpenStack Horizon dashboard[](#how-to-upload-custom-image-to-brand-name-cloud-using-openstack-horizon-dashboard "Permalink to this headline")
How to upload custom image to CloudFerro Cloud cloud using OpenStack Horizon dashboard[🔗](#how-to-upload-custom-image-to-brand-name-cloud-using-openstack-horizon-dashboard "Permalink to this headline")
=========================================================================================================================================================================================================

In this tutorial, you will upload a custom image stored on your local computer to the CloudFerro Cloud cloud, using the Horizon Dashboard. The uploaded image will be available within your project alongside the default images from the CloudFerro Cloud cloud and you will be able to create virtual machines using it.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to check for the presence of the image in CloudFerro Cloud cloud
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Example: how to upload image for Debian 11
> * What happens if you lose Internet connection during upload

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -1,9 +1,9 @@
How to upload your custom image using OpenStack CLI on CloudFerro Cloud[](#how-to-upload-your-custom-image-using-openstack-cli-on-brand-name "Permalink to this headline")
How to upload your custom image using OpenStack CLI on CloudFerro Cloud[🔗](#how-to-upload-your-custom-image-using-openstack-cli-on-brand-name "Permalink to this headline")
===========================================================================================================================================================================

In this tutorial, you will upload a custom image stored on your local computer to the CloudFerro Cloud cloud, using the OpenStack CLI client. The uploaded image will be available within your project alongside the default images from the CloudFerro Cloud cloud and you will be able to create virtual machines using it.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to check for the presence of the image in your OpenStack cloud
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Example: how to upload image for Debian 11
> * What happens if you lose Internet connection during upload
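As a preview of the CLI route this tutorial covers, the upload itself can be sketched as follows. The file name and image name are illustrative, and qcow2 is assumed as the disk format:

```shell
# Upload a local qcow2 file as an image in your project (names are placeholders)
openstack image create --disk-format qcow2 --container-format bare \
  --file debian-11-generic-amd64.qcow2 my-debian-11
# Confirm the image is now present
openstack image list
```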
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -1,4 +1,4 @@
How to install and use Docker on Ubuntu 24.04[](#how-to-install-and-use-docker-on-ubuntu-24-04 "Permalink to this headline")
How to install and use Docker on Ubuntu 24.04[🔗](#how-to-install-and-use-docker-on-ubuntu-24-04 "Permalink to this headline")
=============================================================================================================================

This guide will walk you through
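The body of this guide is elided by the diff, but the basic installation route on Ubuntu 24.04 can be sketched like this, using the distribution's `docker.io` package (Docker's own apt repository is the common alternative):

```shell
# Install Docker from the Ubuntu 24.04 repositories
sudo apt update
sudo apt install -y docker.io
# Start the daemon now and on every boot
sudo systemctl enable --now docker
# Smoke test: pulls and runs a tiny test container
sudo docker run hello-world
```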
@ -1,4 +1,4 @@
How to Use GUI in Linux VM on CloudFerro Cloud and access it From Local Linux Computer[](#how-to-use-gui-in-linux-vm-on-brand-name-and-access-it-from-local-linux-computer "Permalink to this headline")
How to Use GUI in Linux VM on CloudFerro Cloud and access it From Local Linux Computer[🔗](#how-to-use-gui-in-linux-vm-on-brand-name-and-access-it-from-local-linux-computer "Permalink to this headline")
=========================================================================================================================================================================================================

In this article you will learn how to use a GUI (graphical user interface) on a Linux virtual machine running on the CloudFerro Cloud cloud.
@ -7,7 +7,7 @@ For this purpose, you will install and use **X2Go** on your local Linux computer
This article covers the installation of two desktop environments: MATE and XFCE. Choose the one that suits you best.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Installing X2Go client
@ -15,7 +15,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Connecting to your virtual machine using X2Go client
> * Basic troubleshooting

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -34,7 +34,7 @@ You need a Linux virtual machine running on CloudFerro Cloud cloud. You need to
This article was written for virtual machines using a default Ubuntu 20.04 image on the cloud. Adjust the instructions from this article accordingly if your virtual machine has a different Linux distribution.

Step 1: Install X2Go client[](#step-1-install-x2go-client "Permalink to this headline")
Step 1: Install X2Go client[🔗](#step-1-install-x2go-client "Permalink to this headline")
----------------------------------------------------------------------------------------

Open the terminal on your local Linux computer and update your packages by executing the following command:
@ -51,10 +51,10 @@ sudo apt install x2goclient
```

Step 2: Install the desktop environment on your VM[](#step-2-install-the-desktop-environment-on-your-vm "Permalink to this headline")
Step 2: Install the desktop environment on your VM[🔗](#step-2-install-the-desktop-environment-on-your-vm "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------

### Method 1: Installing MATE[](#method-1-installing-mate "Permalink to this headline")
### Method 1: Installing MATE[🔗](#method-1-installing-mate "Permalink to this headline")

Connect to your VM using SSH. Update your packages there:
@ -81,7 +81,7 @@ sudo reboot
```

### Method 2: Installing XFCE[](#method-2-installing-xfce "Permalink to this headline")
### Method 2: Installing XFCE[🔗](#method-2-installing-xfce "Permalink to this headline")

Connect to your VM using SSH. Update your packages there:
@ -108,7 +108,7 @@ sudo reboot
```

Step 3: Connect to your VM using X2Go[](#step-3-connect-to-your-vm-using-x2go "Permalink to this headline")
Step 3: Connect to your VM using X2Go[🔗](#step-3-connect-to-your-vm-using-x2go "Permalink to this headline")
------------------------------------------------------------------------------------------------------------

Open X2Go on your local Linux computer. If you haven’t configured any session yet, you should get the window used for creating one:
@ -145,7 +145,7 @@ If you, however, chose XFCE, it should look like this:
![image2](../_images/x2go_xfce4_looks.png)

Troubleshooting - Using the terminal emulator on XFCE[](#troubleshooting-using-the-terminal-emulator-on-xfce "Permalink to this headline")
Troubleshooting - Using the terminal emulator on XFCE[🔗](#troubleshooting-using-the-terminal-emulator-on-xfce "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------

If the button **Terminal Emulator** on your taskbar does not launch your terminal, click the **Applications** menu in the upper left corner of the screen:
@ -170,12 +170,12 @@ Click **Close**.
The button should now launch the terminal emulator correctly.

Troubleshooting - Keyboard layout[](#troubleshooting-keyboard-layout "Permalink to this headline")
Troubleshooting - Keyboard layout[🔗](#troubleshooting-keyboard-layout "Permalink to this headline")
---------------------------------------------------------------------------------------------------

If you discover that the system does not use the keyboard layout you chose during the installation of the desktop environment, you will need to set it manually. The process differs depending on the desktop environment you chose.
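Before reaching for the desktop-specific dialogs, a quick terminal route is `setxkbmap`, run inside the X2Go session (`us` is an example layout code):

```shell
# Switch the current X session to a US layout (example layout code)
setxkbmap us
# Inspect the active layout settings
setxkbmap -query
```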
### MATE[](#mate "Permalink to this headline")
### MATE[🔗](#mate "Permalink to this headline")

Click the **Menu** in the upper left corner of the screen:
@ -195,7 +195,7 @@ Navigate to the **Layouts** tab:
Here, you can add or remove keyboard layouts depending on your needs.

### XFCE[](#xfce "Permalink to this headline")
### XFCE[🔗](#xfce "Permalink to this headline")

From the **Applications** menu in the upper left corner of the screen choose **Settings** -> **Keyboard**. You should get the following window:
@ -1,11 +1,11 @@
How to use Security Groups in Horizon on CloudFerro Cloud[](#how-to-use-security-groups-in-horizon-on-brand-name "Permalink to this headline")
How to use Security Groups in Horizon on CloudFerro Cloud[🔗](#how-to-use-security-groups-in-horizon-on-brand-name "Permalink to this headline")
===============================================================================================================================================

Security groups in **OpenStack** are used to filter the Internet traffic coming **to** and **from** your virtual machines. They consist of security rules and can be attached to your virtual machines during and after the creation of the machines.

By default, each instance has a rule which blocks all incoming Internet traffic and allows all outgoing traffic. To modify those settings, you can apply other security groups to it.

Viewing the security groups[](#viewing-the-security-groups "Permalink to this headline")
Viewing the security groups[🔗](#viewing-the-security-groups "Permalink to this headline")
-----------------------------------------------------------------------------------------

To check your current security groups, please follow these steps:
@ -21,7 +21,7 @@ You will see the list of your security groups there. The following groups should
![image1](../_images/sec_groups.png)

Creating a new security group[](#creating-a-new-security-group "Permalink to this headline")
Creating a new security group[🔗](#creating-a-new-security-group "Permalink to this headline")
---------------------------------------------------------------------------------------------

In order to create a new security group, please follow these steps:
@ -46,7 +46,7 @@ If you want to access that screen later, you can click the **Manage Rules** butt
By default, your new security group should contain the two rules seen on the screenshot above - the first one allows all outgoing traffic on IPv4 and the second one allows all outgoing traffic on IPv6.

Adding security rules to a security group[](#adding-security-rules-to-a-security-group "Permalink to this headline")
Adding security rules to a security group[🔗](#adding-security-rules-to-a-security-group "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------

In the **Manage Security Rules** screen that you entered in the previous step, click the **Add Rule** button.
@ -96,12 +96,12 @@ These options apply to all ports of ICMP, TCP and UDP, respectively.
The drop-down list **Rule** also contains templates for commonly used services like DNS (Domain Name System), HTTP (Hypertext Transfer Protocol) or SMTP (Simple Mail Transfer Protocol). If you choose one of them, you only have to provide the information about the **Remote** - **CIDR** or **Security Group**. The explanation for those options is in the **Custom TCP Rule** section.
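For reference, the same kind of rule can also be created from the OpenStack CLI. A sketch with an illustrative group name, opening SSH (TCP port 22) to any IPv4 address:

```shell
# Create a group and allow incoming TCP port 22 from anywhere
openstack security group create allow-ssh --description "Allow SSH"
openstack security group rule create --protocol tcp --dst-port 22 \
  --remote-ip 0.0.0.0/0 allow-ssh
```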
Adding a Security Group to your VM[](#adding-a-security-group-to-your-vm "Permalink to this headline")
Adding a Security Group to your VM[🔗](#adding-a-security-group-to-your-vm "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

You can apply your security group to your VM either during or after creating it.

### During its creation[](#during-its-creation "Permalink to this headline")
### During its creation[🔗](#during-its-creation "Permalink to this headline")

During the process of creating your virtual machine you can add security groups to it. This happens during the **Security Groups** step:
@ -111,7 +111,7 @@ You can add security groups to your VM by using the **↑** button and remove the
![image5](../_images/sec_groups_21.png)

### After its creation[](#after-its-creation "Permalink to this headline")
### After its creation[🔗](#after-its-creation "Permalink to this headline")

Go to **Compute** > **Instances**. Click the drop-down menu in the row containing information about the instance to which you wish to apply your rule (column **Actions**). Select **Edit Security Groups**. You should see a window similar to this:
@ -1,11 +1,11 @@
OpenStack User Roles on CloudFerro Cloud[](#openstack-user-roles-on-brand-name "Permalink to this headline")
OpenStack User Roles on CloudFerro Cloud[🔗](#openstack-user-roles-on-brand-name "Permalink to this headline")
=============================================================================================================

A **user role** in OpenStack cloud is a set of permissions that govern how members of specific groups interact with system resources, their access scope, and capabilities.

This guide simplifies OpenStack roles for casual users of CloudFerro Cloud VMs. It focuses on practical use cases and commonly required roles.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Frequently used user roles
@ -23,7 +23,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
>
> * Dictionary of other roles

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

**1. Account**
@ -51,10 +51,10 @@ Ensure you know the following OpenStack commands:
[How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](../kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)

Frequently used user roles[](#frequently-used-user-roles "Permalink to this headline")
Frequently used user roles[🔗](#frequently-used-user-roles "Permalink to this headline")
---------------------------------------------------------------------------------------

### Common user roles[](#common-user-roles "Permalink to this headline")
### Common user roles[🔗](#common-user-roles "Permalink to this headline")

**member**
: Grants standard access to project resources.
@ -78,7 +78,7 @@ Frequently used user roles[](#frequently-used-user-roles "Permalink to this h
* Horizon: **Project** -> **Overview**
* CLI: **openstack server list**, **openstack project list**

### Roles for Kubernetes users[](#roles-for-kubernetes-users "Permalink to this headline")
### Roles for Kubernetes users[🔗](#roles-for-kubernetes-users "Permalink to this headline")

**k8s\_admin**
: Administrative access to manage Kubernetes clusters and resources.
@ -98,7 +98,7 @@ Frequently used user roles[](#frequently-used-user-roles "Permalink to this h
* Horizon: **Kubernetes** -> **Overview**
* CLI: **kubectl get pods**, **kubectl describe pod**

### Roles for Load Balancer users[](#roles-for-load-balancer-users "Permalink to this headline")
### Roles for Load Balancer users[🔗](#roles-for-load-balancer-users "Permalink to this headline")

**load-balancer\_member**
: Grants access to deploy applications behind load balancers.
@ -112,7 +112,7 @@ Frequently used user roles[](#frequently-used-user-roles "Permalink to this h
* Horizon: **Network** -> **Load Balancers**
* CLI: **openstack loadbalancer show**, **openstack loadbalancer stats show**
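Checking and (with sufficient privileges) assigning these roles can also be done from the CLI. The user and project names below are placeholders:

```shell
# List which roles a user holds in a project, by name
openstack role assignment list --user alice --project my-project --names
# Grant the load-balancer_member role (requires admin-level rights)
openstack role add --user alice --project my-project load-balancer_member
```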
How to View Roles in Horizon[](#how-to-view-roles-in-horizon "Permalink to this headline")
How to View Roles in Horizon[🔗](#how-to-view-roles-in-horizon "Permalink to this headline")
-------------------------------------------------------------------------------------------

You can view roles in Horizon by navigating to **Identity** -> **Roles**.
@ -125,12 +125,12 @@ Assigning multiple roles is best done during project creation rather than user c
![image2](../_images/image2024-12-29_21-27-9.png)

Examples of using user roles[](#examples-of-using-user-roles "Permalink to this headline")
Examples of using user roles[🔗](#examples-of-using-user-roles "Permalink to this headline")
-------------------------------------------------------------------------------------------

The following articles, as one of many steps, describe how to assign a role to a new project, credential, user or group.

### Using user roles while creating application credential in Horizon[](#using-user-roles-while-creating-application-credential-in-horizon "Permalink to this headline")
### Using user roles while creating application credential in Horizon[🔗](#using-user-roles-while-creating-application-credential-in-horizon "Permalink to this headline")

Normally, you access the cloud via user credentials, which may be one- or two-factor credentials. OpenStack provides a more direct way of gaining access to the cloud with an application credential, and you can create a credential with several user roles.
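A hedged CLI sketch of creating such a credential with several roles (the credential name is illustrative, and you can only request roles you already hold in the project):

```shell
# Create an application credential carrying two roles
openstack application credential create my-app-cred \
  --role member --role load-balancer_member
```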
@ -140,7 +140,7 @@ That S3 article selects user roles when creating an application credential, thro
![image3](../_images/application_credentials_creating.png)

### Using user roles while creating application credential via the CLI[](#using-user-roles-while-creating-application-credential-via-the-cli "Permalink to this headline")
### Using user roles while creating application credential via the CLI[🔗](#using-user-roles-while-creating-application-credential-via-the-cli "Permalink to this headline")

This is the main article about application credentials; it mostly uses the CLI:
@ -150,7 +150,7 @@ Here is how to specify user roles through CLI parameters:
![image4](../_images/application_credentials_cli.png)

### Using user roles while creating a new project[](#using-user-roles-while-creating-a-new-project "Permalink to this headline")
### Using user roles while creating a new project[🔗](#using-user-roles-while-creating-a-new-project "Permalink to this headline")

In the article [How to Create and Configure New Openstack Project Through Horizon on CloudFerro Cloud Cloud](../openstackcli/How-To-Create-and-Configure-New-Project-on-CloudFerro-Cloud-Cloud.html.md) we use the **Project Members** command to define which users to include in the project:
@ -167,7 +167,7 @@ You would then continue by defining the roles for each user in the project:
![image7](../_images/manage_project_members.png)

### Using member role only while creating a new user[](#using-member-role-only-while-creating-a-new-user "Permalink to this headline")
### Using member role only while creating a new user[🔗](#using-member-role-only-while-creating-a-new-user "Permalink to this headline")

In the SLURM article, we first create a new OpenStack Keystone user, with the role of **member**.
@ -177,7 +177,7 @@ In SLURM article, we first create a new OpenStack Keystone user, with the role o
That user can log in to Horizon and use project resources together with other users defined in a similar way.

Dictionary of other roles[](#dictionary-of-other-roles "Permalink to this headline")
Dictionary of other roles[🔗](#dictionary-of-other-roles "Permalink to this headline")
-------------------------------------------------------------------------------------

**admin**

@ -1,7 +1,7 @@
Resizing a virtual machine using OpenStack Horizon on CloudFerro Cloud[](#resizing-a-virtual-machine-using-openstack-horizon-on-brand-name "Permalink to this headline")
Resizing a virtual machine using OpenStack Horizon on CloudFerro Cloud[🔗](#resizing-a-virtual-machine-using-openstack-horizon-on-brand-name "Permalink to this headline")
=========================================================================================================================================================================

Introduction[](#introduction "Permalink to this headline")
Introduction[🔗](#introduction "Permalink to this headline")
-----------------------------------------------------------

When creating a new virtual machine under OpenStack, one of the options you choose is the *flavor*. A flavor is a predefined combination of CPU, memory and disk size, and there is usually a number of such flavors for you to choose from.
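To compare flavors before resizing, the CLI equivalents are:

```shell
# List all flavors visible to your project
openstack flavor list
# Show CPU/RAM/disk for one flavor (name taken from this article's example)
openstack flavor show eo2.xlarge
```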
@ -16,7 +16,7 @@ After the instance is spawned, it is possible to change one flavor for another,
In this article, we are going to resize VMs using commands in OpenStack Horizon.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -41,7 +41,7 @@ Also:
> * A flavor with the desired resource configuration exists.
> * Adequate resources are available in your OpenStack environment to accommodate the resize.

Creating a new VM[](#creating-a-new-vm "Permalink to this headline")
Creating a new VM[🔗](#creating-a-new-vm "Permalink to this headline")
---------------------------------------------------------------------

To illustrate the commands in this article, let us create a new VM in order to start with a clean slate. (It goes without saying that you can practice with any of the already existing VMs in your account.)
@ -60,7 +60,7 @@ Finish the process of creating a new VM and let it spawn:
Let us now resize the VM called **Resizing**.

Steps to Resize the VM[](#steps-to-resize-the-vm "Permalink to this headline")
Steps to Resize the VM[🔗](#steps-to-resize-the-vm "Permalink to this headline")
-------------------------------------------------------------------------------

Locate the VM by using Horizon commands **Compute** -> **Instances**.
@ -84,7 +84,7 @@ So, select **eo2.xlarge** as the new flavor. This screen shows its parameters:
![image-2024-11-8_11-2-10](../_images/image-2024-11-8_11-2-10.png)

Advanced Options[](#advanced-options "Permalink to this headline")
Advanced Options[🔗](#advanced-options "Permalink to this headline")
-------------------------------------------------------------------

The **Advanced Options** tab contains two further options for resizing the instance.
@ -102,7 +102,7 @@ Server Group
![image-2024-11-8_11-9-2](../_images/image-2024-11-8_11-9-2.png)

Resize the VM[](#resize-the-vm "Permalink to this headline")
Resize the VM[🔗](#resize-the-vm "Permalink to this headline")
-------------------------------------------------------------

Click **Resize** to proceed with resizing the VM.
@ -117,7 +117,7 @@ If you encounter issues, you can choose **Revert Resize** to return the VM to it
Or, if the resizing is finished, you can again use the **Resize instance** option and choose the flavor from which you started (**eo2a.large** in this case). This process of scaling down is much faster than the process of scaling up.

Troubleshooting[](#troubleshooting "Permalink to this headline")
Troubleshooting[🔗](#troubleshooting "Permalink to this headline")
-----------------------------------------------------------------

If any of the flavor parameters does not match up, the resizing will fail.
@ -128,7 +128,7 @@ You will then see a balloon help in the upper right corner:
In this case, the sizes of the disk before and after the resizing do not match.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You can also resize the virtual machine using only the OpenStack CLI. More details here: [Resizing a virtual machine using OpenStack CLI on CloudFerro Cloud](../openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-CloudFerro-Cloud.html.md)
@ -1,16 +1,16 @@
Spot instances on CloudFerro Cloud[](#spot-instances-on-brand-name "Permalink to this headline")
Spot instances on CloudFerro Cloud[🔗](#spot-instances-on-brand-name "Permalink to this headline")
=================================================================================================

A spot instance is a resource similar to Amazon EC2 Spot Instances or Google Spot VMs. In short, the user is provided with unused computational resources at a discounted price, but those resources can be terminated on short notice whenever on-demand usage increases. The main use cases are ephemeral workflows which can tolerate being terminated unexpectedly and/or orchestration platforms which can handle forced scaling down of available resources, e.g. Kubernetes clusters.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to create spot instances
> * Additional configuration via tags
> * What is the expected behaviour

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -1,4 +1,4 @@
Status Power State and dependencies in billing of instance VMs on CloudFerro Cloud[](#status-power-state-and-dependencies-in-billing-of-instance-vms-on-brand-name "Permalink to this headline")
Status Power State and dependencies in billing of instance VMs on CloudFerro Cloud[🔗](#status-power-state-and-dependencies-in-billing-of-instance-vms-on-brand-name "Permalink to this headline")
=================================================================================================================================================================================================

In OpenStack, instances have their own Status and Power State:
@ -10,7 +10,7 @@ In OpenStack, instances have their own Status and Power State:
There are six Power states, divided into two groups, depending on whether the VM is running or not.
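Both values can also be read from the CLI; the server name below is a placeholder:

```shell
# Status and Power State columns for all instances
openstack server list --long
# Just the two fields for one server
openstack server show my-vm -c status -c OS-EXT-STS:power_state
```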
|
||||
|
||||
Power state while VM is running[](#power-state-while-vm-is-running "Permalink to this headline")
|
||||
Power state while VM is running[🔗](#power-state-while-vm-is-running "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------
|
||||
|
||||
**NO STATE**
|
||||
@ -22,7 +22,7 @@ Power state while VM is running[](#power-state-while-vm-is-running "Permalink
|
||||
**PAUSED**
|
||||
: VM is frozen and a memory dump is made.
|
||||
|
||||
Power state while VM is turned off[](#power-state-while-vm-is-turned-off "Permalink to this headline")
|
||||
Power state while VM is turned off[🔗](#power-state-while-vm-is-turned-off "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------------
|
||||
|
||||
**SHUT DOWN**
|
||||
@ -34,7 +34,7 @@ Power state while VM is turned off[](#power-state-while-vm-is-turned-off "Per
|
||||
**SUSPENDED**
|
||||
: VM is blocked by system (most likely because of negative credit on account).
|
||||
|
||||
Status and its conditions[](#status-and-its-conditions "Permalink to this headline")
|
||||
Status and its conditions[🔗](#status-and-its-conditions "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------
|
||||
|
||||
Status may have one of the following conditions:
|
||||
|
||||
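Both values can also be read from the command line. The following is a sketch, not part of the original article: `my-vm` is a placeholder server name, and the numeric mapping is an assumption based on the standard Nova/libvirt power-state codes.

```shell
# Translate the numeric power-state code reported by the API into a name
# (assumption: the standard Nova/libvirt mapping of the six states).
power_state_name() {
  case "$1" in
    0) echo "NO STATE" ;;
    1) echo "RUNNING" ;;
    3) echo "PAUSED" ;;
    4) echo "SHUT DOWN" ;;
    6) echo "CRASHED" ;;
    7) echo "SUSPENDED" ;;
    *) echo "UNKNOWN" ;;
  esac
}

# With OpenStack CLI credentials loaded, you could inspect a server directly
# (guarded so the sketch is harmless where the CLI is absent):
if command -v openstack >/dev/null 2>&1; then
  openstack server show my-vm -c status -c 'OS-EXT-STS:power_state'
fi
power_state_name 3
```

The function only decodes what the API already reports; billing depends on Status, as described in the article.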
@ -1,4 +1,4 @@
VM created with option Create New Volume No on CloudFerro Cloud[](#vm-created-with-option-create-new-volume-no-on-brand-name "Permalink to this headline")
VM created with option Create New Volume No on CloudFerro Cloud[🔗](#vm-created-with-option-create-new-volume-no-on-brand-name "Permalink to this headline")
===========================================================================================================================================================

During creation of a VM you can select a source. If you choose “Image”, you can then choose **Yes** or **No** for the option “**Create New Volume**”.

@ -1,4 +1,4 @@
VM created with option Create New Volume Yes on CloudFerro Cloud[](#vm-created-with-option-create-new-volume-yes-on-brand-name "Permalink to this headline")
VM created with option Create New Volume Yes on CloudFerro Cloud[🔗](#vm-created-with-option-create-new-volume-yes-on-brand-name "Permalink to this headline")
=============================================================================================================================================================

Note

@ -1,4 +1,4 @@
What Image Formats are Available in OpenStack CloudFerro Cloud cloud[](#what-image-formats-are-available-in-openstack-brand-name-cloud "Permalink to this headline")
What Image Formats are Available in OpenStack CloudFerro Cloud cloud[🔗](#what-image-formats-are-available-in-openstack-brand-name-cloud "Permalink to this headline")
=====================================================================================================================================================================

In CloudFerro Cloud OpenStack, ten image format extensions are available:

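Whichever format you use, it is passed explicitly when uploading. A guarded sketch (not from the article): `myimage.qcow2` and the image name are placeholders, and the command runs only where the CLI and file exist.

```shell
# Upload a QCOW2 image, stating its disk format explicitly.
# "myimage.qcow2" and "my-custom-image" are placeholder names.
if command -v openstack >/dev/null 2>&1 && [ -f myimage.qcow2 ]; then
  openstack image create --disk-format qcow2 --container-format bare \
    --file myimage.qcow2 my-custom-image
  result="uploaded"
else
  result="skipped: openstack CLI or myimage.qcow2 not available"
fi
echo "$result"
```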
@ -1,4 +1,4 @@
What is an OpenStack domain on CloudFerro Cloud[](#what-is-an-openstack-domain-on-brand-name "Permalink to this headline")
What is an OpenStack domain on CloudFerro Cloud[🔗](#what-is-an-openstack-domain-on-brand-name "Permalink to this headline")
===========================================================================================================================

**Domain**

@ -1,4 +1,4 @@
What is an OpenStack project on CloudFerro Cloud[](#what-is-an-openstack-project-on-brand-name "Permalink to this headline")
What is an OpenStack project on CloudFerro Cloud[🔗](#what-is-an-openstack-project-on-brand-name "Permalink to this headline")
=============================================================================================================================

A **project** is an isolated group of zero or more users who share common access with specific privileges to the software instance in OpenStack. A project is created for each set of instances and networks that are configured as a discrete entity for the project. A project owns virtual machines (in Compute) or containers (in Object Storage).

@ -6,7 +6,7 @@
* [DNS as a Service on CloudFerro Cloud Hosting](DNS-as-a-Service-on-CloudFerro-Cloud-Hosting.html.md)
* [Dashboard Overview Project Quotas And Flavors Limits on CloudFerro Cloud](Dashboard-Overview-Project-Quotas-And-Flavors-Limits-on-CloudFerro-Cloud.html.md)
* [How To Create a New Linux VM With NVIDIA Virtual GPU in the OpenStack Dashboard Horizon on CloudFerro Cloud](How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-CloudFerro-Cloud.html.md)
* [Performing administrative tasks within Windows based VMs[](#performing-administrative-tasks-within-windows-based-vms "Permalink to this headline")](How-to-access-the-VM-from-OpenStack-console-on-CloudFerro-Cloud.html.md)
* [Performing administrative tasks within Windows based VMs[🔗](#performing-administrative-tasks-within-windows-based-vms "Permalink to this headline")](How-to-access-the-VM-from-OpenStack-console-on-CloudFerro-Cloud.html.md)
* [How to clone existing and configured VMs on CloudFerro Cloud](How-to-clone-existing-and-configured-VMs-on-CloudFerro-Cloud.html.md)
* [How to create Windows VM on OpenStack Horizon and access it via web console on CloudFerro Cloud](How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-CloudFerro-Cloud.html.md)
* [How to create a Linux VM and access it from Linux command line on CloudFerro Cloud](How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-CloudFerro-Cloud.html.md)
@ -25,9 +25,9 @@
* [How to upload custom image to CloudFerro Cloud cloud using OpenStack Horizon dashboard](How-to-upload-custom-image-to-CloudFerro-Cloud-cloud-using-OpenStack-Horizon-dashboard.html.md)
* [How to upload your custom image using OpenStack CLI on CloudFerro Cloud](How-to-upload-your-custom-image-using-OpenStack-CLI-on-CloudFerro-Cloud.html.md)
* [How to use Docker on CloudFerro Cloud](How-to-use-Docker-on-CloudFerro-Cloud.html.md)
* [Method 1: Installing MATE[](#method-1-installing-mate "Permalink to this headline")](How-to-use-GUI-in-Linux-VM-on-CloudFerro-Cloud-and-access-it-from-local-Linux-computer.html.md)
* [During its creation[](#during-its-creation "Permalink to this headline")](How-to-use-Security-Groups-in-Horizon-on-CloudFerro-Cloud.html.md)
* [Common user roles[](#common-user-roles "Permalink to this headline")](OpenStack-user-roles-on-CloudFerro-Cloud.html.md)
* [Method 1: Installing MATE[🔗](#method-1-installing-mate "Permalink to this headline")](How-to-use-GUI-in-Linux-VM-on-CloudFerro-Cloud-and-access-it-from-local-Linux-computer.html.md)
* [During its creation[🔗](#during-its-creation "Permalink to this headline")](How-to-use-Security-Groups-in-Horizon-on-CloudFerro-Cloud.html.md)
* [Common user roles[🔗](#common-user-roles "Permalink to this headline")](OpenStack-user-roles-on-CloudFerro-Cloud.html.md)
* [Resizing a virtual machine using OpenStack Horizon on CloudFerro Cloud](Resizing-a-virtual-machine-using-OpenStack-Horizon-on-CloudFerro-Cloud.html.md)
* [Spot instances on CloudFerro Cloud](Spot-instances-on-CloudFerro-Cloud.html.md)
* [Status Power State and dependences in billing of instances VMs on CloudFerro Cloud](Status-Power-State-and-dependences-in-billing-of-instances-VMs-on-CloudFerro-Cloud.html.md)

@ -1,4 +1,4 @@
Bootable versus non-bootable volumes on CloudFerro Cloud[](#bootable-versus-non-bootable-volumes-on-brand-name "Permalink to this headline")
Bootable versus non-bootable volumes on CloudFerro Cloud[🔗](#bootable-versus-non-bootable-volumes-on-brand-name "Permalink to this headline")
=============================================================================================================================================

Each volume has an indicator called **bootable** which shows whether an operating system can be booted from it or not. That indicator can be set up manually at any time. If you set it up on a volume that does not contain a bootable operating system and later try to boot a VM from it, you will see an error as a response.
@ -8,7 +8,7 @@ In this article we will
> * explain practical differences between **bootable** and **non-bootable** volumes and
> * provide procedures in Horizon and OpenStack CLI to check whether the volume is **bootable** or not.

Bootable vs. non-bootable volumes[](#bootable-vs-non-bootable-volumes "Permalink to this headline")
Bootable vs. non-bootable volumes[🔗](#bootable-vs-non-bootable-volumes "Permalink to this headline")
----------------------------------------------------------------------------------------------------

Bootable and non-bootable volumes share the following similarities:
@ -26,7 +26,7 @@ On the other hand, non-bootable volumes can
> * add more storage space to an instance (especially for applications which require lots of data) and
> * separate data from the operating system to make backups and data management easier.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Which volumes appear when creating a virtual machine using Horizon dashboard?
@ -36,7 +36,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Modifying bootable status of a volume
> * What happens if you launch a virtual machine from a volume which does not have a functional operating system?

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

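The CLI check mentioned above can be sketched as follows (an illustration, not the article's exact procedure; `my-volume` is a placeholder name, and the command is guarded so it is safe to paste anywhere):

```shell
# Read the bootable flag of a volume; the CLI prints "true" or "false".
if command -v openstack >/dev/null 2>&1; then
  bootable=$(openstack volume show my-volume -c bootable -f value)
else
  bootable="unknown (openstack CLI not found)"
fi
echo "bootable: $bootable"
```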
@ -1,4 +1,4 @@
Ephemeral vs Persistent storage option Create New Volume on CloudFerro Cloud[](#ephemeral-vs-persistent-storage-option-create-new-volume-on-brand-name "Permalink to this headline")
Ephemeral vs Persistent storage option Create New Volume on CloudFerro Cloud[🔗](#ephemeral-vs-persistent-storage-option-create-new-volume-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================================

Volumes created in the **Volumes > Volumes** section are *persistent* storage. They can be attached to a virtual machine and then reattached to a different one. They survive the removal of the virtual machine to which they are connected. You can also clone them, which is a simple way of creating a backup. However, if you copy them, you might also be interested in [Volume snapshot inheritance and its consequences on CloudFerro Cloud](Volume-snapshot-inheritance-and-its-consequences-on-CloudFerro-Cloud.html.md).

@ -1,16 +1,16 @@
How To Attach Volume To Windows VM On CloudFerro Cloud[](#how-to-attach-volume-to-windows-vm-on-brand-name "Permalink to this headline")
How To Attach Volume To Windows VM On CloudFerro Cloud[🔗](#how-to-attach-volume-to-windows-vm-on-brand-name "Permalink to this headline")
=========================================================================================================================================

In this tutorial, you will attach a volume to your Windows virtual machine. It increases the storage available for your files.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a new volume
> * Attaching the new volume to a VM
> * Preparing the volume for use with a VM

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -21,7 +21,7 @@ No. 2 **Windows VM**

You must operate a Microsoft Windows virtual machine running on CloudFerro Cloud cloud. You can access it using the webconsole ([How to access the VM from OpenStack console on CloudFerro Cloud](../cloud/How-to-access-the-VM-from-OpenStack-console-on-CloudFerro-Cloud.html.md)) or through RDP. If you are using RDP, we strongly recommend using a bastion host for your security: [Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on CloudFerro Cloud](../windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-CloudFerro-Cloud.html.md).

Step 1: Create a New Volume[](#step-1-create-a-new-volume "Permalink to this headline")
Step 1: Create a New Volume[🔗](#step-1-create-a-new-volume "Permalink to this headline")
----------------------------------------------------------------------------------------

Log in to the Horizon panel available at <https://horizon.cloudferro.com>.
@ -48,7 +48,7 @@ You should now see the volume you just created. In our case it is called **data*

![../_images/data_volume_windows.png](../_images/data_volume_windows.png)

Step 2: Attach the Volume to VM[](#step-2-attach-the-volume-to-vm "Permalink to this headline")
Step 2: Attach the Volume to VM[🔗](#step-2-attach-the-volume-to-vm "Permalink to this headline")
------------------------------------------------------------------------------------------------

Now that you have created your volume, you can use it as storage for one of your VMs. To do that, attach the volume to a VM.
@ -69,7 +69,7 @@ Your volume should now be attached to the virtual machine:

![../_images/data_attached_to_vm1.png](../_images/data_attached_to_vm1.png)

Step 3: Format the Drive[](#step-3-format-the-drive "Permalink to this headline")
Step 3: Format the Drive[🔗](#step-3-format-the-drive "Permalink to this headline")
----------------------------------------------------------------------------------

Start your VM and access it using RDP or the webconsole (see Prerequisite 2). Right-click the Start button and from the context menu select **Disk Management**. You should receive the following window:
@ -138,7 +138,7 @@ Your volume should now be mounted. If you chose to assign a drive letter, it sho

If you want to create more partitions, repeat the process: right-click the **Unallocated** space and complete the wizard as previously explained.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Once you have gathered some data on your volume, you can create its backup, as explained in this article:

@ -1,11 +1,11 @@
How to Create Backup of Your Volume From Windows Machine on CloudFerro Cloud[](#how-to-create-backup-of-your-volume-from-windows-machine-on-brand-name "Permalink to this headline")
How to Create Backup of Your Volume From Windows Machine on CloudFerro Cloud[🔗](#how-to-create-backup-of-your-volume-from-windows-machine-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================================

In this tutorial you will learn how to create a backup of your volume on CloudFerro Cloud cloud. It allows you to save its state at a certain point in time and, for example, perform some experiments on it. You can then restore the volume to its previous state if you are unhappy with the results.

Those backups are stored using object storage. Restoring a backup will delete all data added to a volume after the backup was created.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Disconnecting the volume from a Windows virtual machine
@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Restoring a backup of a volume
> * Reattaching a volume to your Windows virtual machine

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -28,7 +28,7 @@ No. 3 **Volume**

A volume must be connected to your Windows virtual machine.

Disconnecting the volume from a virtual machine[](#disconnecting-the-volume-from-a-virtual-machine "Permalink to this headline")
Disconnecting the volume from a virtual machine[🔗](#disconnecting-the-volume-from-a-virtual-machine "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

Before creating a backup of your volume, disconnect it.
@ -71,7 +71,7 @@ The following window should appear:

Click **Detach Volume** and confirm your choice.

Creating a Backup of Your Volume[](#creating-a-backup-of-your-volume "Permalink to this headline")
Creating a Backup of Your Volume[🔗](#creating-a-backup-of-your-volume "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Now that you have detached the volume from your virtual machine, you can make its backup by following these steps:
@ -91,7 +91,7 @@ Once the process is over, you should see the status **Available** next to your b

![../_images/volume_available1.png](../_images/volume_available1.png)

Restoring the backup[](#restoring-the-backup "Permalink to this headline")
Restoring the backup[🔗](#restoring-the-backup "Permalink to this headline")
---------------------------------------------------------------------------

There are two ways of restoring a backup:
@ -113,7 +113,7 @@ Once this operation is completed, you should see the status **Available** next t

You can now reattach the volume to your virtual machine.

Reattaching the volume to your virtual machine[](#reattaching-the-volume-to-your-virtual-machine "Permalink to this headline")
Reattaching the volume to your virtual machine[🔗](#reattaching-the-volume-to-your-virtual-machine "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

In the **Volumes** > **Volumes** section of the Horizon dashboard, find the row containing your volume. Choose **Manage Attachments** from the drop-down menu in the **Actions** column for it. You should get the following window:

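The same backup workflow has a CLI equivalent. A guarded sketch (an illustration, not the article's Horizon procedure; `my-volume` and `my-backup` are placeholder names):

```shell
# Create, list, and restore a volume backup from the command line.
if command -v openstack >/dev/null 2>&1; then
  openstack volume backup create --name my-backup my-volume
  openstack volume backup list
  # Restoring overwrites the volume with the backed-up state:
  openstack volume backup restore my-backup my-volume
  summary="backup round-trip attempted"
else
  summary="skipped: openstack CLI not found"
fi
echo "$summary"
```

As with the Horizon procedure, the volume should be detached before backing it up.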
@ -1,4 +1,4 @@
How many objects can I put into Object Storage container bucket on CloudFerro Cloud[](#how-many-objects-can-i-put-into-object-storage-container-bucket-on-brand-name "Permalink to this headline")
How many objects can I put into Object Storage container bucket on CloudFerro Cloud[🔗](#how-many-objects-can-i-put-into-object-storage-container-bucket-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================

It is highly advisable to put no more than 1 million (1 000 000) objects into one bucket (container). Having more objects makes listing them very inefficient. We suggest creating many buckets with a small number of objects each, rather than a few buckets with many objects.

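One way to stay under such a per-bucket count is to shard object keys across a fixed set of buckets. A minimal sketch (the `data-*` bucket names, the sample key, and the count of 16 are all assumptions for illustration):

```shell
# Map an object key to one of 16 buckets via a CRC-based hash, so no
# single bucket accumulates millions of objects.
key="products/2024/item-42.json"
n_buckets=16
crc=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
bucket="data-$(( crc % n_buckets ))"
echo "$key -> $bucket"
```

Because the hash is deterministic, the same key always maps to the same bucket, so lookups need no extra index.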
@ -1,4 +1,4 @@
How to attach a volume to VM less than 2TB on Linux on CloudFerro Cloud[](#how-to-attach-a-volume-to-vm-less-than-2tb-on-linux-on-brand-name "Permalink to this headline")
How to attach a volume to VM less than 2TB on Linux on CloudFerro Cloud[🔗](#how-to-attach-a-volume-to-vm-less-than-2tb-on-linux-on-brand-name "Permalink to this headline")
===========================================================================================================================================================================

In this tutorial, you will create a volume which is smaller than 2 TB. Then, you will attach it to a VM and format it in the appropriate way.
@ -7,14 +7,14 @@ Note

If you want to create and attach a volume that has more than 2 TB of storage, you will need to use different software for its formatting. If this is the case, please visit the following article instead: [How to attach a volume to VM more than 2TB on Linux on CloudFerro Cloud](How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-CloudFerro-Cloud.html.md).

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a new volume
> * Attaching the new volume to a VM
> * Formatting and mounting of the new volume

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -41,7 +41,7 @@ No. 4 **SSH access to the VM**

[How to connect to your virtual machine via SSH in Linux on CloudFerro Cloud](../networking/How-to-connect-to-your-virtual-machine-via-SSH-in-Linux-on-CloudFerro-Cloud.html.md).

Step 1: Create a Volume[](#step-1-create-a-volume "Permalink to this headline")
Step 1: Create a Volume[🔗](#step-1-create-a-volume "Permalink to this headline")
--------------------------------------------------------------------------------

Log in to the Horizon panel available at <https://horizon.cloudferro.com>.
@ -68,7 +68,7 @@ You should now see the volume you just created. In our case it is called **volum

![../_images/attach_2tb_volume_12.png](../_images/attach_2tb_volume_12.png)

Step 2: Attach the Volume to VM[](#step-2-attach-the-volume-to-vm "Permalink to this headline")
Step 2: Attach the Volume to VM[🔗](#step-2-attach-the-volume-to-vm "Permalink to this headline")
------------------------------------------------------------------------------------------------

Now that you have created your volume, you can use it as storage for one of your VMs. To do that, attach the volume to a VM.
@ -91,7 +91,7 @@ Your volume should now be attached to the VM:

![../_images/attach_2tb_volume_31.png](../_images/attach_2tb_volume_31.png)

Step 3: Partition the Volume[](#step-3-partition-the-volume "Permalink to this headline")
Step 3: Partition the Volume[🔗](#step-3-partition-the-volume "Permalink to this headline")
------------------------------------------------------------------------------------------

It is time to access your virtual machine to prepare the volume for data storage.
@ -176,7 +176,7 @@ The device file of the new partition should have the same name as the device fil

![../_images/attach_2tb_volume_51.png](../_images/attach_2tb_volume_51.png)

Step 5: Create the File System[](#step-5-create-the-file-system "Permalink to this headline")
Step 5: Create the File System[🔗](#step-5-create-the-file-system "Permalink to this headline")
----------------------------------------------------------------------------------------------

In order to save data on this volume, create an **ext4** filesystem on it. **ext4** is arguably the most popular filesystem on Linux distributions.
@ -192,7 +192,7 @@ Replace **sdb1** with the name of the device file of the partition provided to y

This process should take less than a minute.

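This `mkfs.ext4` step can be rehearsed without root or a real disk by formatting a file-backed image (a demo sketch; on the actual VM you would format the partition's device file, such as **sdb1**, instead):

```shell
# Create a small file-backed image and put an ext4 filesystem on it.
# Guarded so the sketch is a no-op where e2fsprogs is not installed.
if command -v mkfs.ext4 >/dev/null 2>&1; then
  truncate -s 64M demo.img     # sparse 64 MB image file
  mkfs.ext4 -q -F demo.img     # -F: allow formatting a regular file
  status="formatted demo.img"
else
  status="skipped: mkfs.ext4 not available"
fi
echo "$status"
```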
Step 6: Create the mount point[](#step-6-create-the-mount-point "Permalink to this headline")
Step 6: Create the mount point[🔗](#step-6-create-the-mount-point "Permalink to this headline")
----------------------------------------------------------------------------------------------

You need to specify the location in the directory structure from which you will access the data stored on that volume. In Linux it is typically done in the **/etc/fstab** config file.
@ -269,7 +269,7 @@ sudo chmod 777 /my_volume

During the next boot of your virtual machine, the volume should be mounted automatically.

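An **/etc/fstab** entry for such a volume might look like this (a sketch reusing the **sdb1** partition and **/my_volume** mount point from this article; your device name may differ, and many setups reference the partition by `UUID=` instead):

```
/dev/sdb1  /my_volume  ext4  defaults  0  2
```

The last two fields control `dump` backups and the `fsck` check order; `0 2` is the usual choice for a non-root data filesystem.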
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You have successfully created a volume and prepared it for use on a Linux virtual machine.

@ -1,4 +1,4 @@
How to attach a volume to VM more than 2TB on Linux on CloudFerro Cloud[](#how-to-attach-a-volume-to-vm-more-than-2tb-on-linux-on-brand-name "Permalink to this headline")
How to attach a volume to VM more than 2TB on Linux on CloudFerro Cloud[🔗](#how-to-attach-a-volume-to-vm-more-than-2tb-on-linux-on-brand-name "Permalink to this headline")
===========================================================================================================================================================================

In this tutorial, you will create a volume which is larger than 2 TB. Then, you will attach it to a VM and format it in the appropriate way.
@ -7,14 +7,14 @@ Note

If you want to create and attach a volume that has less than 2 TB of storage, you will need to use different software for its formatting. If this is the case, please visit the following article instead: [How to attach a volume to VM less than 2TB on Linux on CloudFerro Cloud](How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-CloudFerro-Cloud.html.md).

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a new volume
> * Attaching the new volume to a VM
> * Formatting and mounting of the new volume

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -39,7 +39,7 @@ No. 4 **SSH access to the VM**

[How to connect to your virtual machine via SSH in Linux on CloudFerro Cloud](../networking/How-to-connect-to-your-virtual-machine-via-SSH-in-Linux-on-CloudFerro-Cloud.html.md).

Step 1: Create a Volume[](#step-1-create-a-volume "Permalink to this headline")
Step 1: Create a Volume[🔗](#step-1-create-a-volume "Permalink to this headline")
--------------------------------------------------------------------------------

Log in to the Horizon panel available at <https://horizon.cloudferro.com>.
@ -66,7 +66,7 @@ You should now see the volume you just created. In our case it is called **my-fi

![../_images/attach_2tb_volume_1.png](../_images/attach_2tb_volume_1.png)

Step 2: Attach the Volume to VM[](#step-2-attach-the-volume-to-vm "Permalink to this headline")
Step 2: Attach the Volume to VM[🔗](#step-2-attach-the-volume-to-vm "Permalink to this headline")
------------------------------------------------------------------------------------------------

Now that you have created your volume, you can use it as storage for one of your VMs. To do that, attach the volume to a VM.
@ -89,7 +89,7 @@ Your volume should now be attached to the VM:

![../_images/attach_2tb_volume_3.png](../_images/attach_2tb_volume_3.png)

Step 3: Create the Partition Table[](#step-3-create-the-partition-table "Permalink to this headline")
Step 3: Create the Partition Table[🔗](#step-3-create-the-partition-table "Permalink to this headline")
------------------------------------------------------------------------------------------------------

It is time to access your virtual machine to prepare the volume for data storage.
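A volume above 2 TB needs a GPT partition table, since the older MBR scheme cannot address more than 2 TB. A guarded sketch using `parted` (an illustration, not the article's exact commands; `/dev/sdb` is a placeholder device name, so verify yours with `lsblk` first):

```shell
# Create a GPT label and one partition spanning the whole disk.
# Guarded so the sketch is a no-op unless /dev/sdb really exists.
if command -v parted >/dev/null 2>&1 && [ -b /dev/sdb ]; then
  sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
  outcome="partitioned /dev/sdb"
else
  outcome="skipped: parted not available or /dev/sdb not present"
fi
echo "$outcome"
```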
@ -188,7 +188,7 @@ The device file of the new partition should have the same name as the device fil

![../_images/attach_2tb_volume_5.png](../_images/attach_2tb_volume_5.png)

Step 5: Create the File System[](#step-5-create-the-file-system "Permalink to this headline")
Step 5: Create the File System[🔗](#step-5-create-the-file-system "Permalink to this headline")
----------------------------------------------------------------------------------------------

In order to save data on this volume, create an **ext4** filesystem on it. **ext4** is arguably the most popular filesystem on Linux distributions.
@ -204,7 +204,7 @@ Replace **sdb1** with the name of the device file of the partition provided to y

This process took less than a minute for a 2.4-terabyte volume.

Step 6: Create the mount point[](#step-6-create-the-mount-point "Permalink to this headline")
Step 6: Create the mount point[🔗](#step-6-create-the-mount-point "Permalink to this headline")
----------------------------------------------------------------------------------------------

You need to specify the location in the directory structure from which you will access the data stored on that volume. In Linux it is typically done in the **/etc/fstab** config file.
@ -281,7 +281,7 @@ sudo chmod 777 /my_volume

During the next boot of your virtual machine, the volume should be mounted automatically.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You have successfully created a volume larger than 2 TB and prepared it for use on a Linux virtual machine.

@ -1,9 +1,9 @@
How to create or delete volume snapshot on CloudFerro Cloud[](#how-to-create-or-delete-volume-snapshot-on-brand-name "Permalink to this headline")
How to create or delete volume snapshot on CloudFerro Cloud[🔗](#how-to-create-or-delete-volume-snapshot-on-brand-name "Permalink to this headline")
===================================================================================================================================================

A volume snapshot allows you to save the state of a volume at a specific point in time. Here is how to create or delete a volume snapshot using the Horizon dashboard or the OpenStack CLI client.

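On the CLI side, the snapshot lifecycle can be sketched as follows (guarded illustration; `my-volume` and `my-snapshot` are placeholder names):

```shell
# Create, list, and delete a volume snapshot from the command line.
if command -v openstack >/dev/null 2>&1; then
  openstack volume snapshot create --volume my-volume my-snapshot
  openstack volume snapshot list
  openstack volume snapshot delete my-snapshot
  note="snapshot round-trip attempted"
else
  note="skipped: openstack CLI not found"
fi
echo "$note"
```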
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -1,4 +1,4 @@
|
||||
How to create volume Snapshot and attach as Volume on Linux or Windows on CloudFerro Cloud[](#how-to-create-volume-snapshot-and-attach-as-volume-on-linux-or-windows-on-brand-name "Permalink to this headline")
|
||||
How to create volume Snapshot and attach as Volume on Linux or Windows on CloudFerro Cloud[🔗](#how-to-create-volume-snapshot-and-attach-as-volume-on-linux-or-windows-on-brand-name "Permalink to this headline")
|
||||
=================================================================================================================================================================================================================
|
||||
|
||||
To create a snapshot of a Volume:

@ -1,4 +1,4 @@
How to export a volume over NFS on CloudFerro Cloud[](#how-to-export-a-volume-over-nfs-on-brand-name "Permalink to this headline")
How to export a volume over NFS on CloudFerro Cloud[🔗](#how-to-export-a-volume-over-nfs-on-brand-name "Permalink to this headline")
===================================================================================================================================

**Server configuration**
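As a rough sketch of where the server configuration ends up, an `/etc/exports` entry might look like this (the path and network range are assumptions; adjust to your volume's mount point and project network):

```shell
# Export /mnt/data to the project network, read-write.
echo '/mnt/data 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# Reload the export table on the NFS server.
sudo exportfs -ra
```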

@ -1,4 +1,4 @@
How to export a volume over NFS outside of a project on CloudFerro Cloud[](#how-to-export-a-volume-over-nfs-outside-of-a-project-on-brand-name "Permalink to this headline")
How to export a volume over NFS outside of a project on CloudFerro Cloud[🔗](#how-to-export-a-volume-over-nfs-outside-of-a-project-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================

**Prerequisites**

@ -1,4 +1,4 @@
How to extend the volume in Linux on CloudFerro Cloud[](#how-to-extend-the-volume-in-linux-on-brand-name "Permalink to this headline")
How to extend the volume in Linux on CloudFerro Cloud[🔗](#how-to-extend-the-volume-in-linux-on-brand-name "Permalink to this headline")
=======================================================================================================================================

It is possible to extend a Volume from the Horizon dashboard.
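The same operation is also available from the OpenStack CLI; a sketch, assuming a detached volume named `myvolume`, a target size of 100 GB, and the assumed device name `/dev/vdb` inside the VM:

```shell
# Grow the volume (the new size must be larger than the current one).
openstack volume set --size 100 myvolume

# After re-attaching, grow the filesystem inside the VM, e.g. for ext4:
sudo resize2fs /dev/vdb
```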

@ -1,11 +1,11 @@
How to mount object storage in Linux on CloudFerro Cloud[](#how-to-mount-object-storage-in-linux-on-brand-name "Permalink to this headline")
How to mount object storage in Linux on CloudFerro Cloud[🔗](#how-to-mount-object-storage-in-linux-on-brand-name "Permalink to this headline")
=============================================================================================================================================

S3 is a protocol for storing and retrieving data on and from remote servers. The user has their own S3 account and is identified by a pair of identifiers, called the Access Key and the Secret Key. These keys act as a username and password for your S3 account.

Usually, on desktop computers we refer to files within a directory. In S3 terminology, a file is called an “object” and its name is called a “key”. The S3 term for a directory (or folder) is “bucket”. To mount object storage on your Linux computer, you will use the command **s3fs**.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

Prerequisite No. 1 **Hosting**

@ -20,7 +20,7 @@ The Access Key and Secret Key for access to an s3 account are also called the

At this point, you should have access to the cloud environment, using the OpenStack CLI client. It means that the command **openstack** is operational.

Check your credentials and save them in a file[](#check-your-credentials-and-save-them-in-a-file "Permalink to this headline")
Check your credentials and save them in a file[🔗](#check-your-credentials-and-save-them-in-a-file "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

Check your credentials with the following command:

@ -50,7 +50,7 @@ chmod 600 .passwd-s3fs

Code **600** means you can read and write the file or directory but that none of the other users on the local host will have access to it.

Enable 3fs[](#enable-3fs "Permalink to this headline")
Enable 3fs[🔗](#enable-3fs "Permalink to this headline")
-------------------------------------------------------

Uncomment “user\_allow\_other” in the *fuse.conf* file, as root:
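The uncommenting can also be done in one line with **sed**; a sketch, assuming the stock `/etc/fuse.conf` layout where the option is commented out with a leading `#`:

```shell
# Remove the leading '#' from the user_allow_other line in /etc/fuse.conf.
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
```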
@ -67,7 +67,7 @@ s3fs w-container-1 /local/mount/point - passwd_file=~/.passwd-s3fs -o url=https:

```

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

If you want to access s3 files without mounting them to the local computer, use the command **s3cmd**.
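For comparison, typical **s3cmd** calls look like this; the bucket name matches the s3fs example above, and the file names are placeholders:

```shell
# List objects in the bucket.
s3cmd ls s3://w-container-1

# Download an object to a local file, and upload one back.
s3cmd get s3://w-container-1/report.txt report.txt
s3cmd put results.csv s3://w-container-1/results.csv
```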

@ -1,11 +1,11 @@
How to move data volume between two VMs using OpenStack Horizon on CloudFerro Cloud[](#how-to-move-data-volume-between-two-vms-using-openstack-horizon-on-brand-name "Permalink to this headline")
How to move data volume between two VMs using OpenStack Horizon on CloudFerro Cloud[🔗](#how-to-move-data-volume-between-two-vms-using-openstack-horizon-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================

Volumes are used to store data, and that data can be accessed from the virtual machine to which the volume is attached. To access data stored on a volume from another virtual machine, you need to disconnect the volume from the virtual machine to which it is currently connected and connect it to the other instance.

This article uses the Horizon dashboard to transfer volumes between virtual machines which are in the same project.
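Although this article uses Horizon, the same move can be sketched in two CLI commands (the server and volume names are placeholders):

```shell
# Detach the volume from the source VM...
openstack server remove volume source-vm myvolume

# ...and attach it to the destination VM.
openstack server add volume destination-vm myvolume
```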

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -20,7 +20,7 @@ No. 3 **Destination virtual machine**

We also assume that you want to access the data stored on the volume mentioned in Prerequisite No. 2 from another instance in the same project - we will call that instance the *destination* virtual machine.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Ensure that the transfer is possible

@ -37,11 +37,11 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

Some parts of some screenshots in this article are greyed out for privacy reasons.

Ensure that the transfer is possible[](#ensure-that-the-transfer-is-possible "Permalink to this headline")
Ensure that the transfer is possible[🔗](#ensure-that-the-transfer-is-possible "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Before the actual transfer, you have to examine the state of the volume and of the instances, and conclude whether the transfer is possible right away or whether you should perform other operations first:

### Projects must be on the same cloud[](#projects-must-be-on-the-same-cloud "Permalink to this headline")
### Projects must be on the same cloud[🔗](#projects-must-be-on-the-same-cloud "Permalink to this headline")

If the projects are not on the same cloud, do not use this article but see one of these articles instead:
@ -1,11 +1,11 @@
How to restore volume from snapshot on CloudFerro Cloud[](#how-to-restore-volume-from-snapshot-on-brand-name "Permalink to this headline")
How to restore volume from snapshot on CloudFerro Cloud[🔗](#how-to-restore-volume-from-snapshot-on-brand-name "Permalink to this headline")
===========================================================================================================================================

In this article, you will learn how to restore a volume from a volume snapshot using the Horizon dashboard or the OpenStack CLI client.

This can be achieved by creating a new volume from the existing snapshot. You can then delete the previous snapshot and, optionally, the previous volume.
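The CLI equivalent is a single create command; the names are placeholders:

```shell
# Create a new volume from an existing snapshot.
openstack volume create --snapshot mysnapshot restored-volume

# Optionally remove the snapshot (and the old volume) afterwards.
openstack volume snapshot delete mysnapshot
```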

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -1,4 +1,4 @@
Volume snapshot inheritance and its consequences on CloudFerro Cloud[](#volume-snapshot-inheritance-and-its-consequences-on-brand-name "Permalink to this headline")
Volume snapshot inheritance and its consequences on CloudFerro Cloud[🔗](#volume-snapshot-inheritance-and-its-consequences-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================

Performing a volume snapshot is a common form of securing your data against loss.

@ -15,6 +15,6 @@
* [How to export a volume over NFS outside of a project on CloudFerro Cloud](How-to-export-a-volume-over-NFS-outside-of-a-project-on-CloudFerro-Cloud.html.md)
* [How to extend the volume in Linux on CloudFerro Cloud](How-to-extend-the-volume-in-Linux-on-CloudFerro-Cloud.html.md)
* [How to mount object storage in Linux on CloudFerro Cloud](How-to-mount-object-storage-in-Linux-on-CloudFerro-Cloud.html.md)
* [Projects must be on the same cloud[](#projects-must-be-on-the-same-cloud "Permalink to this headline")](How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-CloudFerro-Cloud.html.md)
* [Projects must be on the same cloud[🔗](#projects-must-be-on-the-same-cloud "Permalink to this headline")](How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-CloudFerro-Cloud.html.md)
* [How to restore volume from snapshot on CloudFerro Cloud](How-to-restore-volume-from-snapshot-on-CloudFerro-Cloud.html.md)
* [Volume snapshot inheritance and its consequences on CloudFerro Cloud](Volume-snapshot-inheritance-and-its-consequences-on-CloudFerro-Cloud.html.md)

@ -1,4 +1,4 @@
Automatic Kubernetes cluster upgrade on CloudFerro Cloud OpenStack Magnum[](#automatic-kubernetes-cluster-upgrade-on-brand-name-openstack-magnum "Permalink to this headline")
Automatic Kubernetes cluster upgrade on CloudFerro Cloud OpenStack Magnum[🔗](#automatic-kubernetes-cluster-upgrade-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================

Warning

@ -1,4 +1,4 @@
Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum[](#autoscaling-kubernetes-cluster-resources-on-brand-name-openstack-magnum "Permalink to this headline")
Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum[🔗](#autoscaling-kubernetes-cluster-resources-on-brand-name-openstack-magnum "Permalink to this headline")
=======================================================================================================================================================================================

When **autoscaling of Kubernetes clusters** is turned on, the system can
@ -9,7 +9,7 @@ When **autoscaling of Kubernetes clusters** is turned on, the system can

This article explains various commands to resize or scale the cluster, and ends with a command that automatically creates an autoscalable Kubernetes cluster for OpenStack Magnum.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Definitions of horizontal, vertical and nodes scaling
@ -18,7 +18,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Get cluster template labels from Horizon interface
> * Get cluster template labels from the CLI

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -43,19 +43,19 @@ Step 2 of article [How to Create a Kubernetes Cluster Using CloudFerro Cloud Ope

There are three different autoscaling features that a Kubernetes cloud can offer:

Horizontal Pod Autoscaler[](#horizontal-pod-autoscaler "Permalink to this headline")
Horizontal Pod Autoscaler[🔗](#horizontal-pod-autoscaler "Permalink to this headline")
-------------------------------------------------------------------------------------

Scaling a Kubernetes cluster horizontally means increasing or decreasing the number of running pods, depending on the actual demands at run time. Parameters to take into account are the usage of CPU and memory, as well as the desired minimum and maximum numbers of pod replicas.

Horizontal scaling is also known as “scaling out” and is shortened as HPA.
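A minimal HPA sketch with **kubectl**, assuming a Deployment named `my-app` and a metrics server running in the cluster:

```shell
# Keep average CPU usage around 70%, with 2 to 10 replicas.
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10

# Inspect the resulting HorizontalPodAutoscaler.
kubectl get hpa my-app
```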

Vertical Pod Autoscaler[](#vertical-pod-autoscaler "Permalink to this headline")
Vertical Pod Autoscaler[🔗](#vertical-pod-autoscaler "Permalink to this headline")
---------------------------------------------------------------------------------

Vertical scaling (or “scaling up”, VPA) is adding or subtracting resources to and from an existing machine. If more CPUs are needed, add them. When they are not needed, shut some of them down.

Cluster Autoscaler[](#cluster-autoscaler "Permalink to this headline")
Cluster Autoscaler[🔗](#cluster-autoscaler "Permalink to this headline")
-----------------------------------------------------------------------

HPA and VPA reorganize the usage of resources and the number of pods; however, there may come a time when the size of the system itself prevents it from satisfying the demand. The solution is to autoscale the cluster itself, increasing or decreasing the number of nodes on which the pods run.
@ -64,7 +64,7 @@ Once the number of nodes is adjusted, the pods and other resources need to rebal

All three models of autoscaling can be combined.

Define Autoscaling When Creating a Cluster[](#define-autoscaling-when-creating-a-cluster "Permalink to this headline")
Define Autoscaling When Creating a Cluster[🔗](#define-autoscaling-when-creating-a-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------

You can define autoscaling parameters while defining a new cluster, using the window called **Size** in the cluster creation wizard:
@ -79,7 +79,7 @@ Warning

If you decide to use the NGINX Ingress option while defining a cluster, NGINX ingress will run as 3 replicas on 3 separate nodes. This will override the minimum number of nodes in the Magnum autoscaler.

Autoscaling Node Groups at Run Time[](#autoscaling-node-groups-at-run-time "Permalink to this headline")
Autoscaling Node Groups at Run Time[🔗](#autoscaling-node-groups-at-run-time "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

The autoscaler in Magnum uses Node Groups. Node groups can be used to create workers with different flavors. The default-worker node group is automatically created when the cluster is provisioned. Node groups have lower and upper limits of node count. This is the command to print them out for a given cluster:
@ -143,7 +143,7 @@ the result will now be with a corrected value:

```

How Autoscaling Detects Upper Limit[](#how-autoscaling-detects-upper-limit "Permalink to this headline")
How Autoscaling Detects Upper Limit[🔗](#how-autoscaling-detects-upper-limit "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

The first version of Autoscaling would take the current upper limit of autoscaling in variable *node\_count* and add 1 to it. If the command to create a cluster were
@ -179,7 +179,7 @@ Any additional node group must include concrete *max\_node\_count* attribute.

See Prerequisites No. 4 for detailed examples of using the **openstack coe nodegroup** family of commands.

Autoscaling Labels for Clusters[](#autoscaling-labels-for-clusters "Permalink to this headline")
Autoscaling Labels for Clusters[🔗](#autoscaling-labels-for-clusters "Permalink to this headline")
-------------------------------------------------------------------------------------------------

There are three labels for clusters that influence autoscaling:
@ -198,7 +198,7 @@ List clusters with **Container Infra** => **Cluster** and click on the name of t

If it is true, autoscaling is enabled and the cluster will autoscale.

Create New Cluster Using CLI With Autoscaling On[](#create-new-cluster-using-cli-with-autoscaling-on "Permalink to this headline")
Create New Cluster Using CLI With Autoscaling On[🔗](#create-new-cluster-using-cli-with-autoscaling-on "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------

The command to create a cluster with the CLI must encompass all of the usual parameters as well as **all of the labels** needed for the cluster to function. The peculiarity of the syntax is that the label parameters must form one single string, without any blanks in between.
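A sketch of such a command; the cluster and template names are placeholders, and the label names shown (`auto_scaling_enabled`, `min_node_count`, `max_node_count`) are the ones Magnum commonly uses for autoscaling — check your own cluster template. Note that the `--labels` argument is one unbroken string:

```shell
openstack coe cluster create my-autoscaled-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 2 \
  --labels auto_scaling_enabled=true,min_node_count=1,max_node_count=5
```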
@ -239,7 +239,7 @@ Three worker node addresses are active: **10.0.0.102**, **10.0.0.27**, and **10.

There is no traffic to the cluster, so autoscaling immediately kicked in. A minute or two after the creation finished, the number of worker nodes fell by one, to addresses **10.0.0.27** and **10.0.0.194** – that is autoscaling at work.

Nodegroups With Worker Role Will Be Automatically Autoscalled[](#nodegroups-with-worker-role-will-be-automatically-autoscalled "Permalink to this headline")
Nodegroups With Worker Role Will Be Automatically Autoscalled[🔗](#nodegroups-with-worker-role-will-be-automatically-autoscalled "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

The autoscaler automatically detects all new nodegroups with the “worker” role assigned.
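For example, a new autoscalable node group might be created like this; the group name, flavor and node-count limits are assumptions:

```shell
# The "worker" role makes the autoscaler pick the node group up automatically.
openstack coe nodegroup create k8s-cluster extra-workers \
  --role worker \
  --node-count 1 \
  --min-nodes 1 \
  --max-nodes 5
```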
@ -317,7 +317,7 @@ openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role



How to Obtain All Labels From Horizon Interface[](#how-to-obtain-all-labels-from-horizon-interface "Permalink to this headline")
How to Obtain All Labels From Horizon Interface[🔗](#how-to-obtain-all-labels-from-horizon-interface "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

Use **Container Infra** => **Clusters** and click on the cluster name. You will get plain text in the browser; just copy the rows under **Labels** and paste them into the text editor of your choice.

@ -326,7 +326,7 @@ Use **Container Infra** => **Clusters** and click on the cluster name. You will

In the text editor, manually remove the line ends to make one string without breaks and carriage returns, then paste it back into the command.
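The joining itself can also be scripted; a sketch, assuming the copied rows are saved one label per line in a hypothetical file `labels.txt` (the sample contents below are an assumption):

```shell
# Sample rows as copied from Horizon (contents are an assumption).
printf 'auto_scaling_enabled=true\nmin_node_count=1\nmax_node_count=5\n' > labels.txt

# Join the lines with commas and drop the trailing comma.
labels=$(tr '\n' ',' < labels.txt | sed 's/,$//')
echo "$labels"
```

The resulting string can be passed directly to `--labels`.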

How To Obtain All Labels From the CLI[](#how-to-obtain-all-labels-from-the-cli "Permalink to this headline")
How To Obtain All Labels From the CLI[🔗](#how-to-obtain-all-labels-from-the-cli "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

There is a special command which will produce labels from a cluster:

@ -342,12 +342,12 @@ This is the result:

That is *yaml* format, as specified by the **-f** parameter. The rows represent label values; your next action is to create one long string without line breaks, as in the previous example, and then form the CLI command.

Use Labels String When Creating Cluster in Horizon[](#use-labels-string-when-creating-cluster-in-horizon "Permalink to this headline")
Use Labels String When Creating Cluster in Horizon[🔗](#use-labels-string-when-creating-cluster-in-horizon "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

The long labels string can also be used when creating the cluster manually, i.e. from the Horizon interface. The place to insert those labels is described in *Step 4 Define Labels* in Prerequisites No. 2.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Autoscaling is similar to autohealing of Kubernetes clusters, and both bring automation to the table. They also guarantee that the system will autocorrect as long as it stays within its basic parameters. Use autoscaling of cluster resources as much as you can!
@ -1,7 +1,7 @@
Backup of Kubernetes Cluster using Velero[](#backup-of-kubernetes-cluster-using-velero "Permalink to this headline")
Backup of Kubernetes Cluster using Velero[🔗](#backup-of-kubernetes-cluster-using-velero "Permalink to this headline")
=====================================================================================================================

What is Velero[](#what-is-velero "Permalink to this headline")
What is Velero[🔗](#what-is-velero "Permalink to this headline")
---------------------------------------------------------------

[Velero](https://velero.io) is the official open source project from VMware. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed up objects can be restored on the same cluster, or on a new one. Using a package like Velero is essential for any serious development in the Kubernetes cluster.
@ -10,7 +10,7 @@ In essence, you create object store under OpenStack, either using Horizon or Swi

Velero has its own CLI command system, so it is possible to automate the creation of backups using cron jobs.
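For instance, instead of an external cron job you can use Velero's built-in scheduling; a sketch with assumed names and namespace:

```shell
# Take a backup of the "default" namespace every day at 03:00.
velero schedule create daily-default --schedule="0 3 * * *" --include-namespaces default

# List the configured schedules.
velero schedule get
```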

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Getting EC2 Client Credentials

@ -21,7 +21,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Example 1 Basics of Restoring an Application
> * Example 2 Snapshot of Restoring an Application

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -79,7 +79,7 @@ Either way, we shall assume that there is a container called “bucketnew”:

Supply your own unique name while working through this article.

Before Installing Velero[](#before-installing-velero "Permalink to this headline")
Before Installing Velero[🔗](#before-installing-velero "Permalink to this headline")
-----------------------------------------------------------------------------------

We shall install Velero on Ubuntu 22.04; using other Linux distributions would be similar.

@ -93,7 +93,7 @@ sudo apt update && sudo apt upgrade

It will be necessary to have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).

### Installation step 1 Getting EC2 client credentials[](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")
### Installation step 1 Getting EC2 client credentials[🔗](#installation-step-1-getting-ec2-client-credentials "Permalink to this headline")

First, fetch EC2 credentials from OpenStack. They are necessary to access a private bucket (container). Generate them on your own by executing the following commands:

@ -105,7 +105,7 @@ openstack ec2 credentials list

Save the *Access Key* and the *Secret Key* somewhere. They will be needed in the next step, in which you set up a Velero configuration file.

### Installation step 2 Adjust the configuration file - “values.yaml”[](#installation-step-2-adjust-the-configuration-file-values-yaml "Permalink to this headline")
### Installation step 2 Adjust the configuration file - “values.yaml”[🔗](#installation-step-2-adjust-the-configuration-file-values-yaml "Permalink to this headline")

Now create or adjust a configuration file for Velero. Use the text editor of your choice to create that file. On macOS or Linux, for example, you can use **nano**, like this:

@ -462,7 +462,7 @@ schedules:

```

### Installation step 3 Creating namespace[](#installation-step-3-creating-namespace "Permalink to this headline")
### Installation step 3 Creating namespace[🔗](#installation-step-3-creating-namespace "Permalink to this headline")

Velero must be installed in an eponymous namespace, *velero*. This is the command to create it:

@ -472,7 +472,7 @@ namespace/velero created

```

### Installation step 4 Installing Velero with a Helm chart[](#installation-step-4-installing-velero-with-a-helm-chart "Permalink to this headline")
### Installation step 4 Installing Velero with a Helm chart[🔗](#installation-step-4-installing-velero-with-a-helm-chart "Permalink to this headline")

Here are the commands to install Velero by means of a Helm chart:

@ -538,7 +538,7 @@ velero-1721031498 Opaque 1 3d1h

```

### Installation step 5 Installing Velero CLI[](#installation-step-5-installing-velero-cli "Permalink to this headline")
### Installation step 5 Installing Velero CLI[🔗](#installation-step-5-installing-velero-cli "Permalink to this headline")

The final step is to install the Velero CLI – the command line interface suitable for working from the terminal window of your operating system.

@ -595,7 +595,7 @@ velero help

```

Working with Velero[](#working-with-velero "Permalink to this headline")
Working with Velero[🔗](#working-with-velero "Permalink to this headline")
-------------------------------------------------------------------------

So far, we have

@ -645,7 +645,7 @@ This is the result in terminal window:



Example 1 Basics of Restoring an Application[](#example-1-basics-of-restoring-an-application "Permalink to this headline")
Example 1 Basics of Restoring an Application[🔗](#example-1-basics-of-restoring-an-application "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

Let us now demonstrate how to restore a Kubernetes application. First, clone an example app from GitHub. Execute this:

@ -704,7 +704,7 @@ nginx-backup New 0 0 <nil> n/a <none>

```

Example 2 Snapshot of restoring an application[](#example-2-snapshot-of-restoring-an-application "Permalink to this headline")
Example 2 Snapshot of restoring an application[🔗](#example-2-snapshot-of-restoring-an-application "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

Start the sample nginx app:

@ -748,7 +748,7 @@ Run `velero restore describe nginx-backup-20220728015234` or `velero restore log

```

Delete a Velero backup[](#delete-a-velero-backup "Permalink to this headline")
Delete a Velero backup[🔗](#delete-a-velero-backup "Permalink to this headline")
-------------------------------------------------------------------------------

There are two ways to delete a backup made by Velero.

@ -769,10 +769,10 @@ Delete all data in object/block storage

will delete the backup resource, including all data in object/block storage
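In command form, the two ways can be sketched like this; the backup name matches the examples above:

```shell
# 1) Delete only the Backup resource; the data in object storage stays.
kubectl delete backup nginx-backup -n velero

# 2) Delete the backup together with all its data in object/block storage.
velero backup delete nginx-backup --confirm
```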

Removing Velero from the cluster[](#removing-velero-from-the-cluster "Permalink to this headline")
Removing Velero from the cluster[🔗](#removing-velero-from-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------

### Uninstall Velero[](#uninstall-velero "Permalink to this headline")
### Uninstall Velero[🔗](#uninstall-velero "Permalink to this headline")

To uninstall the Velero release:

@ -781,14 +781,14 @@ helm uninstall velero-1721031498 --namespace velero

```

### To delete Velero namespace[](#to-delete-velero-namespace "Permalink to this headline")
### To delete Velero namespace[🔗](#to-delete-velero-namespace "Permalink to this headline")

```
kubectl delete namespace velero

```

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Now that Velero is up and running, you can integrate it into your routine. It will be useful in all classic backup scenarios – disaster recovery, cluster and namespace migration, testing and development, application rollbacks, compliance and auditing, and so on. Apart from these broad use cases, Velero will help with specific Kubernetes cluster backup tasks, such as:

@ -1,9 +1,9 @@
CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image[](#ci-cd-pipelines-with-gitlab-on-brand-name-kubernetes-building-a-docker-image "Permalink to this headline")
CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image[🔗](#ci-cd-pipelines-with-gitlab-on-brand-name-kubernetes-building-a-docker-image "Permalink to this headline")
===================================================================================================================================================================================================

GitLab provides an isolated, private code registry and space for collaboration on code by teams. It also offers a broad range of code deployment automation capabilities. In this article, we will explain how to automate building a Docker image of your app.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Add your public key to GitLab and access GitLab from your command line
|
||||
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
|
||||
> * Create pipeline to build your app’s Docker image using Kaniko
|
||||
> * Trigger pipeline build
|
||||
|
||||
Prerequisites[](#prerequisites "Permalink to this headline")
|
||||
Prerequisites[🔗](#prerequisites "Permalink to this headline")
|
||||
-------------------------------------------------------------
|
||||
|
||||
No. 1 **Account**
|
||||
@ -52,7 +52,7 @@ See [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud
|
||||
|
||||
Here, we use the key pair to connect to GitLab instance that we previously installed in Prerequisite No. 3.
|
||||
|
||||
Step 1 Add your public key to GitLab and access GitLab from your command line[](#step-1-add-your-public-key-to-gitlab-and-access-gitlab-from-your-command-line "Permalink to this headline")
|
||||
Step 1 Add your public key to GitLab and access GitLab from your command line[🔗](#step-1-add-your-public-key-to-gitlab-and-access-gitlab-from-your-command-line "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
GitLab uses SSH-based authentication for command-line access to your GitLab instance. To make your console use these keys by default, store them in the **~/.ssh** folder under the names **id\_rsa** (private key) and **id\_rsa.pub** (public key).
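If you do not have such a key pair yet, you can generate one with **ssh-keygen**. A minimal sketch; the target path here is a throwaway temporary directory so the example does not overwrite real keys, whereas in practice you would write to **~/.ssh/id_rsa**:

```shell
# Generate an RSA key pair without a passphrase into a temporary directory.
keydir="$(mktemp -d)"
ssh-keygen -t rsa -b 4096 -N "" -f "$keydir/id_rsa" -q

# The public half is what you paste into GitLab (User Settings -> SSH Keys).
cat "$keydir/id_rsa.pub"
```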
@ -78,7 +78,7 @@ You should see an output similar to the following:

Step 2 Create project in GitLab and add sample application code[](#step-2-create-project-in-gitlab-and-add-sample-application-code "Permalink to this headline")
Step 2 Create project in GitLab and add sample application code[🔗](#step-2-create-project-in-gitlab-and-add-sample-application-code "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

We will first add a sample application in GitLab. This is a minimal Python-Flask application; its code can be downloaded from the CloudFerro Cloud [GitHub repository accompanying this Knowledge Base](https://github.com/CloudFerro/K8s-samples/tree/main/HelloWorld-Docker-image-Flask).

@ -135,7 +135,7 @@ When we enter GitLab GUI, we can see that our changes are committed:

Step 3 Define environment variables with your DockerHub coordinates in GitLab[](#step-3-define-environment-variables-with-your-dockerhub-coordinates-in-gitlab "Permalink to this headline")
Step 3 Define environment variables with your DockerHub coordinates in GitLab[🔗](#step-3-define-environment-variables-with-your-dockerhub-coordinates-in-gitlab "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

We want to create a CI/CD pipeline that will, upon a new commit, build a Docker image of our app and push it to the Docker Hub container registry. Let us use environment variables in GitLab to enable the connection to the Docker registry. Use the following keys and values:
@ -172,7 +172,7 @@ Scroll down to the section “Variables”and fill in the respective forms. In t

Now that the variables are set up, we will use them in our CI/CD pipeline.

Step 4 Create a pipeline to build your app’s Docker image using Kaniko[](#step-4-create-a-pipeline-to-build-your-app-s-docker-image-using-kaniko "Permalink to this headline")
Step 4 Create a pipeline to build your app’s Docker image using Kaniko[🔗](#step-4-create-a-pipeline-to-build-your-app-s-docker-image-using-kaniko "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The CI/CD pipeline that we are creating in GitLab will have only one job that
@ -216,7 +216,7 @@ Fill in and save the contents of a standardized configuration file
Build and publish the container image to DockerHub
: The second command builds and publishes the container image to DockerHub.
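The full pipeline file is elided by the diff above. As a hedged sketch of what such a job can look like (the variable names `DOCKERHUB_USER` and `DOCKERHUB_PASSWORD` and the image name are assumptions, not necessarily those used in the elided listing), a minimal **.gitlab-ci.yml** using the Kaniko executor is:

```yaml
build-image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Write the registry credentials that Kaniko reads at build time.
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"https://index.docker.io/v1/\":{\"username\":\"$DOCKERHUB_USER\",\"password\":\"$DOCKERHUB_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Build the image from the repository Dockerfile and push it to Docker Hub.
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "$DOCKERHUB_USER/helloworld-flask:latest"
```

Kaniko builds the image inside the job container without needing a privileged Docker daemon, which is why it is the usual choice on Kubernetes runners.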

Step 5 Trigger pipeline build[](#step-5-trigger-pipeline-build "Permalink to this headline")
Step 5 Trigger pipeline build[🔗](#step-5-trigger-pipeline-build "Permalink to this headline")
---------------------------------------------------------------------------------------------

A commit triggers the pipeline to run. After adding the file, publish changes to the repository with the following set of commands:
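The exact commands are elided by the diff; the publishing sequence is the standard Git add/commit/push cycle. The sketch below demonstrates the commit part in a throwaway repository so it is self-contained; in your real project clone you would stage **.gitlab-ci.yml**, commit, and finish with `git push` (the remote and branch names are assumptions):

```shell
# Self-contained demo in a temporary repository.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

# Stage and commit the pipeline file, as you would in the real project.
echo "build-image: {}" > .gitlab-ci.yml
git add .gitlab-ci.yml
git commit -q -m "Add CI pipeline"

# In the real project, publishing is completed with:  git push origin main
git log --oneline
```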
@ -236,7 +236,7 @@ Also when browsing our Docker registry, the image is published:

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Add your unit and integration tests to this pipeline. They can be added as additional steps in the **.gitlab-ci.yml** file. A complete reference can be found here: <https://docs.gitlab.com/ee/ci/yaml/>
@ -1,15 +1,15 @@
Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud[](#configuring-ip-whitelisting-for-openstack-load-balancer-using-horizon-and-cli-on-brand-name "Permalink to this headline")
Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on CloudFerro Cloud[🔗](#configuring-ip-whitelisting-for-openstack-load-balancer-using-horizon-and-cli-on-brand-name "Permalink to this headline")
===============================================================================================================================================================================================================================

This guide explains how to configure IP whitelisting (**allowed\_cidrs**) on an existing OpenStack Load Balancer using Horizon and CLI commands. The configuration will limit access to your cluster through the load balancer.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Prepare Your Environment
> * Whitelist the load balancer via the CLI

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -42,11 +42,11 @@ pip install python-octaviaclient

```

### Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")
### Prepare Your Environment[🔗](#prepare-your-environment "Permalink to this headline")

First of all, you have to find the **id** of your load balancer and its listener.

#### Horizon:[](#horizon "Permalink to this headline")
#### Horizon:[🔗](#horizon "Permalink to this headline")

To find a load balancer **id**, go to **Project** >> **Network** >> **Load Balancers** and find the one associated with your cluster (its name will carry the prefix of your cluster name).

@ -56,7 +56,7 @@ Click on load balancer name (in this case `lb-testing-ih347dstxyl2-api_lb_fixed-

#### CLI[](#cli "Permalink to this headline")
#### CLI[🔗](#cli "Permalink to this headline")

To use the CLI to find the listener, you have to know the following two cluster parameters:

@ -102,7 +102,7 @@ show 2d6b335f-fb05-4496-8593-887f7e2c49cf \

```

### Whitelist the load balancer via the CLI[](#whitelist-the-load-balancer-via-the-cli "Permalink to this headline")
### Whitelist the load balancer via the CLI[🔗](#whitelist-the-load-balancer-via-the-cli "Permalink to this headline")

We now have the listener and the IP addresses which will be whitelisted. This is the command that will set up the whitelisting:

@ -116,7 +116,7 @@ openstack loadbalancer listener set \

State of Security: Before and After[](#state-of-security-before-and-after "Permalink to this headline")
State of Security: Before and After[🔗](#state-of-security-before-and-after "Permalink to this headline")
--------------------------------------------------------------------------------------------------------

Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:
@ -124,7 +124,7 @@ Before implementing IP whitelisting, the load balancer accepts traffic from all
> * Only specified IPs can access the load balancer.
> * Unauthorized access attempts are denied.
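The allow/deny decision that **allowed\_cidrs** implements can be illustrated in a few lines of Python; the CIDR values below are hypothetical examples, not taken from this article:

```python
import ipaddress

# Hypothetical whitelist, analogous to the --allowed-cidrs values on the listener.
ALLOWED_CIDRS = ["192.168.1.0/24", "10.0.0.5/32"]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any whitelisted CIDR."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in ALLOWED_CIDRS)

print(is_allowed("192.168.1.77"))  # True: inside 192.168.1.0/24
print(is_allowed("203.0.113.9"))   # False: not in any allowed range
```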

Verification Tools[](#verification-tools "Permalink to this headline")
Verification Tools[🔗](#verification-tools "Permalink to this headline")
-----------------------------------------------------------------------

Various tools can verify that the protection is installed and active:
@ -141,7 +141,7 @@ curl
Wireshark
: (free): For packet-level analysis.

### Testing using curl and livez[](#testing-using-curl-and-livez "Permalink to this headline")
### Testing using curl and livez[🔗](#testing-using-curl-and-livez "Permalink to this headline")

Here is how we could test it:

@ -209,7 +209,7 @@ curl: (28) Connection timed out after 5000 milliseconds

Whitelisting prevents traffic from all IP addresses apart from those that are allowed by **--allowed-cidr**.

### Testing with nmap[](#testing-with-nmap "Permalink to this headline")
### Testing with nmap[🔗](#testing-with-nmap "Permalink to this headline")

To test with **nmap**:

@ -218,7 +218,7 @@ nmap -p <PORT> <LOAD_BALANCER_IP>

```

### Testing with curl directly[](#testing-with-curl-directly "Permalink to this headline")
### Testing with curl directly[🔗](#testing-with-curl-directly "Permalink to this headline")

To test with **curl**:

@ -227,7 +227,7 @@ curl http://<LOAD_BALANCER_IP>

```

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You can wrap up this procedure with Terraform and apply it to a larger number of load balancers. See [Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud](Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-CloudFerro-Cloud.html.md)

@ -1,9 +1,9 @@
Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud[](#configuring-ip-whitelisting-for-openstack-load-balancer-using-terraform-on-brand-name "Permalink to this headline")
Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on CloudFerro Cloud[🔗](#configuring-ip-whitelisting-for-openstack-load-balancer-using-terraform-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================================

This guide explains how to configure IP whitelisting (**allowed\_cidrs**) on an existing OpenStack Load Balancer using Terraform. The configuration will limit access to your cluster through the load balancer.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Get necessary load balancer and cluster data from the Prerequisites
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Run Terraform
> * Test and verify that whitelisting protects the load balancer

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -48,14 +48,14 @@ You can also use Horizon commands **Identity** –> **Application Credentials**

Log in to your account using this unrestricted credential.

Prepare Your Environment[](#prepare-your-environment "Permalink to this headline")
Prepare Your Environment[🔗](#prepare-your-environment "Permalink to this headline")
-----------------------------------------------------------------------------------

Work through the article in Prerequisite No. 2, from which we will derive all the input parameters using Horizon and CLI commands.

Also, authenticate through the application credential you got from Prerequisite No. 4.

Configure Terraform for whitelisting[](#configure-terraform-for-whitelisting "Permalink to this headline")
Configure Terraform for whitelisting[🔗](#configure-terraform-for-whitelisting "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Instead of performing the whitelisting procedure manually, we can use Terraform and store the procedure in a remote repo.
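A hedged sketch of the listener resource with whitelisting (the resource type follows the OpenStack Terraform provider; the attribute values and CIDR ranges here are illustrative assumptions, not the exact values from the elided listing below):

```terraform
resource "openstack_lb_listener_v2" "k8s_api_listener" {
  name            = "k8s-api-listener"
  protocol        = "TCP"
  protocol_port   = 6443
  loadbalancer_id = "<your-load-balancer-id>"

  # Only these source networks may reach the listener.
  allowed_cidrs = [
    "192.168.1.0/24",
    "10.0.0.5/32",
  ]
}
```

Keeping this in version control makes the whitelist auditable and repeatable across load balancers.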
@ -136,7 +136,7 @@ resource "openstack_lb_listener_v2" "k8s_api_listener" {

```

Import Existing Load Balancer Listener[](#import-existing-load-balancer-listener "Permalink to this headline")
Import Existing Load Balancer Listener[🔗](#import-existing-load-balancer-listener "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

Since version 1.5, Terraform can import your resources in a declarative way.
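As a sketch of that declarative style (the resource address matches the listing above; the listener id is the same placeholder as in the `terraform import` command), an `import` block looks like:

```terraform
import {
  to = openstack_lb_listener_v2.k8s_api_listener
  id = "<your-listener-id>"
}
```

Running `terraform plan` then generates the import as part of the change set, which replaces the imperative `terraform import` invocation.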
@ -158,7 +158,7 @@ terraform import openstack_lb_listener_v2.k8s_api_listener "<your-listener-id>"

```

Run Terraform[](#run-terraform "Permalink to this headline")
Run Terraform[🔗](#run-terraform "Permalink to this headline")
-------------------------------------------------------------

**Terraform Execute**
@ -217,7 +217,7 @@ Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.

```

Tests[](#tests "Permalink to this headline")
Tests[🔗](#tests "Permalink to this headline")
---------------------------------------------

By default, a Magnum load balancer does not have any access restrictions.
@ -268,7 +268,7 @@ curl: (28) Connection timed out after 5000 milliseconds

```

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Compare with [Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud](Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-CloudFerro-Cloud.html.md)
@ -1,11 +1,11 @@
Create and access NFS server from Kubernetes on CloudFerro Cloud[](#create-and-access-nfs-server-from-kubernetes-on-brand-name "Permalink to this headline")
Create and access NFS server from Kubernetes on CloudFerro Cloud[🔗](#create-and-access-nfs-server-from-kubernetes-on-brand-name "Permalink to this headline")
=============================================================================================================================================================

In order to enable simultaneous read-write storage to multiple pods running on a Kubernetes cluster, we can use an NFS server.

In this guide we will create an NFS server on a virtual machine, create a file share on this server and demonstrate accessing it from a Kubernetes pod.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Set up an NFS server on a VM
@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Make the share available
> * Deploy a test pod on the cluster

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -41,7 +41,7 @@ No. 4 **kubectl access to the Kubernetes cloud**

As usual when working with Kubernetes clusters, you will need to use the **kubectl** command: [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)

1. Set up NFS server on a VM[](#set-up-nfs-server-on-a-vm "Permalink to this headline")
1. Set up NFS server on a VM[🔗](#set-up-nfs-server-on-a-vm "Permalink to this headline")
----------------------------------------------------------------------------------------

Before creating an NFS server on a VM, use the Network tab in Horizon to create a security group allowing ingress traffic on port **2049**.
@ -54,7 +54,7 @@ When the VM is created, you can see that it has private address assigned. For th

Set up a floating IP on the VM, just to enable SSH access to it.

2. Set up a share folder on the NFS server[](#set-up-a-share-folder-on-the-nfs-server "Permalink to this headline")
2. Set up a share folder on the NFS server[🔗](#set-up-a-share-folder-on-the-nfs-server "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------

SSH to the VM, then run:
@ -95,7 +95,7 @@ Edit the */etc/exports* file and add the following line:

This indicates that all nodes on the cluster network can access this share, with subfolders, in read-write mode.
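The exact line is in the elided listing above; as a hedged illustration of what such an export entry typically looks like (the share path and the cluster subnet are assumptions), the format is:

```
# /etc/exports -- export /mnt/share read-write to the whole cluster subnet
/mnt/share 10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing, running `sudo exportfs -ra` (or restarting the NFS service, as shown in the next step) applies the change.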

3. Make the share available[](#make-the-share-available "Permalink to this headline")
3. Make the share available[🔗](#make-the-share-available "Permalink to this headline")
--------------------------------------------------------------------------------------

Run the command below to make the share available:
@ -114,7 +114,7 @@ sudo systemctl restart nfs-kernel-server

Exit from the NFS server VM.

4. Deploy a test pod on the cluster[](#deploy-a-test-pod-on-the-cluster "Permalink to this headline")
4. Deploy a test pod on the cluster[🔗](#deploy-a-test-pod-on-the-cluster "Permalink to this headline")
------------------------------------------------------------------------------------------------------

Ensure you can access your cluster with **kubectl**. Have a file *test-pod.yaml* with the following contents:
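The actual manifest is elided by the diff. As a hedged sketch of what such a *test-pod.yaml* commonly contains (the pod name, image, mount path and NFS server address are assumptions), a pod mounting the share could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: nfs-share
          mountPath: /mnt/share
  volumes:
    - name: nfs-share
      nfs:
        server: 10.0.0.10      # private IP of the NFS server VM (assumption)
        path: /mnt/share       # path exported in /etc/exports
```

After `kubectl apply -f test-pod.yaml`, you can `kubectl exec` into the pod and read and write files under the mount path.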
@ -1,7 +1,7 @@
Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[](#creating-additional-nodegroups-in-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[🔗](#creating-additional-nodegroups-in-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
===============================================================================================================================================================================================================

The Benefits of Using Nodegroups[](#the-benefits-of-using-nodegroups "Permalink to this headline")
The Benefits of Using Nodegroups[🔗](#the-benefits-of-using-nodegroups "Permalink to this headline")
---------------------------------------------------------------------------------------------------

A *nodegroup* is a group of nodes from a Kubernetes cluster that have the same configuration and run the user’s containers. A single cluster can contain several nodegroups, so instead of creating several independent clusters, you may create only one and divide it into nodegroups.

@ -18,7 +18,7 @@ Other uses of nodegroup roles also include:
> * If your Kubernetes environment is short on resources, you can create a minimal Kubernetes cluster and later add nodegroups, thus increasing the number of control and worker nodes.
> * Nodes in a group can be created, upgraded and deleted individually, without affecting the rest of the cluster.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * The structure of command **openstack coe nodelist**
@ -31,7 +31,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * How to resize a nodegroup
> * The benefits of using nodegroups in Kubernetes clusters

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -50,7 +50,7 @@ No. 4 **Check available quotas**

Before creating additional node groups, check the state of the resources with Horizon commands **Compute** => **Overview**. See [Dashboard Overview – Project Quotas And Flavors Limits on CloudFerro Cloud](../cloud/Dashboard-Overview-Project-Quotas-And-Flavors-Limits-on-CloudFerro-Cloud.html.md).

Nodegroup Subcommands[](#nodegroup-subcommands "Permalink to this headline")
Nodegroup Subcommands[🔗](#nodegroup-subcommands "Permalink to this headline")
-----------------------------------------------------------------------------

Once you create a Kubernetes cluster on OpenStack Magnum, there are five *nodegroup* commands at your disposal:
@ -70,7 +70,7 @@ openstack coe nodegroup update

With this, you can repurpose the cluster to include various images, change volume access, set up max and min values for the number of nodes and so on.

Step 1 Access the Current State of Clusters and Their Nodegroups[](#step-1-access-the-current-state-of-clusters-and-their-nodegroups "Permalink to this headline")
Step 1 Access the Current State of Clusters and Their Nodegroups[🔗](#step-1-access-the-current-state-of-clusters-and-their-nodegroups "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------

Here is how to see which clusters are available in the system:
@ -97,7 +97,7 @@ to list default nodegroups for those two clusters, *kubelbtrue* and *k8s-cluster

The **default-worker** node group cannot be removed or reconfigured, so plan ahead when creating the base cluster.

Step 2 How to Create a New Nodegroup[](#step-2-how-to-create-a-new-nodegroup "Permalink to this headline")
Step 2 How to Create a New Nodegroup[🔗](#step-2-how-to-create-a-new-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

In this step you learn about the parameters available for the **nodegroup create** command. This is the general structure:
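The full listing is elided by the diff. As a hedged sketch only (the flavor, counts, role and names below are assumptions; verify the flags against your installed Magnum client before running), a typical invocation combines the cluster name, the new nodegroup name and sizing flags:

```shell
openstack coe nodegroup create \
    --node-count 2 \
    --min-nodes 1 \
    --max-nodes 5 \
    --role test \
    --flavor eo1.large \
    k8s-cluster testing
```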
@ -146,7 +146,7 @@ Still in Horizon, click on commands **Contaner Infra** => **Clusters** => **k8s-

Step 3 Using **role** to Filter Nodegroups in the Cluster[](#step-3-using-role-to-filter-nodegroups-in-the-cluster "Permalink to this headline")
Step 3 Using **role** to Filter Nodegroups in the Cluster[🔗](#step-3-using-role-to-filter-nodegroups-in-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------

It is possible to filter node groups according to the role. Here is the command to show only the *test* nodegroup:
@ -162,7 +162,7 @@ Several node groups can share the same role name.

The roles can also be used to schedule pods onto particular nodes when using the **kubectl** command directly on the cluster.

Step 4 Show Details of the Nodegroup Created[](#step-4-show-details-of-the-nodegroup-created "Permalink to this headline")
Step 4 Show Details of the Nodegroup Created[🔗](#step-4-show-details-of-the-nodegroup-created "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

Command **show** presents the details of a nodegroup in various formats – *json*, *table*, *shell*, *value* or *yaml*. The default is *table*; use parameter **--max-width** to limit the width of the table:
@ -174,7 +174,7 @@ openstack coe nodegroup show --max-width 80 k8s-cluster testing

Step 5 Delete the Existing Nodegroup[](#step-5-delete-the-existing-nodegroup "Permalink to this headline")
Step 5 Delete the Existing Nodegroup[🔗](#step-5-delete-the-existing-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

In this step you shall first try to create a nodegroup with a small footprint:
@ -212,7 +212,7 @@ Regardless of the way, the instances will not be deleted immediately, but rather

The default master and worker node groups cannot be deleted, but all the others can.

Step 6 Update the Existing Nodegroup[](#step-6-update-the-existing-nodegroup "Permalink to this headline")
Step 6 Update the Existing Nodegroup[🔗](#step-6-update-the-existing-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

In this step you will update the existing nodegroup directly, rather than deleting and re-creating it. The example command is:
@ -226,7 +226,7 @@ Instead of **replace**, it is also possible to use verbs **add** and **delete**.

In the above example, you are setting the minimum number of nodes to 1. (Previously it was **0**, as parameter **min\_node\_count** was not specified and its default value is **0**.)

Step 7 Resize the Nodegroup[](#step-7-resize-the-nodegroup "Permalink to this headline")
Step 7 Resize the Nodegroup[🔗](#step-7-resize-the-nodegroup "Permalink to this headline")
-----------------------------------------------------------------------------------------

Resizing the *nodegroup* is similar to resizing the cluster, with the addition of parameter **--nodegroup**. Currently, the number of nodes in group *testing* is 2. Make it **1**:
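A hedged sketch of that command (the cluster and nodegroup names follow the examples above; confirm the syntax against your Magnum client, as this is not the elided listing itself):

```shell
openstack coe cluster resize --nodegroup testing k8s-cluster 1
```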
@ -1,9 +1,9 @@
Default Kubernetes cluster templates in CloudFerro Cloud[](#default-kubernetes-cluster-templates-in-brand-name-cloud "Permalink to this headline")
Default Kubernetes cluster templates in CloudFerro Cloud[🔗](#default-kubernetes-cluster-templates-in-brand-name-cloud "Permalink to this headline")
=========================================================================================================================================================

In this article we shall list Kubernetes cluster templates available on CloudFerro Cloud and explain the differences among them.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * List available templates on your cloud
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Overview and benefits of *localstorage* templates
> * Example of creating a *localstorage* template using HMD and HMAD flavors

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -45,7 +45,7 @@ If template name contains “vgpu”, this template can be used to create so-cal

To learn how to set up vGPU in Kubernetes clusters on CloudFerro Cloud, see [Deploying vGPU workloads on CloudFerro Cloud Kubernetes](Deploying-vGPU-workloads-on-CloudFerro-Cloud-Kubernetes.html.md).

Templates available on your cloud[](#templates-available-on-your-cloud "Permalink to this headline")
Templates available on your cloud[🔗](#templates-available-on-your-cloud "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

The exact number of available default Kubernetes cluster templates depends on the cloud you choose to work with.
@ -72,7 +72,7 @@ FRA1-2

The converse is also true: you may select the cloud according to the type of cluster you want to run. For instance, you would have to select the WAW3-1 cloud if you wanted to use vGPU on your cluster.
|
||||
|
||||
How to choose a proper template[](#how-to-choose-a-proper-template "Permalink to this headline")
|
||||
How to choose a proper template[🔗](#how-to-choose-a-proper-template "Permalink to this headline")
|
||||
-------------------------------------------------------------------------------------------------
|
||||
|
||||
**Standard templates**
|
||||
@ -103,17 +103,17 @@ If the application does not require a great many operations, then a standard tem
|
||||
|
||||
You can also dig deeper and choose the template according to the the network plugin used.
|
||||
|
||||
### Network plugins for Kubernetes clusters[](#network-plugins-for-kubernetes-clusters "Permalink to this headline")
|
||||
### Network plugins for Kubernetes clusters[🔗](#network-plugins-for-kubernetes-clusters "Permalink to this headline")
|
||||
|
||||
Kubernetes cluster templates at CloudFerro Cloud cloud use *calico* or *cilium* plugins for controlling network traffic. Both are [CNI](https://www.cncf.io/projects/kubernetes/) compliant. *Calico* is the default plugin, meaning that if the template name does not specify the plugin, the *calico* driver is used. If the template name specifies *cilium* then, of course, the *cilium* driver is used.
|
||||
|
||||
### Calico (the default)[](#calico-the-default "Permalink to this headline")
|
||||
### Calico (the default)[🔗](#calico-the-default "Permalink to this headline")
|
||||
|
||||
[Calico](https://projectcalico.docs.tigera.io/about/about-calico) uses BGP protocol to move network packets towards IP addresses of the pods. *Calico* can be faster then its competitors but its most remarkable feature is support for *network policies*. With those, you can define which pods can send and receive traffic and also manage the security of the network.
|
||||
|
||||
*Calico* can apply policies to multiple types of endpoints such as pods, virtual machines and host interfaces. It also supports cryptographics identity. *Calico* policies can be used on its own or together with the Kubernetes network policies.
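As an illustration of what such policies look like, here is a minimal Kubernetes NetworkPolicy (a generic sketch, not taken from this article; all names and labels are placeholders) that only admits ingress to pods labelled `app: backend` from pods labelled `app: frontend`:

```yaml
# Sketch of a standard Kubernetes NetworkPolicy, which Calico can enforce.
# All names, labels and the port below are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Applied with `kubectl apply -f`, this denies all other ingress traffic to the selected pods, since a pod covered by any NetworkPolicy only accepts what some policy explicitly allows.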

### Cilium[](#cilium "Permalink to this headline")
### Cilium[🔗](#cilium "Permalink to this headline")

[Cilium](https://cilium.io/) draws its power from a technology called *eBPF*, which exposes programmable hooks to the network stack in the Linux kernel. *eBPF* uses those hooks to reprogram Linux runtime behaviour without any loss of speed or safety. There is also no need to recompile the Linux kernel in order to become aware of events in Kubernetes clusters. In essence, *eBPF* enables Linux to watch over Kubernetes and react appropriately.

@ -125,7 +125,7 @@ With *Cilium*, the relationships amongst various cluster parts are as follows:

Using *Cilium* especially makes sense if you require fine-grained security controls or need to reduce latency in large Kubernetes clusters.

Overview and benefits of *localstorage* templates[](#overview-and-benefits-of-localstorage-templates "Permalink to this headline")
Overview and benefits of *localstorage* templates[🔗](#overview-and-benefits-of-localstorage-templates "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------

Compared to standard templates, the *localstorage* templates may be a better fit for *resource-intensive* apps.
@ -153,7 +153,7 @@ You would use an HMD flavor mainly for the master node(s) in the cluster.

In WAW3-2 cloud, you would use flavors starting with HMAD instead of HMD.

Example parameters to create a new cluster with localstorage and NVMe[](#example-parameters-to-create-a-new-cluster-with-localstorage-and-nvme "Permalink to this headline")
Example parameters to create a new cluster with localstorage and NVMe[🔗](#example-parameters-to-create-a-new-cluster-with-localstorage-and-nvme "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

For general discussion of parameters, see Prerequisite No. 4. What follows is a simplified example, geared towards creating a cluster using *localstorage*.
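Putting the pieces above together, the cluster-create call has roughly the following shape (a sketch only: the template and flavor names are invented placeholders, so the command is echoed rather than executed; use `openstack coe cluster template list` and `openstack flavor list` to find the names valid in your cloud):

```shell
# Illustrative only: template and flavor names below are placeholders.
# The command is echoed, not executed; copy and adapt it for your cloud.
TEMPLATE="k8s-localstorage-1.23.5"   # a localstorage template listed in your cloud
MASTER_FLAVOR="hmd.medium"           # an HMD flavor for master nodes (HMAD on WAW3-2)
NODE_FLAVOR="hmd.large"              # an HMD flavor for worker nodes

echo openstack coe cluster create my-localstorage-cluster \
  --cluster-template "$TEMPLATE" \
  --master-flavor "$MASTER_FLAVOR" \
  --flavor "$NODE_FLAVOR" \
  --master-count 1 \
  --node-count 2
```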

@ -1,18 +1,18 @@
Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud[](#deploy-keycloak-on-kubernetes-with-a-sample-app-on-brand-name "Permalink to this headline")
Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud[🔗](#deploy-keycloak-on-kubernetes-with-a-sample-app-on-brand-name "Permalink to this headline")
===================================================================================================================================================================

[Keycloak](https://www.keycloak.org/) is a large open-source identity management suite capable of handling a wide range of identity-related use cases.

Using Keycloak, it is straightforward to deploy a robust authentication/authorization solution for your applications. After the initial deployment, you can easily configure it to meet new identity-related requirements, e.g. multi-factor authentication, federation to social providers, custom password policies, and many others.

What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headline")
What We Are Going To Do[🔗](#what-we-are-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------

> * Deploy Keycloak on a Kubernetes cluster
> * Configure Keycloak: create a realm, a client and a user
> * Deploy a sample Python web application using Keycloak for authentication

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -31,7 +31,7 @@ No. 4 **Familiarity with OpenID Connect (OIDC) terminology**

Certain familiarity with OpenID Connect (OIDC) terminology is required. Some key terms will be briefly explained in this article.

Step 1 Deploy Keycloak on Kubernetes[](#step-1-deploy-keycloak-on-kubernetes "Permalink to this headline")
Step 1 Deploy Keycloak on Kubernetes[🔗](#step-1-deploy-keycloak-on-kubernetes "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Let’s first create a dedicated Kubernetes namespace for Keycloak. This is optional, but good practice:
@ -73,7 +73,7 @@ This is full screen view of the Keycloak window:

Step 2 Create Keycloak realm[](#step-2-create-keycloak-realm "Permalink to this headline")
Step 2 Create Keycloak realm[🔗](#step-2-create-keycloak-realm "Permalink to this headline")
-------------------------------------------------------------------------------------------

In Keycloak terminology, a *realm* is a dedicated space for managing an isolated subset of users, roles and other related entities. Keycloak initially has a **master** realm used for administration of Keycloak itself.
@ -92,7 +92,7 @@ When the realm is created (and selected), we operate within this realm:

In the upper left corner, instead of **master**, there is now the name of the selected realm, **myrealm**.

Step 3 Create and configure Keycloak client[](#step-3-create-and-configure-keycloak-client "Permalink to this headline")
Step 3 Create and configure Keycloak client[🔗](#step-3-create-and-configure-keycloak-client "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

Clients are entities in Keycloak that can request Keycloak to authenticate users. In practical terms, they can be thought of as representations of the individual applications that want to utilize Keycloak-managed authentication/authorization.
@ -132,7 +132,7 @@ Web origins

After hitting **Save**, your client is created. You can then modify the previously selected settings of the created client, and add new, more specific ones. There are vast possibilities for further customization depending on your app’s specifics; this is, however, beyond the scope of this article.

Step 4 Create a User in Keycloak[](#step-4-create-a-user-in-keycloak "Permalink to this headline")
Step 4 Create a User in Keycloak[🔗](#step-4-create-a-user-in-keycloak "Permalink to this headline")
---------------------------------------------------------------------------------------------------

After creating the Client, we will proceed to creating our first User in Keycloak. In order to do so, click on the Users tab on the left and then **Create New User**:
@ -145,7 +145,7 @@ Next, we will set up password credentials for the newly created user. Select **C

Step 5 Retrieve client secret from Keycloak[](#step-5-retrieve-client-secret-from-keycloak "Permalink to this headline")
Step 5 Retrieve client secret from Keycloak[🔗](#step-5-retrieve-client-secret-from-keycloak "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

Once we have Keycloak set up, we will need to extract the client *secret*, so that Keycloak establishes trust with our application.
@ -160,7 +160,7 @@ Once in tab **Credentials**, the secret will become accessible through field **C

For privacy reasons, in the screenshot above, it is painted yellow. In your case, take note of its value, as in the next step you will need to paste it into the application code.
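To see why the application needs this secret: in the OIDC authorization-code flow, a confidential client presents it when exchanging the authorization code for tokens at the realm's token endpoint. A minimal sketch of such a request body follows (all values are illustrative placeholders; in the next step, the Flask-OIDC library performs this exchange for you):

```python
from urllib.parse import urlencode

# All values below are illustrative placeholders.
token_endpoint = "http://keycloak.example/realms/myrealm/protocol/openid-connect/token"
body = urlencode({
    "grant_type": "authorization_code",
    "code": "<authorization-code-from-login-redirect>",
    "redirect_uri": "http://localhost:5000/oidc/callback",
    "client_id": "myclient",
    "client_secret": "<secret-from-the-credentials-tab>",
})
# This form-encoded body would be POSTed to token_endpoint;
# the response contains the ID and access tokens.
print(body)
```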

Step 6 Create a Flask web app utilizing Keycloak authentication[](#step-6-create-a-flask-web-app-utilizing-keycloak-authentication "Permalink to this headline")
Step 6 Create a Flask web app utilizing Keycloak authentication[🔗](#step-6-create-a-flask-web-app-utilizing-keycloak-authentication "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

To build the app, we will use Flask, which is a lightweight Python-based web framework. Keycloak supports a wide range of other technologies as well. We will use the Flask-OIDC library, which extends Flask with the capability to run OpenID Connect authentication/authorization scenarios.
@ -269,7 +269,7 @@ Note that *app.py* creates 3 routes:
`/logout`
: Entering this route logs the user out.

Step 7 Test the application[](#step-7-test-the-application "Permalink to this headline")
Step 7 Test the application[🔗](#step-7-test-the-application "Permalink to this headline")
-----------------------------------------------------------------------------------------

To test the application, execute the following command from the working directory in which file *app.py* is placed:

@ -1,11 +1,11 @@
Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud Cloud[](#deploying-https-services-on-magnum-kubernetes-in-brand-name-cloud-name-cloud "Permalink to this headline")
Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud Cloud[🔗](#deploying-https-services-on-magnum-kubernetes-in-brand-name-cloud-name-cloud "Permalink to this headline")
======================================================================================================================================================================================

Kubernetes makes it very quick to deploy and publicly expose an application, for example using the LoadBalancer service type. Sample deployments, which demonstrate such capability, are usually served with HTTP. Deploying a production-ready service, secured with HTTPS, can also be done smoothly, by using additional tools.

In this article, we show how to deploy a sample HTTPS-protected service on CloudFerro Cloud.

What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We are Going to Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Cert Manager’s Custom Resource Definitions
@ -15,7 +15,7 @@ What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Associate the domain with NGINX Ingress
> * Create and Deploy an Ingress Resource

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -50,7 +50,7 @@ This is optional. Here is the article with detailed information:

[DNS as a Service on CloudFerro Cloud Hosting](../cloud/DNS-as-a-Service-on-CloudFerro-Cloud-Hosting.html.md)

Step 1 Install Cert Manager’s Custom Resource Definitions (CRDs)[](#step-1-install-cert-manager-s-custom-resource-definitions-crds "Permalink to this headline")
Step 1 Install Cert Manager’s Custom Resource Definitions (CRDs)[🔗](#step-1-install-cert-manager-s-custom-resource-definitions-crds "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

We assume you have your
@ -92,7 +92,7 @@ Warning

Magnum introduces a few pod security policies (PSP) which provide some extra safety precautions for the cluster, but will conflict with the CertManager Helm chart. PodSecurityPolicy is deprecated and was removed in Kubernetes v1.25, but it is still supported in the Kubernetes versions 1.21 to 1.23 available on CloudFerro Cloud. The commands below may produce warnings about deprecation but the installation should continue nevertheless.

Step 2 Install CertManager Helm chart[](#step-2-install-certmanager-helm-chart "Permalink to this headline")
Step 2 Install CertManager Helm chart[🔗](#step-2-install-certmanager-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

We assume you have installed Helm according to the article mentioned in Prerequisite No. 5. The result of that article will be file *my-values.yaml* and in order to ensure correct deployment of CertManager Helm chart, we will need to
@ -148,7 +148,7 @@ or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

We see that *cert-manager* is deployed successfully but also get a hint that a *ClusterIssuer* or an *Issuer* resource has to be installed as well. Our next step is to install a sample service into the cluster and then continue with creation and deployment of an *Issuer*.

Step 3 Create a Deployment and a Service[](#step-3-create-a-deployment-and-a-service "Permalink to this headline")
Step 3 Create a Deployment and a Service[🔗](#step-3-create-a-deployment-and-a-service "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

Let’s deploy an NGINX service as a standard example of a Kubernetes app. First we create a standard Kubernetes deployment and then a service of type *NodePort*. Write the following contents to file *my-nginx.yaml*:
@ -199,7 +199,7 @@ kubectl apply -f my-nginx.yaml

```

Step 4 Create and Deploy an Issuer[](#step-4-create-and-deploy-an-issuer "Permalink to this headline")
Step 4 Create and Deploy an Issuer[🔗](#step-4-create-and-deploy-an-issuer "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

Now install an *Issuer*. It is a custom Kubernetes resource and represents a Certificate Authority (CA), which ensures that our HTTPS certificates are signed and therefore trusted by the browsers. CertManager supports different issuers; in our example we will use Let’s Encrypt, which uses the ACME protocol.
@ -236,7 +236,7 @@ kubectl apply -f my-nginx-issuer.yaml

As a result, the *Issuer* gets deployed, and a *Secret* called *letsencrypt-secret* with a private key is deployed as well.
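For orientation, an ACME *Issuer* of this kind typically has the following shape (an illustrative sketch, not this article's exact manifest; the email address and solver settings depend on your setup):

```yaml
# Sketch of an ACME Issuer for cert-manager; illustrative, adapt before use.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # replace with your contact address
    privateKeySecretRef:
      name: letsencrypt-secret        # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # solve challenges through the NGINX ingress
```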

Step 5 Associate the Domain with NGINX Ingress[](#step-5-associate-the-domain-with-nginx-ingress "Permalink to this headline")
Step 5 Associate the Domain with NGINX Ingress[🔗](#step-5-associate-the-domain-with-nginx-ingress "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

To see the site in a browser, your HTTPS certificate will need to be associated with a specific domain. To follow along, you should have a real domain already registered at a domain registrar.
@ -249,7 +249,7 @@ Now, at your domain registrar you need to associate the A record of the domain w

You can also use the DNS command in Horizon to connect the domain name you have with the cluster. See Prerequisite No. 7 for additional details.
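Once the A record has propagated, you can verify the association with a quick resolution check (a generic sketch; the domain and IP below are placeholders for your own values):

```python
import socket

# Placeholders: substitute your domain and your load balancer's floating IP.
domain = "localhost"        # e.g. "mysampledomain.eu"
expected_ip = "127.0.0.1"   # e.g. the floating IP of the ingress load balancer

resolved = socket.gethostbyname(domain)  # resolve the domain's A record
print(resolved, resolved == expected_ip)
```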

Step 6 Create and Deploy an Ingress Resource[](#step-6-create-and-deploy-an-ingress-resource "Permalink to this headline")
Step 6 Create and Deploy an Ingress Resource[🔗](#step-6-create-and-deploy-an-ingress-resource "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

The final step is to deploy the *Ingress* resource. This will perform the necessary steps to initiate the certificate signing request with the CA and ultimately provide the HTTPS certificate for your service. In order to proceed, place the contents below into file *my-nginx-ingress.yaml*. Replace **mysampledomain.eu** with your domain.
@ -296,7 +296,7 @@ If all works well, the effort is complete and after a couple of minutes we shoul

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

The article [Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum](Using-Kubernetes-Ingress-on-CloudFerro-Cloud-OpenStack-Magnum.html.md) shows how to create an HTTP based service or a site.

@ -1,9 +1,9 @@
Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud[](#deploying-helm-charts-on-magnum-kubernetes-clusters-on-brand-name-cloud-name-cloud "Permalink to this headline")
Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud[🔗](#deploying-helm-charts-on-magnum-kubernetes-clusters-on-brand-name-cloud-name-cloud "Permalink to this headline")
==================================================================================================================================================================================================

Kubernetes is a robust and battle-tested environment for running apps and services, yet it can be time-consuming to manually provision all resources required to run a production-ready deployment. This article introduces [Helm](https://helm.sh/) as a package manager for Kubernetes. With it, you will be able to quickly deploy complex Kubernetes applications, consisting of code, databases, user interfaces and more.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Background - How Helm works
@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Deploy Helm chart on a cluster
> * Customize chart deployment

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -44,7 +44,7 @@ Code samples in this article assume you are running Ubuntu 20.04 LTS or similar

[How to create a Linux VM and access it from Linux command line on CloudFerro Cloud](../cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-CloudFerro-Cloud.html.md)

Background - How Helm works[](#background-how-helm-works "Permalink to this headline")
Background - How Helm works[🔗](#background-how-helm-works "Permalink to this headline")
---------------------------------------------------------------------------------------

A usual sequence of deploying an application on Kubernetes entails:
@ -60,7 +60,7 @@ For each standard deployment of an application on Kubernetes (e.g. a database, a

Helm charts are designed to cover a broad set of use cases required for deploying an application. The application can then simply be launched on a cluster with a few commands within seconds. Specific customizations for an individual deployment can then easily be adjusted by overriding the default *values.yaml* file.

Install Helm[](#install-helm "Permalink to this headline")
Install Helm[🔗](#install-helm "Permalink to this headline")
-----------------------------------------------------------

You can install Helm on your own development machine. To install, download the installer file from the Helm release page, change the file permissions, and run the installation:
@ -81,7 +81,7 @@ $ helm version

For other operating systems, use the [link to download Helm installation files](https://phoenixnap.com/kb/install-helm) and proceed analogously.

Add a Helm repository[](#add-a-helm-repository "Permalink to this headline")
Add a Helm repository[🔗](#add-a-helm-repository "Permalink to this headline")
-----------------------------------------------------------------------------

Helm charts are distributed using repositories. For example, a single repository can host several Helm charts from a certain provider. For the purpose of this article, we will add the Bitnami repository that contains their versions of multiple useful Helm charts, e.g. Redis, Grafana, Elasticsearch and others. You can add it using the following command:
@ -102,7 +102,7 @@ The following image shows just a start of all the available apps from *bitnami*

Helm chart repositories[](#helm-chart-repositories "Permalink to this headline")
Helm chart repositories[🔗](#helm-chart-repositories "Permalink to this headline")
---------------------------------------------------------------------------------

In the above example, we knew where to find a repository with Helm charts. There are other repositories and they are usually hosted on GitHub or ArtifactHub. Let us have a look at the [apache page in ArtifactHub](https://artifacthub.io/packages/helm/bitnami/apache):
@ -115,7 +115,7 @@ Click on the DEFAULT VALUES option (yellow highlight) and see contents of the de

In this file (or in additional tabular information on the chart page), you can check which parameters are enabled for customization and what their default values are.

Check whether kubectl has access to the cluster[](#check-whether-kubectl-has-access-to-the-cluster "Permalink to this headline")
Check whether kubectl has access to the cluster[🔗](#check-whether-kubectl-has-access-to-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

To proceed further, verify that you have your KUBECONFIG environment variable exported and pointing to a running cluster’s *kubeconfig* file (see Prerequisite No. 4). If needed, export this environment variable:
@ -134,7 +134,7 @@ kubectl get nodes

That will serve as confirmation that you have access to the cluster.

Deploy a Helm chart on a cluster[](#deploy-a-helm-chart-on-a-cluster "Permalink to this headline")
Deploy a Helm chart on a cluster[🔗](#deploy-a-helm-chart-on-a-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Now that we know where to find repositories with hundreds of charts to choose from, let’s deploy one of them to our cluster.
@ -177,7 +177,7 @@ Note that the floating IP generation can take a couple of minutes to appear. Aft

Customizing the chart deployment[](#customizing-the-chart-deployment "Permalink to this headline")
Customizing the chart deployment[🔗](#customizing-the-chart-deployment "Permalink to this headline")
---------------------------------------------------------------------------------------------------

We just saw how quick it was to deploy a Helm chart with the default settings. Usually, before running the chart in production, you will need to adjust a few settings to meet your requirements.
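Such an adjustment is typically expressed as a small override file passed to `helm install` or `helm upgrade` with `-f`. A sketch of what such a file might look like follows (the keys are illustrative; check the chart's DEFAULT VALUES page on ArtifactHub for the exact names your chart version uses):

```yaml
# my-values.yaml -- illustrative overrides; key names vary between charts,
# so verify them against the chart's default values.yaml before use.
service:
  type: LoadBalancer
  ports:
    http: 8080   # expose the application on port 8080 instead of the default
```

When applied with something like `helm upgrade apache bitnami/apache -f my-values.yaml`, only the listed keys are overridden; every other value keeps its chart default.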
@ -229,7 +229,7 @@ We can see that the application is now exposed to a new port 8080, which can be

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Deploy other useful services using Helm charts: [Argo Workflows](https://artifacthub.io/packages/helm/bitnami/argo-workflows), [JupyterHub](https://artifacthub.io/packages/helm/jupyterhub/jupyterhub) and [Vault](https://artifacthub.io/packages/helm/hashicorp/vault), amongst many others that are available.

@ -1,4 +1,4 @@
Deploying vGPU workloads on CloudFerro Cloud Kubernetes[](#deploying-vgpu-workloads-on-brand-name-kubernetes "Permalink to this headline")
Deploying vGPU workloads on CloudFerro Cloud Kubernetes[🔗](#deploying-vgpu-workloads-on-brand-name-kubernetes "Permalink to this headline")
===========================================================================================================================================

Utilizing GPUs (Graphics Processing Units) presents a highly efficient alternative for fast, highly parallel processing of demanding computational tasks such as image processing, machine learning and many others.
@ -7,7 +7,7 @@ In cloud environment, virtual GPU units (vGPU) are available with certain Virtua

We will present three alternative ways of adding vGPU capability to your Kubernetes cluster, based on your required scenario. For each, you should be able to verify the vGPU installation and test it by running a vGPU workload.

What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this headline")
What Are We Going To Cover[🔗](#what-are-we-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * **Scenario No. 1** - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created **after** June 21st 2023
@ -17,7 +17,7 @@ What Are We Going To Cover[](#what-are-we-going-to-cover "Permalink to this h
> * Test vGPU workload
> * Add non-GPU nodegroup to a GPU-first cluster

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -44,7 +44,7 @@ No. 4 **Familiarity with the notion of nodegroups**

[Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum](Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-CloudFerro-Cloud-OpenStack-Magnum.html.md).

vGPU flavors per cloud[](#vgpu-flavors-per-cloud "Permalink to this headline")
vGPU flavors per cloud[🔗](#vgpu-flavors-per-cloud "Permalink to this headline")
-------------------------------------------------------------------------------

Below is the list of GPU flavors in each cloud, applicable for use with the Magnum Kubernetes service.
|
||||
@ -80,7 +80,7 @@ FRA1-2
|
||||
> | **vm.l40s.2** | 8 | 29.8 GB | 80 GB | Yes |
|
||||
> | **vm.l40s.8** | 32 | 119.22 GB | 320 GB | Yes |
|
||||
|
||||
Hardware comparison between RTX A6000 and NVIDIA L40S[](#hardware-comparison-between-rtx-a6000-and-nvidia-l40s "Permalink to this headline")
|
||||
Hardware comparison between RTX A6000 and NVIDIA L40S[🔗](#hardware-comparison-between-rtx-a6000-and-nvidia-l40s "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------------

The NVIDIA L40S is designed for 24x7 enterprise data center operations and optimized to deploy at scale. Compared to the A6000, the NVIDIA L40S is better for

@ -90,7 +90,7 @@ The NVIDIA L40S is designed for 24x7 enterprise data center operations and optim

> * real-time ray tracing applications and is
> * faster in memory-intensive tasks.

Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[](#id1 "Permalink to this table")
Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[🔗](#id1 "Permalink to this table")

| Specification | NVIDIA RTX A60001 | NVIDIA L40S1 |
| --- | --- | --- |
@ -103,7 +103,7 @@ Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S[](#id1 "Permalink to th
| **Performance** | Strong performance for diverse workloads | Superior AI and machine learning performance |
| **Use Cases** | 3D rendering, video editing, AI development | Data center, large-scale AI, enterprise applications |

Scenario 1 - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created after June 21st 2023[](#scenario-1-add-vgpu-nodes-as-a-nodegroup-on-a-non-gpu-kubernetes-clusters-created-after-june-21st-2023 "Permalink to this headline")
Scenario 1 - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created after June 21st 2023[🔗](#scenario-1-add-vgpu-nodes-as-a-nodegroup-on-a-non-gpu-kubernetes-clusters-created-after-june-21st-2023 "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In order to create a new nodegroup, called **gpu**, with one node of a vGPU flavor, say, **vm.a6000.2**, we can use the following Magnum CLI command:

@ -140,7 +140,7 @@ We get:

The result is that a new nodegroup called **gpu** is created in the cluster and that it uses the GPU flavor.
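You can confirm the result from the CLI; a minimal sketch, assuming your cluster's ID has been stored in a `$CLUSTER_ID` variable (substitute your own cluster name or ID):

```shell
# List nodegroups -- the new "gpu" nodegroup should appear next to the defaults
# ($CLUSTER_ID is assumed to hold your cluster's ID)
openstack coe nodegroup list $CLUSTER_ID --max-width 120
```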

Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023[](#scenario-2-add-vgpu-nodes-as-nodegroups-on-non-gpu-kubernetes-clusters-created-before-june-21st-2023 "Permalink to this headline")
Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023[🔗](#scenario-2-add-vgpu-nodes-as-nodegroups-on-non-gpu-kubernetes-clusters-created-before-june-21st-2023 "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The instructions are the same as in the previous scenario, with the exception of adding an additional label:
@ -194,7 +194,7 @@ openstack coe nodegroup list $CLUSTER_ID_OLDER --max-width 120

```

Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup[](#scenario-3-create-a-new-gpu-first-kubernetes-cluster-with-vgpu-enabled-default-nodegroup "Permalink to this headline")
Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup[🔗](#scenario-3-create-a-new-gpu-first-kubernetes-cluster-with-vgpu-enabled-default-nodegroup "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

To create a new vGPU-enabled cluster, you can use the usual Horizon commands, selecting one of the existing templates with **vgpu** in their names:
@ -226,7 +226,7 @@ openstack coe cluster create k8s-gpu-with_template \

```

### Verify the vGPU installation[](#verify-the-vgpu-installation "Permalink to this headline")
### Verify the vGPU installation[🔗](#verify-the-vgpu-installation "Permalink to this headline")

You can verify that vGPU-enabled nodes were properly added to your cluster by checking the **nvidia-device-plugin** deployed in the cluster, in the **nvidia-device-plugin** namespace. The command to list the contents of that namespace is:

@ -282,7 +282,7 @@ kubectl describe node k8s-gpu-with-template-lfs5335ymxcn-node-0 | grep 'Taints'

```

### Run test vGPU workload[](#run-test-vgpu-workload "Permalink to this headline")
### Run test vGPU workload[🔗](#run-test-vgpu-workload "Permalink to this headline")

We can run a sample workload on vGPU. To do so, create a YAML manifest file **vgpu-pod.yaml** with the following contents:

@ -339,7 +339,7 @@ Done

```

Add non-GPU nodegroup to a GPU-first cluster[](#add-non-gpu-nodegroup-to-a-gpu-first-cluster "Permalink to this headline")
Add non-GPU nodegroup to a GPU-first cluster[🔗](#add-non-gpu-nodegroup-to-a-gpu-first-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------

We refer to GPU-first clusters as the ones created with the **worker\_type=gpu** flag. For example, in a cluster created in Scenario No. 3, the default nodegroup consists of vGPU nodes.
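Adding a CPU-only nodegroup to such a cluster is symmetrical to Scenario 1; a hedged sketch, where the nodegroup name, flavor and node count are illustrative and `$CLUSTER_ID` is assumed to hold your cluster's ID:

```shell
# Add a non-GPU nodegroup to a GPU-first cluster
# (flavor and names are illustrative -- pick ones available in your project)
openstack coe nodegroup create $CLUSTER_ID standard-workers \
  --flavor eo1.large \
  --node-count 2
```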

@ -1,9 +1,9 @@
Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster[](#enable-kubeapps-app-launcher-on-brand-name-magnum-kubernetes-cluster "Permalink to this headline")
Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster[🔗](#enable-kubeapps-app-launcher-on-brand-name-magnum-kubernetes-cluster "Permalink to this headline")
=================================================================================================================================================================================

[Kubeapps](https://kubeapps.dev/) app-launcher enables quick deployments of applications on your Kubernetes cluster, with a convenient graphical user interface. In this article we provide guidelines for creating a Kubernetes cluster with the Kubeapps feature enabled, and for deploying sample applications.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Brief background - deploying applications on Kubernetes
@ -12,7 +12,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Launch sample application from Kubeapps
> * Current limitations

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -37,14 +37,14 @@ No. 5 **Access to CloudFerro clouds**

Kubeapps is available on one of the clouds: WAW3-2, FRA1-2, WAW3-1.

Background[](#background "Permalink to this headline")
Background[🔗](#background "Permalink to this headline")
-------------------------------------------------------

Deploying complex applications on Kubernetes becomes notably more efficient and convenient with Helm. Adding to this convenience, **Kubeapps**, an app-launcher with a Graphical User Interface (GUI), provides a user-friendly starting point for application management. This GUI allows you to deploy and manage applications on your K8s cluster, limiting the need for deep command-line expertise.

Kubeapps app-launcher can be enabled at cluster creation time. It will run as a local service, accessible from a browser.
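As a sketch of what "local service" means in practice, the UI is typically reached through a port-forward; the namespace and service names below are assumptions, so verify them in your own cluster first:

```shell
# Verify the actual namespace/service names in your cluster:
#   kubectl get svc -A | grep kubeapps
# Forward the Kubeapps UI to localhost (names are assumptions)
kubectl port-forward -n kubeapps svc/kubeapps 8080:80
# then browse to http://localhost:8080
```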

Create Kubernetes cluster with Kubeapps quick-launcher enabled[](#create-kubernetes-cluster-with-kubeapps-quick-launcher-enabled "Permalink to this headline")
Create Kubernetes cluster with Kubeapps quick-launcher enabled[🔗](#create-kubernetes-cluster-with-kubeapps-quick-launcher-enabled "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Creating a Kubernetes cluster with Kubeapps enabled follows the generic guideline described in Prerequisite No. 2.
@ -67,7 +67,7 @@ Inserting these labels is shown in the image below:



Access Kubeapps service locally from your browser[](#access-kubeapps-service-locally-from-your-browser "Permalink to this headline")
Access Kubeapps service locally from your browser[🔗](#access-kubeapps-service-locally-from-your-browser "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------

Once the cluster is created, access the Linux console. You should have the **kubectl** command line tool available, as specified in Prerequisite No. 3.
@ -100,7 +100,7 @@ You can now operate Kubeapps:



Launch sample application from Kubeapps[](#launch-sample-application-from-kubeapps "Permalink to this headline")
Launch sample application from Kubeapps[🔗](#launch-sample-application-from-kubeapps "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

Clicking on “Catalog” exposes a long list of applications available for download from the Kubeapps app-store.
@ -136,7 +136,7 @@ The results will be similar to this:



Current limitations[](#current-limitations "Permalink to this headline")
Current limitations[🔗](#current-limitations "Permalink to this headline")
-------------------------------------------------------------------------

Both Kubeapps and the Helm charts deployed by this launcher are open-source projects, which are continuously evolving. The versions installed on CloudFerro Cloud provide a snapshot of this development, as a convenience feature.

@ -1,11 +1,11 @@
GitOps with Argo CD on CloudFerro Cloud Kubernetes[](#gitops-with-argo-cd-on-brand-name-kubernetes "Permalink to this headline")
GitOps with Argo CD on CloudFerro Cloud Kubernetes[🔗](#gitops-with-argo-cd-on-brand-name-kubernetes "Permalink to this headline")
=================================================================================================================================

Argo CD is a continuous deployment tool for Kubernetes, designed with GitOps and Infrastructure as Code (IaC) principles in mind. It automatically ensures that the state of applications deployed on a Kubernetes cluster is always in sync with a dedicated Git repository where we define the desired state.

In this article we will demonstrate installing Argo CD on a Kubernetes cluster and deploying an application using this tool.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Argo CD
@ -14,7 +14,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Create and deploy Argo CD application resource
> * View the deployed resources

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -47,7 +47,7 @@ No. 7 **Access to exemplary Flask application**

You should have access to the [example Flask application](https://github.com/CloudFerro/K8s-samples/tree/main/Flask-K8s-deployment), to be downloaded from GitHub later in the article. It will serve as an example of a minimal application; by changing it, we will demonstrate that Argo CD captures those changes continually.

Step 1 Install Argo CD[](#step-1-install-argo-cd "Permalink to this headline")
Step 1 Install Argo CD[🔗](#step-1-install-argo-cd "Permalink to this headline")
-------------------------------------------------------------------------------

Let’s install Argo CD first, under the following assumptions:
@ -74,7 +74,7 @@ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/st

```

Step 2 Access Argo CD from your browser[](#step-2-access-argo-cd-from-your-browser "Permalink to this headline")
Step 2 Access Argo CD from your browser[🔗](#step-2-access-argo-cd-from-your-browser "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

The Argo CD web application is by default not accessible from the browser. To enable this, change the applicable service from the **ClusterIP** to the **LoadBalancer** type with the command:
@ -110,7 +110,7 @@ After typing in your credentials to the login form, you get transferred to the f



Step 3 Create a Git repository[](#step-3-create-a-git-repository "Permalink to this headline")
Step 3 Create a Git repository[🔗](#step-3-create-a-git-repository "Permalink to this headline")
-----------------------------------------------------------------------------------------------

You need to create a git repository first. The state of the application on your Kubernetes cluster will be synced to the state of this repo. It is recommended to keep it separate from your application code repository, to avoid triggering the CI pipelines whenever we change the configuration.
@ -123,7 +123,7 @@ Create the repository first, we call ours **argocd-sample**. While filling in th

In that view, the project URL will be pre-filled, corresponding to the URL of your GitLab instance. In the place denoted with a blue rectangle, you should enter your user name; usually, it will be **root**, but it can be anything else. If there already are some users defined in GitLab, their names will appear in a drop-down menu.

Step 4 Download Flask application[](#step-4-download-flask-application "Permalink to this headline")
Step 4 Download Flask application[🔗](#step-4-download-flask-application "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

The next goal is to download two YAML files to a folder called **ArgoCD-sample** and its subfolder **deployment**.
@ -146,7 +146,7 @@ rm K8s-samples/ -rf

Files **deployment.yaml** and **service.yaml** deploy a sample Flask application on Kubernetes and expose it as a service. These are typical minimal examples of a deployment and a service and can be obtained from the CloudFerro Kubernetes samples repository.

Step 5 Push your app deployment configurations[](#step-5-push-your-app-deployment-configurations "Permalink to this headline")
Step 5 Push your app deployment configurations[🔗](#step-5-push-your-app-deployment-configurations "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------

Then you need to upload the files **deployment.yaml** and **service.yaml** to the remote repository. Since you are using git, you perform the upload by *syncing* your local repo with the remote. First initiate the repo locally, then push the files to your remote with the following commands (replace with your own git repository instance):
@ -165,7 +165,7 @@ As a result, at this point, we have the two files available in remote repository



Step 6 Create Argo CD application resource[](#step-6-create-argo-cd-application-resource "Permalink to this headline")
Step 6 Create Argo CD application resource[🔗](#step-6-create-argo-cd-application-resource "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------

Argo CD configuration for a specific application is defined using an application custom resource. Such a resource connects a Kubernetes cluster with a repository where deployment configurations are stored.
@ -227,7 +227,7 @@ spec.destination.server
spec.destination.namespace
: The namespace in the cluster where the application will be deployed.
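Putting those fields together, a minimal **application.yaml** could look like the sketch below; the repository URL, paths and names are assumptions and must be replaced with your own:

```yaml
# Hypothetical sketch -- repoURL, path and names must match your own setup
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: flask-sample
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/root/argocd-sample.git
    targetRevision: HEAD
    path: deployment          # the subfolder holding deployment.yaml/service.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}             # sync automatically when the repo changes
```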

Step 7 Deploy Argo CD application[](#step-7-deploy-argo-cd-application "Permalink to this headline")
Step 7 Deploy Argo CD application[🔗](#step-7-deploy-argo-cd-application "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

After we created the **application.yaml** file, the next step is to commit it and push it to the remote repo. We can do this with the following commands:
@ -246,7 +246,7 @@ kubectl apply -f application.yaml

```

Step 8 View the deployed resources[](#step-8-view-the-deployed-resources "Permalink to this headline")
Step 8 View the deployed resources[🔗](#step-8-view-the-deployed-resources "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

After performing the steps above, switch views to the Argo CD UI. We can see that our application appears on the list of applications and that the state to be applied on the cluster was properly captured from the Git repo. It will take a few minutes to complete the deployment of resources on the cluster:
@ -263,7 +263,7 @@ After clicking on the application’s box, we can also see the details of all th

With the default settings, Argo CD will poll the Git repository every 3 minutes to capture the desired state of the cluster. If any changes in the repo are detected, the applications on the cluster will be automatically relaunched with the new configuration applied.
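If the 3-minute default does not suit you, the polling interval can be tuned through the `timeout.reconciliation` setting in the **argocd-cm** ConfigMap; a hedged sketch (the 60-second value is just an example, and in recent Argo CD versions the application controller runs as a StatefulSet that must be restarted for the change to take effect):

```shell
# Set the Argo CD repo polling interval to 60 seconds (example value)
kubectl patch configmap argocd-cm -n argocd \
  --type merge -p '{"data":{"timeout.reconciliation":"60s"}}'
# Restart the application controller so the new interval takes effect
kubectl rollout restart statefulset argocd-application-controller -n argocd
```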

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

* test applying changes to the deployment in the repository (e.g. commit a deployment with a different image in the container spec) and verify that Argo CD captures the change and updates the cluster state

@ -1,4 +1,4 @@
HTTP Request-based Autoscaling on K8S using Prometheus and Keda on CloudFerro Cloud[](#http-request-based-autoscaling-on-k8s-using-prometheus-and-keda-on-brand-name "Permalink to this headline")
HTTP Request-based Autoscaling on K8S using Prometheus and Keda on CloudFerro Cloud[🔗](#http-request-based-autoscaling-on-k8s-using-prometheus-and-keda-on-brand-name "Permalink to this headline")
===================================================================================================================================================================================================

The Kubernetes Horizontal Pod Autoscaler (HPA) natively utilizes CPU and RAM metrics as the default triggers for increasing or decreasing the number of pods. While this is often sufficient, there can be use cases where scaling on custom metrics is preferred.
@ -11,7 +11,7 @@ Note

We will use *NGINX web server* to demonstrate the app, and *NGINX ingress* to deploy it and collect metrics. Note that *NGINX web server* and *NGINX ingress* are two separate pieces of software, with two different purposes.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install NGINX ingress on Magnum cluster
@ -23,7 +23,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Deploy KEDA ScaledObject
> * Test with Locust

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -47,7 +47,7 @@ This article will introduce you to Helm charts on Kubernetes:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html.md)

Install NGINX ingress on Magnum cluster[](#install-nginx-ingress-on-magnum-cluster "Permalink to this headline")
Install NGINX ingress on Magnum cluster[🔗](#install-nginx-ingress-on-magnum-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

Please type in the following commands to download the *ingress-nginx* Helm repo and then install the chart. Note that we are using a custom namespace, *ingress-nginx*, as well as setting the options to enable Prometheus metrics.
@ -77,7 +77,7 @@ ingress-nginx-controller LoadBalancer 10.254.118.18 64.225.135.67 80:315

We get **64.225.135.67**. Instead of that value, use the EXTERNAL-IP value you get in your terminal after running the above command.
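Rather than copying the address by eye, you can capture the EXTERNAL-IP into a shell variable; a sketch using kubectl's jsonpath output (service and namespace names as installed above):

```shell
# Extract the load balancer IP of the ingress controller service
EXTERNAL_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$EXTERNAL_IP"
```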

Install Prometheus[](#install-prometheus "Permalink to this headline")
Install Prometheus[🔗](#install-prometheus "Permalink to this headline")
-----------------------------------------------------------------------

In order to install Prometheus, please apply the following command on your cluster:
@ -89,7 +89,7 @@ kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/

Note that this is a Prometheus installation customized for NGINX Ingress; it already installs to the *ingress-nginx* namespace by default, so there is no need to provide the namespace flag or create one.

Install Keda[](#install-keda "Permalink to this headline")
Install Keda[🔗](#install-keda "Permalink to this headline")
-----------------------------------------------------------

With the steps below, create a separate namespace for Keda artifacts, download the repo and install the Keda-Core chart:
@ -104,7 +104,7 @@ helm install keda kedacore/keda --version 2.3.0 --namespace keda

```

Deploy a sample app[](#deploy-a-sample-app "Permalink to this headline")
Deploy a sample app[🔗](#deploy-a-sample-app "Permalink to this headline")
-------------------------------------------------------------------------

With the above steps completed, we can deploy a simple application. It will be an NGINX web server, serving a simple “Welcome to nginx!” page. Note that we create a deployment and then expose this deployment as a service of type ClusterIP. Create a file *app-deployment.yaml* in your favorite editor:
@ -154,7 +154,7 @@ kubectl apply -f app-deployment.yaml -n ingress-nginx

We are deploying this application into the *ingress-nginx* namespace, where the ingress installation and Prometheus are also hosted. For production scenarios, you might want better isolation of application vs. infrastructure; this is, however, beyond the scope of this article.

Deploy our app ingress[](#deploy-our-app-ingress "Permalink to this headline")
Deploy our app ingress[🔗](#deploy-our-app-ingress "Permalink to this headline")
-------------------------------------------------------------------------------

Our application is already running and exposed in our cluster, but we want to also expose it publicly. For this purpose we will use NGINX ingress, which will also act as a proxy to register the request metrics. Create a file *app-ingress.yaml* with the following contents:
@ -204,7 +204,7 @@ After typing the IP address with the prefix (replace with your own floating IP w



Access Prometheus dashboard[](#access-prometheus-dashboard "Permalink to this headline")
Access Prometheus dashboard[🔗](#access-prometheus-dashboard "Permalink to this headline")
-----------------------------------------------------------------------------------------

To access the Prometheus dashboard, we can port-forward the running prometheus-server to our localhost. This could be useful for troubleshooting. We have the *prometheus-server* running as a *NodePort* service, which can be verified as below:
@ -231,7 +231,7 @@ Then enter *localhost:9090* in your browser, you will see the Prometheus dashboa



Deploy KEDA ScaledObject[](#deploy-keda-scaledobject "Permalink to this headline")
Deploy KEDA ScaledObject[🔗](#deploy-keda-scaledobject "Permalink to this headline")
-----------------------------------------------------------------------------------

Keda ScaledObject is a custom resource which enables scaling our application based on custom metrics. In the YAML manifest we define what will be scaled (the nginx deployment), the conditions for scaling, and the definition and configuration of the trigger, in this case Prometheus. Prepare a file *scaled-object.yaml* with the following contents:
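For orientation, a minimal sketch of such a ScaledObject follows; the deployment name, Prometheus address and query are assumptions and must match what you actually deployed in the earlier steps:

```yaml
# Hypothetical sketch -- adjust names, serverAddress and query to your setup
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nginx-scaledobject
spec:
  scaleTargetRef:
    name: nginx                # the deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  cooldownPeriod: 300          # seconds before scaling back down
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.ingress-nginx.svc.cluster.local:9090
        metricName: nginx_requests
        query: sum(rate(nginx_ingress_controller_requests[1m]))
        threshold: "5"
```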
@ -271,7 +271,7 @@ kubectl apply -f scaled-object.yaml -n ingress-nginx

```

Test with Locust[](#test-with-locust "Permalink to this headline")
Test with Locust[🔗](#test-with-locust "Permalink to this headline")
-------------------------------------------------------------------

We can now test whether the scaling works as expected. We will use *Locust* for this, which is a load testing tool. To quickly deploy *Locust* as a LoadBalancer service type, enter the following commands:
@ -321,7 +321,7 @@ nginx-85b98978db-6zcdw 1/1 Running 0

```

Cooling down[](#cooling-down "Permalink to this headline")
Cooling down[🔗](#cooling-down "Permalink to this headline")
-----------------------------------------------------------

After hitting “Stop” in Locust, the pods will scale down to one replica, in line with the value of the *coolDownPeriod* parameter, which is defined in the Keda ScaledObject. Its default value is 300 seconds. If you want to change it, use the command

@ -1,15 +1,15 @@
How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum[](#how-to-access-kubernetes-cluster-post-deployment-using-kubectl-on-brand-name-openstack-magnum "Permalink to this headline")
How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum[🔗](#how-to-access-kubernetes-cluster-post-deployment-using-kubectl-on-brand-name-openstack-magnum "Permalink to this headline")
===================================================================================================================================================================================================================================

In this tutorial, you start with a freshly installed Kubernetes cluster on a CloudFerro OpenStack server and connect the main Kubernetes tool, **kubectl**, to the cloud.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * How to connect **kubectl** to the OpenStack Magnum server
> * How to access clusters with **kubectl**

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**
@ -53,7 +53,7 @@ No. 4 **Connect openstack client to the cloud**

Prepare **openstack** and **magnum** clients by executing *Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud* from the article [How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon](How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-CloudFerro-Cloud-Horizon.html.md).

The Plan[](#the-plan "Permalink to this headline")
The Plan[🔗](#the-plan "Permalink to this headline")
---------------------------------------------------

> * Follow the steps listed in Prerequisite No. 2 and install **kubectl** on the platform of your choice.
@ -62,7 +62,7 @@ The Plan[](#the-plan "Permalink to this headline")

You are then going to connect **kubectl** to the cloud.
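The plan boils down to a few commands; a sketch, assuming a cluster named **k8s-cluster** (substitute your own cluster name, and note the config file name the command writes may differ):

```shell
# Create a directory for the certificates and generate the kubeconfig there
mkdir k8sdir
openstack coe cluster config k8s-cluster --dir k8sdir

# Point kubectl at the generated config and verify access
export KUBECONFIG=$(pwd)/k8sdir/config
kubectl get nodes
```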

Step 1 Create directory to download the certificates[](#step-1-create-directory-to-download-the-certificates "Permalink to this headline")
Step 1 Create directory to download the certificates[🔗](#step-1-create-directory-to-download-the-certificates "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------

Create a new directory called *k8sdir* into which the certificates will be downloaded:
@ -90,7 +90,7 @@ Note

In Linux, a file may or may not have an extension, while on Windows, it must have one.

Step 2A Download Certificates From the Server using the CLI commands[](#step-2a-download-certificates-from-the-server-using-the-cli-commands "Permalink to this headline")
Step 2A Download Certificates From the Server using the CLI commands[🔗](#step-2a-download-certificates-from-the-server-using-the-cli-commands "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

You will use the command
@ -158,7 +158,7 @@ This is the entire procedure in terminal window:



Step 2B Download Certificates From the Server using Horizon commands[](#step-2b-download-certificates-from-the-server-using-horizon-commands "Permalink to this headline")
Step 2B Download Certificates From the Server using Horizon commands[🔗](#step-2b-download-certificates-from-the-server-using-horizon-commands "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

You can download the config file from Horizon directly to your computer. First list the clusters via **Container Infra** -> **Clusters**, find the cluster and click on the rightmost drop-down menu in its column:
@ -185,7 +185,7 @@ export KUBECONFIG=/home/dusko/k8sdir/k8s-cluster_config-1.yaml

Depending on your environment, you may need to open a new terminal window to make the above command work.

Step 3 Verify That kubectl Has Access to the Cloud[](#step-3-verify-that-kubectl-has-access-to-the-cloud "Permalink to this headline")
Step 3 Verify That kubectl Has Access to the Cloud[🔗](#step-3-verify-that-kubectl-has-access-to-the-cloud "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

See basic data about the cluster with the following command:
@ -219,7 +219,7 @@ kubectl options

```

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

With **kubectl** operational, you can

@ -1,4 +1,4 @@
|
||||
How To Create API Server LoadBalancer for Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[](#how-to-create-api-server-loadbalancer-for-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
|
||||
How To Create API Server LoadBalancer for Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum[🔗](#how-to-create-api-server-loadbalancer-for-kubernetes-cluster-on-brand-name-openstack-magnum "Permalink to this headline")
|
||||
===============================================================================================================================================================================================================================
|
||||
|
||||
Load balancer can be understood both as
@ -8,7 +8,7 @@ Load balancer can be understood both as

There is an option to create a load balancer while creating the Kubernetes cluster, but you can also create the cluster without one. This article will show you how to access the cluster even if you did not specify a load balancer at creation time.
What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headline")

What We Are Going To Do[🔗](#what-we-are-going-to-do "Permalink to this headline")

---------------------------------------------------------------------------------

> * Create a cluster called NoLoadBalancer with one master node and no load balancer

@ -18,7 +18,7 @@ What We Are Going To Do[](#what-we-are-going-to-do "Permalink to this headlin
> * Use parameter **--insecure-skip-tls-verify=true** to override server security
> * Verify that **kubectl** is working normally, which means that you have full access to the Kubernetes cluster

Prerequisites[](#prerequisites "Permalink to this headline")

Prerequisites[🔗](#prerequisites "Permalink to this headline")

-------------------------------------------------------------
No. 1 **Hosting**

@ -37,7 +37,7 @@ No. 4 **Connect to the Kubernetes Cluster in Order to Use kubectl**

Article [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md) will show you how to connect your local machine to the existing Kubernetes cluster.
How To Enable or Disable Load Balancer for Master Nodes[](#how-to-enable-or-disable-load-balancer-for-master-nodes "Permalink to this headline")

How To Enable or Disable Load Balancer for Master Nodes[🔗](#how-to-enable-or-disable-load-balancer-for-master-nodes "Permalink to this headline")

-------------------------------------------------------------------------------------------------------------------------------------------------

The default state for a Kubernetes cluster in CloudFerro Cloud OpenStack Magnum hosting is to have no load balancer set up in advance. You can have a load balancer created together with the basic Kubernetes cluster by checking the option **Enable Load Balancer for Master Nodes** in window **Network** when creating a cluster through the Horizon interface. (See **Prerequisite No. 3** for the complete procedure.)
@ -58,7 +58,7 @@ Regardless of the number of master nodes you have specified, checking this field

If you accept the default state of **unchecked**, no load balancer will be created. However, without any load balancer “in front” of the cluster, the cluster API is exposed only within the Kubernetes network. You avoid running a load balancer, but the direct connection from the local machine to the cluster is lost.
One Master Node, No Load Balancer and the Problem It All Creates[](#one-master-node-no-load-balancer-and-the-problem-it-all-creates "Permalink to this headline")

One Master Node, No Load Balancer and the Problem It All Creates[🔗](#one-master-node-no-load-balancer-and-the-problem-it-all-creates "Permalink to this headline")

------------------------------------------------------------------------------------------------------------------------------------------------------------------
To show exactly what the problem is, use
@ -75,7 +75,7 @@ kubectl get nodes

but it will not work. If there were a load balancer “in front of” the cluster, it would work; here there is none, so it will not. The rest of this article will show you how to make it work anyway, using the fact that the master node of the cluster has its own load balancer for kube-api.
Step 1 Create a Cluster With One Master Node and No Load Balancer[](#step-1-create-a-cluster-with-one-master-node-and-no-load-balancer "Permalink to this headline")

Step 1 Create a Cluster With One Master Node and No Load Balancer[🔗](#step-1-create-a-cluster-with-one-master-node-and-no-load-balancer "Permalink to this headline")

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Create cluster *NoLoadBalancer* as explained in Prerequisite No. 3. Let there be
@ -107,7 +107,7 @@ Addresses starting with 10.0… are usually reserved for local networks, meaning


Step 2 Create Floating IP for Master Node[](#step-2-create-floating-ip-for-master-node "Permalink to this headline")

Step 2 Create Floating IP for Master Node[🔗](#step-2-create-floating-ip-for-master-node "Permalink to this headline")

---------------------------------------------------------------------------------------------------------------------
Here are the instances that serve as nodes for that cluster:
@ -126,7 +126,7 @@ This is the result:

The IP number is **64.225.135.112** – you are going to use it later on to change the *config* file for access to the Kubernetes cluster.
Step 3 **Create config File for Kubernetes Cluster**[](#step-3-create-config-file-for-kubernetes-cluster "Permalink to this headline")

Step 3 **Create config File for Kubernetes Cluster**[🔗](#step-3-create-config-file-for-kubernetes-cluster "Permalink to this headline")

---------------------------------------------------------------------------------------------------------------------------------------
You are now going to connect to *NoLoadBalancer* cluster in spite of it not having a load balancer from the very start. To that end, create a config file to connect to the cluster, with the following command:
@ -163,7 +163,7 @@ server: https://10.0.0.54:6443

```
Step 4 Swap Existing Floating IP Address for the Network Address[](#step-4-swap-existing-floating-ip-address-for-the-network-address "Permalink to this headline")

Step 4 Swap Existing Floating IP Address for the Network Address[🔗](#step-4-swap-existing-floating-ip-address-for-the-network-address "Permalink to this headline")

-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Now go back to the Horizon interface and execute commands **Compute** -> **Instances** to see the addresses for the master node of the *NoLoadBalancer* cluster:
@ -193,7 +193,7 @@ The line should look like this:

Save the edited file. In case of **nano**, press `Control-x`, then `Y`, then `Enter` on the keyboard.
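The same edit can also be done non-interactively. A minimal sketch, using the example addresses from this article (the private API address 10.0.0.54 and the floating IP 64.225.135.112) on a demo copy of the config:

```shell
# Create a demo config fragment with the private API address
cat > demo-config <<'EOF'
    server: https://10.0.0.54:6443
EOF

# Swap the private address for the floating IP, as done by hand above
sed -i 's|https://10.0.0.54:6443|https://64.225.135.112:6443|' demo-config

cat demo-config   # the server line now points at the floating IP
```

Run the same `sed` against your real config file once you have verified the result on a copy.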
Step 4 Add Parameter --insecure-skip-tls-verify=true to Make kubectl Work[](#step-4-add-parameter-insecure-skip-tls-verify-true-to-make-kubectl-work "Permalink to this headline")

Step 4 Add Parameter --insecure-skip-tls-verify=true to Make kubectl Work[🔗](#step-4-add-parameter-insecure-skip-tls-verify-true-to-make-kubectl-work "Permalink to this headline")

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Try to activate **kubectl** again and it will still fail. To make it work, add parameter **--insecure-skip-tls-verify=true**:
@ -1,7 +1,7 @@

How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon[](#how-to-install-openstack-and-magnum-clients-for-command-line-interface-to-brand-name-horizon "Permalink to this headline")

How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon[🔗](#how-to-install-openstack-and-magnum-clients-for-command-line-interface-to-brand-name-horizon "Permalink to this headline")

=================================================================================================================================================================================================================================
How To Issue Commands to the OpenStack and Magnum Servers[](#how-to-issue-commands-to-the-openstack-and-magnum-servers "Permalink to this headline")

How To Issue Commands to the OpenStack and Magnum Servers[🔗](#how-to-issue-commands-to-the-openstack-and-magnum-servers "Permalink to this headline")

-----------------------------------------------------------------------------------------------------------------------------------------------------

There are three ways of working with Kubernetes clusters within the OpenStack Magnum and Horizon modules:
@ -18,14 +18,14 @@ CLI commands are issued from desktop computer or server in the cloud. This appro

Both the Horizon interface and the CLI use HTTPS requests internally and in an interactive manner. You can, however, write your own software to automate operations and/or change the state of the server in real time.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")

What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")

---------------------------------------------------------------------------------------

> * How to install the CLI – OpenStack and Magnum clients
> * How to connect the CLI to the Horizon server
> * Basic examples of using OpenStack and Magnum clients
Notes On Python Versions and Environments for Installation[](#notes-on-python-versions-and-environments-for-installation "Permalink to this headline")

Notes On Python Versions and Environments for Installation[🔗](#notes-on-python-versions-and-environments-for-installation "Permalink to this headline")

-------------------------------------------------------------------------------------------------------------------------------------------------------

OpenStack is written in Python, so you first need to install a Python working environment and then install the OpenStack clients. Officially, OpenStack runs only on Python 2.7, but you will most likely only be able to install a version 3.x of Python. During the installation, adjust the Python version numbers mentioned in the documentation accordingly.
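A minimal sketch of setting up such an environment with Python's built-in `venv` module (the directory name is arbitrary; on some Linux distributions you may first need to install a separate `python3-venv` package):

```shell
# Create an isolated Python 3 environment for the OpenStack clients
python3 -m venv oscli-env

# Install the clients into it (requires network access):
#   oscli-env/bin/pip install python-openstackclient python-magnumclient

# The environment's interpreter is a Python 3.x:
oscli-env/bin/python -c 'import sys; print(sys.version_info.major)'
```

Keeping the clients in their own environment avoids clashes with system-wide Python packages.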
@ -42,7 +42,7 @@ Note

If you decide to install Python and the OpenStack clients on a virtual machine, you will need SSH keys in order to be able to enter the working environment. See [How to create key pair in OpenStack Dashboard on CloudFerro Cloud](../cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-CloudFerro-Cloud.html.md).
Prerequisites[](#prerequisites "Permalink to this headline")

Prerequisites[🔗](#prerequisites "Permalink to this headline")

-------------------------------------------------------------

No. 1 **Hosting**
@ -71,7 +71,7 @@ No. 5 **Connect openstack command to the cloud**

After the successful installation of **openstack** command, it should be connected to the cloud. Follow this article for technical details: [How to activate OpenStack CLI access to CloudFerro Cloud cloud using one- or two-factor authentication](../accountmanagement/How-to-activate-OpenStack-CLI-access-to-CloudFerro-Cloud-cloud-using-one-or-two-factor-authentication.html.md).
Step 1 Install the CLI for Kubernetes on OpenStack Magnum[](#step-1-install-the-cli-for-kubernetes-on-openstack-magnum "Permalink to this headline")

Step 1 Install the CLI for Kubernetes on OpenStack Magnum[🔗](#step-1-install-the-cli-for-kubernetes-on-openstack-magnum "Permalink to this headline")

-----------------------------------------------------------------------------------------------------------------------------------------------------

In this step, you are going to install clients for commands **openstack** and **coe**, from modules OpenStack and Magnum, respectively.
@ -92,7 +92,7 @@ pip install python-magnumclient

```
Step 2 How to Use the OpenStack Client[](#step-2-how-to-use-the-openstack-client "Permalink to this headline")

Step 2 How to Use the OpenStack Client[🔗](#step-2-how-to-use-the-openstack-client "Permalink to this headline")

---------------------------------------------------------------------------------------------------------------

In this step, you are going to start using the OpenStack client you have installed and connected to the cloud.
@ -109,7 +109,7 @@ The preferred way, however, is typing the keyword **openstack**, followed by par

OpenStack commands may have dozens of parameters, so it is better to compose the command in an independent text editor and then copy and paste it into the terminal.
The Help Command[](#the-help-command "Permalink to this headline")

The Help Command[🔗](#the-help-command "Permalink to this headline")

-------------------------------------------------------------------

To learn about the available commands and their parameters, type **--help** after the command. If applied to the keyword **openstack** itself, it will write out a very long list of commands, which may come in useful as an orientation. It may start out like this:
@ -144,7 +144,7 @@ openstack network list


Step 4 How to Use the Magnum Client[](#step-4-how-to-use-the-magnum-client "Permalink to this headline")

Step 4 How to Use the Magnum Client[🔗](#step-4-how-to-use-the-magnum-client "Permalink to this headline")

---------------------------------------------------------------------------------------------------------

The OpenStack command for the server is **openstack**, but for Magnum the command is not **magnum**, as one would expect, but **coe**, for *container orchestration engine*. Therefore, the commands for clusters will always start with **openstack coe**.
@ -177,7 +177,7 @@ after clicking on **Container Infra** => **Clusters**.

Prerequisite No. 5 offers more technical info about the Magnum client.
What To Do Next[](#what-to-do-next "Permalink to this headline")

What To Do Next[🔗](#what-to-do-next "Permalink to this headline")

-----------------------------------------------------------------

In this tutorial you have
@ -1,9 +1,9 @@

How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum[](#how-to-use-command-line-interface-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")

How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum[🔗](#how-to-use-command-line-interface-for-kubernetes-clusters-on-brand-name-openstack-magnum "Permalink to this headline")

=========================================================================================================================================================================================================================
In this article you will use the Command Line Interface (CLI) to speed up testing and creation of Kubernetes clusters on OpenStack Magnum servers.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")

What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")

---------------------------------------------------------------------------------------
> * The advantages of using CLI over the Horizon graphical interface

@ -13,7 +13,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

> * Reasons why the cluster may fail to create
> * CLI commands to delete a cluster
Prerequisites[](#prerequisites "Permalink to this headline")

Prerequisites[🔗](#prerequisites "Permalink to this headline")

-------------------------------------------------------------

No. 1 **Hosting**
@ -46,12 +46,12 @@ No. 7 **Autohealing of Kubernetes Clusters**

To learn more about autohealing of Kubernetes clusters, follow this official article [What is Magnum Autohealer?](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/magnum-auto-healer/using-magnum-auto-healer.md).
The Advantages of Using the CLI[](#the-advantages-of-using-the-cli "Permalink to this headline")

The Advantages of Using the CLI[🔗](#the-advantages-of-using-the-cli "Permalink to this headline")

-------------------------------------------------------------------------------------------------

You can use the CLI and the Horizon interface interchangeably, but there are at least three advantages to using the CLI.
### Reproduce Commands Through Cut & Paste[](#reproduce-commands-through-cut-paste "Permalink to this headline")

### Reproduce Commands Through Cut & Paste[🔗](#reproduce-commands-through-cut-paste "Permalink to this headline")

Here is a command to list flavors in the system:
@ -72,7 +72,7 @@ and only then get the list of flavors to choose from:

A bonus is that keeping commands in a text editor automatically creates documentation for the server and cluster.
### CLI Commands Can Be Automated[](#cli-commands-can-be-automated "Permalink to this headline")

### CLI Commands Can Be Automated[🔗](#cli-commands-can-be-automated "Permalink to this headline")

You can use available automation. The result of the following Ubuntu pipeline is the URL for communication from **kubectl** to the Kubernetes cluster:
@ -106,11 +106,11 @@ awk '/ api_address /{print $4}')

is searching for the line starting with *api\_address* and extracting its value *https://64.225.132.135:6443*. The final result is exported to the system variable KUBERNETES\_URL, thus automatically setting it up for use by the Kubernetes cluster command **kubectl** when accessing the cloud.
### CLI Yields Access to All of the Existing OpenStack and Magnum Parameters[](#cli-yields-access-to-all-of-the-existing-openstack-and-magnum-parameters "Permalink to this headline")

### CLI Yields Access to All of the Existing OpenStack and Magnum Parameters[🔗](#cli-yields-access-to-all-of-the-existing-openstack-and-magnum-parameters "Permalink to this headline")
CLI commands offer access to a larger set of parameters than is available through Horizon. For instance, in Horizon the default time allowed for creation of a cluster is 60 minutes, while in the CLI you can set it to other values of your choice.
### Debugging OpenStack and Magnum Commands[](#debugging-openstack-and-magnum-commands "Permalink to this headline")

### Debugging OpenStack and Magnum Commands[🔗](#debugging-openstack-and-magnum-commands "Permalink to this headline")
To see what is actually happening behind the scenes when executing client commands, add parameter **--debug**:
@ -121,7 +121,7 @@ openstack coe cluster list --debug

The output will be several screens long, consisting of GET and POST web calls, with dozens of parameters shown on screen. (The output is too voluminous to reproduce here.)
How to Enter OpenStack Commands[](#how-to-enter-openstack-commands "Permalink to this headline")

How to Enter OpenStack Commands[🔗](#how-to-enter-openstack-commands "Permalink to this headline")

-------------------------------------------------------------------------------------------------

Note
@ -183,7 +183,7 @@ Warning

If you are new to Kubernetes, please start by creating clusters directly from the default cluster template.

Once you get more experience, you can start creating your own cluster templates; here is how to do it using the CLI.
OpenStack Command for Creation of Cluster[](#openstack-command-for-creation-of-cluster "Permalink to this headline")

OpenStack Command for Creation of Cluster[🔗](#openstack-command-for-creation-of-cluster "Permalink to this headline")

---------------------------------------------------------------------------------------------------------------------

In this step you can create a new cluster using either the default cluster template or any of the templates that you have already created.
@ -265,7 +265,7 @@ Copy and paste the above command into the terminal where OpenStack and Magnum cl


How To Check Upon the Status of the Cluster[](#how-to-check-upon-the-status-of-the-cluster "Permalink to this headline")

How To Check Upon the Status of the Cluster[🔗](#how-to-check-upon-the-status-of-the-cluster "Permalink to this headline")

-------------------------------------------------------------------------------------------------------------------------

The command to show the status of clusters is
@ -302,7 +302,7 @@ Note

It is beyond the scope of this article to describe how to delete elements through the Horizon interface. Make sure that quotas are available before creating a new cluster.
Failure to Create a Cluster[](#failure-to-create-a-cluster "Permalink to this headline")

Failure to Create a Cluster[🔗](#failure-to-create-a-cluster "Permalink to this headline")

-----------------------------------------------------------------------------------------

There are many reasons why a cluster may fail to create. Maybe the state of system quotas is not optimal, or maybe there is a mismatch between the parameters of the cluster and the parameters in the rest of the cloud. For example, if you base the creation of a cluster on the default cluster template, it will use the Fedora distribution and require 10 GiB of memory. That may clash with *--docker-volume-size* if it was set up to be larger than 10 GiB.
@ -319,7 +319,7 @@ If the creation process failed prematurely, then

> * change parameters and
> * run the cluster creation command again.
CLI Commands to Delete a Cluster[](#cli-commands-to-delete-a-cluster "Permalink to this headline")

CLI Commands to Delete a Cluster[🔗](#cli-commands-to-delete-a-cluster "Permalink to this headline")

---------------------------------------------------------------------------------------------------

If the cluster failed to create, it is still taking up system resources. Delete it with a command such as
@ -360,7 +360,7 @@ Deleting clusters that were not installed properly has freed up a significant am

In this step you have successfully deleted the clusters whose creation stopped prematurely, thus paving the way for the creation of the next cluster under slightly different circumstances.
What To Do Next[](#what-to-do-next "Permalink to this headline")

What To Do Next[🔗](#what-to-do-next "Permalink to this headline")

-----------------------------------------------------------------

In this tutorial, you have used CLI commands to generate cluster templates as well as clusters themselves. You have also seen how to free up system resources and try again if the cluster creation process failed.
@ -1,15 +1,15 @@

How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum[](#how-to-create-a-kubernetes-cluster-using-brand-name-openstack-magnum "Permalink to this headline")

How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum[🔗](#how-to-create-a-kubernetes-cluster-using-brand-name-openstack-magnum "Permalink to this headline")

=================================================================================================================================================================================
In this tutorial, you will start with an empty Horizon screen and end up running a full Kubernetes cluster.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")

What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")

---------------------------------------------------------------------------------------

> * Creating a new Kubernetes cluster using one of the default cluster templates
> * Visual interpretation of created networks and Kubernetes cluster nodes
Prerequisites[](#prerequisites "Permalink to this headline")

Prerequisites[🔗](#prerequisites "Permalink to this headline")

-------------------------------------------------------------

No. 1 **Hosting**
@ -28,7 +28,7 @@ An SSH key-pair created in OpenStack dashboard. To create it, follow this articl

The key pair created in that article is called “sshkey”. You will use it as one of the parameters for creation of the Kubernetes cluster.
Step 1 Create New Cluster Screen[](#step-1-create-new-cluster-screen "Permalink to this headline")

Step 1 Create New Cluster Screen[🔗](#step-1-create-new-cluster-screen "Permalink to this headline")

---------------------------------------------------------------------------------------------------

Click on **Container Infra** and then on **Clusters**.
@ -85,7 +85,7 @@ This is what the screen looks like when all the data have been entered:

Click on the lower right button **Next** or on option **Size** from the left main menu of the screen to proceed to the next step of defining a Kubernetes cluster.
Step 2 Define Master and Worker Nodes[](#step-2-define-master-and-worker-nodes "Permalink to this headline")

Step 2 Define Master and Worker Nodes[🔗](#step-2-define-master-and-worker-nodes "Permalink to this headline")

-------------------------------------------------------------------------------------------------------------

In general terms, *master nodes* are used to host the internal infrastructure of the cluster, while the *worker nodes* are used to host the K8s applications.
@ -130,7 +130,7 @@ Here is what the screen **Size** looks like when all the data are entered:

To proceed, click on the lower right button **Next** or on option **Network** from the left main menu.
Step 3 Defining Network and LoadBalancer[](#step-3-defining-network-and-loadbalancer "Permalink to this headline")

Step 3 Defining Network and LoadBalancer[🔗](#step-3-defining-network-and-loadbalancer "Permalink to this headline")

-------------------------------------------------------------------------------------------------------------------

This is the last of the mandatory screens and the blue **Submit** button in the lower right corner is now active. (If it is not, use screen button **Back** to fix values in the previous screens.)
@ -171,7 +171,7 @@ Use of ingress is a more advanced feature, related to load balancing the traffic

If you are just starting with Kubernetes, you will probably not require this feature immediately, so you can leave this option out.
Step 4 Advanced options[](#step-4-advanced-options "Permalink to this headline")

Step 4 Advanced options[🔗](#step-4-advanced-options "Permalink to this headline")

---------------------------------------------------------------------------------

**Option Management**
@ -194,7 +194,7 @@ Labels can change how the cluster creation is performed. There is a set of label

If you **turn on** the field **I do want to override Template and Workflow Labels** and use any of the *Template and Workflow Labels* by name, they will be set up the way you specified. Use this option very rarely, if at all, and only if you are sure of what you are doing.
Step 5 Forming of the Cluster[](#step-5-forming-of-the-cluster "Permalink to this headline")

Step 5 Forming of the Cluster[🔗](#step-5-forming-of-the-cluster "Permalink to this headline")

---------------------------------------------------------------------------------------------

Once you click the **Submit** button, OpenStack will start creating the Kubernetes cluster for you. It will show a message with a green background in the upper right corner of the window, stating that the creation of the cluster has started.
@ -213,7 +213,7 @@ Click on the name of the cluster, *Kubernetes*, and see what it will look like i


Step 6 Review cluster state[](#step-6-review-cluster-state "Permalink to this headline")

Step 6 Review cluster state[🔗](#step-6-review-cluster-state "Permalink to this headline")

-----------------------------------------------------------------------------------------

Here is what OpenStack Magnum created for you as the result of filling in the data in those three screens:
@ -238,7 +238,7 @@ Node names start with *kubernetes* because that is the name of the cluster in lo

Resources tied up by one attempt at creating a cluster are **not** automatically reclaimed when you attempt to create a new cluster. Therefore, several attempts in a row can lead to a stalemate in which no cluster will be formed until all of the tied-up resources are freed up.
What To Do Next[](#what-to-do-next "Permalink to this headline")

What To Do Next[🔗](#what-to-do-next "Permalink to this headline")

-----------------------------------------------------------------

You now have a fully operational Kubernetes cluster. You can
@ -1,9 +1,9 @@

How to create Kubernetes cluster using Terraform on CloudFerro Cloud[](#how-to-create-kubernetes-cluster-using-terraform-on-brand-name "Permalink to this headline")

How to create Kubernetes cluster using Terraform on CloudFerro Cloud[🔗](#how-to-create-kubernetes-cluster-using-terraform-on-brand-name "Permalink to this headline")

=====================================================================================================================================================================
In this article we demonstrate using [Terraform](https://www.terraform.io/) to deploy an OpenStack Magnum Kubernetes cluster on CloudFerro Cloud.
Prerequisites[](#prerequisites "Permalink to this headline")

Prerequisites[🔗](#prerequisites "Permalink to this headline")

-------------------------------------------------------------

No. 1 **Hosting account**
@ -40,7 +40,7 @@ Have Terraform installed locally or on a cloud VM - installation guidelines alon

After you finish working through that article, you will have access to the cloud via an active **openstack** command. Also, special environment (**env**) variables (**OS\_USERNAME**, **OS\_PASSWORD**, **OS\_AUTH\_URL** and others) will be set up so that various programs can use them – Terraform being the prime target here.
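For illustration, this is the mechanism at work: OpenStack tooling and the Terraform provider read credentials from **OS\_**-prefixed environment variables. The values below are placeholders, not real credentials:

```shell
# Normally these are set by sourcing the RC file downloaded from Horizon;
# the values here are placeholders for illustration only
export OS_AUTH_URL="https://keystone.example.com:5000/v3"
export OS_USERNAME="demo-user"
export OS_PASSWORD="not-a-real-password"

# Any program started from this shell -- terraform, openstack -- inherits them
env | grep '^OS_' | sort
```

Because the variables live in the shell environment, the credentials never need to appear in the Terraform files themselves.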
Define provider for Terraform[](#define-provider-for-terraform "Permalink to this headline")

Define provider for Terraform[🔗](#define-provider-for-terraform "Permalink to this headline")

---------------------------------------------------------------------------------------------

Terraform uses the notion of a *provider*, which represents your concrete cloud environment and covers authentication. CloudFerro Cloud clouds are built to comply with OpenStack technology, and OpenStack is one of the standard provider types for Terraform.
|
||||
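A minimal provider definition might look like the following sketch. The endpoint URL here is a hypothetical placeholder; the remaining credentials are picked up from the **OS\_\*** environment variables set by sourcing the RC file:

```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.54"
    }
  }
}

# Credentials (OS_USERNAME, OS_PASSWORD, ...) are read from the
# environment variables exported by the sourced RC file.
provider "openstack" {
  auth_url = "https://keystone.example.cloudferro.com:5000/v3" # hypothetical endpoint
}
```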
@ -80,7 +80,7 @@ The **auth\_url** is the only configuration option that shall be provided in the

This provider spec allows us to create a cluster in the following steps; it can also be reused to create other resources in your OpenStack environment, e.g. virtual machines, volumes and many others.

Define cluster resource in Terraform[](#define-cluster-resource-in-terraform "Permalink to this headline")
Define cluster resource in Terraform[🔗](#define-cluster-resource-in-terraform "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

The second step is to define the exact specification of a resource that we want to create with Terraform. In our case we want to create an OpenStack Magnum cluster. In Terraform terminology, it will be an instance of the **openstack\_containerinfra\_cluster\_v1** resource type. To proceed, create file **cluster.tf** which contains the specification of our cluster:
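The general shape of such a resource is sketched below. The template name, keypair and node counts are illustrative assumptions, not the article's exact listing:

```hcl
# Sketch of a cluster.tf -- substitute your own template name, keypair
# and node counts.
data "openstack_containerinfra_clustertemplate_v1" "template" {
  name = "k8s-stable-template" # hypothetical template name
}

resource "openstack_containerinfra_cluster_v1" "k8s_cluster" {
  name                = "my-terraform-cluster"
  cluster_template_id = data.openstack_containerinfra_clustertemplate_v1.template.id
  master_count        = 1
  node_count          = 2
  keypair             = "my-keypair"
}
```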
@ -132,7 +132,7 @@ In our example we operate on WAW3-2 cloud, where flavor **hmad.medium** is avail

The above configuration reflects a cluster where a *loadbalancer* is placed in front of the master nodes, and where this loadbalancer’s flavor is **HA-large**. Customizing this default, as with other more advanced defaults, would require creating a custom Magnum template, which is beyond the scope of this article.

Apply the configurations and create the cluster[](#apply-the-configurations-and-create-the-cluster "Permalink to this headline")
Apply the configurations and create the cluster[🔗](#apply-the-configurations-and-create-the-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------

Once both Terraform configurations described in previous steps are defined, we can apply them to create our cluster.
@ -174,7 +174,7 @@ The final lines of the output after successfully provisioning the cluster, shoul



What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Terraform can also be used to deploy additional applications to our cluster, e.g. using the Helm provider for Terraform. Check the Terraform documentation for more details.
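As a flavor of that idea, a hedged sketch using the Helm provider is shown below. It assumes the new cluster's kubeconfig has been saved locally; the path and the chosen chart are illustrative:

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # kubeconfig of the newly created cluster
  }
}

# Example: install the ingress-nginx chart into the cluster.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}
```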
@ -1,4 +1,4 @@
How to install Rancher RKE2 Kubernetes on CloudFerro Cloud[](#how-to-install-rancher-rke2-kubernetes-on-brand-name "Permalink to this headline")
How to install Rancher RKE2 Kubernetes on CloudFerro Cloud[🔗](#how-to-install-rancher-rke2-kubernetes-on-brand-name "Permalink to this headline")
=================================================================================================================================================

[RKE2](https://docs.rke2.io/) - Rancher Kubernetes Engine version 2 - is a Kubernetes distribution provided by SUSE. Running a self-managed RKE2 cluster on CloudFerro Cloud is a viable option, especially for those seeking smooth integration with the Rancher platform and customization options.
@ -11,7 +11,7 @@ An RKE2 cluster can be provisioned from Rancher GUI. However, in this article we

We also illustrate the coding techniques used, in case you want to enhance the RKE2 implementation further.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Perform the preliminary setup
@ -29,7 +29,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

The code is tested on Ubuntu 22.04.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -96,7 +96,7 @@ One of the files downloaded from the above link will be **variables.tf**. It con



Step 1 Perform the preliminary setup[](#step-1-perform-the-preliminary-setup "Permalink to this headline")
Step 1 Perform the preliminary setup[🔗](#step-1-perform-the-preliminary-setup "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------

Our objective is to create a Kubernetes cluster which runs in the cloud environment. RKE2 software packages will be installed on cloud virtual machines playing the roles of Kubernetes master and worker nodes. Also, several other OpenStack resources will be created along the way.
@ -110,7 +110,7 @@ As part of the preliminary setup to provision these resources we will:

Here we provide the instructions to create the project, the credentials and the key pair, and to source the RC file locally.

### Preparation step 1 Create new project[](#preparation-step-1-create-new-project "Permalink to this headline")
### Preparation step 1 Create new project[🔗](#preparation-step-1-create-new-project "Permalink to this headline")

The first step is to create a new project using the Horizon UI. Click on Identity → Projects. Fill in the name of the project on the first tab:

@ -124,7 +124,7 @@ Then click on “Create Project”. Once the project is created, switch to the c



### Preparation step 2 Create application credentials[](#preparation-step-2-create-application-credentials "Permalink to this headline")
### Preparation step 2 Create application credentials[🔗](#preparation-step-2-create-application-credentials "Permalink to this headline")

The next step is to create an application credential that will be used to authenticate the OpenStack Cloud Controller Manager (used for automated load balancer provisioning). To create one, go to menu **Identity** → **Application Credentials**. Fill in the form as per the below example, passing all available roles (“member”, “load-balancer\_member”, “creator”, “reader”) to this credential. Set the expiry date to a date in the future.

@ -136,15 +136,15 @@ After clicking on **Create Application Credential**, copy both application ID an

Prerequisite No. 7 contains a complete guide to application credentials.

### Preparation step 3 Keypair operational[](#preparation-step-3-keypair-operational "Permalink to this headline")
### Preparation step 3 Keypair operational[🔗](#preparation-step-3-keypair-operational "Permalink to this headline")

Before continuing, ensure you have a keypair available. If you already had a keypair in your main project, it will also be available in the newly created project. If you do not have one yet, create it from the left menu **Project** → **Compute** → **Key Pairs**. For additional details, visit Prerequisite No. 6.

### Preparation step 4 Authenticate to the newly formed project[](#preparation-step-4-authenticate-to-the-newly-formed-project "Permalink to this headline")
### Preparation step 4 Authenticate to the newly formed project[🔗](#preparation-step-4-authenticate-to-the-newly-formed-project "Permalink to this headline")

Lastly, download the RC file corresponding to the new project from Horizon GUI, then source this file in your local Linux terminal. See Prerequisite No. 4.

Step 2 Use Terraform configuration for RKE2 from CloudFerro’s GitHub repository[](#step-2-use-terraform-configuration-for-rke2-from-cloudferro-s-github-repository "Permalink to this headline")
Step 2 Use Terraform configuration for RKE2 from CloudFerro’s GitHub repository[🔗](#step-2-use-terraform-configuration-for-rke2-from-cloudferro-s-github-repository "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

We added folder **rke2-terraform** to CloudFerro’s [K8s-samples GitHub repository](https://github.com/CloudFerro/K8s-samples/tree/main/rke2-terraform), from Prerequisite No. 11. This project includes configuration files to provision an RKE2 cluster on CloudFerro clouds and can be used as a starter pack for further customizations to your specific requirements.
@ -178,7 +178,7 @@ cloud-init-workers.yml.tpl

One of the primary functions of each *cloud-init* file is to install RKE2 on both master and worker nodes.

Step 3 Provision an RKE2 cluster[](#step-3-provision-an-rke2-cluster "Permalink to this headline")
Step 3 Provision an RKE2 cluster[🔗](#step-3-provision-an-rke2-cluster "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Let’s provision an RKE2 Kubernetes cluster now. This will consist of the following steps:
@ -208,7 +208,7 @@ Note

A highly available control plane is currently not covered by this repository. Also, setting the number of master nodes to a value other than 1 is **not** supported.

### Enter data in file terraform.tfvars[](#enter-data-in-file-terraform-tfvars "Permalink to this headline")
### Enter data in file terraform.tfvars[🔗](#enter-data-in-file-terraform-tfvars "Permalink to this headline")

The next step is to create file **terraform.tfvars**, with the following contents:

@ -236,7 +236,7 @@ Get application\_credential\_id
Get application\_credential\_secret
: The same, but for the secret.

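Putting those values together, a **terraform.tfvars** could look like the sketch below. The variable names must match those declared in the repository's **variables.tf**; the names and values shown here are illustrative placeholders:

```hcl
# Illustrative terraform.tfvars -- align variable names with variables.tf.
application_credential_id     = "<APPLICATION_CREDENTIAL_ID>"
application_credential_secret = "<APPLICATION_CREDENTIAL_SECRET>"
keypair_name                  = "my-keypair"
master_count                  = 1
worker_count                  = 2
```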
### Run Terraform to provision RKE2 cluster[](#run-terraform-to-provision-rke2-cluster "Permalink to this headline")
### Run Terraform to provision RKE2 cluster[🔗](#run-terraform-to-provision-rke2-cluster "Permalink to this headline")

This completes the setup part. We can now run the standard Terraform commands - **init**, **plan** and **apply** - to create our RKE2 cluster. The commands should be executed in the order provided below. Type **yes** when required to reconfirm the steps planned by Terraform.

@ -269,7 +269,7 @@ We can see that the cluster is provisioned correctly in our case, with both mast



Step 4 Demonstrate cloud-native integration covered by the repo[](#step-4-demonstrate-cloud-native-integration-covered-by-the-repo "Permalink to this headline")
Step 4 Demonstrate cloud-native integration covered by the repo[🔗](#step-4-demonstrate-cloud-native-integration-covered-by-the-repo "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

We can verify the automated provisioning of load balancers and public Floating IP by exposing a service of type LoadBalancer. The following **kubectl** commands will deploy and expose an **nginx** server in our RKE2 cluster’s default namespace:
@ -303,7 +303,7 @@ Ultimately, we can check the service is running as a public service in our brows



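For reference, the nginx exposure used in this step can also be expressed declaratively; the manifest below is a hedged equivalent sketch, not the repository's exact files:

```yaml
# Sketch: declarative form of deploying and exposing nginx. A Service of
# type LoadBalancer prompts the OpenStack Cloud Controller Manager to
# provision an Octavia load balancer with a public Floating IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```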
Implementation details[](#implementation-details "Permalink to this headline")
Implementation details[🔗](#implementation-details "Permalink to this headline")
-------------------------------------------------------------------------------

Explaining all of the techniques that went into the production of the RKE2 repository from Prerequisite No. 11 is out of scope of this article. However, here is an illustration of how at least one feature was implemented.
@ -366,7 +366,7 @@ openstack-cloud-controller-manager-bz7zt 1/1 Running 1 (4

```

Further customization[](#further-customization "Permalink to this headline")
Further customization[🔗](#further-customization "Permalink to this headline")
-----------------------------------------------------------------------------

Depending on your use case, further customization to the provided sample repository will be required to tune the Terraform configurations to provision an RKE2 cluster. We suggest evaluating the following enhancements:
@ -379,7 +379,7 @@ Depending on your use case, further customization to the provided sample reposit

To implement these features, you would need to simultaneously adjust definitions for both Terraform and Kubernetes resources. Covering those steps is, therefore, outside of scope of this article.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

In this article, you have created a proper Kubernetes solution using an RKE2 cluster as a foundation.

@ -1,17 +1,17 @@
Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud[](#implementing-ip-whitelisting-for-load-balancers-with-security-groups-on-brand-name "Permalink to this headline")
Implementing IP Whitelisting for Load Balancers with Security Groups on CloudFerro Cloud[🔗](#implementing-ip-whitelisting-for-load-balancers-with-security-groups-on-brand-name "Permalink to this headline")
=============================================================================================================================================================================================================

In this article we describe how to use commands in Horizon, CLI and Terraform to secure load balancers for Kubernetes clusters in OpenStack by implementing IP whitelisting.

What Are We Going To Do[](#what-are-we-going-to-do "Permalink to this headline")
What Are We Going To Do[🔗](#what-are-we-going-to-do "Permalink to this headline")
---------------------------------------------------------------------------------

Introduction[](#introduction "Permalink to this headline")
Introduction[🔗](#introduction "Permalink to this headline")
-----------------------------------------------------------

Load balancers without proper restrictions are vulnerable to unauthorized access. By implementing IP whitelisting, only specified IP addresses are permitted to access the load balancer. You decide from which IP address it is possible to access the load balancers in particular and the Kubernetes cluster in general.

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -61,7 +61,7 @@ For complete introduction and installation of Terrafom on OpenStack see article

To use Terraform in this capacity, you will need to authenticate to the cloud using application credentials with **unrestricted** access. Check article [How to generate or use Application Credentials via CLI on CloudFerro Cloud](../cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-CloudFerro-Cloud.html.md)

Horizon: Whitelisting Load Balancers[](#horizon-whitelisting-load-balancers "Permalink to this headline")
Horizon: Whitelisting Load Balancers[🔗](#horizon-whitelisting-load-balancers "Permalink to this headline")
----------------------------------------------------------------------------------------------------------

We will whitelist load balancers by restricting the relevant ports in their security groups. In Horizon, use command **Network** → **Load Balancers** to see the list of load balancers:
@ -94,7 +94,7 @@ Choose which one you are going to edit; alternatively, you can create a new secu

Save and apply the changes.

### Verification[](#verification "Permalink to this headline")
### Verification[🔗](#verification "Permalink to this headline")

To confirm the configuration:

@ -102,7 +102,7 @@ To confirm the configuration:
2. View the security groups applied to the load balancers’ associated instances.
3. Ensure the newly added rule is visible.

CLI: Whitelisting Load Balancers[](#cli-whitelisting-load-balancers "Permalink to this headline")
CLI: Whitelisting Load Balancers[🔗](#cli-whitelisting-load-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------

The OpenStack CLI provides a command-line method for implementing IP whitelisting.
@ -159,7 +159,7 @@ openstack server add security group <INSTANCE_ID> <SECURITY_GROUP_NAME>

```

### Verification[](#id1 "Permalink to this headline")
### Verification[🔗](#id1 "Permalink to this headline")

Verify the applied security group rules:

@ -175,7 +175,7 @@ openstack server show <INSTANCE_ID>

```

Terraform: Whitelisting Load Balancers[](#terraform-whitelisting-load-balancers "Permalink to this headline")
Terraform: Whitelisting Load Balancers[🔗](#terraform-whitelisting-load-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------

Terraform is an Infrastructure as Code (IaC) tool that can automate the process of configuring IP whitelisting.
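A minimal sketch of the idea with the OpenStack provider is shown below; the resource names, port and CIDR are illustrative placeholders rather than the article's actual listing:

```hcl
# Security group that allows HTTP only from a whitelisted CIDR range.
resource "openstack_networking_secgroup_v2" "lb_whitelist" {
  name        = "lb-whitelist"
  description = "Restrict load balancer access to trusted IPs"
}

resource "openstack_networking_secgroup_rule_v2" "allow_http_trusted" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "203.0.113.0/24" # replace with your trusted range
  security_group_id = openstack_networking_secgroup_v2.lb_whitelist.id
}
```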
@ -243,7 +243,7 @@ openstack security group show <SECURITY_GROUP_ID>

```

State of Security: Before and after whitelisting the balancers[](#state-of-security-before-and-after-whitelisting-the-balancers "Permalink to this headline")
State of Security: Before and after whitelisting the balancers[🔗](#state-of-security-before-and-after-whitelisting-the-balancers "Permalink to this headline")
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:
@ -251,7 +251,7 @@ Before implementing IP whitelisting, the load balancer accepts traffic from all
> * Only specified IPs can access the load balancer.
> * Unauthorized access attempts are denied.

### Verification Tools[](#verification-tools "Permalink to this headline")
### Verification Tools[🔗](#verification-tools "Permalink to this headline")

Various tools can ensure the protection is installed and active:

@ -267,21 +267,21 @@ curl
Wireshark
: (free): For packet-level analysis.

### Testing with nmap[](#testing-with-nmap "Permalink to this headline")
### Testing with nmap[🔗](#testing-with-nmap "Permalink to this headline")

```
nmap -p <PORT> <LOAD_BALANCER_IP>

```

### Testing with http and curl[](#testing-with-http-and-curl "Permalink to this headline")
### Testing with http and curl[🔗](#testing-with-http-and-curl "Permalink to this headline")

```
curl http://<LOAD_BALANCER_IP>

```

### Testing with curl and livez[](#testing-with-curl-and-livez "Permalink to this headline")
### Testing with curl and livez[🔗](#testing-with-curl-and-livez "Permalink to this headline")

This would be a typical response before changes:

@ -329,7 +329,7 @@ curl: (28) Connection timed out after 5000 milliseconds

```

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Compare with articles:

@ -1,4 +1,4 @@
Install GitLab on CloudFerro Cloud Kubernetes[](#install-gitlab-on-brand-name-kubernetes "Permalink to this headline")
Install GitLab on CloudFerro Cloud Kubernetes[🔗](#install-gitlab-on-brand-name-kubernetes "Permalink to this headline")
=======================================================================================================================

Source control is essential for building professional software. Git has become synonymous with modern source control, and GitLab is one of the most popular tools based on Git.
@ -7,7 +7,7 @@ GitLab can be deployed as your local instance to ensure privacy of the stored ar

In this article, we will install GitLab on a Kubernetes cluster in the CloudFerro Cloud cloud.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Create a Floating IP and associate the A record in DNS
@ -15,7 +15,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Install GitLab Helm chart
> * Verify the installation

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -55,7 +55,7 @@ No. 5 **Proof of concept vs. production ready version of GitLab client**

In Step 3 below, you will create file **my-values-gitlab.yaml** to define the default configuration of the GitLab client. The values chosen there will provide for a solid quick-start, perhaps in the “proof of concept” phase of development. To customize for production, this reference will come in handy: <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v7.11.1/values.yaml?ref_type=tags>

Step 1 Create a Floating IP and associate the A record in DNS[](#step-1-create-a-floating-ip-and-associate-the-a-record-in-dns "Permalink to this headline")
Step 1 Create a Floating IP and associate the A record in DNS[🔗](#step-1-create-a-floating-ip-and-associate-the-a-record-in-dns "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

Our GitLab client will run a web application (GUI) exposed as a Kubernetes service. We will use GitLab’s Helm chart, which will, as part of GitLab’s installation,
@ -71,7 +71,7 @@ After closing the form, your new floating IP will appear on the list and let us



Step 2 Apply preliminary configuration[](#step-2-apply-preliminary-configuration "Permalink to this headline")
Step 2 Apply preliminary configuration[🔗](#step-2-apply-preliminary-configuration "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------

A condition to ensure compatibility with Kubernetes setup on CloudFerro Cloud clouds is to enable the Service Accounts provisioned by GitLab Helm chart to have sufficient access to reading scaling metrics. This can be done by creating an appropriate *rolebinding*.
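The general shape of such a rolebinding is sketched below; the namespace, role and subjects are illustrative assumptions and may differ from the article's actual **gitlab-rolebinding.yaml**:

```yaml
# Sketch: grant GitLab's service accounts read access to scaling metrics.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-metrics-reader
  namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: Group
    name: system:serviceaccounts:gitlab
    apiGroup: rbac.authorization.k8s.io
```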
@ -109,7 +109,7 @@ kubectl apply -f gitlab-rolebinding.yaml

```

Step 3 Install GitLab Helm chart[](#step-3-install-gitlab-helm-chart "Permalink to this headline")
Step 3 Install GitLab Helm chart[🔗](#step-3-install-gitlab-helm-chart "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Now let’s download GitLab’s Helm repository with the following two commands:
@ -164,7 +164,7 @@ After this step, there will be several Kubernetes resources created.



Step 4 Verify the installation[](#step-4-verify-the-installation "Permalink to this headline")
Step 4 Verify the installation[🔗](#step-4-verify-the-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------

After a short while, when all the pods are up, we can access GitLab’s service by entering the address: **gitlab.<yourdomain>**:
@ -182,7 +182,7 @@ This takes us to the following screen. From there we can utilize various feature



Errors during the installation[](#errors-during-the-installation "Permalink to this headline")
Errors during the installation[🔗](#errors-during-the-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------

In case you encounter errors during installation from which you cannot recover, it might be worth starting with a fresh installation. Here is the command to delete the chart:
@ -194,7 +194,7 @@ helm uninstall gitlab -n gitlab

After that, you can restart the procedure from Step 2.

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You now have a local instance of GitLab at your disposal. As next steps you could:

@ -1,4 +1,4 @@
Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[](#install-and-run-argo-workflows-on-brand-name-cloud-name-magnum-kubernetes "Permalink to this headline")
Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[🔗](#install-and-run-argo-workflows-on-brand-name-cloud-name-magnum-kubernetes "Permalink to this headline")
================================================================================================================================================================================

[Argo Workflows](https://argoproj.github.io/argo-workflows/) enables running complex job workflows on Kubernetes. It can
@ -11,7 +11,7 @@ Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes[](#insta

Argo applies a microservice-oriented, container-native approach, where each step of a workflow runs as a container.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Authenticate to the cluster
@ -21,7 +21,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Run Argo Workflows locally
> * Run sample workflow with two tasks

Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**
@ -30,7 +30,7 @@ No. 1 **Account**
No. 2 **kubectl pointed to the Kubernetes cluster**
: If you are creating a new cluster, for the purposes of this article, call it *argo-cluster*. See [How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum](How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-CloudFerro-Cloud-OpenStack-Magnum.html.md)

Authenticate to the cluster[](#authenticate-to-the-cluster "Permalink to this headline")
Authenticate to the cluster[🔗](#authenticate-to-the-cluster "Permalink to this headline")
-----------------------------------------------------------------------------------------

Let us authenticate to *argo-cluster*. Run from your local machine the following command to create a config file in the present working directory:
@ -49,7 +49,7 @@ export KUBECONFIG=/home/eouser/config

Run this command.

Apply preliminary configuration[](#apply-preliminary-configuration "Permalink to this headline")
Apply preliminary configuration[🔗](#apply-preliminary-configuration "Permalink to this headline")
-------------------------------------------------------------------------------------------------

OpenStack Magnum by default applies certain security restrictions for pods running on the cluster, in line with “least privileges” practice. Argo Workflows will require some additional privileges in order to run correctly.
@ -89,7 +89,7 @@ kubectl apply -f argo-rolebinding.yaml

```

Install Argo Workflows[](#install-argo-workflows "Permalink to this headline")
Install Argo Workflows[🔗](#install-argo-workflows "Permalink to this headline")
-------------------------------------------------------------------------------

In order to deploy Argo on the cluster, run the following command:
@ -101,7 +101,7 @@ kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/dow

There is also an Argo CLI available for running jobs from the command line. Installing it is outside of the scope of this article.

Run Argo Workflows from the cloud[](#run-argo-workflows-from-the-cloud "Permalink to this headline")
Run Argo Workflows from the cloud[🔗](#run-argo-workflows-from-the-cloud "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

Normally, you would need to authenticate to the server via a UI login. Here, we are going to switch authentication mode by applying the following patch to the deployment. (For production, you might need to incorporate a proper authentication mechanism.) Submit the following command:
@ -149,7 +149,7 @@ Argo is by default served on HTTPS with a self-signed certificate, on port **274



Run sample workflow with two tasks[](#run-sample-workflow-with-two-tasks "Permalink to this headline")
Run sample workflow with two tasks[🔗](#run-sample-workflow-with-two-tasks "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

In order to run a sample workflow, first close the initial pop-ups in the UI. Then go to the top-left icon “Workflows” and click on it, then you might need to press “Continue” in the following pop-up.
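A minimal two-task workflow of the kind this section runs could look like the sketch below; the task names and image are illustrative assumptions rather than the article's exact example:

```yaml
# Sketch: a DAG workflow with two tasks, the second depending on the first.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: two-tasks-
  namespace: argo
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: download
            template: echo
            arguments:
              parameters: [{name: message, value: "Files downloaded"}]
          - name: process
            template: echo
            dependencies: [download]
            arguments:
              parameters: [{name: message, value: "Files processed"}]
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.19
        command: [echo, "{{inputs.parameters.message}}"]
```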
@ -211,7 +211,7 @@ The results show that indeed the message “Files processed” was printed in th



What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

For production, consider an alternative authentication mechanism and replacing the self-signed HTTPS certificate with one generated by a Certificate Authority.
@ -1,4 +1,4 @@

Install and run Dask on a Kubernetes cluster in CloudFerro Cloud cloud[🔗](#install-and-run-dask-on-a-kubernetes-cluster-in-brand-name-cloud "Permalink to this headline")
=========================================================================================================================================================================

[Dask](https://www.dask.org/) enables scaling computation tasks either as multiple processes on a single machine, or on Dask clusters that consist of multiple worker machines. Dask provides a scalable alternative to popular Python libraries such as NumPy, Pandas or scikit-learn, while keeping a compact and very similar API.
@ -7,7 +7,7 @@ Dask scheduler, once presented with a computation task, splits it into smaller t

In this article you will install a Dask cluster on Kubernetes and run Dask worker nodes as Kubernetes pods. As part of the installation, you will get access to a Jupyter instance, where you can run the sample code.

What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install Dask on Kubernetes

@ -16,7 +16,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

> * Configure Dask cluster on Kubernetes from Python
> * Resolving errors

Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -43,7 +43,7 @@ No. 6 **Basic familiarity with Jupyter and Python scientific libraries**

> We will use [Pandas](https://pandas.pydata.org/docs/user_guide/index.html#user-guide) as an example.

Step 1 Install Dask on Kubernetes[🔗](#step-1-install-dask-on-kubernetes "Permalink to this headline")
-----------------------------------------------------------------------------------------------------

To install Dask as a Helm chart, first download the Dask Helm repository:
@ -83,7 +83,7 @@ helm install dask dask/dask -n dask --create-namespace -f dask-values.yaml

```

Step 2 Access Jupyter and Dask Scheduler dashboard[🔗](#step-2-access-jupyter-and-dask-scheduler-dashboard "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------

After the installation step, you can access Dask services:

@ -110,7 +110,7 @@ Similarly, with the Scheduler Dashboard, paste the floating IP to the browser to

Step 3 Run a sample computing task[🔗](#step-3-run-a-sample-computing-task "Permalink to this headline")
-------------------------------------------------------------------------------------------------------

The installed Jupyter instance already includes Dask and other useful Python libraries. To run a sample job, first activate the notebook by clicking the icon named **NoteBook** → **Python3(ipykernel)** on the right-hand side of the Jupyter instance browser screen.

@ -161,7 +161,7 @@ Computation time Dask: 0.07 seconds.

Note that these results are not deterministic; plain Pandas can also perform better in particular cases. The overhead of distributing work to, and collecting results from, the Dask workers also needs to be taken into account. Further tuning of Dask performance is beyond the scope of this article.
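The overhead point can be illustrated with plain standard-library Python (a generic illustration of scheduling overhead, not Dask itself): for tiny CPU-bound tasks, farming work out to a thread pool can be slower than a simple loop.

```
import time
from concurrent.futures import ThreadPoolExecutor

def tiny(x):
    return x * x  # a task far too small to be worth distributing

data = list(range(100_000))

start = time.perf_counter()
serial = [tiny(x) for x in data]
serial_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    pooled = list(pool.map(tiny, data))
pooled_time = time.perf_counter() - start

assert serial == pooled
print(f"serial: {serial_time:.3f}s, thread pool: {pooled_time:.3f}s")
```

On a typical machine the pooled run is slower here, since per-task coordination (and, for threads, the GIL) dominates; the same effect, scaled up, is the Dask distribution overhead described above.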
Step 4 Configure Dask cluster on Kubernetes from Python[🔗](#step-4-configure-dask-cluster-on-kubernetes-from-python "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------

For managing the Dask cluster on Kubernetes, we can use a dedicated Python library, *dask-kubernetes*. Using this library, we can reconfigure certain parameters of our Dask cluster.

@ -216,7 +216,7 @@ Or, you can see the current number of worker nodes in the Dask Scheduler dashboa

Note that the functionality of *dask-kubernetes* can also be achieved using the Kubernetes API directly; the choice depends on your personal preference.

Resolving errors[🔗](#resolving-errors "Permalink to this headline")
-------------------------------------------------------------------

When running command

@ -1,4 +1,4 @@
Install and run NooBaa on Kubernetes cluster in single- and multicloud-environment on CloudFerro Cloud[🔗](#install-and-run-noobaa-on-kubernetes-cluster-in-single-and-multicloud-environment-on-brand-name "Permalink to this headline")
========================================================================================================================================================================================================================================

[NooBaa](https://www.noobaa.io/) enables creating an abstracted S3 backend on Kubernetes. Such a backend can be connected to multiple S3 backing stores, e.g. in a multi-cloud setup, allowing for storage expandability or High Availability, among other beneficial features.
@ -9,7 +9,7 @@ In this article you will learn the basics of using NooBaa

> * how to create a NooBaa bucket backed by S3 object storage in the CloudFerro Cloud cloud
> * how to create a NooBaa bucket mirroring data on two different clouds

What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install NooBaa in local environment

@ -22,7 +22,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

> * Testing access to the bucket
> * Create mirroring on clouds WAW3-1 and WAW3-2

Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Hosting**

@ -55,7 +55,7 @@ No. 7 **Access to WAW3-2 cloud**

To mirror data on WAW3-1 and WAW3-2, you will need access to those two clouds.

Install NooBaa in local environment[🔗](#install-noobaa-in-local-environment "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

The first step to work with NooBaa is to install it on our local system. We will download the installer, make it executable and move it to the system path:
@ -80,7 +80,7 @@ This will result in an output similar to the below:

Apply preliminary configuration[🔗](#apply-preliminary-configuration "Permalink to this headline")
-------------------------------------------------------------------------------------------------

We will need to apply additional configuration on a Magnum cluster to avoid a PodSecurityPolicy exception. For a refresher, see article [Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud Cloud](Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-CloudFerro-Cloud-cloud.html.md).

@ -120,7 +120,7 @@ kubectl apply -f noobaa-rolebinding.yaml

```

Install NooBaa on the Kubernetes cluster[🔗](#install-noobaa-on-the-kubernetes-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

We already have NooBaa available in our local environment, but we still need to install it on our Kubernetes cluster. NooBaa will use the KUBECONFIG context activated by **kubectl** (as set in Prerequisite No. 4), so install NooBaa in the dedicated namespace:

@ -144,10 +144,10 @@ It outputs several useful insights about the NooBaa installation, with the “ke

For the purpose of this article, we will not use the default backing store, but rather learn to create a new backing store based on cloud S3 object storage. Such a setup can then be easily extended so that we end up with separate backing stores for different clouds. In the second part of this article, you will create one store on the WAW3-1 cloud and another on the WAW3-2 cloud; both will be available through one abstracted S3 bucket in NooBaa.
Create a NooBaa backing store[🔗](#create-a-noobaa-backing-store "Permalink to this headline")
---------------------------------------------------------------------------------------------

### Step 1. Create object storage bucket on WAW3-1[🔗](#step-1-create-object-storage-bucket-on-waw3-1 "Permalink to this headline")

Now create an object storage bucket on WAW3-1 cloud:

@ -162,7 +162,7 @@ Note

You need to create a bucket with a different name and use this generated name to follow along.

### Step 2. Set up EC2 credentials[🔗](#step-2-set-up-ec2-credentials "Permalink to this headline")

If you have properly set up the EC2 (S3) keys for your WAW3-1 object storage, take note of them with the following command:

@ -171,7 +171,7 @@ openstack ec2 credentials list

```

### Step 3. Create a new NooBaa backing store[🔗](#step-3-create-a-new-noobaa-backing-store "Permalink to this headline")

With the above in place, we can create a new NooBaa backing store called *custom-bs* by running the command below. Make sure to replace the access-key XXXXXX and the secret-key YYYYYYY with your own EC2 keys and the *bucket* with your own bucket name:
@ -195,7 +195,7 @@ Also, when viewing the bucket in Horizon (backing store), we can see NooBaa popu

### Step 4. Create a Bucket Class[🔗](#step-4-create-a-bucket-class "Permalink to this headline")

When we have the backing store, the next step is to create a BucketClass (BC). Such a BucketClass serves as a blueprint for NooBaa buckets: it defines

@ -232,7 +232,7 @@ kubectl apply -f custom-bc.yaml

```

### Step 5. Create an ObjectBucketClaim[🔗](#step-5-create-an-objectbucketclaim "Permalink to this headline")

As the last step, we create an *ObjectBucketClaim*. This bucket claim utilizes the *noobaa.noobaa.io* storage class, which got deployed with NooBaa, and references the *custom-bc* bucket class created in the previous step. Create a file called *custom-obc.yaml*:

@ -259,7 +259,7 @@ kubectl apply -f custom-obc.yaml

```

### Step 6. Obtain name of the NooBaa bucket[🔗](#step-6-obtain-name-of-the-noobaa-bucket "Permalink to this headline")

As a result, besides the *ObjectBucket* claim resource, a configmap and a secret, both named *custom-obc*, were created in NooBaa. Let’s view the configmap with:

@ -286,7 +286,7 @@ metadata:

We can see the name of the NooBaa bucket, *my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf*, which is backed by our “physical” WAW3-1 bucket. Store this name for later use in this article.
### Step 7. Obtain secret for the NooBaa bucket[🔗](#step-7-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")

The secret is also relevant for us, as we need to extract the S3 keys to the NooBaa bucket. The access and secret keys are base64-encoded in the secret; we can retrieve them decoded with the following commands:

@ -298,7 +298,7 @@ kubectl get secret custom-obc -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KE

Take note of the access and secret keys, as we will use them in the next step.

### Step 8. Connect to NooBaa bucket from S3cmd[🔗](#step-8-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")

NooBaa created a few services when it was deployed, which we can verify with the command below:

@ -320,7 +320,7 @@ sts LoadBalancer 10.254.23.154 64.225.135.92 443:31374/TCP

The “s3” service provides the endpoint that can be used to access NooBaa storage (backed by the actual storage in WAW3-1). In our case, this endpoint is **64.225.133.81**. Replace it with the value you get from the above command when working through this article.
### Step 9. Configure S3cmd to access NooBaa[🔗](#step-9-configure-s3cmd-to-access-noobaa "Permalink to this headline")

Now that we have both the endpoint and the keys, we can configure **s3cmd** to access the bucket created by NooBaa. Create a configuration file *noobaa.s3cfg* with the following contents:

@ -362,7 +362,7 @@ Configuration saved to 'noobaa.s3cfg'

```

### Step 10. Testing access to the bucket[🔗](#step-10-testing-access-to-the-bucket "Permalink to this headline")

We can upload a test file to NooBaa. In our case, we upload a simple text file *xyz.txt* with text content “xyz”, using the following command:

@ -381,7 +381,7 @@ upload: 'xyz.txt' -> 's3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf/xyz.tx

We can also see in Horizon that a few new folders and files were added to NooBaa. However, we will not see the *xyz.txt* file directly there, because NooBaa applies its own fragmentation techniques on the data.
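Even though the object is fragmented in the backing store, it comes back intact through NooBaa’s S3 endpoint. A quick check could look like the sketch below (the bucket name is the example one from this article, so substitute your own):

```
# List the bucket and download the object back through NooBaa
s3cmd ls s3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf -c noobaa.s3cfg
s3cmd get s3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf/xyz.txt xyz-copy.txt -c noobaa.s3cfg
cat xyz-copy.txt   # should print the original content: xyz
```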
Connect NooBaa in a multi-cloud setup[🔗](#connect-noobaa-in-a-multi-cloud-setup "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------

NooBaa can be used to create an abstracted S3 endpoint connected to two or more cloud S3 endpoints. This can be helpful in scenarios such as replicating the same data across multiple clouds or combining the storage of multiple clouds.

@ -394,19 +394,19 @@ To illustrate the process, we are going create a new set of resources, new S3 bu

To proceed, first create two additional buckets from the Horizon interface. Adjust the commands and file contents in the rest of this section to reflect these bucket names.

### Step 1 Multi-cloud. Create bucket on WAW3-1[🔗](#step-1-multi-cloud-create-bucket-on-waw3-1 "Permalink to this headline")

Go to the WAW3-1 Horizon interface and create a bucket we call *noobaamirror-waw3-1* (supply your own bucket name here and adhere to it in the rest of the article). It will then be available at the endpoint <https://s3.waw3-1.cloudferro.com>.

### Step 1 Multi-cloud. Create bucket on WAW3-2[🔗](#step-1-multi-cloud-create-bucket-on-waw3-2 "Permalink to this headline")

Next, go to the WAW3-2 Horizon interface and create a bucket we call *noobaamirror-waw3-2* (again, supply your own bucket name here and adhere to it in the rest of the article). It will be available at the endpoint <https://s3.waw3-2.cloudferro.com>.

### Step 2 Multi-cloud. Set up EC2 credentials[🔗](#step-2-multi-cloud-set-up-ec2-credentials "Permalink to this headline")

Use the existing pair of EC2 credentials, or first create a new pair and then use them in the next step.

### Step 3 Multi-cloud. Create backing store mirror-bs1 on WAW3-1[🔗](#step-3-multi-cloud-create-backing-store-mirror-bs1-on-waw3-1 "Permalink to this headline")

Apply the following command to create the *mirror-bs1* backing store (replace the bucket name, S3 access key and S3 secret key with your own):
@ -415,7 +415,7 @@ noobaa -n noobaa backingstore create s3-compatible mirror-bs1 --endpoint https:/

```

### Step 3 Multi-cloud. Create backing store mirror-bs2 on WAW3-2[🔗](#step-3-multi-cloud-create-backing-store-mirror-bs2-on-waw3-2 "Permalink to this headline")

Apply the following command to create the *mirror-bs2* backing store (replace the bucket name, S3 access key and S3 secret key with your own):

@ -424,7 +424,7 @@ noobaa -n noobaa backingstore create s3-compatible mirror-bs2 --endpoint https:/

```

### Step 4 Multi-cloud. Create a Bucket Class[🔗](#step-4-multi-cloud-create-a-bucket-class "Permalink to this headline")

To create a BucketClass called *bc-mirror*, create a file called *bc-mirror.yaml* with the following contents:

@ -459,7 +459,7 @@ Note

The mirroring is implemented by listing **two** backing stores, *mirror-bs1* and *mirror-bs2*, under the *tiers* option.
### Step 5 Multi-cloud. Create an ObjectBucketClaim[🔗](#step-5-multi-cloud-create-an-objectbucketclaim "Permalink to this headline")

Again, create file *obc-mirror.yaml* for ObjectBucketClaim *obc-mirror*:

@ -486,7 +486,7 @@ kubectl apply -f obc-mirror

```

### Step 6 Multi-cloud. Obtain name of the NooBaa bucket[🔗](#step-6-multi-cloud-obtain-name-of-the-noobaa-bucket "Permalink to this headline")

Extract bucket name from the configmap:

@ -495,7 +495,7 @@ kubectl get configmap obc-mirror -n noobaa -o yaml

```

### Step 7 Multi-cloud. Obtain secret for the NooBaa bucket[🔗](#step-7-multi-cloud-obtain-secret-for-the-noobaa-bucket "Permalink to this headline")

Extract S3 keys from the created secret:

@ -505,7 +505,7 @@ kubectl get secret obc-mirror -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KE

```
### Step 8 Multi-cloud. Connect to NooBaa bucket from S3cmd[🔗](#step-8-multi-cloud-connect-to-noobaa-bucket-from-s3cmd "Permalink to this headline")

Create an additional config file for s3cmd, e.g. *noobaa-mirror.s3cfg*, and update the access key, the secret key and the bucket name to the ones retrieved above:

@ -514,7 +514,7 @@ s3cmd --configure -c noobaa-mirror.s3cfg

```

### Step 9 Multi-cloud. Configure S3cmd to access NooBaa[🔗](#step-9-multi-cloud-configure-s3cmd-to-access-noobaa "Permalink to this headline")

To test, upload the *xyz.txt* file, which behind the scenes uploads a copy to both clouds. Be sure to change the bucket name *my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff* to the one retrieved from the configmap:

@ -523,7 +523,7 @@ s3cmd put xyz.txt s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff -c noobaa-

```

### Step 10 Multi-cloud. Testing access to the bucket[🔗](#step-10-multi-cloud-testing-access-to-the-bucket "Permalink to this headline")

To verify, delete the “physical” bucket on one of the clouds (e.g. on WAW3-1) from the Horizon interface. With the **s3cmd** command below, you can see that NooBaa still holds the copy from the WAW3-2 cloud:
@ -1,4 +1,4 @@

Installing HashiCorp Vault on CloudFerro Cloud Magnum[🔗](#installing-hashicorp-vault-on-brand-name-cloud-name-magnum "Permalink to this headline")
==================================================================================================================================================

In Kubernetes, a *Secret* is an object that contains passwords, tokens, keys or other small pieces of data. Using *Secrets* reduces the risk of exposing confidential data while creating, running and editing Pods. The main problem is that *Secrets* are stored unencrypted in *etcd*, so anyone with
@ -20,7 +20,7 @@ You can apply a number of strategies to improve the security of the cluster or y

In this article, we shall install HashiCorp Vault within a Magnum Kubernetes cluster, on CloudFerro Cloud cloud.
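Before that, it is worth seeing why the built-in encoding offers no protection: Kubernetes stores Secret data base64-encoded, and base64 is a reversible encoding, not encryption. A minimal standard-library illustration:

```
import base64

# What a Secret manifest shows for a value is just base64 text...
encoded = base64.b64encode(b"s3cr3t-password").decode()
print(encoded)

# ...and anyone who can read it can decode it instantly.
decoded = base64.b64decode(encoded).decode()
print(decoded)  # prints: s3cr3t-password
```

This is exactly the gap that an external secrets manager such as Vault closes.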
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Install self-signed TLS certificates with CFSSL

@ -33,7 +33,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h

> * Return livenessProbe to production value
> * Troubleshooting

Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

@ -50,7 +50,7 @@ This article will introduce you to Helm charts on Kubernetes:

[Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud Cloud](Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-CloudFerro-Cloud-Cloud.html.md)

Step 1 Install CFSSL[🔗](#step-1-install-cfssl "Permalink to this headline")
---------------------------------------------------------------------------

To ensure that Vault communication with the cluster is encrypted, we need to provide TLS certificates.
@ -76,7 +76,7 @@ sudo mv cfssl cfssljson /usr/local/bin

```

Step 2 Generate TLS certificates[🔗](#step-2-generate-tls-certificates "Permalink to this headline")
---------------------------------------------------------------------------------------------------

Before we start, let’s create a dedicated namespace where all Vault-related Kubernetes resources will live:

@ -194,7 +194,7 @@ kubectl -n vault create secret tls tls-server --cert ./vault.pem --key ./vault-k

The naming of those secrets reflects the Vault Helm chart default names.

Step 3 Install Consul Helm chart[🔗](#step-3-install-consul-helm-chart "Permalink to this headline")
---------------------------------------------------------------------------------------------------

The Consul backend will ensure High Availability of our Vault installation. Consul will live in a namespace that we have already created, **vault**.

@ -257,7 +257,7 @@ kubectl get pods -n vault

Wait until all of the pods are **Running** and then proceed with the next step.

Step 4 Install Vault Helm chart[🔗](#step-4-install-vault-helm-chart "Permalink to this headline")
-------------------------------------------------------------------------------------------------

We are now ready to install Vault.
@ -372,7 +372,7 @@ vault-agent-injector-6c7cfc768-kv968 1/1 Running 0

```

Sealing and unsealing the Vault[🔗](#sealing-and-unsealing-the-vault "Permalink to this headline")
-------------------------------------------------------------------------------------------------

Right after the installation, the Vault server starts in a *sealed* state. It knows where and how to access the physical storage but, by design, it lacks the key to decrypt any of it. The only operations you can do when Vault is sealed are to

@ -393,7 +393,7 @@ You will have a limited but sufficient amount of time to enter the keys; the val

At the end of the article we show how to interactively set it to **60** seconds, so that the cluster can check the health of the pods more frequently.

Step 5 Unseal Vault[🔗](#step-5-unseal-vault "Permalink to this headline")
-------------------------------------------------------------------------

Three pods in the Kubernetes cluster represent Vault and are named *vault-0*, *vault-1* and *vault-2*. To make Vault functional, you will have to unseal all three of them.

@ -463,7 +463,7 @@ kubectl -n vault exec -it vault-1 -- sh

and unseal it by entering at least three keys. Then repeat the procedure for *vault-2*. Only when all three pods are unsealed will the Vault become active.
Step 6 Run Vault UI[🔗](#step-6-run-vault-ui "Permalink to this headline")
-------------------------------------------------------------------------

With our configuration, the Vault UI is exposed on port 8200 of a dedicated LoadBalancer that was created.

@ -492,7 +492,7 @@ You can now start using the Vault.

Return livenessProbe to production value[🔗](#return-livenessprobe-to-production-value "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------

*livenessProbe* in Kubernetes defines the interval within which the system checks the health of the pods. That would normally not be a concern of yours, but if you do not unseal the Vault within that amount of time, the unsealing won’t work. Under normal circumstances, the value would be **60** seconds, so that in case of any disturbance the system would react within one minute instead of six. But it is very hard to copy and enter three strings in under one minute, as would be required if the value **60** were present in file **vault-values.yaml**. You would almost inevitably see Kubernetes error **137**, meaning that you did not perform the required operations in time.
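Once unsealing is done, the relevant fragment of **vault-values.yaml** can be brought back to the production value. As a sketch (following the Vault Helm chart's `server.livenessProbe` block; verify the exact keys against your chart version):

```
server:
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
```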
|
||||
@ -520,7 +520,7 @@ You can now access the equivalent of file **vault-values.yaml** inside the Kuber
|
||||
|
||||
When done, save and leave Vim with the standard **:w** and **:q** syntax.
|
||||
|
||||
Troubleshooting[](#troubleshooting "Permalink to this headline")
Troubleshooting[🔗](#troubleshooting "Permalink to this headline")
-----------------------------------------------------------------
Check the events, which can provide hints about what needs to be improved:
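A typical way to list recent events, assuming the Vault release lives in the *vault* namespace:

```shell
# Show events in chronological order to spot failing probes or pods
kubectl get events -n vault --sort-by=.metadata.creationTimestamp
```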
@ -540,7 +540,7 @@ kubectl delete MutatingWebhookConfiguration vault-agent-injector-cfg
```
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Now you have a Vault server as part of the cluster, and you can also use it from the IP address at which it was installed.
@ -1,4 +1,4 @@
Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud[](#installing-jupyterhub-on-magnum-kubernetes-cluster-in-brand-name-cloud-name-cloud "Permalink to this headline")
Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud[🔗](#installing-jupyterhub-on-magnum-kubernetes-cluster-in-brand-name-cloud-name-cloud "Permalink to this headline")
==========================================================================================================================================================================================
Jupyter notebooks are a popular method of presenting application code, as well as running exploratory experiments and analyses, conveniently from a web browser. From a Jupyter notebook, one can run code, see the generated results in an attractive visual form, and often interact with the generated output.
@ -7,7 +7,7 @@ JupyterHub is an open-source service that creates cloud-based Jupyter notebook s
It is straightforward to quickly deploy JupyterHub using Magnum Kubernetes service, which we present in this article.
What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We are Going to Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Authenticate to the cluster
@ -15,7 +15,7 @@ What We are Going to Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Retrieve details of Jupyterhub service
> * Run Jupyterhub on HTTPS
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -36,7 +36,7 @@ No. 4 **A registered domain name available**
To see the results of the installation, you should have a registered domain of your own. You will use it in Step 5 to run JupyterHub on HTTPS in a browser.
Step 1 Authenticate to the cluster[](#step-1-authenticate-to-the-cluster "Permalink to this headline")
Step 1 Authenticate to the cluster[🔗](#step-1-authenticate-to-the-cluster "Permalink to this headline")
-------------------------------------------------------------------------------------------------------
First of all, we need to authenticate to the cluster. It may so happen that you already have a cluster at your disposal and that the config file is already in place. In other words, you are able to execute the **kubectl** command immediately.
@ -57,7 +57,7 @@ export KUBECONFIG=/home/eouser/config
Run this command.
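With the variable exported, a quick sanity check confirms that **kubectl** can reach the cluster:

```shell
# Any read-only call will do; listing nodes is a common smoke test
kubectl get nodes -o wide
```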
Step 2 Apply preliminary configuration[](#step-2-apply-preliminary-configuration "Permalink to this headline")
Step 2 Apply preliminary configuration[🔗](#step-2-apply-preliminary-configuration "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------
OpenStack Magnum by default applies certain security restrictions to pods running on the cluster, in line with the “least privileges” practice. JupyterHub will require some additional privileges in order to run correctly.
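For orientation, such extra privileges are typically granted with a RoleBinding along these lines; the role and namespace names are assumptions, and the article's own **jupyterhub-rolebinding.yaml** may differ:

```yaml
# Hypothetical sketch of jupyterhub-rolebinding.yaml: bind the service
# accounts in the jupyterhub namespace to a privileged cluster role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jupyterhub-privileged
  namespace: jupyterhub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: magnum:podsecuritypolicy:privileged
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:jupyterhub
```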
@ -97,7 +97,7 @@ kubectl apply -f jupyterhub-rolebinding.yaml
```
Step 3 Run Jupyterhub Helm chart installation[](#step-3-run-jupyterhub-helm-chart-installation "Permalink to this headline")
Step 3 Run Jupyterhub Helm chart installation[🔗](#step-3-run-jupyterhub-helm-chart-installation "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
To install the Helm chart with the default settings, use the below set of commands. This will
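A typical installation with default settings looks like the sketch below; the release and namespace names are assumptions, and the article's own commands take precedence:

```shell
# Register the JupyterHub chart repository and refresh the index
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update
# Install the chart into its own namespace with default values
helm upgrade --install jupyterhub jupyterhub/jupyterhub \
  --namespace jupyterhub --create-namespace
```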
@ -116,7 +116,7 @@ This is the result of successful Helm chart installation:

Step 4 Retrieve details of your service[](#step-4-retrieve-details-of-your-service "Permalink to this headline")
Step 4 Retrieve details of your service[🔗](#step-4-retrieve-details-of-your-service "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------
Once all the Helm resources get deployed to the *jupyterhub* namespace, we can view their state and definitions using standard **kubectl** commands.
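For example, the pods and the public proxy service can be inspected as below; *proxy-public* is the chart's default name for the service that fronts JupyterHub:

```shell
kubectl get pods -n jupyterhub
# The EXTERNAL-IP column of proxy-public is where JupyterHub answers
kubectl get svc -n jupyterhub proxy-public
```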
@ -151,7 +151,7 @@ Warning
If in the next step you start running JupyterHub on HTTPS, you will not be able to run it as an HTTP service unless it has been relaunched.
Step 5 Run on HTTPS[](#step-5-run-on-https "Permalink to this headline")
Step 5 Run on HTTPS[🔗](#step-5-run-on-https "Permalink to this headline")
-------------------------------------------------------------------------
The JupyterHub Helm chart enables HTTPS deployments natively. Once we have deployed the chart above, we can simply upgrade the chart to enable serving it over HTTPS. Under the hood, it will generate the certificates using the Let's Encrypt certificate authority.
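As a sketch, the chart values enabling HTTPS look like this; the domain and contact e-mail are placeholders for your own:

```yaml
# config.yaml -- enable TLS termination with Let's Encrypt certificates
proxy:
  https:
    enabled: true
    hosts:
      - jupyterhub.example.com
    letsencrypt:
      contactEmail: you@example.com
```

The upgrade itself is then performed with **helm upgrade** passing this file via **--values**.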
@ -182,7 +182,7 @@ As noted in Prerequisite No. 4, you should have an available registered domain s

What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
For the production environment: replace the dummy authenticator with an alternative authentication mechanism and ensure persistence by, for example, connecting to a Postgres database. These steps are beyond the scope of this article.
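For instance, pointing the hub at an external Postgres instance is a matter of Helm values along these lines; the connection URL is a placeholder to be replaced with your own:

```yaml
hub:
  db:
    type: postgres
    # Hypothetical connection string -- substitute your own host,
    # credentials and database name
    url: postgresql+psycopg2://jupyterhub:<password>@<db-host>:5432/jupyterhub
```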